This post looks at how to reduce the risk of memory overflow when processing images in the browser, for example when users pick several images from device storage at once. A common pitfall is exhausting browser memory, which often renders the application unresponsive and, in the worst case, forces the tab to reload. I hope to provide some useful examples that help prevent such issues.
A Short Disclaimer
Before we start, I want to point out that I strongly recommend against implementing image conversion on the front-end. Front-end devices are unpredictable: we can’t anticipate each user’s memory capacity, CPU, GPU, or device platform, meaning users will experience different processing times and outcomes. It’s better to leave this task to the backend, where resource parameters are more controllable and consistent.
Why Browser-Based Image Processing
Despite the drawbacks, there are cases where browser-side image processing is necessary. For example, when users need an immediate image preview before upload—particularly when certain image formats, like .heic/.heif from iOS devices, are not natively supported by browsers. In such cases, the application must convert the images before they can be previewed.
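Browsers won't render .heic previews directly, so the first step is usually deciding which picked files need conversion at all. Below is a minimal sketch; the helper name and the extension fallback are my own assumptions (some platforms report an empty MIME type for HEIC files, so checking file.type alone is not enough):

```javascript
// Hypothetical helper: decide whether a picked file needs client-side
// conversion before it can be shown in an <img> preview.
function needsHeicConversion(file) {
  const heicTypes = ['image/heic', 'image/heif'];
  if (heicTypes.includes(file.type)) return true;
  // Fallback: some platforms leave file.type empty for HEIC files,
  // so also check the file extension.
  return /\.heic$|\.heif$/i.test(file.name || '');
}
```

A helper like this lets you route only the files that actually need conversion through the heavier processing path.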
Issues with Compressing and Converting in Browser Memory
While a single, lightweight image doesn’t typically cause issues, what happens when users upload multiple large images—say 10 images, each up to 5 or even 10 MB? At the same time, we may want to compress or resize the images for efficiency. Even using well-known libraries like heic2any (for converting .heic files) or browser-image-compression (for resizing and compressing) won’t fully alleviate memory strain.
When dealing with multiple large images, the browser must hold them all in memory at once. Additionally, associated operations, like drawing to canvas elements, require further memory allocation. Our task becomes reducing memory usage and clearing unnecessary objects on the fly.
Initial Solution for Compressing and Converting Images
Let's ask ChatGPT, as a coding co-pilot, for an example that processes several chosen images:
import imageCompression from 'browser-image-compression';
import heic2any from 'heic2any';

async function compressImages(filesArray) {
  // Create an array of promises to compress images in parallel
  const compressionPromises = filesArray.map(file => {
    let processedFile;
    if (file.type === 'image/heic' || file.type === 'image/heif') {
      // Convert HEIC to JPEG using heic2any
      processedFile = heic2any({
        blob: file,
        toType: 'image/jpeg',
        quality: 0.8 // Adjust quality to reduce memory usage
      });
    } else {
      // Compress the image using browser-image-compression
      const options = {
        maxSizeMB: 1, // Limit to 1 MB
        maxWidthOrHeight: 1024, // Resize to a maximum of 1024x1024
        useWebWorker: true // Use Web Workers for efficient processing
      };
      processedFile = imageCompression(file, options);
    }
    return processedFile;
  });

  try {
    // Wait for all compression tasks to complete
    const compressedFiles = await Promise.all(compressionPromises);
    return compressedFiles;
  } catch (error) {
    console.error('Error compressing images:', error);
    throw error; // Surface the failure instead of silently returning undefined
  }
}
This solution is decent! It uses modern JavaScript idioms: declarative iteration with array methods and parallel promise execution with Promise.all(). However, in this context those very techniques are the main causes of memory overflow. Let's revise the function a bit.
Strategies for Improving Browser-Based Image Processing
1. Change the Programming Approach:
Instead of declarative iteration through the array, switch to an imperative style: replace array methods like .map() with a plain for loop. .map() creates a closure for every element and builds the entire array of pending promises up front, keeping all of them alive in memory at once.
From this approach:
const compressionPromises = filesArray.map(file => {
// ...
});
To this approach:
for (const file of filesArray) {
// ...
}
Or even this:
for (let i = 0; i < filesArray.length; i++) {
// ...
}
2. Avoid Parallel Processing:
Replace Promise.all() with sequential processing. Promise.all() will attempt to process all files simultaneously, consuming memory equal to the sum of each file’s memory usage. By processing images one at a time, memory overflow risks are minimized. A sequential approach ensures that memory usage is kept under control.
While Promise.all() might finish faster, that speed comes at the cost of reliability. Offering user-friendly UI feedback—like a loader or file processing counter—can keep users informed while preventing memory issues.
Additionally, since Promise.all() rejects immediately if any promise fails, sequential processing allows for better error handling. Rather than restarting the entire process when one file fails, we can handle each failure individually.
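Putting these ideas together, sequential processing with per-file error handling might be sketched as follows. Here processOne stands in for whatever conversion or compression a given file needs, and onProgress is a hypothetical callback for the UI feedback mentioned above; both names are my own assumptions:

```javascript
// Sequential processing sketch: one file in flight at a time,
// with per-file error handling and optional progress reporting.
async function processFilesSequentially(filesArray, processOne, onProgress) {
  const results = [];
  const failures = [];
  for (let i = 0; i < filesArray.length; i++) {
    try {
      // Only one file is being processed (and held in working memory) at a time
      results.push(await processOne(filesArray[i]));
    } catch (error) {
      // A single bad file no longer aborts the whole batch
      failures.push({ file: filesArray[i], error });
    }
    if (onProgress) onProgress(i + 1, filesArray.length);
  }
  return { results, failures };
}
```

Returning the failures alongside the results lets the UI report exactly which files could not be processed instead of failing the entire upload.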
3. Clear Memory After Each Step:
Remember to release memory after every image is processed. If you create Blob URLs for your files (for example, to display previews), revoke each one as soon as it is no longer needed:

const previewUrl = URL.createObjectURL(processedFile);
// ... display the preview ...
URL.revokeObjectURL(previewUrl);

Note that revoking only frees URLs you actually created earlier; calling URL.createObjectURL() just to revoke the result immediately is a no-op. Revoking the URL releases the browser's reference to the underlying Blob, and dropping your own remaining references lets the garbage collector reclaim the memory.
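One way to keep this cleanup manageable is to track every Blob URL you hand out and revoke them all in one place. The ObjectUrlTracker class below is a sketch of my own; the create/revoke functions are injectable so the class can be exercised outside a browser (by default it delegates to URL.createObjectURL and URL.revokeObjectURL):

```javascript
// Sketch: remember every Blob URL we create so none is leaked.
class ObjectUrlTracker {
  constructor(createFn, revokeFn) {
    this.createFn = createFn || ((blob) => URL.createObjectURL(blob));
    this.revokeFn = revokeFn || ((url) => URL.revokeObjectURL(url));
    this.urls = new Set();
  }
  // Create a Blob URL and record it for later cleanup
  create(blob) {
    const url = this.createFn(blob);
    this.urls.add(url);
    return url;
  }
  // Revoke every tracked URL, e.g. after the batch finishes or on unmount
  revokeAll() {
    for (const url of this.urls) this.revokeFn(url);
    this.urls.clear();
  }
}
```

Calling revokeAll() once the previews are no longer displayed guarantees that no forgotten URL keeps a large Blob pinned in memory.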
Final Version of Initial Solution
import imageCompression from 'browser-image-compression';
import heic2any from 'heic2any';

async function processImagesSequentially(filesArray) {
  const compressedImages = [];
  for (const file of filesArray) {
    let processedFile;
    if (file.type === 'image/heic' || file.type === 'image/heif') {
      // Convert HEIC to JPEG using heic2any
      processedFile = await heic2any({
        blob: file,
        toType: 'image/jpeg',
        quality: 0.8 // Adjust quality to reduce memory usage
      });
    } else {
      // Compress the image using browser-image-compression
      const options = {
        maxSizeMB: 1, // Limit to 1 MB
        maxWidthOrHeight: 1024, // Resize to a maximum of 1024x1024
        useWebWorker: true // Use Web Workers for efficient processing
      };
      processedFile = await imageCompression(file, options);
    }
    compressedImages.push(processedFile);
    // If a Blob URL was created for this file (e.g. for a preview),
    // revoke it here so the browser can reclaim the memory:
    // URL.revokeObjectURL(previewUrl);
  }
  return compressedImages;
}
Conclusion
In summary, preventing browser memory overflow during image processing requires careful management of memory usage. The key strategies I’ve outlined above include processing images sequentially, reducing unnecessary memory consumption, and clearing memory after each step. While relying on the user’s browser for image processing isn’t ideal, these techniques can help mitigate risks in situations where it’s necessary.
If you’re looking for help building web applications that work with images, Trailhead can help. Contact us for a free consultation, and we’ll work with you to design solutions that optimize performance, ensure reliability, and prevent common pitfalls like memory overflow.


