I previously had to solve a similar issue, although my solution was to optimize the images on the client side, which you claimed is slow. Since my own experience says otherwise, I'm interested in knowing which solutions you've already tried.
My solution was:
- Read the images on the client side using the FileReader API (this step may turn out to be unnecessary).
- Optimize each image by scaling it down and setting the quality, using the Canvas API. If for some reason this process turns out to be slow, you can always use the Web Workers API to split the load into multiple background processes. The benefit is that performance issues in the optimization process won't affect your UI (won't freeze it).
- Merge all images into a single sprite, also using Canvas, and save each image's metadata (x, y, width, height) in a separate object.
- Extract the sprite's base64 representation using Canvas's `toDataURL`.
- Upload the compressed sprite file to the server along with the sprite's metadata. On the server, decompress the file, then split and save it into separate image files according to the metadata.
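The sprite-packing step above boils down to a simple layout computation. Here's a pure-function sketch of it; the name `layoutSprite` and the left-to-right strip layout are my assumptions, not the only way to pack:

```javascript
// Compute a simple left-to-right sprite layout from an array of
// { width, height } sizes. Returns the sprite's overall dimensions
// plus the metadata rectangle (x, y, width, height) for each image.
function layoutSprite(sizes) {
  let x = 0;
  const rects = sizes.map(s => {
    const rect = { x, y: 0, width: s.width, height: s.height };
    x += s.width; // next image starts where this one ends
    return rect;
  });
  return {
    width: x,
    height: sizes.length ? Math.max(...sizes.map(s => s.height)) : 0,
    rects,
  };
}
```

The returned `rects` array is exactly the metadata the server needs later to cut the sprite back into individual images.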
This should do the trick of offloading work to the client, which in most cases will reduce your network use and bandwidth requirements.
Update:
Here's a code example for the client side.
You can select as many files as you want using the input field; only the images among them are picked, resized, packed into a sprite, and exported as an object containing the base64 version of the sprite and the metadata for each image within it.
I made the base64 data uri clickable so you'll be able to see the results immediately.
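The client-side pipeline could be sketched roughly as follows. This is a minimal browser-only version, assuming a horizontal packing and JPEG output; the function names (`fitWithin`, `loadImage`, `packImages`) and the size/quality defaults are mine, not from the original snippet:

```javascript
// Pure helper: fit (w, h) inside maxW x maxH, preserving aspect ratio.
function fitWithin(w, h, maxW, maxH) {
  const scale = Math.min(1, maxW / w, maxH / h); // never upscale
  return { width: Math.round(w * scale), height: Math.round(h * scale) };
}

// Browser-only: read a File into an <img> element via FileReader.
function loadImage(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => {
      const img = new Image();
      img.onload = () => resolve(img);
      img.onerror = reject;
      img.src = reader.result;
    };
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}

// Browser-only: resize each image, draw them side by side on one
// canvas, and export { dataUri, meta } where meta lists each rectangle.
async function packImages(files, maxW = 800, maxH = 600, quality = 0.7) {
  const images = await Promise.all(
    [...files].filter(f => f.type.startsWith('image/')).map(loadImage)
  );
  const sizes = images.map(img =>
    fitWithin(img.naturalWidth, img.naturalHeight, maxW, maxH)
  );
  const canvas = document.createElement('canvas');
  canvas.width = sizes.reduce((sum, s) => sum + s.width, 0);
  canvas.height = Math.max(0, ...sizes.map(s => s.height));
  const ctx = canvas.getContext('2d');
  const meta = [];
  let x = 0;
  images.forEach((img, i) => {
    const { width, height } = sizes[i];
    ctx.drawImage(img, x, 0, width, height); // scales while drawing
    meta.push({ x, y: 0, width, height });
    x += width;
  });
  return { dataUri: canvas.toDataURL('image/jpeg', quality), meta };
}
```

You would call `packImages(input.files)` from the file input's `change` handler and POST the resulting object to your server.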
What's missing is the server-side part, where you take the object and use it: create an image file from the base64 data URI (I suggest ImageMagick, but any such library would do the trick), then crop your images out according to the sprite's metadata, using the library you chose, and save each image separately (or whatever else you need).
You can do a lot of small but cool optimizations on the client-side to reduce the load on your servers.