Kinda depends:
- how many images per user?
- approx size range per image?
- how many users?
- what sort of concurrency do you expect?
If most of the above numbers are small, your method will probably be fine for quite a while, and will at least let you get started.
I know MySQL blob storage gets bad press, but it would also be a simple way to get started, and you could shard the database to get some scale-out without having to do any clever coding.
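For illustration only, here's a minimal sketch of the blob approach in Python, assuming the mysql-connector-python driver and a hypothetical `user_images` table with a LONGBLOB column:

```python
# Assumed schema (hypothetical, adjust to taste):
#   CREATE TABLE user_images (
#       id INT AUTO_INCREMENT PRIMARY KEY,
#       user_id INT NOT NULL,
#       data LONGBLOB NOT NULL
#   );
import mysql.connector

def store_image(conn, user_id, path):
    """Read an image file and insert its bytes as a blob row."""
    with open(path, "rb") as f:
        data = f.read()
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO user_images (user_id, data) VALUES (%s, %s)",
        (user_id, data),
    )
    conn.commit()
    return cur.lastrowid  # key to store wherever you track uploads

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="photos"
)
store_image(conn, 42, "avatar.png")
```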
That said ...
If you expect users to upload very large numbers of files, you might run into filesystem limits or performance problems; the usual mitigation is to fan files out across nested subdirectories so that no single directory gets huge.
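A rough sketch of that fan-out idea, assuming Python and a made-up storage root; two levels of two hex characters gives 65,536 buckets:

```python
import hashlib
import os

ROOT = "/var/app/images"  # hypothetical storage root

def fanout_path(filename):
    """Derive two nesting levels from a hash of the filename,
    e.g. ROOT/3a/f1/<filename>, so directories stay small."""
    digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
    return os.path.join(ROOT, digest[:2], digest[2:4], filename)

path = fanout_path("user42_photo_0001.jpg")
os.makedirs(os.path.dirname(path), exist_ok=True)
# ... then write the uploaded bytes to `path`
```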
If you are hosting on Windows, watch out for the 8.3 short-filename problem (directory operations get very slow as the directory grows large), since your filenames will definitely be longer than 8.3 :)
If many people will be uploading/downloading concurrently (say, at peak usage periods), you'll have to watch out for I/O contention. A RAID 10 volume will get you further, and an SSD further still (but then you'll likely have storage-capacity problems).
Your suggested method won't be the most space-efficient if there's any chance the same images get uploaded by different people, since duplicates would be scattered across many folders. In that case you'd be better off keying by a function of the data (e.g. an md5sum) and storing just one copy. (Yes, that creates management issues with deletes: you can only remove the file once the last user referencing it has deleted it, so you'd need some form of reference counting.)
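Here's a minimal sketch of that content-addressed idea, again in Python with a hypothetical store directory; your database would map each user's image to the returned key:

```python
import hashlib
import os
import shutil

STORE = "/var/app/image-store"  # hypothetical dedup store

def store_dedup(upload_path):
    """Key the image by the md5 of its bytes; identical uploads
    from different users collapse to a single stored file."""
    with open(upload_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    dest = os.path.join(STORE, digest[:2], digest)
    if not os.path.exists(dest):  # already stored? skip the copy
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copyfile(upload_path, dest)
    return digest  # record this key against the user in your DB
```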
If you expect lots of large images from many people, you will eventually have to think about scaling the underlying storage. You could partition the data by some function of the {userid} and shard across different volumes or machines; this would also buy you better concurrent throughput.
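A trivial sketch of a shard-picking function, where the mount points are assumptions:

```python
SHARDS = ["/mnt/img0", "/mnt/img1", "/mnt/img2", "/mnt/img3"]

def shard_for(user_id: int) -> str:
    """A stable function of {userid} picks one of N volumes/machines."""
    return SHARDS[user_id % len(SHARDS)]
```

Note that a plain modulo makes re-balancing painful if you later add shards (most keys move); consistent hashing is the usual fix for that.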
Another question: will you always serve out only the original image, or will you sometimes send back re-scaled copies? You'd probably want to scale once and always return the pre-scaled version, in which case you'll need to take the storage of those scaled copies into account too.
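For example, a scale-once helper, assuming the Pillow imaging library (the paths are made up):

```python
from PIL import Image

def make_thumbnail(src_path, dest_path, max_px=256):
    """Generate a scaled copy once, at upload time, then always
    serve the pre-scaled file instead of resizing per request."""
    img = Image.open(src_path)
    img.thumbnail((max_px, max_px))  # resizes in place, keeps aspect ratio
    img.save(dest_path)

make_thumbnail("originals/photo.jpg", "thumbs/photo.jpg")
```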