I have about 100 video recordings, totalling around 100 hours, and I need to analyze facial expressions in them using Python. However, my own computer is rather slow at this, even if I only analyze every fifth frame. What would be the fastest way to analyze these videos? Should I consider a cloud computing solution, and if so, is there any service you would recommend at an affordable price? Any other solutions would be more than welcome! Thanks
- You can use OpenCV to extract frames from videos like this: https://stackoverflow.com/questions/33311153/python-extracting-and-saving-video-frames/47632941#47632941 then you can put those images in a cache and access the frames from the cache using 100 processes (or as many cores as you have). Python's shared memory could be useful, maybe. What is the max throughput of each process running on a constant frame on your computer? – huseyin tugrul buyukisik Feb 19 '22 at 17:43
- It seems like your question is how to get compute power at a cheap rate. I don't think that's a good fit for Stack Overflow: even if a perfect answer were provided, it wouldn't be a generally useful question, because the answer would only be valid in a specific location and for a short period of time. – Paul Hankin Mar 03 '22 at 09:35
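The frame-extraction-plus-multiprocessing approach from the first comment could be sketched roughly like this (an illustration, not the commenter's code; `analyze_frame` is a hypothetical placeholder for whatever facial-expression model you use):

```python
# Sketch: sample every 5th frame with OpenCV and fan the sampled
# frame indices out over multiple worker processes.
from multiprocessing import Pool

FRAME_STEP = 5  # analyze every 5th frame, as in the question

def split_indices(total_frames, n_workers, step=FRAME_STEP):
    """Split the sampled frame indices into one contiguous chunk per worker."""
    sampled = list(range(0, total_frames, step))
    chunk = -(-len(sampled) // n_workers)  # ceiling division
    return [sampled[i:i + chunk] for i in range(0, len(sampled), chunk)]

def analyze_frame(frame):
    # hypothetical placeholder: run your facial-expression model here
    return None

def analyze_chunk(args):
    path, indices = args
    import cv2  # imported in the worker so each process gets its own handle
    cap = cv2.VideoCapture(path)
    results = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            results.append((idx, analyze_frame(frame)))
    cap.release()
    return results

def analyze_video(path, n_workers=4):
    import cv2
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    jobs = [(path, c) for c in split_indices(total, n_workers)]
    with Pool(n_workers) as pool:
        per_chunk = pool.map(analyze_chunk, jobs)
    return [r for chunk in per_chunk for r in chunk]
```

Note that seeking with `CAP_PROP_POS_FRAMES` can be slow on some codecs; since each chunk is contiguous, reading sequentially and skipping frames with a counter is often faster.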
1 Answer
Do you already have the Python scripts for the "analysis" ready for deployment?
If yes,
then spin up a powerful cloud instance on GCP or AWS. Or, to do it even smarter, dockerize your application and use something like AWS Batch or Cloud Run. It's cheap, and you pay only for what you use. We actually used that approach for some internal analysis in our team and found it pretty useful. Ours was also in Python, though it was more for objective quality analysis and the like.
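The "dockerize and run on AWS Batch / Cloud Run" route boils down to wrapping your script in an image along these lines (a minimal sketch; the file names are placeholders, not from the original answer):

```dockerfile
# Minimal image for a batch analysis job; analyze.py is a placeholder name.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY analyze.py .
# Each Batch / Cloud Run job then runs the script on one video,
# passed in via command-line arguments or environment variables.
ENTRYPOINT ["python", "analyze.py"]
```

With one container per video (or per chunk of videos), the 100 recordings can be processed in parallel, and you are billed only for the job runtime.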
If no,
then look for APIs that you can plug into your workflow directly to do the facial analysis. Something like this.
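One concrete example of the managed-API route (my own assumption; the answer's link target is unknown) is AWS Rekognition's `detect_faces` call via `boto3`, which returns per-face emotion scores. The parsing helper below is plain Python and only assumes Rekognition's documented response shape:

```python
# Sketch: send a JPEG frame to AWS Rekognition and pick the
# highest-confidence emotion for each detected face.

def top_emotion(face_detail):
    """Return (type, confidence) of the highest-confidence emotion entry."""
    best = max(face_detail["Emotions"], key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

def detect_expressions(jpeg_bytes, region="us-east-1"):
    import boto3  # deferred import: the helper above works without the SDK
    client = boto3.client("rekognition", region_name=region)
    resp = client.detect_faces(Image={"Bytes": jpeg_bytes},
                               Attributes=["ALL"])
    return [top_emotion(face) for face in resp["FaceDetails"]]
```

At 100 hours of video, per-image API pricing adds up quickly even at one frame in five, so it's worth estimating the call count before committing to this route.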

Adithyan Ilangovan