I have a dataset consisting of 260,000 images extracted from several videos. I want to extract features from these images and use them for frame retrieval. I used VGG16 (pretrained on ImageNet) as implemented in the Keras library, with 'avg' pooling on the last convolutional layer. VGG16 gives me a 512-dimensional feature vector for each image. The only thing that bothers me is that this scenario is too time-consuming: for my dataset, it took about a day and 6 hours, which is too much.
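For reference, a minimal sketch of this setup, assuming standalone Keras with a TensorFlow backend (the file name is just a placeholder):

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# include_top=False drops the fully connected classifier; pooling='avg'
# global-average-pools the last conv block into one 512-d vector per image.
model = VGG16(weights='imagenet', include_top=False, pooling='avg')

img = image.load_img('frame_00001.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = model.predict(x)
print(features.shape)  # (1, 512)
```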
Is this elapsed time normal?
Because of this low performance, I switched from VGG16 to DenseNet121, which is also implemented in Keras. With this model, it has (so far) taken a day and 18 hours to extract features from 33% of my images (about 86,000).
I ask again: Is this elapsed time normal?
Is there any way to extract features faster, even without using the models already implemented in Keras?
If you need more clarification, just ask. Thank you!
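For context, a sketch of one common speed-up on either CPU or GPU: running predict() on batches of frames instead of one image at a time. The paths list and batch size of 64 below are assumptions, not my exact pipeline:

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

model = VGG16(weights='imagenet', include_top=False, pooling='avg')

def extract_features(paths, batch_size=64):
    """Run frames through VGG16 in batches; returns an array of shape (len(paths), 512)."""
    chunks = []
    for i in range(0, len(paths), batch_size):
        # Load and stack one batch of frames, then preprocess and predict.
        batch = [image.img_to_array(image.load_img(p, target_size=(224, 224)))
                 for p in paths[i:i + batch_size]]
        chunks.append(model.predict(preprocess_input(np.stack(batch))))
    return np.vstack(chunks)
```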
- It depends on your machine. Are you working on a GPU or a CPU? Working on a GPU can decrease your execution time a lot, because of its capability to do computations in parallel – Nikaido Sep 14 '19 at 14:53
- You mean that by working with a CPU, we don't have parallel computation? What is the duration of extracting features from an image on a GPU or a CPU? For VGG16, for example! – Shahroozevsky Sep 14 '19 at 14:56
- No. Depending on what CPU you have, you can obtain parallel computation, but a GPU is different: it has many more cores on which you can do parallel computation. That's why deep learning is usually used in combination with GPU computing – Nikaido Sep 14 '19 at 14:57
- Oh my, oh my!! Can you tell me how I can use Keras on a GPU? – Shahroozevsky Sep 14 '19 at 15:00
- https://stackoverflow.com/questions/45662253/can-i-run-keras-model-on-gpu – Nikaido Sep 14 '19 at 15:01
- Appreciate your feedback. Thank you! – Shahroozevsky Sep 14 '19 at 15:02
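Following up on the GPU discussion in the comments above: a quick way to check whether the TensorFlow backend actually sees a GPU (if only CPU devices are listed, Keras is running on the CPU):

```python
# Lists the devices TensorFlow can use; a GPU shows up with device_type "GPU".
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```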