I managed to install #DeepDream on my server.
I have a dual-core CPU and 2 GB of RAM, but it takes about 1 minute to process a 100 KB image.
Any advice?
Do you run it in a virtual machine on Windows or OS X? If so, it's probably not going to get any faster: inside a virtual machine (I'm using Docker) you usually can't use CUDA to render the images. I have the same problem, and I'm going to try installing Ubuntu and then the NVIDIA drivers for CUDA. At the moment I'm rendering 1080p images of around 300 KB each, and it takes 15 minutes per image on an Intel Core i7 with 8 GB of RAM.
Unless you can move to a better workstation or get a GPU, you'll have to make do with resizing the image:

import numpy as np
import PIL.Image

# Downscale to half size before feeding the image to DeepDream
img = PIL.Image.open('sky1024px.jpg')
img = np.float32(img.resize([int(0.5 * s) for s in img.size]))
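To see why resizing helps so much: halving each dimension quarters the number of pixels the network has to process, so per-image time should drop by roughly that factor. A quick sketch of the arithmetic, using a hypothetical 1024x768 image:

```python
# Halving each side of a 1024x768 image quarters the pixel count
w, h = 1024, 768
small_w, small_h = int(0.5 * w), int(0.5 * h)
ratio = (w * h) // (small_w * small_h)
print(ratio)  # -> 4
```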
Taking 1 minute to process a 100 KB image is a reasonable turnaround time for #deepdream; these renders simply have a long baking time. Experimental research software often runs slowly, hungry for a future of faster computers. That said, a couple of ways to speed up your setup come to mind.
Thread! Increase the thread count. Here's one discussion of enabling multi-threading in Caffe: "How to enable multithreading with Caffe?"
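One caveat on the threading tip: Caffe itself has no thread-count flag. When it is built against OpenBLAS or MKL, the BLAS thread pool is sized via environment variables set before launching your script. A minimal sketch (the script name below is a placeholder for whatever you run):

```shell
# Size the BLAS thread pool before starting DeepDream.
# These variables are read by OpenBLAS and OpenMP-based builds of Caffe.
export OPENBLAS_NUM_THREADS=2
export OMP_NUM_THREADS=2
# python deepdream.py   # placeholder: launch your own script here
echo "$OMP_NUM_THREADS"
```

With only two cores, setting the count higher than 2 won't help and can even slow things down through oversubscription.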
GPU! Install CUDA and switch from CPU rendering to GPU rendering. If your server doesn't have a suitable GPU, try a GPU instance on Amazon EC2: https://github.com/BVLC/caffe/wiki/Install-Caffe-on-EC2-from-scratch-(Ubuntu,-CUDA-7,-cuDNN)
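Once CUDA and a GPU build of Caffe are in place, switching the render mode is a two-line change in pycaffe. A sketch, guarded so it degrades to CPU mode when no Caffe install is available:

```python
# Sketch: switch pycaffe from CPU to GPU rendering.
# Assumes a CUDA-enabled Caffe build; falls back to CPU otherwise.
try:
    import caffe
    caffe.set_mode_gpu()
    caffe.set_device(0)  # select the first GPU
    mode = 'gpu'
except ImportError:
    mode = 'cpu'  # no Caffe available; DeepDream would run on CPU
print(mode)
```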
As a rule of thumb, deep learning is hard on both compute and memory. A dual-core machine with 2 GB of RAM is just not a good fit for it. Keep in mind that many of the people who pioneered this field did much of their research on GTX Titan cards, because CPU computation, even on Xeon servers, is prohibitively slow when training deep networks.