
I am working on an image processing project using an FPGA, but I have run into issues with importing the original image. What would be the best way to convert a compressed image file (.png or .jpeg) into a 3D RGB array in VHDL? I am planning on running this on a Zynq board, so either a software or hardware solution would work.

srohrer32
1 Answer


The best approach depends on your constraints: do you want minimal effort or highest performance?

If you are running Linux on your Zynq and don't care too much about performance, you can simply use the Python Imaging Library (PIL, now maintained as Pillow) to open a file and extract an array of RGB pixel values. I have a testbench that does this for the OpenCores JPEG encoder here: https://github.com/chiggs/oc_jpegencode/blob/master/tb/test_jpeg_top.py
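As a minimal sketch of that approach (assuming Pillow is installed; the helper name `image_to_rgb_array` and the file path are just placeholders, not from my testbench):

```python
# Sketch: decompress a PNG/JPEG with Pillow and expose it as a
# height x width x 3 structure of RGB values.
from PIL import Image

def image_to_rgb_array(path_or_file):
    """Open a PNG/JPEG and return a [row][col][channel] nested list of RGB values."""
    img = Image.open(path_or_file).convert("RGB")  # decompress, force 3-channel RGB
    w, h = img.size
    pixels = list(img.getdata())                   # flat list of (R, G, B) tuples
    # Reshape the flat pixel list into rows to get a 3D array layout
    return [[list(pixels[y * w + x]) for x in range(w)] for y in range(h)]
```

From there you can write the values out in whatever form your flow needs, e.g. a text file that a VHDL testbench reads back with `std.textio`.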

If you don't want to run Python then you'll need to do a little more work. See answers to this question regarding how to open PNG/JPG from C.

Obviously you'll also need to consider how to transfer the array to and from the programmable logic (PL) fabric. If you have a DMA controller in your PL, that would be the best mechanism: extract the RGB array into a region of memory, then tell the DMA controller to pull the array into a block RAM, which your VHDL can then process.

Chiggs