I was looking at the examples in Logi-apps and saw the image processing example. Can it work with the Raspberry Pi camera? Reading the code, it wasn't clear to me. Thanks
The image processing app requires a camera connected directly to the FPGA. We have used the LOGI Cam for this app, which can be purchased directly from us for $24. It uses an OV7670 camera module with a parallel interface. Unfortunately, E14 has chosen not to carry the LOGI Cam because it does not have FCC/CE certification. We are working on getting another distributor to carry the Cam in the future. You can email us directly at support@valentfx.com if you would like to purchase one.
The RPi camera uses a CSI interface with LVDS, which requires dedicated hardware on the RPi to process. Technically you could send frames from the RPi to the FPGA for processing, but it would be very slow and would defeat the purpose of interfacing directly with the camera.
@mjones thanks for taking the time to respond. The reason I ask is this: I'm working on a personal project to build racing robots using machine vision/computer vision. I tried OpenCV with the Raspberry Pi and the Pi camera and found it too slow. I only need it to process 24 frames per second with 640 x 480 color images. Here's a blog entry that describes my experiments.
Basically, the track has a horizontal black line to make it easy for computer vision to scan. The results would be fed to another program that handles navigation and motor control. Dumb question: what is the theoretical max transfer rate between the RPi and the LOGI Pi? Each image is about 240 KB.
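A minimal OpenCV sketch of the kind of scan described above (the camera index and capture settings are assumptions, not details from the project):

```python
# Minimal sketch: grab a 640x480 frame and find the image row where the
# horizontal black track line is darkest. Camera index 0 is an assumption.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    row_sums = gray.sum(axis=1)          # total brightness of each row
    line_row = int(np.argmin(row_sums))  # darkest row ~ the black line
    print("black line appears near row", line_row)

cap.release()
```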
@woolfel Sounds like a great job for the FPGA. It looks like you forgot to post your blog link.
@jpiat specializes in image processing and has done a lot of work with the LOGI boards. He is probably the best person to take on this question, as there are a lot of variables affecting the bandwidth. The main factors are whether you are using the Wishbone interface and the SPI clock rate you are running. I believe it is pretty easy to achieve 3 MB/s running Wishbone with SPI at 32 MHz (jpiat can confirm). I think the more recent drivers support higher stable clock rates, which @jpiat can comment on.
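A rough sanity check on those numbers (assuming the ~3 MB/s figure above and the ~240 KB frame size quoted earlier) suggests that full-frame transfer alone tops out around 12-13 fps, short of the 24 fps target:

```python
# Back-of-the-envelope math, assuming ~3 MB/s over Wishbone/SPI and
# ~240 KB per 640x480 frame, as quoted in the thread.
spi_throughput = 3 * 1024 * 1024   # bytes per second
frame_size = 240 * 1024            # bytes per frame

seconds_per_frame = frame_size / float(spi_throughput)
print("transfer time per frame: %.1f ms" % (seconds_per_frame * 1000))       # ~78 ms
print("max frame rate, transfer alone: %.1f fps" % (1 / seconds_per_frame))  # ~12.8 fps
```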
The typical way to use the FPGA to process frames is to connect the camera directly to the FPGA and transfer only high-level features to the processor, rather than whole images. For example, you would run corner detection on the FPGA and transfer only the corner locations to the processor to minimize the bandwidth. In that case the FPGA can easily process VGA frames at 60 or more frames per second, and transferring only corner coordinates takes very little bandwidth.
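To put rough numbers on that, here is an illustrative comparison (the corner count and encoding are made-up example values, not figures from the LOGI image processing app):

```python
# Illustrative payload comparison; the 500-corner count and 4-byte encoding
# are hypothetical example values, not figures from the LOGI app.
corners_per_frame = 500
bytes_per_corner = 4                                    # 16-bit x + 16-bit y
feature_bytes = corners_per_frame * bytes_per_corner    # 2,000 bytes per frame
frame_bytes = 240 * 1024                                # ~240 KB full frame

print("feature payload per frame: %d bytes" % feature_bytes)
print("roughly %dx less data than a full frame" % (frame_bytes // feature_bytes))  # ~122x
```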
If you want to capture images with the Raspberry Pi camera, then send the image to the FPGA and get results back, the SPI transfer speed becomes a limiting factor and you won't be able to reach a very high FPS (SPI at 32 MHz can get close to 3 MB/s, but I have seen slower results with the Pi 2 and the latest Raspbian). I am working on getting SPI to run at high rates these days, and it seems that to get transfer rates close to the theoretical rate, you need to write a custom driver.
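For anyone who wants to check what the stock userspace driver actually delivers, a quick measurement with py-spidev might look like this (the bus/chip-select numbers and the chunk size are assumptions for the LOGI Pi wiring, not verified settings):

```python
# Rough throughput check with the stock userspace driver (py-spidev), to see
# how far it is from the theoretical rate at a 32 MHz clock (~4 MB/s).
import time
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # LOGI Pi assumed on SPI bus 0, chip select 0
spi.max_speed_hz = 32000000    # request a 32 MHz SPI clock

chunk = [0] * 4096             # stay within the default spidev buffer size
total_bytes = 1024 * 1024      # push 1 MB through and time it

start = time.time()
for _ in range(total_bytes // len(chunk)):
    spi.xfer2(chunk)
elapsed = time.time() - start

print("effective throughput: %.2f MB/s" % (total_bytes / elapsed / 1e6))
spi.close()
```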
I have done experiments with OpenCV on the Pi 2 and got fairly good results with OpenCV compiled with NEON extensions.
Thanks for taking the time to respond and for providing interesting details. I'll have to read it again and digest it slowly.
In terms of cost, the LOGI Cam is about the same, but the way it connects isn't ideal. I would probably have to make some custom cables so the camera can be mounted in the right place on a racing robot chassis. Has anyone written alternate SPI drivers that work with the LOGI Pi?
I'm doing the project for fun, and hopefully to teach high school kids more advanced CS theory. All of it will be open source so that people can freely share.
Comments
Here's the blog entry describing my experiments: http://electronsfree.blogspot.com/2015/01/experiments-with-computer-vision.html

Thanks,
Peter
The images are color. Here is a sample image from my GitHub: https://github.com/woolfel/BrickPiJava/blob/master/projects/JavaScanner/samples/image1-small.jpg

Thanks,
Peter