I am about to enter my last year of Electronic Engineering, and for our final year projects we submitted proposals. Our first project was a Human Following Robot using GPS & XBee: one or more units would be mounted on the robot and a single unit would be carried in the user's pocket, and by subtraction of frames we would lock onto the user, know his position, and follow him. That proposal was rejected, and our teachers have now asked us to research a Human Following Robot with image processing using FPGAs. I have searched and found that the LOGi FPGA is available for use with either the Raspberry Pi or the BeagleBone. So my main questions are: is this project feasible, and can a Kinect or other cameras be interfaced with the LOGi? I have only seen the LOGi cams with model numbers OV---; can other cameras also be interfaced with it? Also, please suggest some good projects related to robotics. Thanks
Comments
Do you have any idea of the algorithm/method you want to use for this purpose? Human following requires human detection (usually performed through classification or by using special tags) and target tracking. Running this task on the logi-bone/pi requires understanding how you want to partition the processing between the FPGA and the processor. The FPGA could handle the pixel classification and the processor could be in charge of the tracking part. Image segmentation/classification methods such as boosting could be particularly well suited to the FPGA.
Your best bet for getting something to work is to start prototyping on a PC with OpenCV and then decide which parts of your algorithm you want to move to the FPGA.
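For example, a first PC prototype could use OpenCV's built-in HOG pedestrian detector as the detection stage (this is just one possible detector, not necessarily what you will end up putting on the FPGA; the video file name below is only a placeholder):

```python
# Minimal prototyping sketch: detect people in each frame with OpenCV's
# default HOG pedestrian detector and draw their bounding boxes.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("input.mp4")  # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each rect is (x, y, w, h) around a detected person
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```

Once this works on recorded video, you can profile it and see which stage (the per-pixel/per-window classification) is the candidate to move into the FPGA fabric.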
Regards,
Jonathan Piat
Comparisons of FPGAs vs GPUs are often made, but they make little sense: while FPGA performance comes at a high development cost, the resulting architecture can be very close to optimal in every respect (area, power consumption, latency, throughput, ...), whereas a GPU is a general-purpose parallel machine and will most of the time be over-sized for a given computation, or will limit application performance because of its memory sub-system (not very good at random access) and cache management (an FPGA can be used to design application-specific cache management).
FPGAs are very good at pixel-level operations (frame-level operations depend only on the ability to store frames, hence on memory), so they can perform all of the image-oriented processing for the human detection and then send whatever is extracted from the image (classification results, marker detections, ...) to the CPU, which can do the tracking part (a Kalman filter is a good candidate for that task, as in the sketch below). Sending the frame to a GPU for computation will result in high latency, preventing the robot from being agile (fast and reactive).
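As an illustration of the CPU-side tracking, a constant-velocity Kalman filter over the (x, y) centre of the detection could look like this (the state layout and noise values are illustrative assumptions, not tuned numbers):

```python
# Hedged sketch: track the detected target centre with a constant-velocity
# Kalman filter. State is [x, y, vx, vy]; measurement is [x, y].
import numpy as np
import cv2

kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
# Assumed noise levels; tune them for the real sensor/detector.
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(measurement_xy):
    """Predict the target position, then correct with a new (x, y) detection."""
    prediction = kf.predict()  # predicted [x, y, vx, vy]
    kf.correct(np.array(measurement_xy, dtype=np.float32).reshape(2, 1))
    return prediction[:2].ravel()  # predicted target centre
```

On the real system, the FPGA would only stream the extracted detection coordinates (a few bytes per frame) to the processor, which keeps the CPU load and the latency low.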