LOGI - Image Processing - Project

Project Overview

Applications of image processing technology are becoming more and more prevalent with the availability of low-cost, high-performance electronic systems.  Adding image processing algorithms to such systems enables applications such as object tracking and detection, autonomous movement, ambient change recognition, etc.  Image processing is very computationally intensive because of the amount of data that must be processed and the speed at which it must be processed.  FPGA technology handles both of these requirements very well, which makes FPGAs, especially in conjunction with high-performance embedded systems, a very powerful tool for implementing a vast array of applications.

This project demonstrates the use of the LOGI-Cam camera module with the LOGI platforms. In this project the camera output is converted to a parallel YUV stream and passed through a series of image processing filters, described below.

Gaussian filter

This filter applies a blur to the image (sharp contrast changes are smoothed). In this implementation a 3x3 kernel is used.

See the linked Wikipedia article referenced for specifics on the Gaussian filter. [1]
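
As a point of reference for what the FPGA computes in hardware, here is a minimal Python sketch of a 3x3 Gaussian blur. The 3x3 kernel size and grayscale input match this demo; the exact kernel weights used in the FPGA implementation are an assumption (a common binomial approximation is shown).

    import numpy as np

    # 3x3 Gaussian kernel (binomial approximation); the exact weights used
    # in the FPGA implementation are assumed, not confirmed by this page.
    KERNEL = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=np.float32) / 16.0

    def gaussian_blur_3x3(img):
        """Convolve a grayscale image (2D uint8 array) with the 3x3 kernel."""
        padded = np.pad(img.astype(np.float32), 1, mode="edge")
        out = np.zeros_like(img, dtype=np.float32)
        for dy in range(3):
            for dx in range(3):
                out += KERNEL[dy, dx] * padded[dy:dy + img.shape[0],
                                               dx:dx + img.shape[1]]
        return np.clip(out, 0, 255).astype(np.uint8)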

Sobel Filter

The Sobel operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The Sobel operator is based on convolving the image with a small, separable, and integer valued filter in horizontal and vertical direction and is therefore relatively inexpensive in terms of computations. On the other hand, the gradient approximation that it produces is relatively crude, in particular for high frequency variations in the image. [2]

This filter computes the absolute value of the 2D gradient of the image. It outputs high values (shown as white) where the brightness changes quickly and low values (shown as black) where there is little change between neighboring pixels.

See the linked Wikipedia article referenced for specifics on the Sobel filter.
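
For reference, here is a minimal Python sketch of the Sobel gradient-magnitude computation on a grayscale image. The |Gx| + |Gy| approximation of the magnitude is an assumption; the FPGA implementation may compute the magnitude differently.

    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=np.float32)
    SOBEL_Y = SOBEL_X.T

    def convolve3x3(img, kernel):
        """Apply a 3x3 kernel to a grayscale image with edge replication."""
        padded = np.pad(img.astype(np.float32), 1, mode="edge")
        out = np.zeros(img.shape, dtype=np.float32)
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                               dx:dx + img.shape[1]]
        return out

    def sobel_magnitude(img):
        gx = convolve3x3(img, SOBEL_X)
        gy = convolve3x3(img, SOBEL_Y)
        # |gx| + |gy| is a common cheap approximation of sqrt(gx^2 + gy^2)
        mag = np.abs(gx) + np.abs(gy)
        return np.clip(mag, 0, 255).astype(np.uint8)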

Harris Corner Detector

This operator computes the Harris response over all of the pixels in the image. The Harris operator is used to detect corners in the image. Corners are very useful for computer vision because they can be localized stably in both directions (unlike edges).
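
For reference, here is a minimal Python sketch of the Harris response using the standard formulation R = det(M) - k * trace(M)^2 over a small window. The gradient operator, window size, and k value shown here are assumptions; the FPGA implementation may differ.

    import numpy as np

    def harris_response(img, k=0.04, win=3):
        """Compute the Harris corner response for a grayscale image (2D array)."""
        img = img.astype(np.float32)
        # Image gradients via simple central differences (the FPGA may use Sobel).
        iy, ix = np.gradient(img)
        ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

        # Sum the products over a win x win neighbourhood (box window assumed).
        def box_sum(a):
            padded = np.pad(a, win // 2, mode="edge")
            out = np.zeros_like(a)
            for dy in range(win):
                for dx in range(win):
                    out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
            return out

        sxx, syy, sxy = box_sum(ixx), box_sum(iyy), box_sum(ixy)
        # Harris response: det(M) - k * trace(M)^2 for the 2x2 structure matrix M.
        det = sxx * syy - sxy * sxy
        trace = sxx + syy
        return det - k * trace * trace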

See the linked Wikipedia article referenced for specifics on the Harris corner detector. [3]

The output of the filters is routed through a video switch, controlled by the LOGI board switches, which selects which filter output to transfer. The output of this switch is then fed into a FIFO that the Raspberry Pi/BeagleBone can read.

HW/SW details regarding the project/component

Figure: FPGA Architecture Block Diagram of LOGI Machine Vision Demo

Figure: LOGI-Cam plugged into PMOD Port 3 and 4

Functional Overview of Image Processing Demo used in the LOGI App

The LOGI image processing demo app is intended to be a plug-and-play demo of machine vision using the LOGI boards.  Machine vision applications can be very data-processing intensive, which makes them a perfect candidate for an FPGA.  The demo captures image data, processes it on the FPGA, and streams the result using mjpg-streamer to a networked remote PC.  The user can view the video stream by simply opening a web browser and entering the IP address of the connected Bone/Pi followed by port :8080.

Running the Image Processing Demo - Step by Step Guide

Getting logi-apps

The demo resides within logi-apps, so the easiest way to run it is to download and run it using the instructions found on the logi-apps wiki page.  In short, you can download all of the logi-apps by running "git clone -b logipi https://github.com/fpga-logi/logi-apps.git".  You can then navigate to the image processing demo directory and run "sudo ./make_demo.sh" to load the bit file and display basic instructions for the demo.

Step by Step

1) Plug the LOGI-Cam into your Raspberry Pi or BeagleBone setup.  If using the Raspberry Pi, ensure that you have plugged the module into PMOD ports 3 and 4 as shown in the image above.

2) Ensure that you have logi-apps installed.  logi-apps typically resides in the user's home directory unless it was custom installed: "cd ~/logi-apps/imgproc_app/"

3) Run the make_demo script: "sudo ./make_demo.sh".  Enter the camera module type that you are using with the LOGI-Cam (7670 is the default).  Note that there are up to 6 types of Omnivision cameras that are compatible with the LOGI-Cam.  The first time the script is run, it will check that the mjpg-streamer libraries are up to date and properly installed.  The script will then load the bitstream and begin the demo.  You will see the LED blinking, indicating that the FPGA was loaded properly.

4) In order to view the video you will need to connect to the mjpg-streamer host running on the RPi or BBB.  You can connect by opening a web browser, entering the IP address of your connected RPi or BBB followed by the port and page, such as "http://IPAddress:8080/Stream.html", and pressing enter.  This will connect to the mjpg-streamer HTTP service running on the RPi or BBB, and you will be able to view the video stream of the processed images from the FPGA.
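
If you would rather grab the frames programmatically on the remote PC (for example to do further processing), the same stream can be opened with OpenCV.  This is a sketch assuming mjpg-streamer's default HTTP endpoint "?action=stream"; replace the address with the IP of your RPi or BBB.

    import cv2

    # mjpg-streamer's HTTP output plugin serves the raw stream at ?action=stream.
    url = "http://192.168.1.10:8080/?action=stream"  # replace with your board's IP

    cap = cv2.VideoCapture(url)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("LOGI image processing demo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()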

You should now see a live image stream from the mjpg-streamer service, as shown below.

Figure: mjpg-streamer streaming LOGI-processed images

5) You can now press PB1 (RPi) or use the onboard DIP switches (BBB) to toggle through the different image processing modes.

The FPGA image processing modes are normal, Gaussian, Sobel, and Harris.  See the description of each of these filters in the project overview section.  You can also refer to the "FPGA Architecture Block Diagram of LOGI Machine Vision Demo" figure, which shows how the architecture is structured within the FPGA.
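
Conceptually, the video switch behaves like the following dispatch, where the mode selected by PB1 or the DIP switches picks which filter output is forwarded to the FIFO.  This is only a software analogue of the hardware multiplexer; the filter functions are the Python sketches from the overview section above.

    # Hypothetical software analogue of the FPGA video switch: the pushbutton /
    # DIP switch value selects which filter output is forwarded to the FIFO.
    # gaussian_blur_3x3, sobel_magnitude and harris_response are the sketches
    # defined earlier on this page.
    FILTERS = {
        0: lambda frame: frame,          # normal (pass-through)
        1: gaussian_blur_3x3,            # Gaussian blur
        2: sobel_magnitude,              # Sobel gradient magnitude
        3: harris_response,              # Harris corner response
    }

    def process_frame(frame, mode):
        return FILTERS[mode](frame)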

Note that in order to get the best visual results with the Sobel and Harris filters, ensure that the camera is well focused.  These filters depend on differences between neighboring pixels, and those differences are reduced when the camera is out of focus.

Note that we are currently using grayscale and the viewable image is relatively low resolution.  However, the FPGA processes the images at full VGA resolution and downscales them in order to simplify sending them through mjpg-streamer.  The images are also buffered within the BRAM of the FPGA to simplify the implementation, so there is no need for external SDRAM.  The current implementation can be modified to support color and full-frame image processing as needed.

Where To Go From Here

Quick implementation overview using the LOGI-Pi - Youtube

Full walkthrough demoing the Image Processing Demo - Youtube

Running the Image Processing Demo using the Logi-Apps (download and run)

Blog post on running the demo on the LOGI-Mark1 and the BeagleBone

Sobel filter on LCD - Youtube

References

[1] http://en.wikipedia.org/wiki/Gaussian_blur

[2] http://en.wikipedia.org/wiki/Sobel_filter

[3] http://en.wikipedia.org/wiki/Corner_detection

    • Github

    • Videos

    • Logi-apps

    • Blog

    • Other referenced resources
