Interactive vision programming

Here is a very interesting project by MIT: one can interactively program a vision pipeline using a web browser. We will try to run it on the LogiBone. http://vblocks.media.mit.edu/

It should also be possible to integrate it with SimpleCV, or with OpenCV.
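To give a rough idea of the kind of pipeline such an integration would prototype, here is a minimal OpenCV sketch in Python; it is only an illustration, not part of vblocks or the LOGI tools, and the camera index and filter parameters are placeholders.

```python
# Minimal illustrative vision pipeline: color conversion -> blur -> edges.
# Only a sketch of the kind of processing chain discussed here.
import cv2

cap = cv2.VideoCapture(0)  # placeholder camera index

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale stage
    blur = cv2.GaussianBlur(gray, (5, 5), 0)         # smoothing stage
    edges = cv2.Canny(blur, 50, 150)                 # edge-detection stage
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```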

Comments

  • Hi @SquareOne  Very interesting.  It is not clear to me how any of the machine vision algorithms would run on anything other than a CPU, though. How would this run on an FPGA?
  • Hi,

    not sure this can easily be interfaced with the FPGA, as there is a good chance it uses WebGL or OpenGL and takes advantage of the browser interpreter (which cannot easily be offloaded to the FPGA). SimpleCV seems like a better candidate since it is based on Python, but it would still require quite some work. The main problem when using the FPGA for vision processing is that the pixel-processing flow in the FPGA is set at compile time (it can be configured a bit through switches), because you cannot easily generate a bitstream at run time. What can be done is to design the pixel-processing pipeline and then decide what to execute on the FPGA and what to execute on the processor. There is no tool for this for now, but it is one direction in which we want to evolve our architecture editor (skeleton).

    A good workflow could be to use vblocks to prototype the application, then export a representation of the processing and map different portions of it to code and to hardware (a rough sketch of this idea follows this comment). From what I see, the vblocks site only allows exporting a URL that represents the image-processing pipeline, and I am not sure whether other exporters are available...

    Regards,

    Jonathan Piat
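
To make the partitioning idea above a little more concrete, here is a minimal Python sketch of describing a prototyped pipeline as a list of stages and splitting it into an FPGA part and a processor part. All names (`Stage`, `split_pipeline`, the stage names) are hypothetical and do not come from vblocks or the LOGI tools; this is only an illustration of the workflow being discussed.

```python
# Hypothetical sketch: tag each pipeline stage for FPGA or CPU execution,
# then split the pipeline so the FPGA stages can go into the (compile-time)
# bitstream and the CPU stages can run as software on the processor.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Stage:
    name: str
    target: str            # "fpga" or "cpu"
    cpu_impl: Callable     # software fallback / reference implementation


def split_pipeline(stages: List[Stage]) -> Tuple[List[Stage], List[Stage]]:
    """Separate stages destined for hardware from those run on the CPU."""
    fpga = [s for s in stages if s.target == "fpga"]
    cpu = [s for s in stages if s.target == "cpu"]
    return fpga, cpu


# Example pipeline description (stage names are placeholders).
pipeline = [
    Stage("debayer", "fpga", cpu_impl=lambda img: img),
    Stage("gaussian_blur", "fpga", cpu_impl=lambda img: img),
    Stage("blob_tracking", "cpu", cpu_impl=lambda img: img),
]

fpga_stages, cpu_stages = split_pipeline(pipeline)
print("to hardware:", [s.name for s in fpga_stages])
print("to software:", [s.name for s in cpu_stages])
```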