
jpiat

About

Username: jpiat
Joined:
Visits: 507
Last Active:
Roles: Member, Administrator, Moderator
Points: 60
Badges: 0
  • Configuring Logi-Bone using Non-OS environment

    Let me know if you encounter any problems with this; I'd be happy to help debug the code.
    mhada
  • Problem booting Beaglebone Black with LogiBone installed

    Great! The messages you see are the ones from the new loader, and you can see that your board was identified as a LOGIBONE_R1.5.

    Don't hesitate to ask on this forum if you encounter any problems; we will be happy to help.
    woody
  • Hidden device tree location

    Hi,

    To get auto-loading of the device-tree overlay for a given BeagleBone cape, the device tree must reside in the kernel itself and not in /lib/firmware. This is why the device-tree configuration does not appear in /lib/firmware. To change the device-tree configuration, you must recompile the kernel with your patched device-tree configuration and then update the kernel image (uImage) on your distribution's filesystem.

    One way around this is to disable auto-loading of the cape by clearing its identification EEPROM and then loading your own device-tree configuration (dtb) manually. To do this, you can take the init_eeprom script from logi-tools (https://github.com/fpga-logi/logi-tools/blob/master/init_logibone/init_eeprom.sh) and replace:

    cat data.eeprom > /sys/bus/i2c/drivers/at24/1-0054/eeprom

    with

    dd if=/dev/zero of=/sys/bus/i2c/drivers/at24/1-0054/eeprom bs=1 count=256


    This should zero the EEPROM and allow you to load your device-tree overlay manually (a short sketch of the full manual flow is included below). Once you have a working overlay, I advise recompiling the kernel with it and restoring the EEPROM content.

    Regards,

    Jonathan Piat
    cybin
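
    As a rough illustration of the manual flow described above, here is a minimal Python sketch. It assumes the same at24 EEPROM path as the init_eeprom script, a 3.8-series BeagleBone kernel exposing the bone_capemgr slots interface, and an overlay you have already compiled to /lib/firmware; the LOGIBONE part number is only an example. Run it as root, and reboot between the two steps.

    import glob
    import sys

    EEPROM = "/sys/bus/i2c/drivers/at24/1-0054/eeprom"  # cape identification EEPROM (path assumed)
    OVERLAY = "LOGIBONE"                                 # part number of your own overlay, example only

    def wipe_eeprom():
        # Zero the first 256 bytes so the kernel no longer identifies the cape
        # and therefore stops auto-loading the built-in overlay.
        with open(EEPROM, "wb") as f:
            f.write(b"\x00" * 256)

    def load_overlay():
        # Ask the cape manager to apply /lib/firmware/<OVERLAY>-00A0.dtbo by hand.
        slots = glob.glob("/sys/devices/bone_capemgr.*/slots")[0]
        with open(slots, "w") as f:
            f.write(OVERLAY)

    if __name__ == "__main__":
        if sys.argv[1:] == ["wipe"]:
            wipe_eeprom()      # step 1: run once, then reboot
        elif sys.argv[1:] == ["load"]:
            load_overlay()     # step 2: run after the reboot
        else:
            print("usage: overlay_helper.py wipe|load")
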
  • Human Following Robot with implementation of FPGA`s

    You are right in saying that a GPU has more horsepower (and easier-to-use development tools) than an FPGA, but GPUs achieve a worse performance-per-watt ratio than what an FPGA can reach.

    FPGA vs. GPU comparisons are often made, but they make little sense. While FPGA performance comes at a high development cost, the resulting architecture can be very close to the optimal case in every respect (area, power consumption, latency, throughput ...). A GPU, on the other hand, is a general-purpose parallel machine: it will most of the time be over-sized for a given computation, or will limit application performance because of its memory sub-system (not very good at random access) and cache management (an FPGA can be used to design application-specific cache management).

    FPGAs are very good at pixel-level operations (frame-level operations only depend on the ability to store frames, hence on memory), so they can perform all the image-oriented processing for the human detection and then send whatever is extracted from the image (classification results, marker detections ...) to the CPU, which can handle the tracking part (a Kalman filter is a good candidate for that task). Sending the frame to a GPU for computation would result in high latency, preventing the robot from being agile (fast and reactive).
    hzk17
  • Human Following Robot with implementation of FPGA`s

    Hi,
    do you have any idea of the algorithm/method you want to use for this purpose? Human following requires human detection (usually performed through classification or by using special tags) and target tracking. Running this task on the logi-bone/pi requires understanding how you want to partition the processing between the FPGA and the processor. The FPGA could handle the pixel classification, and the processor can be in charge of the tracking part. Image segmentation/classification methods such as boosting could be particularly well suited to the FPGA.

    Your best bet for getting something working is to start prototyping on a PC with OpenCV and then decide which parts of your algorithm you want to move to the FPGA (a small sketch is included below as a starting point).

    Regards,

    Jonathan Piat
    hzk17
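
    To make the OpenCV prototyping suggestion a bit more concrete, here is a minimal Python sketch, assuming a webcam on index 0. The stock HOG people detector stands in for the detection stage (the part you would eventually move to the FPGA), and a constant-velocity Kalman filter tracks the detected person's centre (the part that would stay on the processor). It is only a starting point, not a complete tracker.

    import cv2
    import numpy as np

    # Detection stage: stock HOG + linear-SVM people detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    # Tracking stage: constant-velocity Kalman filter on the target centre (x, y).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    cap = cv2.VideoCapture(0)  # webcam index is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        prediction = kf.predict()  # predicted target centre for this frame
        rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(rects) > 0:
            # Crudely pick the largest detection as "the" human and feed its
            # centre to the Kalman filter as the measurement.
            x, y, w, h = [int(v) for v in max(rects, key=lambda r: r[2] * r[3])]
            kf.correct(np.array([[x + w / 2.0], [y + h / 2.0]], np.float32))
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cx, cy = int(prediction[0, 0]), int(prediction[1, 0])
        cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)  # tracked centre
        cv2.imshow("prototype", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()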