Goals


LiDAR not detecting objects in Gazebo

The problem here was that the sensor's ray plugin was set to the non-GPU version, which casts rays against the physics (collision) geometry, while the objects were only represented in the rendering (visual) geometry. Switching the sensor to the gpu_ray plugin, which casts rays against the rendered scene, fixed the issue.
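The change amounts to a small edit in the sensor description. A minimal sketch for Gazebo classic with the gazebo_ros plugins (the sensor and topic names here are assumptions, not the project's actual files):

```xml
<!-- Hypothetical excerpt: switch the laser from the CPU ray sensor
     (tests against physics collision geometry) to the GPU ray sensor
     (tests against rendered visual geometry). -->
<sensor type="gpu_ray" name="lidar">  <!-- was: type="ray" -->
  <plugin name="gazebo_ros_lidar" filename="libgazebo_ros_gpu_laser.so">
    <!-- was: filename="libgazebo_ros_laser.so" -->
    <topicName>/scan</topicName>
    <frameName>lidar_link</frameName>
  </plugin>
</sensor>
```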

Left camera image

LiDAR scan

LiDAR-Camera fusion

To combine the LiDAR's kinematic information with the camera's object classification, the first attempt was to compare bounding-box centroids. In the picture below, the red and blue dots represent the YOLO and LiDAR bounding-box centroids, respectively. This approach breaks down when objects overlap in the view: in the image, the truck's LiDAR centroid (top blue dot) is closer to the pedestrian's YOLO centroid (bottom red dot), which leads to a wrong match.
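The naive nearest-centroid matching can be sketched as follows (a minimal illustration, not the project's actual code; the (x_min, y_min, x_max, y_max) box format and the example coordinates are assumptions):

```python
import math

def centroid(box):
    """Center of an axis-aligned box given as (x_min, y_min, x_max, y_max)."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def match_by_centroid(lidar_boxes, yolo_boxes):
    """Assign each LiDAR box to the YOLO box with the nearest centroid."""
    matches = {}
    for i, lbox in enumerate(lidar_boxes):
        lc = centroid(lbox)
        matches[i] = min(range(len(yolo_boxes)),
                         key=lambda j: math.dist(lc, centroid(yolo_boxes[j])))
    return matches

# Overlap failure mode: the LiDAR cluster of a partially occluded truck has
# its centroid right on top of the pedestrian's YOLO centroid, so it gets
# matched to the pedestrian instead of the truck.
yolo = [(0, 0, 10, 10),   # truck, centroid (5, 5)
        (8, 4, 10, 10)]   # pedestrian, centroid (9, 7)
lidar = [(6, 4, 12, 10)]  # truck cluster, centroid (9, 7)
print(match_by_centroid(lidar, yolo))  # {0: 1} -> wrongly matched to pedestrian
```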

LiDAR obstacle detection

LiDAR obstacle detection

LiDAR clusters projected in the image

LiDAR clusters projected in the image

Centroid comparison

Centroid comparison

The solution to this problem was to use the Intersection over Union (IoU) metric, the ratio of the area of intersection of two bounding boxes to the area of their union. In the following image, the LiDAR and YOLO bounding boxes (orange) are matched, with the respective IoU shown.

LiDAR-YOLO bounding box matching with IoU
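The IoU of two axis-aligned boxes can be computed directly from their corner coordinates. A minimal sketch (the (x_min, y_min, x_max, y_max) box format is assumed):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes
    given as (x_min, y_min, x_max, y_max)."""
    # Corners of the intersection rectangle.
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    # Width/height clamped to zero when the boxes do not overlap.
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ~= 0.1428...
```

Matching each LiDAR box to the YOLO box with the highest IoU (instead of the nearest centroid) is what resolves the overlap ambiguity, since IoU rewards actual area agreement rather than center proximity.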