The idea behind using the Jetson Xavier AGX for inference was to take the GPU load off the PC so that the PC could run the subsequent tasks more smoothly. The first approach was to test the same model that was already working well on the PC, yolov7.torchscript. The result was slower inference, which defeated the purpose. Next, yolov8s and yolov8n were tested with TensorRT instead of PyTorch, since TensorRT is known to give the best inference performance on Jetson devices. Inference was faster, but the detections were poor. Finally, there was an attempt to convert the yolov7 model itself to TensorRT. The conversion succeeded, but there were problems loading the resulting engine in Python so that it could later be integrated into ROS. As this was costing too much time, the decision was made not to use the Jetson device.
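For reference, below is a minimal sketch of how a serialized TensorRT engine is typically deserialized in Python. This is not the exact code that was attempted; the file name yolov7.engine and the trtexec build step are assumptions for illustration:

```python
# Minimal sketch: loading a serialized TensorRT engine in Python.
# Assumes the engine was built beforehand, e.g. with
#   trtexec --onnx=yolov7.onnx --saveEngine=yolov7.engine
# The file name "yolov7.engine" is illustrative.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    """Deserialize a TensorRT engine from disk."""
    runtime = trt.Runtime(TRT_LOGGER)
    with open(path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())

engine = load_engine("yolov7.engine")
context = engine.create_execution_context()
# From here, input/output device buffers would be allocated
# (e.g. with PyCUDA) and inference run via context.execute_v2(...).
```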
To speed up the process and take some GPU load off the PC, some measures were taken:
The first result of the Occupancy prediction model is shown in the video below. In it, the probabilistic model runs in real time, with all processing, including stitching and inference, happening in the background.
[Video: Emissão de vídeo de 17-03-2024 19:47:00.webm]
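As an illustration of this background-processing setup (not the project's actual code; stitch() and detect() below are hypothetical stand-ins for the real stitcher and model), a daemon worker thread can consume camera frames, stitch them, and run inference while the main loop stays free:

```python
# Illustrative sketch of a background stitching + inference pipeline.
# stitch() and detect() are hypothetical placeholders for the
# project's actual stitcher and occupancy/detection model.
import queue
import threading

def stitch(frames):
    # Placeholder: combine the individual camera frames into one view.
    return frames[0]

def detect(image):
    # Placeholder: run the detection / occupancy model on the image.
    return []

frames_in = queue.Queue(maxsize=1)    # most recent set of camera frames
results_out = queue.Queue(maxsize=1)  # most recent predictions

def worker():
    # Runs in the background so the main loop is never blocked
    # by stitching or inference.
    while True:
        frames = frames_in.get()
        results_out.put(detect(stitch(frames)))

threading.Thread(target=worker, daemon=True).start()
```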
Here are a few things to consider (current state):
3D POV