Vision-Based Framework for Autonomous Driving Using Stereo Vision
Abstract
Autonomous navigation in structured farming environments such as open fields and vineyards requires precise depth perception, obstacle avoidance, and waypoint-based guidance. This work presents a vision-based architecture for autonomous operation built around a ZED 2 stereo camera, which performs real-time depth perception and object detection to enable safe and efficient navigation. In contrast to 2D LiDAR, which only captures data on a single plane without full spatial awareness, or costly 3D LiDAR, stereo vision offers a more economical option for perceiving depth. SLAM (simultaneous localization and mapping) enables the system to build a real-time environmental map that supports accurate localization, while the Nav2 framework handles autonomous waypoint following toward navigation goals. As the vehicle drives through crop rows or vineyards, it continuously monitors its planned path, stopping immediately if a person or obstacle enters its way and resuming motion once the obstacle is cleared; this allows safe operation in dynamic environments. By using stereo vision for depth-informed navigation, the approach offers greater adaptability and cost-efficiency than LiDAR-based alternatives, making it an affordable and practical solution for precision agriculture, autonomous farming, and agricultural robotics applications.
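The stop-and-resume behavior described above maps naturally onto Nav2's simple-commander API. The following is a minimal sketch, not the authors' implementation: a ROS 2 Python node that follows waypoints via the nav2_simple_commander package and pauses or resumes whenever the ZED depth image reports a return closer than a safety threshold. The depth topic name, the 1.0 m threshold, the central-window heuristic, and the single example waypoint are illustrative assumptions.

```python
# Sketch: pause/resume Nav2 waypoint following based on ZED depth data.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

STOP_DISTANCE_M = 1.0  # assumed safety threshold (metres)


class StopAndResume(Node):
    """Pauses Nav2 waypoint following while an obstacle is too close."""

    def __init__(self, navigator: BasicNavigator, waypoints: list):
        super().__init__('stop_and_resume')
        self.nav = navigator
        self.waypoints = waypoints
        self.paused = False
        # Depth topic published by the zed-ros2-wrapper (32FC1, metres);
        # the exact namespace depends on the camera configuration.
        self.create_subscription(
            Image, '/zed2/zed_node/depth/depth_registered',
            self.on_depth, 10)
        self.nav.followWaypoints(self.waypoints)

    def on_depth(self, msg: Image) -> None:
        # Assumes a tightly packed 32FC1 image (step == 4 * width).
        depth = np.frombuffer(msg.data, dtype=np.float32).reshape(
            msg.height, msg.width)
        # Nearest valid return inside a central window ahead of the robot.
        window = depth[msg.height // 3: 2 * msg.height // 3,
                       msg.width // 3: 2 * msg.width // 3]
        nearest = np.nanmin(window)
        if nearest < STOP_DISTANCE_M and not self.paused:
            self.nav.cancelTask()  # stop immediately
            self.paused = True
        elif nearest >= STOP_DISTANCE_M and self.paused:
            self.nav.followWaypoints(self.waypoints)  # resume
            self.paused = False


def main() -> None:
    rclpy.init()
    nav = BasicNavigator()
    nav.waitUntilNav2Active()
    goal = PoseStamped()  # single example waypoint at the end of a row
    goal.header.frame_id = 'map'
    goal.header.stamp = nav.get_clock().now().to_msg()
    goal.pose.position.x = 5.0
    goal.pose.orientation.w = 1.0
    rclpy.spin(StopAndResume(nav, [goal]))


if __name__ == '__main__':
    main()
```

Note that cancelling and re-issuing followWaypoints restarts from the first waypoint; a complete implementation would track the current waypoint index from the task feedback and resume from there.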