Autonomous Driving Using NVIDIA Jetson TX1

Driving has become an essential part of our daily life: commuting to work, running errands, and reaching the nearest hospital in an emergency. We live in an era where getting to places is faster than ever, but the sheer speed of our commutes is not the only arena of innovation: scientists are working to develop self-driving (autonomous) cars that operate independently of human drivers. This is going to revolutionize the way we travel, making it possible for everyone to get around easily and safely, regardless of their ability to drive.

Elderly or visually impaired people would not have to give up their independence. Time spent commuting could be time spent doing what you want to do. Deaths from traffic accidents, which number more than a million worldwide every year, could be reduced dramatically, especially since 94% of accidents in the U.S. involve human error.

An autonomous car can sense its environment and navigate without human input. We are developing a prototype autonomous car that performs the four most common tasks involved in autonomous driving: lane detection, traffic sign detection, traffic signal detection, and obstacle detection.
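One way to organize these four tasks is as independent perception modules whose per-frame outputs feed a single decision step. The sketch below is our own illustration of that structure, not the project's actual code; the thresholds, field names, and the simple priority policy (obstacles and red lights before lane keeping) are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for one frame's perception outputs.
@dataclass
class Perception:
    lane_offset: float      # metres from lane centre; positive means right of centre
    sign: Optional[str]     # e.g. "stop", "speed_30", or None if no sign detected
    signal: Optional[str]   # "red", "yellow", "green", or None if no signal visible
    obstacle_cm: float      # nearest obstacle distance from the ultrasonic sensors

def decide(p: Perception) -> dict:
    """Combine the four perception outputs into a driving command.

    Illustrative policy: a close obstacle, a red light, or a stop sign
    halts the car; otherwise steer proportionally back toward the lane centre.
    """
    SAFE_DISTANCE_CM = 30.0  # assumed braking threshold
    if p.obstacle_cm < SAFE_DISTANCE_CM or p.signal == "red" or p.sign == "stop":
        return {"throttle": 0.0, "steering": 0.0}
    # Simple proportional steering, clamped to the [-1, 1] actuator range.
    steering = max(-1.0, min(1.0, -0.5 * p.lane_offset))
    return {"throttle": 0.4, "steering": steering}

# Example: clear road, drifting 0.2 m right of centre -> steer slightly left.
cmd = decide(Perception(lane_offset=0.2, sign=None, signal="green", obstacle_cm=120.0))
print(cmd)
```

In a real pipeline each field of `Perception` would be filled by its own detector running on the camera frames and sensor readings; keeping the decision logic separate makes each module testable on its own.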

The video captured by the camera, together with obstacle distances obtained from the ultrasonic sensors, is fed to an NVIDIA Jetson TX1, which performs the necessary computations on this data to control the car. Datasets collected from our own environment will be used to train our algorithms, so the car can travel reliably within this controlled environment.
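Ultrasonic rangefinders report distance indirectly: the controller measures the width of the echo pulse and converts it using the speed of sound, halving the result because the pulse travels to the obstacle and back. A minimal sketch of that conversion (GPIO pin handling is omitted, and the 343 m/s figure assumes air at roughly 20 °C; the exact sensor model is not specified in this project description):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # 343 m/s at ~20 degrees C, in cm per microsecond

def echo_to_distance_cm(pulse_us: float) -> float:
    """Convert an ultrasonic echo pulse width (microseconds) to distance (cm).

    The sound travels out to the obstacle and back, so the one-way distance
    is half the round-trip time multiplied by the speed of sound.
    """
    return pulse_us * SPEED_OF_SOUND_CM_PER_US / 2.0

# Example: an echo pulse of about 2915 us corresponds to roughly 50 cm.
print(round(echo_to_distance_cm(2915), 1))
```

On the Jetson TX1 the pulse width would be timed via a GPIO library; the conversion itself is the same regardless of how the pulse is measured.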




Posted in Computer Vision, Projects.