IIIT Hyderabad develops smart self-driving car with advanced navigation features

Hyderabad: IIIT Hyderabad (International Institute of Information Technology, Hyderabad) has developed an innovative self-driving car prototype that combines advanced technologies for navigation and planning. The project, led by Prof. Madhava Krishna, features a fully electric autonomous vehicle capable of driving across large areas while avoiding collisions and responding to natural language commands to reach specific destinations.
Key Technologies Behind the Self-Driving Car
The vehicle is equipped with a range of sensors, including a 3D LIDAR (Light Detection and Ranging) unit, depth cameras, GPS, and an Attitude and Heading Reference System (AHRS), which reports the car's orientation in space. Together, these sensors build a 3D map of the environment using SLAM (Simultaneous Localization and Mapping), while LIDAR-based real-time state estimation keeps the car's position estimate up to date as it moves.
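To make the sensor-fusion idea concrete, here is a minimal Python sketch of how LIDAR-odometry distances, AHRS headings, and occasional GPS fixes could be blended into a running 2D pose estimate. It is an illustration under simplified assumptions, not the team's actual SLAM or state-estimation code, and all names and weights are hypothetical.

```python
# Minimal sketch (not IIIT Hyderabad's code): fuse LIDAR odometry, AHRS
# heading, and GPS fixes into a running 2D pose. All values are illustrative.
import math

class PoseEstimator:
    def __init__(self, x=0.0, y=0.0, heading=0.0, gps_weight=0.2):
        self.x, self.y, self.heading = x, y, heading
        self.gps_weight = gps_weight  # how strongly a GPS fix pulls the estimate

    def predict(self, forward_dist, ahrs_heading):
        """Dead-reckon using a LIDAR-odometry distance and the AHRS heading."""
        self.heading = ahrs_heading
        self.x += forward_dist * math.cos(self.heading)
        self.y += forward_dist * math.sin(self.heading)

    def correct_with_gps(self, gps_x, gps_y):
        """Blend in an absolute GPS fix to limit dead-reckoning drift."""
        w = self.gps_weight
        self.x = (1 - w) * self.x + w * gps_x
        self.y = (1 - w) * self.y + w * gps_y

# Usage: odometry and heading every cycle, a GPS correction when a fix arrives.
est = PoseEstimator()
est.predict(forward_dist=1.5, ahrs_heading=math.radians(10))
est.correct_with_gps(gps_x=1.6, gps_y=0.2)
print(round(est.x, 2), round(est.y, 2))
```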
A standout feature of the car is its ability to understand natural language commands. For example, it can follow instructions like “Take a right next to the white building” or “Stop near the football field,” making it capable of following directions in a way that mirrors human navigation.
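As a rough illustration of what such a command must ultimately be reduced to, the toy parser below maps an instruction to a structured intent (an action plus a landmark phrase). The real system relies on learned language models rather than keyword matching; the vocabulary and output format here are assumptions for illustration only.

```python
# Illustrative only: a toy keyword parser producing (action, landmark) intents.
# The actual system uses learned language understanding, not rules like these.
import re

ACTIONS = {
    "take a right": "turn_right",
    "take a left": "turn_left",
    "stop": "stop",
    "go towards": "go_to",
}

def parse_command(command: str) -> dict:
    text = command.lower()
    for phrase, action in ACTIONS.items():
        if phrase in text:
            # Treat everything after a locative word as the landmark phrase.
            match = re.search(r"(?:next to|near|by|towards)\s+(.*)", text)
            landmark = match.group(1).strip(" .") if match else None
            return {"action": action, "landmark": landmark}
    return {"action": "unknown", "landmark": None}

print(parse_command("Take a right next to the white building"))
# {'action': 'turn_right', 'landmark': 'the white building'}
print(parse_command("Stop near the football field"))
# {'action': 'stop', 'landmark': 'the football field'}
```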
Open-Vocabulary Navigation: A Human-Like Approach
One of the most novel aspects of IIIT Hyderabad’s self-driving car is its open-set, or open-vocabulary, navigation capability. Unlike traditional autonomous vehicles that rely on precise, pre-mapped data or GPS waypoints, this system allows the car to navigate using real-world landmarks, much as humans rely on familiar environmental features.
Rather than depending on exact GPS coordinates, the vehicle can follow commands like “Go towards the bench” or “Take a right by the underpass,” even if these landmarks aren’t mapped in advance. This approach is powered by deep learning models that enable the car to understand both visual data (from cameras and LIDAR) and natural language instructions, making the vehicle more adaptable to new and dynamic environments.
By integrating open-vocabulary landmarks, the car can recognize and navigate based on general descriptions, allowing it to handle areas that are constantly changing or don’t have predefined maps.
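One simple way to picture open-vocabulary grounding is as an embedding-similarity lookup: the landmark phrase from the command and each detected region are embedded by a vision-language model, and the best-matching region becomes the navigation target. The sketch below uses random placeholder vectors in place of real model outputs, and its function names are assumptions rather than the team's code.

```python
# Minimal sketch of open-vocabulary landmark grounding: choose the detected
# region whose embedding best matches the embedding of the landmark phrase.
# Placeholder random vectors stand in for vision-language-model outputs.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def ground_landmark(text_embedding: np.ndarray,
                    region_embeddings: dict) -> str:
    """Return the region label that best matches the landmark description."""
    scores = {label: cosine_similarity(text_embedding, emb)
              for label, emb in region_embeddings.items()}
    return max(scores, key=scores.get)

# Placeholder embeddings standing in for model outputs.
rng = np.random.default_rng(0)
regions = {"bench": rng.normal(size=512),
           "underpass": rng.normal(size=512),
           "white building": rng.normal(size=512)}
query = regions["bench"] + 0.1 * rng.normal(size=512)  # embedding of "the bench"
print(ground_landmark(query, regions))  # expected: 'bench'
```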
Solving the Challenge of Safe, Collision-Free Planning
Another critical challenge for autonomous vehicles is ensuring safe navigation, especially in complex environments with unexpected obstacles. IIIT Hyderabad’s car uses differentiable planning to address this challenge. Unlike traditional systems that use fixed routes or simple decision-making algorithms, this approach allows the car to plan its movements dynamically, adjusting in real time to avoid obstacles or conflicts.
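The core idea of differentiable planning can be sketched as gradient descent on a smooth trajectory cost: a candidate path is nudged, step by step, toward being both smooth and clear of obstacles. The following minimal example uses hand-derived gradients and illustrative weights; it is a simplified stand-in, not the vehicle's actual planner.

```python
# Minimal sketch of differentiable, collision-aware planning: refine a
# trajectory by gradient descent on smoothness plus obstacle-proximity costs.
# Weights, radii, and step sizes are illustrative assumptions.
import numpy as np

def plan(start, goal, obstacles, n_points=20, steps=200,
         lr=0.05, w_obs=5.0, radius=1.0):
    traj = np.linspace(start, goal, n_points)  # start from a straight line
    for _ in range(steps):
        grad = np.zeros_like(traj)

        # Smoothness term: penalize bends between consecutive waypoints.
        grad[1:-1] += 2.0 * (2.0 * traj[1:-1] - traj[:-2] - traj[2:])

        # Obstacle term: a smooth repulsive cost around each obstacle.
        for obs in obstacles:
            diff = traj - obs                        # (n_points, 2)
            dist_sq = np.sum(diff ** 2, axis=1)      # squared distances
            cost = w_obs * np.exp(-dist_sq / (2 * radius ** 2))
            grad += (cost / radius ** 2)[:, None] * (-diff)

        grad[0] = grad[-1] = 0.0                     # keep endpoints fixed
        traj -= lr * grad                            # gradient-descent update
    return traj

path = plan(start=np.array([0.0, 0.0]), goal=np.array([10.0, 0.0]),
            obstacles=[np.array([5.0, 0.5])])
print(np.round(path[len(path) // 2], 2))  # midpoint pushed away from the obstacle
```

Because the whole cost is differentiable, additional terms, such as staying near a language-specified landmark, could in principle be added without changing the optimization loop.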
The system combines vision-language models (VLMs), which interpret the vehicle’s visual data and natural language commands, with a custom planner. This enables the car to make real-time decisions based on instructions such as “park behind the red car.” If the initially suggested location is not feasible, the planner adjusts the goal, ensuring the car parks safely without violating real-world constraints.
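A minimal sketch of that goal-adjustment step might look like the following: if the VLM-suggested parking spot collides with the occupancy map, the planner searches nearby cells and falls back to the closest free one. The grid, search radius, and function names are assumptions for illustration, not the actual planner.

```python
# Minimal sketch (assumptions, not the actual planner): adjust a VLM-suggested
# goal by finding the nearest free cell in an occupancy grid (1 = blocked).
import numpy as np

def adjust_goal(goal, occupancy, search_radius=5):
    """goal: (row, col) suggested by the VLM; occupancy: 2D grid of 0/1."""
    r, c = goal
    if occupancy[r, c] == 0:
        return goal                          # the suggestion is already feasible
    best, best_dist = None, float("inf")
    rows, cols = occupancy.shape
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and occupancy[nr, nc] == 0:
                dist = dr * dr + dc * dc
                if dist < best_dist:
                    best, best_dist = (nr, nc), dist
    return best                              # None if nothing free nearby

# Toy map: the spot "behind the red car" at (2, 3) is occupied, so the goal shifts.
grid = np.zeros((6, 6), dtype=int)
grid[2, 3] = 1
print(adjust_goal((2, 3), grid))  # (1, 3), a neighbouring free cell
```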
The Future of Autonomous Driving at IIIT Hyderabad
This advanced system not only allows the self-driving car to navigate effectively through complex environments but also makes it more flexible in handling unexpected situations. Prof. Madhava Krishna and his team believe that the combination of language understanding, real-time mapping, and AI-driven planning could lead to smarter, more reliable autonomous vehicles.
This article was initially published in the October edition of TechForward Dispatch.