How does the platform work?
The platform collects information from multiple high dynamic range (HDR) cameras and the vehicle's Controller Area Network (CAN) bus, then processes it on the vehicle using patented AI models for Driver and Occupant Monitoring, Forward Collision and Lane Departure Warnings, Blind Spot Detection, and Moving-Off Information Systems, all running on a real-time QNX/Linux edge system.
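The article does not detail the on-vehicle software, but the flow it describes (camera frames fused with CAN signals, feeding real-time safety models at the edge) can be sketched in outline. The snippet below is a simplified, hypothetical illustration: the CanSnapshot fields, the 2.5-second time-to-collision threshold, and the crude distance/speed formula are illustrative assumptions, not drivebuddyAI's actual models or thresholds, and the vision models themselves are omitted entirely.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CanSnapshot:
    speed_kmh: float        # vehicle speed read from the CAN bus
    lead_distance_m: float  # distance to the lead vehicle (assumed output of a camera model)

@dataclass
class Alert:
    kind: str      # e.g. "forward_collision", "lane_departure"
    severity: str  # "warning" or "critical"

def forward_collision_check(can: CanSnapshot) -> Optional[Alert]:
    """Toy stand-in for an on-device forward-collision model.

    A real system would run trained detectors on each HDR frame; here we only
    compute a crude time-to-collision from already-fused signals to show how
    CAN data and perception outputs might gate an alert.
    """
    if can.speed_kmh <= 0:
        return None
    ttc_s = can.lead_distance_m / (can.speed_kmh / 3.6)  # metres / (metres per second)
    if ttc_s < 2.5:  # illustrative threshold, not the product's calibration
        return Alert("forward_collision", "critical")
    return None

if __name__ == "__main__":
    snapshot = CanSnapshot(speed_kmh=80.0, lead_distance_m=40.0)
    print(forward_collision_check(snapshot))  # TTC is roughly 1.8 s, so a critical alert fires
```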
Beyond this, the platform uses a federated learning architecture, which keeps raw driving data on the vehicle, protecting privacy while still allowing the AI models to learn in real time. Improved AI models are then deployed back over-the-air (OTA) to all vehicles, and insights are delivered to fleets, insurers, OEMs, and mapping partners via cloud APIs.
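For readers unfamiliar with the term, federated learning means each vehicle fine-tunes the models locally and shares only model updates, never raw footage or CAN logs; a central service aggregates those updates and ships improved weights back over the air. The snippet below is a minimal FedAvg-style sketch of that loop; the function names, the four-parameter "model", and the sample counts are invented for illustration and do not represent drivebuddyAI's implementation.

```python
import numpy as np

def local_update(global_weights: np.ndarray, gradients: np.ndarray,
                 lr: float = 0.01) -> np.ndarray:
    """One on-device training step: raw driving data never leaves the vehicle,
    only the resulting weights do."""
    return global_weights - lr * gradients

def federated_average(updates: list[tuple[np.ndarray, int]]) -> np.ndarray:
    """Server-side aggregation (FedAvg-style): average the vehicles' weights,
    weighted by how many samples each vehicle trained on."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# A toy round with three vehicles contributing updates to a 4-parameter model.
global_weights = np.zeros(4)
updates = [
    (local_update(global_weights, np.array([0.2, -0.1, 0.0, 0.3])), 120),
    (local_update(global_weights, np.array([0.1,  0.0, 0.1, 0.2])),  80),
    (local_update(global_weights, np.array([0.3, -0.2, 0.1, 0.1])), 200),
]
new_global = federated_average(updates)
print(new_global)  # improved weights, ready to be pushed back OTA to the fleet
```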
Talking about the success at CES, Nisarg Pandya, CEO and Founder of drivebuddyAI, said, “The response we received at CES has been tremendous. Attendees visiting our booth were fascinated by our innovation. Our platform has already demonstrated a reduction of over 70% in accidents across large commercial fleet deployments. The underlying technology is validated by leading global regulators, including the Automotive Research Association of India under India’s AIS-184 standard, and is certified under the European Union’s General Safety Regulation (GSR) 2144 and the European New Car Assessment Programme (EURO NCAP) 2026 protocol. The platform is also ready to meet the requirements of the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration in the United States, as well as other global markets. We look forward to taking this technology to more markets.”
Building on the platform's proven performance, drivebuddyAI remains committed to scaling its AI-driven mobility solutions worldwide, aiming to make transportation safer, smarter, and more efficient for fleets, cities, and communities.