Written by Eran Ofir, CEO of Imagry


When Henry Ford was asked what his customers would have wanted around the time he was perfecting his Model T automobile, he said “faster horses.” Ford saw something his customers did not. He bet that motor cars would do everything that faster horses would do, and more. When his customers saw the Model T, they proved him right, and it changed the world.

We are at a similar juncture today, more than 100 years later. Autonomous vehicles (AVs) will change the world. AVs, if designed properly, will be many times safer than the average human driver. They will provide more accessibility and independence to people with disabilities, to the elderly, and to children. They’ll dramatically reduce drunk-driving-related fatalities. Autonomous vehicles don’t drink, they don’t get tired, and they’re not distracted by cell phones.


Because of recent significant technological advances in artificial intelligence, sensors, and computer processing power, autonomous driving is actually much closer to reality than many have come to believe.

The problem we face in autonomous driving is that the world is diverse: it is impractical to write specific software code for every possible scenario that might arise on the road. Yet that is exactly what the flow-chart method behind current ADAS architectures attempts, and it is the approach used by most companies designing autonomous driving software today. It is a serial, rule-based approach, limited in scope and lacking in flexibility. Furthermore, almost every company developing autonomous driving solutions tries to reach L3+ autonomy simply by adding more and more features to this existing framework, a method destined for failure.
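The scaling problem with the serial, rule-based approach can be sketched in a few lines. This is purely an illustration, not any company's actual code: every scenario must be anticipated as an explicit branch, and anything the designers did not foresee falls through unhandled.

```python
# Illustrative sketch of a rule-based, flow-chart-style planner.
# Each road scenario needs its own hand-written rule; unforeseen
# scenarios fall through to the catch-all at the bottom.

def rule_based_plan(scene: dict) -> str:
    """Return a driving action for a perceived scene via fixed rules."""
    if scene.get("pedestrian_ahead"):
        return "brake"
    if scene.get("red_light"):
        return "stop"
    if scene.get("slow_vehicle_ahead") and scene.get("adjacent_lane_free"):
        return "change_lane"
    # Any scenario the designers did not anticipate ends up here.
    return "unknown_scenario"

print(rule_based_plan({"pedestrian_ahead": True}))   # a rule exists: handled
print(rule_based_plan({"cyclist_swerving": True}))   # no rule exists: unhandled
```

Covering the long tail of real-world driving this way means an ever-growing chain of rules, which is the brittleness the paragraph above describes.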

Most of the industry is trying to make faster horses. Instead, I propose that what they need is an entirely new framework for the 21st century, one that complements the idea of the software-defined vehicle (SDV), which is the result of moving from hardware-based to software-centric design.

Imagry’s software solution for L3/L4 autonomous driving uses deep neural networks and supervised learning to teach the vehicle to mimic the behavior of a skilled human driver and make driving decisions on the fly. This is where neural networks play a key role: because they process data in parallel, rather than in the linear fashion of rule-based motion planning, the vehicle can adapt to situations it has never seen or navigated before. In the Imagry method, adaptation happens by producing a motion plan from a combination of scenarios the neural network has already learned, just as humans do.
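The supervised-learning idea can be illustrated with a minimal imitation-learning sketch. This is an assumption-laden toy (a linear model trained on synthetic data), not Imagry's networks: a policy is fitted to recorded human driving commands and then generalizes to a scene it never saw during training.

```python
# Minimal behavior-cloning sketch: learn to map scene features to the
# steering command a human demonstrated, via supervised learning.
# Linear model + synthetic data, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": scene features -> human steering command.
X = rng.normal(size=(500, 4))             # e.g. lane offset, heading error, curvature, speed
true_w = np.array([0.8, -0.5, 0.3, 0.1])  # the (unknown) human driving policy
y = X @ true_w                            # steering commands the human chose

# Fit the policy by gradient descent on mean-squared imitation error.
w = np.zeros(4)
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= lr * grad

# The learned policy handles a scene it has never exactly seen,
# because it interpolates from the demonstrations it was trained on.
new_scene = np.array([0.2, -0.1, 0.05, 1.0])
print(float(new_scene @ w))   # close to the human's command for this scene
```

Real systems replace the linear model with deep networks and the synthetic features with perception outputs, but the training principle, minimizing the gap between the model's decision and the human's, is the same.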

Imagry developed a software stack that uses regular camera feeds to perceive, in real-time, the immediate environment around the vehicle. Several deep neural networks process the video feeds from the cameras, resulting in a perception map that is fed to Imagry’s second software stack which handles the motion planning phase. Photo: Imagry

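The two-stack dataflow in the caption above might be sketched as follows. All names and interfaces here are hypothetical stand-ins for illustration: perception networks consume raw camera frames, their fused output forms a perception map, and that map alone is handed to the separate motion-planning stage.

```python
# Hypothetical sketch of a camera-only, two-stack pipeline:
# camera frames -> perception networks -> perception map -> motion planner.
from dataclasses import dataclass, field

@dataclass
class PerceptionMap:
    """Bird's-eye representation of the vehicle's immediate surroundings."""
    objects: list = field(default_factory=list)   # detected road users
    lanes: list = field(default_factory=list)     # drivable lane geometry

def run_perception(camera_frames: list) -> PerceptionMap:
    """Stand-in for the deep-network perception stack (stack #1)."""
    pmap = PerceptionMap()
    for frame in camera_frames:
        # Each network would contribute detections from its camera's viewpoint.
        pmap.objects.extend(frame.get("objects", []))
        pmap.lanes.extend(frame.get("lanes", []))
    return pmap

def plan_motion(pmap: PerceptionMap) -> str:
    """Stand-in for the motion-planning stack (stack #2)."""
    return "yield" if pmap.objects else "proceed"

frames = [{"objects": ["pedestrian"], "lanes": ["ego_lane"]}]
print(plan_motion(run_perception(frames)))   # -> yield
```

The design point is the clean interface: the planner never touches raw pixels, only the perception map, so either stack can be improved independently.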

Make no mistake, though: there are no shortcuts with this method. It takes years to train neural networks to drive autonomously, and that is exactly what we have been doing at Imagry for over five years now. During that time, autonomous vehicles running our software have been operating in the U.S., Europe, and Israel, using supervised learning to hone our technology. Our solution is HD-mapless, avoiding expensive and complex mapping, localization, and communication issues. It is hardware-agnostic, providing a platform for easy integration into various vehicles and settings, which makes it easily deployable and scalable. Last but not least, because it adapts to new environments and situations on the fly, it is location-independent: roll-out scales to new locations worldwide after fast, small-scale local adaptation that can optionally be delivered as an over-the-air software upgrade.


It seems that ADAS developers are ignoring the lessons of history. At Imagry, we acknowledge the relevance of Henry Ford’s wisdom. We believe there is a better way to deliver L3/L4 autonomous driving than by adding patches to improve upon the existing solution. Cars that can think for themselves will bring us to the autonomous-driving future we have all been envisioning.

Watch how Imagry’s mapless autonomous driving technology navigates the narrow streets and unexpected traffic situations in downtown Haifa, Israel.

Want to meet the team at Imagry? Come and meet them at MOVE AMERICA 2023 in Austin, TX, September 26th-27th. Get your tickets now for just $795 while the offer lasts.
