Humans are driving more than ever: over the last 20 years, the number of cars produced annually has increased by more than 50%, and in 2019 alone over 90 million vehicles were produced worldwide. With more vehicles taking to the road each year, the number of car accidents has risen as well, and the consequences are dire at both the individual and the societal level. Every year, more than a million people die in vehicle crashes globally, and because those crashes involve heavy machinery moving at high speed, another 20 to 50 million people are injured or disabled as a result of collisions.
In addition to accidents, traffic inefficiency is a major problem facing society today. The average commuter loses 54 hours of productivity a year just driving to work, and when all driving is taken into account, the average American loses roughly 100 hours a year to congestion. All of this contributes to an economic cost of nearly a trillion dollars every year. Beyond the statistics, traffic also takes a harder-to-quantify toll on a person's emotional state and cognitive ability.
Both the lack of road safety and the lack of traffic efficiency stem from the same root cause: human drivers are prone to error because they do not always act promptly and rationally. People have a non-zero reaction time that often prevents them from making a safe maneuver quickly enough; the sketch below shows how much road that delay consumes at typical speeds. Drivers also frequently fail to keep to the speed limit, causing congestion that ripples backward for miles, and they are prone to distraction behind the wheel, which threatens their own safety and that of other road users. Fortunately, the auto industry is shifting toward autonomous vehicles (AVs). Because humans will no longer function as drivers, only as passengers, this evolution is primed to optimize road safety and traffic efficiency and enhance mobility.
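As a rough, illustrative calculation (the specific numbers are assumptions, not figures from the text): a typical 1.5-second human reaction time consumes tens of meters of road before braking even begins.

```python
# Rough stopping-distance illustration of how human reaction time costs road.
# Assumptions (not from the text): 1.5 s reaction time, 7 m/s^2 braking
# deceleration on dry pavement.

def stopping_distance(speed_kmh: float, reaction_time_s: float = 1.5,
                      decel_ms2: float = 7.0) -> tuple[float, float]:
    """Return (reaction distance, total stopping distance) in meters."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    reaction_dist = v * reaction_time_s      # distance covered before braking starts
    braking_dist = v ** 2 / (2 * decel_ms2)  # kinematics: v^2 / (2a)
    return reaction_dist, reaction_dist + braking_dist

if __name__ == "__main__":
    for speed in (50, 100, 130):
        react, total = stopping_distance(speed)
        print(f"{speed:>3} km/h: {react:5.1f} m lost to reaction, "
              f"{total:5.1f} m total stopping distance")
```

At 100 km/h, roughly 40 meters pass before the brakes are even applied, which is exactly the gap an always-attentive machine perception system is meant to close.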
However, nothing on the market today is capable of supporting full autonomy. Traditional car OEMs horizontally source and integrate solutions that enable only partial autonomy. These platforms are constrained because they are built on legacy technology, such as GPUs; since they are not purpose-built to process the plethora of information surrounding the car, they cannot enable AVs. Tesla, a technology company at its core, is internally developing an AI vision solution from the ground up for its cars, a product intended to be fitted to the monumental self-driving task. Yet although its platform is miles ahead of what traditional car manufacturers put into their vehicles, it still cannot enable full autonomy, and, as a result, humans are still required to drive its cars.
As you drive, your brain correctly processes the plethora of visual data around you with extreme efficiency: its nearly 100 billion neurons produce a data-center level of compute while consuming less power than a lightbulb. To mimic this, an AV needs to be equipped with a platform that delivers at least 75 tera-operations per second (TOPS) of compute for every watt of power it consumes; the sketch below works out what that target implies for a perception workload. This unsolved optimization problem highlights a massive barrier to full vehicle autonomy faced by the industry: the visual perception problem.
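To make the 75 TOPS/W target concrete, the sketch below works backward from a hypothetical total perception workload to the power budget that efficiency would allow. Only the 75 TOPS/W figure comes from the text; the workload values are illustrative assumptions.

```python
# What a 75 TOPS/W efficiency target implies for total power draw.
# Only the 75 TOPS/W figure comes from the text; the workload values below
# are illustrative assumptions.

EFFICIENCY_TOPS_PER_WATT = 75.0

def power_budget_watts(workload_tops: float,
                       efficiency: float = EFFICIENCY_TOPS_PER_WATT) -> float:
    """Power (in watts) needed to sustain a workload at a given efficiency."""
    return workload_tops / efficiency

if __name__ == "__main__":
    for workload in (250, 500, 1000):  # hypothetical perception workloads, in TOPS
        watts = power_budget_watts(workload)
        print(f"{workload:>5} TOPS at {EFFICIENCY_TOPS_PER_WATT:.0f} TOPS/W "
              f"-> {watts:5.1f} W")
```

Even a 1,000 TOPS workload would have to fit within roughly 13 W at that efficiency, which is less than a typical household lightbulb draws.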
A novel, purpose-built platform must be developed to solve the visual perception problem outlined above and make self-driving a success. Unlike the incumbent solutions in today's partially autonomous vehicles, it cannot be constrained by the limits of legacy technology; new innovations are needed that support immense compute within a low power budget. By meeting this efficiency constraint, such a solution will enable autonomous vehicles to perceive their surroundings flawlessly. With this platform making AVs a reality, the human factor is removed, and the number of car accidents and the amount of time lost to traffic congestion will drop dramatically.
Car OEMs must turn to a third-party company to horizontally source this class of vision perception platform. The solution must be purpose-built to solve the visual perception problem: by leveraging key innovations in math, AI, and ASIC architecture and design, the chip should produce a data-center level of compute while consuming minuscule power. These capabilities must enable an AV to see a traffic light hundreds of meters away and interpret all the surrounding visual cues in just a few milliseconds; the sketch below shows why those milliseconds matter at highway speed. Through these abilities, this platform will make autonomous vehicles a reality. With humans acting only as passengers, traffic safety and efficiency will improve, transforming the roads into a better place.
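As a final illustration of why perception latency is measured in milliseconds, the sketch below computes how far a vehicle travels while a single perception cycle completes. The latency values are assumptions for illustration; the text only says "a few milliseconds".

```python
# How far a car travels while one perception cycle completes.
# The latency values are illustrative; the text only states "a few milliseconds".

def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Distance (in meters) traveled at speed_kmh during latency_ms."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

if __name__ == "__main__":
    for latency_ms in (5, 30, 100):                        # per-cycle latency
        meters = distance_during_latency(130, latency_ms)  # 130 km/h highway speed
        print(f"{latency_ms:>3} ms latency at 130 km/h -> {meters:.2f} m traveled")
```

At 130 km/h, a 100 ms perception cycle lets the car travel more than 3.5 meters before it can react, while a few-millisecond cycle keeps that to a fraction of a meter.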
To learn more about Recogni, check out www.recogni.com