Biggest Robo-Car Roadblock Is Human
2017-03-29


So, here’s our recap.

What really stood out this year in the hot robo-car segment? The “human factor” is how we’d sum it up.

Gill Pratt

The case for human and social factors was stated softly, but there it was. More companies — including Toyota and Mobileye — are scrambling to address the inconvenient truth that robo-cars entail a man-machine relationship more complicated than we care to admit.

At issue is how the human factor alters the future of self-driving designs.

Gill Pratt, CEO of Toyota Research Institute, noted, “As wonderful as AI is, AI systems are inevitably flawed…” Mobileye’s co-founder, CTO and chairman Amnon Shashua described teaching vehicles to learn human intuition as “the last piece of the autonomous driving puzzle.”

This sort of talk marks a shift in thinking, in sharp contrast to the engineering agenda of just a few years ago — which was simply a matter of eliminating the human driver from autonomous car development.

Of course, the show’s headlines featured issues far less subtle. They trumpeted partnership agreements among companies like Nvidia, Intel and Mobileye with big auto brands such as Audi, BMW, Mercedes-Benz and Volvo, or with tier ones like Bosch, ZF and Delphi. A slew of tech companies flooded the show floor with their newest lidar, radar and image sensor technologies.


Nvidia’s CEO Jen-Hsun Huang brought Audi of America head Scott Keogh on stage to announce the two companies’ partnership to build next-generation AI cars.

Outside the convention center, tier ones like Delphi were busy giving the press rides in a 2017 Audi Q5 SUV. NXP Semiconductors, too, gave CES attendees test drives of a highly automated vehicle, allowing them to experience “improved road safety and traffic flow” via DSRC-based V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) communication. NXP has cooperated with Delphi and Savari for onboard and roadside units, while NXP’s partner Microsoft talked up future scenarios in which artificial intelligence can improve driver safety and analyze traffic situations on its Azure cloud platform.

The overall message was clear: Engineers can now make cars drive themselves.

 

But to us editors at EE Times, the biggest automotive news at this year’s CES broke when Toyota Research Institute’s Pratt took the stage at Toyota’s press conference last week and posed a question nobody had dared to ask before about the safety of robo-cars: “How safe is safe enough?”

He noted that people, tolerant of human error, have come to accept the 35,000 traffic deaths every year in the United States. But would they tolerate even half that carnage if it was caused by robotic automobiles?

"Emotionally, we don't think so," said Pratt. "People have zero tolerance for deaths caused by a machine."

Many engineers in the automotive and tech industries have embraced the prevailing assumption that autonomous cars are much safer than cars driven by humans.

But how do humans really feel about machines that can drive but can also kill? Add the human factor to the equation, and it becomes hard to ignore Pratt’s observation, or to casually dismiss his statements as too conservative.

Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), characterized Pratt’s demonstration at Toyota’s press conference as “very conservative, the Toyota way.” Magney said, “Honestly, this is very pragmatic for a company like Toyota, as for the next many years there will be lots of cars sold to individual drivers and they will want the added safety and convenience.”

Given that it will take decades before robo-cars eventually replace all cars driven by people, it isn’t enough to talk about pushing the first robo-car out on the road in 2021. The conversation is now shifting to a reality in which human-driven cars share the road with robo-cars — not just for the next few years but for the next few decades.


Then there’s the business model dilemma. Many carmakers — and not just Toyota — may find it hard to justify a business that focuses singularly on the roadmap for autonomous cars.

Even among auto makers with aggressive plans for autonomous cars (i.e. the rollout of Level 4 autonomous cars in 2021), we’ve detected a more nuanced tone.

Last year, deep learning was pitched as an answer to the most complex problem in highly automated driving. This still holds true, but it’s clear that deep learning is no longer a panacea. The non-deterministic nature of neural networks has become a cause of concern for anyone who has to test and verify the safety of autonomous cars.


Last piece of puzzle


In short, the days are over when tech companies could simply declare autonomous driving a problem already solved.

Mobileye's Amnon Shashua

Mobileye’s Shashua, for instance, pointed out that among all the technology developments (i.e. sensing, mapping) in which the company is engaged, “We find the development of ‘Driving Policy’ technology as the last piece of the autonomous driving puzzle.”

By “Driving Policy,” he is referring to the use of artificial intelligence to teach autonomous vehicles, for example, how to merge into traffic at roundabouts.

 Driving Policy is “behavior,” and “this is a hard problem to solve,” explained VSI’s Magney.

“After all, we humans all drive differently,” added Roger Lanctot, associate director in the global automotive practice at Strategy Analytics. Driving Policy is about “building human behavior into software,” he said, creating a black box whose safety nobody currently has the means to test or verify. This is a problem that could take “10 years” to solve, Lanctot added. Alternatively, the auto industry might have to bend some ASIL-D safety standards to accommodate the testing of deep learning-based autonomous cars, he said.

In the following pages, EE Times breaks down the new automotive trends spotted at CES, and we try to explain how robo-car conversations are being altered by adding the human factor to the design process.

The topics covered in the following pages include: two different types of AI applications (“Chauffeur” and “Guardian” as defined by Toyota; “Auto-Pilot” and “Co-Pilot” as pitched by Nvidia’s CEO Jen-Hsun Huang); chip vendors’ dual strategy — HOG and CNN — in designing next-generation fusion chips; cars that understand drivers’ needs; the need to teach cars how to negotiate traffic; and whether autonomous cars need a watchdog chip.



Chauffeur/Guardian vs. Auto-Pilot/Co-Pilot

 

Toyota's two tracks of research: Chauffeur & Guardian

How best to apply AI for safety in automated driving is a major conundrum for researchers and design engineers the world over.

Gill Pratt, CEO of Toyota Research Institute, said his team is working on two tracks of research: Guardian — which basically assists the driver in situations that require quick response — and Chauffeur, which is closer to autonomous driving.

Curiously, Nvidia’s CEO Jen-Hsun Huang, too, introduced the concepts of “Auto-Pilot” and “Co-Pilot” during his keynote speech at the CES.

Nvidia is putting AI to work for Co-Pilot, which Huang said “promises to help your car understand you as well as it does the world around it.” AI Co-Pilot integrates external sensors to warn, for example, that a bicycle is pulling out into traffic as you’re about to make a turn, or that a pedestrian has stepped into the road, he explained.

On the other hand, Nvidia’s AI Auto-Pilot, besides helping you drive better, “enables the car to drive itself, combining input from an array of sensors, HD map and — thanks to the ability to share data — a far deeper well of experience than even the most seasoned driver,” Huang said in his speech.

Nvidia introduces AI Co-Pilot

Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), believes the visions shared by those two companies are “more or less the same thing.”


Predictive vs. reactive


On the ADAS side, Magney said, “the applicability of AI has the potential to make safety applications perform better and more accurately than deterministic (traditional) methods do.”

He explained, “For example, AI has the ability to provide more useful warning and intervention, acting as a co-pilot (or agent) that can interpret certain conditions based on clues or bits of information.”

In contrast, “Traditional ADAS that warns or intervenes is based on certain reactive situations, such as a car moving outside a lane marking. An AI-based co-pilot takes into consideration that the driver may be deliberately making room for a passing vehicle, or that an object in the road may not look like a threat to a deterministic model even though a behavioral clue suggests otherwise,” Magney noted.

What’s becoming more apparent than before is that — regardless of whether or when Level 5 or Level 4 autonomous cars ever hit the road — AI is now positioned as very useful for ADAS.

Magney noted, “Fundamentally, AI-based ADAS is predictive while deterministic ADAS is reactive. Predictive is better because it is more suitable to the infinite behaviors of humans. Furthermore, AI-based solutions are always learning, making them adaptable to differences in human behavior.”
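To make the predictive-versus-reactive distinction concrete, here is a minimal, hypothetical sketch (our own illustration, not any vendor’s code) contrasting a reactive lane-departure rule with a predictive check that extrapolates lateral drift and weighs a behavioral clue such as the turn signal. The signal names and thresholds are assumptions invented for the example.

```python
# Hypothetical illustration of reactive vs. predictive lane-keeping logic.
# Signal names and thresholds are made up for the example.

LANE_HALF_WIDTH_M = 1.8   # assumed distance from lane center to the marking

def reactive_warning(lateral_offset_m: float) -> bool:
    """Deterministic/reactive: warn only after the marking is crossed."""
    return abs(lateral_offset_m) > LANE_HALF_WIDTH_M

def predictive_warning(lateral_offset_m: float,
                       lateral_speed_mps: float,
                       turn_signal_on: bool,
                       horizon_s: float = 1.0) -> bool:
    """Predictive: extrapolate the drift one second ahead and suppress the
    alert when a behavioral clue (turn signal) suggests the drift is deliberate."""
    if turn_signal_on:
        return False
    predicted_offset = lateral_offset_m + lateral_speed_mps * horizon_s
    return abs(predicted_offset) > LANE_HALF_WIDTH_M

if __name__ == "__main__":
    # Drifting left at 0.6 m/s, 1.5 m from the lane center:
    print(reactive_warning(1.5))                # False: not yet over the line
    print(predictive_warning(1.5, 0.6, False))  # True: will cross within 1 s
    print(predictive_warning(1.5, 0.6, True))   # False: driver signaled intent
```

The learned systems Magney describes would of course replace these hand-set thresholds with models trained on driving data, but the reactive/predictive contrast is the same.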



Cars that get you


Toyota unveils Concept-i
At every auto show, nothing attracts more paparazzi than “concept cars.”

Toyota rolled out one, called the “Concept-i,” at this year’s CES.

Toyota describes the user experience as “intelligent, friendly and helpful,” as the Concept-i “gets to know you and your needs and then starts to anticipate them for you.” This sounds slightly creepy, but the running theme of this year’s CES was exactly that man-machine relationship. Whether via wearable device or robo-car, technology suppliers seem to perceive a growing desire among people to be understood by machines.

 

Calling Concept-i’s user interface Yui “more pal than interface,” the carmaker said that in tandem with AI, it anticipates “your needs and informs the car so that Concept-i can consider and execute that next action accordingly.”

Toyota envisions a future in which “through biometric sensors throughout the car, Concept-i can detect what you're feeling. That information then gets analyzed by the car's AI. That's when the automated features kick in.”

The company went on, “Let's say, for example, that you’re feeling sad; the AI will analyze your emotion, make a recommendation and if necessary, take over and drive you safely to your destination. So safety and protection are a major benefit of this relationship. By knowing you better, Concept-i can help protect you better.”

 

When a human is driving


Concept-i is also designed to interact with the outside world by letting pedestrians and other cars on the road know if it is driving “automated” or “manual.”

 

 

When the car is driving itself


Driving Policy

 

Mobileye's co-founder, CTO and chairman Amnon Shashua

Mobileye’s co-founder, CTO and chairman Amnon Shashua came to CES to discuss the “three pillars of autonomous driving” – namely sensing, mapping and driving policy – and how the company is addressing all three.

Shashua defined “Driving Policy” as being based on “deep network-enabled reinforcement learning algorithms.” As these algorithms form the basis of a new class of machine intelligence, they become capable of “mimicking true human driving capabilities,” he explained.

Calling it “the last piece of the puzzle,” he explained that Driving Policy is what teaches autonomous vehicles “human-like negotiation skills.” Noting that “society would not accept robotics to be involved in many fatalities,” Shashua explained how critical it is to get this right.

“Many people describe four-way stops as one of the most difficult things” for autonomous cars to master, he said during the press briefing. “But I disagree.”

At four-way stops, there are right-of-way rules, he noted. In contrast, when autonomous cars must merge into traffic (whether in a lane merge or a roundabout), “There are no rules. It’s much harder.”

 

Art of merging into traffic at a roundabout


The bottom line is that “planning is computing.” Cars with no plans before they merge into a lane end up creating a bottleneck. That’s why “autonomous vehicles need to sharpen their skills” to negotiate with other cars. In his opinion, it’s not sensing that can help robo-cars in this situation. “This is all about driving policy,” he said.
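To give a rough sense of the reinforcement-learning framing Shashua describes (a toy sketch under our own assumptions, not Mobileye’s algorithm), a merging decision can be cast as choosing between waiting and going based on the observed gap in traffic, with rewards that penalize both collisions and the bottleneck created by endless hesitation. The states, rewards and dynamics below are invented purely for illustration.

```python
import random

# Toy Q-learning sketch of a merge "driving policy": the agent observes a
# discretized gap to oncoming traffic and learns whether to WAIT or GO.
# States, rewards and dynamics are invented for this example.

ACTIONS = ("WAIT", "GO")
GAPS = ("small", "medium", "large")   # discretized gap to the next car

def simulate(gap: str, action: str):
    """Hypothetical environment: reward and next gap after one decision."""
    if action == "GO":
        # Merging into a small gap is a crash; a clean merge earns a reward.
        reward = {"small": -100.0, "medium": 2.0, "large": 5.0}[gap]
        return reward, None               # episode ends after the merge attempt
    # Waiting is mildly penalized (it creates a bottleneck); a new gap appears.
    return -1.0, random.choice(GAPS)

def train(episodes: int = 5000, alpha: float = 0.1, gamma: float = 0.9, eps: float = 0.1):
    q = {(g, a): 0.0 for g in GAPS for a in ACTIONS}
    for _ in range(episodes):
        gap = random.choice(GAPS)
        while gap is not None:
            action = (random.choice(ACTIONS) if random.random() < eps
                      else max(ACTIONS, key=lambda a: q[(gap, a)]))
            reward, next_gap = simulate(gap, action)
            future = 0.0 if next_gap is None else max(q[(next_gap, a)] for a in ACTIONS)
            q[(gap, action)] += alpha * (reward + gamma * future - q[(gap, action)])
            gap = next_gap
    return q

if __name__ == "__main__":
    q = train()
    for g in GAPS:
        best = max(ACTIONS, key=lambda a: q[(g, a)])
        print(f"gap={g:6s} -> learned action: {best}")
```

In Mobileye’s description the same idea operates over deep networks and far richer state, but the principle of learning negotiation behavior from reward rather than from hand-coded rules is the same.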

 

Driving policy is 'behavior'

Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), agreed that driving policy is behavior and that this is a hard problem. Mobileye will have to solve it by working with its partners (Intel, Delphi, etc.). “Mobileye does not have the hardware to support driving policy and this is why they are joining up with Intel to develop driving policy for BMW using an Intel SoC,” said Magney.

Roger Lanctot, associate director in the global automotive practice at Strategy Analytics, told us, “Automakers need to build ‘driving policy’ into software code.”

This poses a challenge for insurance companies like Swiss Re (which was at the CES as one of NXP’s partners), said Lanctot, because they will have to figure out “how to underwrite silicon software” in cars to assess their safety.


Dual track: HOG & CNN

CEVA's deep neural network toolkit (Source: CEVA)

 

 

Evident at this year’s CES was technology suppliers' scramble to perfect traditional computer vision while grappling with advancements being made in neural networks.

As Phil Magney, founder and principal advisor for VSI, told us, several OEMs and tier-one suppliers “are embracing AI for legitimate and practical applications [that were] once served only by deterministic algorithms [like HOG], where one size fits all.”

Liran Bar, director of product marketing in CEVA’s vision business unit, explained to EE Times, “Most SoC vendors we know are on a dual track.” They’re keeping two teams – one assigned to computer vision and another tasked with advancing deep learning – pitting them against each other, he explained. During the CES, CEVA, a supplier of DSP IP cores and SoC platforms, announced that ON Semiconductor has licensed CEVA's imaging and vision platform for its ADAS product lines.

SoC designers want to keep their options open, Bar noted, because the CNN they know today could be vastly improved by the time carmakers actually roll out highly automated vehicles featuring a new SoC.

Chip designers need to build in “flexibility,” Bar explained. More importantly, though, they are looking for an easier way to convert software originally designed to run on floating-point architectures into software that can run on fixed-point. This is necessary because CNN execution on a lower-power SoC demands a fixed-point architecture. At CEVA, “We are offering a framework for that conversion, saving them the time needed for porting,” explained Bar.
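The conversion Bar describes boils down to quantization: mapping trained floating-point weights onto an integer grid so a CNN layer can run on fixed-point hardware. The snippet below is a generic, simplified sketch of symmetric 8-bit quantization using NumPy; it illustrates the idea only and is not CEVA’s conversion framework.

```python
import numpy as np

# Simplified sketch of symmetric 8-bit quantization, the basic step in
# converting a floating-point CNN layer to fixed-point for a low-power SoC.
# Generic illustration only, not CEVA's framework.

def quantize_symmetric_int8(weights: np.ndarray):
    """Map float weights onto int8 with a single per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=(64, 3, 3, 3)).astype(np.float32)  # a small conv kernel
    q, scale = quantize_symmetric_int8(w)
    err = np.max(np.abs(w - dequantize(q, scale)))
    print(f"scale={scale:.6f}, worst-case rounding error={err:.6f}")
```

Real toolchains add per-channel scales, activation calibration and retraining to recover accuracy, which is exactly the porting effort such frameworks aim to shortcut.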



Watchdog for autonomous vehicles   


Leti shows off its Sigma Fusion demo

 

 
So, if the safety functionality of AI-driven autonomous vehicles can’t be properly tested because of the non-deterministic nature of CNNs, who’s going to check whether robo-cars are actually driving themselves safely?

French research institute Leti came to the CES this year to show off its low-power sensor fusion solution, called “Sigma Fusion.”

Julien Mottin, research engineer at Leti’s embedded software and tools laboratory, told us Sigma Fusion was designed to monitor safe autonomous driving.

Mottin stressed, “We believe in AI. We think AI is mandatory for highly automated driving.” But the inability to test the safety of AI-driven cars – in compliance with ISO 26262 – troubles designers. He explained that Leti’s team set out to work on the project with a clear goal in mind: “How to bring trust to computing in autonomous cars?”

Mottin envisions the Sigma Fusion chip being embedded on an already certified ASIL-D automotive platform, serving as a watchdog. Or it could be integrated into the car’s black box.

Isolation from the rest of the automotive module makes it possible for Sigma Fusion to independently monitor what’s going on in the car. It “can’t explain why certain errors occurred in automated driving, but it can detect what has gone wrong – for example, an error happening in the decision path in a car,” he explained.


Two Leti research engineers: Diego Puschini (left) and Julien Mottin (right)

Sigma Fusion, compatible with any kind of sensor, can receive raw data directly from state-of-the-art sensors, the Leti researchers said. The version demonstrated at its booth gets data from image sensors and lidars, and fuses the data on an off-the-shelf microcontroller – in this case, STMicroelectronics’ ARM Cortex-M7-based MCU. The sensor fusion operation consumes less than 1 watt, making it 100 times more efficient than comparable systems, they added.

 

Leti plans to continue developing Sigma Fusion by adding sensor technologies – including lidar, radar, vision, ultrasound and time-of-flight cameras – into the system.

In essence, Sigma Fusion is designed to offer a “safe assessment” of the free space surrounding the vehicle, with fast, accurate environmental perception in real time on a mass-market MCU, according to Leti researcher Diego Puschini. The end game is to provide “predictable behavior and proven reliability to meet the automotive certification process.”
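One way to picture the kind of computation involved is a coarse occupancy grid: each sensor contributes evidence about which cells around the vehicle are occupied, and the fused grid yields the free-space assessment. The sketch below is our own simplified illustration of that general idea, not Leti’s Sigma Fusion algorithm; the grid size and probabilities are invented.

```python
import numpy as np

# Simplified occupancy-grid fusion sketch: each sensor contributes a
# log-odds estimate per cell; summing them fuses the evidence.
# Illustrative toy only, not Leti's Sigma Fusion implementation.

GRID = (20, 20)   # 20 x 20 cells around the vehicle

def log_odds(p: np.ndarray) -> np.ndarray:
    return np.log(p / (1.0 - p))

def fuse(sensor_probs):
    """Fuse per-sensor occupancy probabilities into one grid."""
    fused = np.zeros(GRID)
    for p in sensor_probs:
        fused += log_odds(np.clip(p, 0.01, 0.99))
    return 1.0 / (1.0 + np.exp(-fused))   # back to probability

if __name__ == "__main__":
    camera = np.full(GRID, 0.5)   # camera: uncertain background
    camera[5:8, 5:8] = 0.9        # ...but it sees an obstacle here
    lidar = np.full(GRID, 0.1)    # lidar: background mostly free
    lidar[5:8, 5:8] = 0.95        # ...and it confirms the obstacle
    grid = fuse([camera, lidar])
    free_cells = np.count_nonzero(grid < 0.2)
    print(f"peak occupancy={grid.max():.2f}, free cells={free_cells}/{GRID[0]*GRID[1]}")
```

Keeping the arithmetic this simple is what makes it plausible to run such an assessment within a 1-watt budget on a mass-market MCU, independently of whatever the AI driving stack decides.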