Robotic and moving eyes on self-driving cars keep pedestrians away, reduce accidents: Tokyo University AI study

Robotic eyes on autonomous vehicles could improve pedestrian safety, according to a new study conducted at the University of Tokyo (Todai) in Japan.

Participants played out scenarios in virtual reality (VR) and had to decide whether to cross a road while a vehicle was approaching. When that vehicle was fitted with robotic eyes, which either looked at the pedestrian or away, the participants were able to make safer or more efficient choices.

The Gazing Car

The cart was fitted with robotic eyes that could be moved in any direction, controlled by one of the research team. The windshield was covered to give the impression that there was no driver inside.

Self-driving vehicles seem to be just around the corner. Whether they’ll be delivering packages, plowing fields or busing kids to school, a lot of research is underway to turn a once-futuristic idea into reality.

While the main concern for many is the practical side of creating vehicles that can autonomously navigate the world, researchers at Todai have turned their attention to a more “human” concern of self-driving technology.

“There is not enough investigation into the interaction between self-driving cars and the people around them, such as pedestrians. So, we need more investigation and effort into such interaction to bring safety and assurance to society regarding self-driving cars,” said Professor Takeo Igarashi from the Graduate School of Information Science and Technology.

The four scenarios. In the experiment, participants had to decide whether or not the cart had noticed them and was going to stop. The images show the first-person view of a participant. In (a) the cart is paying attention to the participant (safe to cross); in (b) the cart is not paying attention to the participant (unsafe to cross); and in (c) and (d) the participant doesn’t know.

One key difference with self-driving vehicles is that drivers may become more of a passenger, so they may not be paying full attention to the road, or there may be nobody at the wheel at all. This makes it difficult for pedestrians to gauge whether a vehicle has registered their presence or not, as there might be no eye contact or indications from the people inside it.

So, how could pedestrians be made aware of when an autonomous vehicle has noticed them and is intending to stop? Like a character from the Pixar movie Cars, a self-driving golf cart was fitted with two large, remote-controlled robotic eyes. The researchers called it the “gazing car.” They wanted to test whether putting moving eyes on the cart would affect risky pedestrian behavior: in this case, whether people in a hurry would still cross the road in front of a moving vehicle.

The team set up four scenarios, two where the cart had eyes and two without. The cart had either noticed the pedestrian and was intending to stop or had not noticed them and was going to keep driving. When the cart had eyes, the eyes would either be looking towards the pedestrian (going to stop) or looking away (not going to stop).


Participants played out the scenario 40 times each, as if they were crossing a road on the University of Tokyo campus.

As it would obviously be dangerous to ask volunteers to decide whether or not to walk in front of a moving vehicle in real life (though during filming there was a hidden driver), the team recorded the scenarios with 360-degree video cameras, and the 18 participants (nine women and nine men, aged 18-49 years, all Japanese) played through the experiment in VR.

They experienced the scenarios multiple times in random order and were given three seconds each time to decide whether or not they would cross the road in front of the cart. The researchers recorded their choices and measured the error rates of their decisions, that is, how often they chose to stop when they could have crossed and how often they crossed when they should have waited.
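The two error types can be tallied with a minimal sketch; the trial data below is made up for illustration and does not come from the study:

```python
# Each trial records whether the cart was going to stop and whether the
# participant chose to cross. Two error types are counted:
#   dangerous    - crossed when the cart was not stopping
#   inefficient  - waited when the cart was going to stop
trials = [
    {"cart_stopping": True,  "crossed": True},   # correct: crossed safely
    {"cart_stopping": True,  "crossed": False},  # inefficient: waited needlessly
    {"cart_stopping": False, "crossed": True},   # dangerous: crossed anyway
    {"cart_stopping": False, "crossed": False},  # correct: waited
]

dangerous = sum(t["crossed"] and not t["cart_stopping"] for t in trials)
inefficient = sum(t["cart_stopping"] and not t["crossed"] for t in trials)

print(dangerous / len(trials), inefficient / len(trials))  # 0.25 0.25
```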

“The results suggested a clear difference between genders, which was very surprising and unexpected,” said Project Lecturer Chia-Ming Chang, a member of the research team. “While other factors like age and background might have also influenced the participants’ reactions, we believe this is an important point, as it shows that different road users may have different behaviors and needs, that require different communication ways in our future self-driving world.

“In this study, the male participants made many dangerous road-crossing decisions (i.e., choosing to cross when the car was not stopping), but these errors were reduced by the cart’s eye gaze. However, there was not much difference in safe situations for them (i.e., choosing to cross when the car was going to stop),” explained Chang. “On the other hand, the female participants made more inefficient decisions (i.e., choosing not to cross when the car was intending to stop) and these errors were reduced by the cart’s eye gaze. However, there was not much difference in unsafe situations for them.”

Ultimately the experiment showed that the eyes resulted in a smoother or safer crossing for everyone.

Real life role-play

The researchers imagined the scenario of someone wanting to cross the road in a hurry when not at a traffic light or crosswalk.

But how did the eyes make the participants feel? Some thought they were cute, while others saw them as creepy or scary. Many male participants reported that the situation felt more dangerous when the eyes were looking away, while many female participants said they felt safer when the eyes looked at them.

“We focused on the movement of the eyes but did not pay too much attention to their visual design in this particular study. We just built the simplest one to minimize the cost of design and construction because of budget constraints,” explained Igarashi. “In the future, it would be better to have a professional product designer find the best design, but it would probably still be difficult to satisfy everybody. I personally like it. It is kind of cute.”

The team recognizes that this study is limited by the small number of participants playing out just one scenario. It is also possible that people might make different choices in VR compared to real life. However, “Moving from manual driving to auto driving is a huge change. If eyes can actually contribute to safety and reduce traffic accidents, we should seriously consider adding them. In the future, we would like to develop automatic control of the robotic eyes connected to the self-driving AI, which could accommodate different situations,” said Igarashi.

UK roads to be readied to welcome self-driving cars in 2025

With the UK government backing self-driving technology through a 100 million pound ($118 million) investment starting next year, vehicles with self-driving features are expected to become a common sight by 2025.

The government is planning new legislation that will allow for the safe wider roll-out of self-driving vehicles by 2025. “This enables the UK to take full advantage of the emerging market of self-driving vehicles — which could create up to 38,000 jobs and could be worth an estimated 42 billion pounds,” the UK government said in a statement.

The government’s vision for self-driving vehicles will also include 34 million pounds for research to support safety developments and to prepare more detailed legislation that takes into account how self-driving cars perform in poor weather conditions and how they interact with pedestrians, other vehicles, and cyclists.

The government has confirmed 20 million pounds of the overall 100 million to help kick-start commercial self-driving services and enable businesses to grow and create jobs in the UK, following an existing 40 million pound investment.

The government said that self-driving vehicles could revolutionise public transport and passenger travel, especially for those who do not drive, better connect rural communities and reduce road collisions caused by human error. In future, these could be extended to provide tailored on-demand links from rural towns and villages.


First white-box testing model finds thousands of errors in self-driving cars

Researchers from Lehigh University and Columbia University have created DeepXplore, the first efficient testing approach for deep learning platforms used in self-driving cars, malware-detection and other systems.

How do you find errors in a system that exists in a black box?

That is one of the challenges behind perfecting deep learning systems like self-driving cars. Deep learning systems are based on artificial neural networks that are modeled after the human brain, with neurons connected together in layers like a web. This web-like neural structure enables machines to process data with a non-linear approach–essentially teaching themselves to analyze information through what is known as training data.

When an input is presented to the system after being “trained”–like an image of a typical two-lane highway presented to a self-driving car platform–the system recognizes it by running an analysis through its complex logic system. This process largely occurs in a black box and is not fully understood by anyone, including a system’s creators.

Any errors also occur in a black box, making it difficult to identify them and fix them. This opacity presents a particular challenge to identifying corner case behaviors. A corner case is an incident that occurs outside normal operating parameters. A corner case example: a self-driving car system might be programmed to recognize the curve in a two-lane highway in most instances. However, if the lighting is lower or brighter than normal, the system may not recognize it and an error could occur. One recent example is the 2016 Tesla crash which was caused in part…

Shining a light into the black box of deep learning systems is what Yinzhi Cao of Lehigh University and Junfeng Yang and Suman Jana of Columbia University–along with the Columbia Ph.D. student Kexin Pei–have achieved with DeepXplore, the first automated white-box testing of such systems. Evaluating DeepXplore on real-world datasets, the researchers were able to expose thousands of unique incorrect corner-case behaviors. They will present their findings at the 2017 biennial ACM Symposium on Operating Systems Principles (SOSP) conference in Shanghai, China on October 29th in Session I: Bug Hunting.

“Our DeepXplore work proposes the first test coverage metric called ‘neuron coverage’ to empirically understand if a test input set has provided bad versus good coverage of the decision logic and behaviors of a deep neural network,” says Cao, assistant professor of computer science and engineering.

In addition to introducing neuron coverage as a metric, the researchers demonstrate how a technique for detecting logic bugs in more traditional systems–called differential testing–can be applied to deep learning systems.

“DeepXplore solves another difficult challenge of requiring many manually labeled test inputs. It does so by cross-checking multiple DNNs and cleverly searching for inputs that lead to inconsistent results from the deep neural networks,” says Yang, associate professor of computer science. “For instance, given an image captured by a self-driving car camera, if two networks think that the car should turn left and the third thinks that the car should turn right, then a corner-case is likely in the third deep neural network. There is no need for manual labeling to detect this inconsistency.”

The team evaluated DeepXplore on real-world datasets including Udacity self-driving car challenge data, image data from ImageNet and MNIST, Android malware data from Drebin, and PDF malware data from Contagio/VirusTotal, as well as production-quality deep neural networks trained on these datasets, such as those ranked top in the Udacity self-driving car challenge.

Their results show that DeepXplore found thousands of incorrect corner-case behaviors (e.g., self-driving cars crashing into guard rails) in 15 state-of-the-art deep learning models with a total of 132,057 neurons trained on five popular datasets containing around 162 GB of data.

The team has made their open-source software public for other researchers to use, and launched a website, DeepXplore, to let people upload their own data to see how the testing process works.

More neuron coverage

According to a paper to be published after the conference (a preliminary version is available), DeepXplore is designed to generate inputs that maximize a deep learning (DL) system’s neuron coverage.

The authors write: “At a high level, neuron coverage of DL systems is similar to code coverage of traditional systems, a standard metric for measuring the amount of code exercised by an input in traditional software. However, code coverage itself is not a good metric for estimating coverage of DL systems as most rules in DL systems, unlike traditional software, are not written manually by a programmer but rather are learned from training data.”

“We found that for most of the deep learning systems we tested, even a single randomly picked test input was able to achieve 100% code coverage–however, the neuron coverage was less than 10%,” adds Jana, assistant professor of computer science.
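As a rough sketch of the idea (not the authors’ implementation), neuron coverage can be computed from recorded activations: a neuron counts as covered if at least one test input pushes its output above a threshold. The arrays below are stand-ins for activations collected during a network’s forward pass.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one input.

    `activations` is a list of (num_inputs, num_neurons) arrays, one per
    layer, as might be recorded from a network's forward pass.
    """
    covered = 0
    total = 0
    for layer in activations:
        fired = (layer > threshold).any(axis=0)  # covered by any input?
        covered += int(fired.sum())
        total += fired.size
    return covered / total

# Toy example: two layers, three test inputs.
layer1 = np.array([[0.9, 0.0, 0.2],
                   [0.0, 0.0, 0.7],
                   [0.1, 0.0, 0.0]])
layer2 = np.array([[0.0, 0.5],
                   [0.0, 0.0],
                   [0.0, 0.8]])
print(neuron_coverage([layer1, layer2], threshold=0.3))  # 0.6
```

DeepXplore’s test generation then searches for new inputs that flip uncovered neurons above the threshold, much as a fuzzer seeks inputs that reach unexecuted branches.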

The inputs generated by DeepXplore achieved 34.4% and 33.2% higher neuron coverage on average than the same number of randomly picked inputs and adversarial inputs (inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake) respectively.

Differential testing applied to deep learning

Cao and Yang show how multiple deep learning systems with similar functionality (e.g., self-driving cars by Google, Tesla, and Uber) can be used as cross-referencing oracles to identify erroneous corner-cases without manual checks. For example, if one self-driving car decides to turn left while others turn right for the same input, one of them is likely to be incorrect. Such differential testing techniques have been applied successfully in the past for detecting logic bugs without manual specifications in a wide variety of traditional software.
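The cross-referencing idea can be sketched in a few lines. This is a simplified stand-in for DeepXplore’s actual approach, which actively generates disagreement-inducing inputs rather than merely filtering an existing set for them:

```python
import numpy as np

def disagreement_inputs(predictions):
    """Return indices of inputs on which the models' predicted labels differ.

    `predictions` is a (num_models, num_inputs) array of class labels,
    one row per model, all evaluated on the same inputs.
    """
    preds = np.asarray(predictions)
    # An input is suspicious if not every model matches the first model's label.
    agree = (preds == preds[0]).all(axis=0)
    return np.where(~agree)[0]

# Three hypothetical steering classifiers (0 = left, 1 = right) on five frames.
votes = [[0, 1, 0, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 0, 1, 1]]
print(disagreement_inputs(votes))  # [2 4]
```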

In their paper, they demonstrate how differential testing can be applied to deep learning systems.

Finally, the researchers’ novel testing approach can be used to retrain systems to improve classification accuracy. During testing, they achieved up to 3% improvement in classification accuracy by retraining a deep learning model on inputs generated by DeepXplore compared to retraining on the same number of randomly picked or adversarial inputs.

“DeepXplore is able to generate numerous inputs that lead to deep neural network misclassifications automatically and efficiently,” adds Yang. “These inputs can be fed back to the training process to improve accuracy.”
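That feedback loop amounts to augmenting the training set with the generated inputs and their corrected labels, then retraining on the combined data. A minimal sketch with placeholder arrays standing in for the real datasets and DeepXplore’s output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder original training set (100 samples, 8 features, binary labels).
X_train = rng.random((100, 8))
y_train = rng.integers(0, 2, size=100)

# Placeholder corner-case inputs produced by a tool like DeepXplore,
# with their corrected labels supplied during triage.
X_gen = rng.random((10, 8))
y_gen = rng.integers(0, 2, size=10)

# Retraining is simply fitting again on the augmented set.
X_aug = np.concatenate([X_train, X_gen])
y_aug = np.concatenate([y_train, y_gen])
print(X_aug.shape)  # (110, 8)
```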

Adds Cao: “Our ultimate goal is to be able to test a system, like self-driving cars, and tell the creators whether it is truly safe and under what conditions.”