How to ensure the safety of Self-Driving Cars: Part 2/5

source link: https://medium.com/@olley_io/how-to-ensure-the-safety-of-self-driving-cars-part-2-5-b4eafb067534

There’s an ongoing battle between LiDAR, Automotive Radar, and Cameras (and a few others too) for the title of the self-driving car’s “eyes:”

Figure 1: McKinsey & Company Evaluation of Autonomous Vehicle Sensors

Figure 2: National Instruments Visual of ADAS Sensors

But self-driving cars do more than just “see” the world. The cars are also equipped with onboard sensors that tell the vehicle more about the surrounding world and about itself. These sensors report how fast the car is moving; the G-force acceleration experienced by the vehicle in the forward-back, side-to-side, and up-and-down directions; the current steering angle; and lots more. It is the combination of these perception systems (camera, LiDAR, Radar) and sensor systems (GPS, IMU, wheel speed, etc.) that makes up the inputs to the “sense” block of the self-driving car’s AV stack.

Part 2: How well can a self-driving car sense the world?

Often included in the “sense” block of the AV Stack is the integration of all of the sensors, which lets the vehicle make determinations about the environment. Examples include “there’s a pedestrian coming out of the bushes on the left side, moving toward the vehicle at 3 mph,” or “it’s nighttime and raining,” or even “the vehicle is driving up a 10% grade while turning at a 3-degree steering angle.” The integration of the sensors together is called “sensor fusion,” and the determination of what is going on in the environment goes by many names, but is commonly referred to as “Scene Understanding.”

There is a huge industry focus on developing this layer of the AV Stack. Engineers want the car to be able to see and understand the world with the “intelligence” of humans. Some of the most brilliant people are working on software algorithms that fall under the “Artificial Intelligence,” “Machine Learning,” and “Deep Learning” buckets to allow the car to understand what it sees:

Figure 3: AI, Curt Hopkins, Managing Editor, Hewlett Packard Labs

So, with all of these sensors and algorithms, how can we be sure that everything’s working correctly? We break each component, or some grouping of components, into its inputs and outputs and verify it is doing what it was intended to do. We run tons of tests to gather a lot of data, and then run statistics on that data to prove, with some confidence, that the component or group functions correctly.
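To make “with some confidence” concrete, here is a minimal sketch (in Python, with hypothetical pass/fail counts) of how a batch of test results might be turned into a confidence interval on a component’s pass rate.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a pass rate; z = 1.96 gives ~95% confidence."""
    if trials == 0:
        raise ValueError("need at least one trial")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half_width = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half_width, center + half_width

# Hypothetical example: a component passed 9,940 of 10,000 test cases.
low, high = wilson_interval(9_940, 10_000)
print(f"observed pass rate: 99.4%, 95% interval: {low:.4f} to {high:.4f}")
```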

Below we will break down each of the components and determine how we verify them.

Cameras

Most camera testing is done at the company that makes the camera. A camera is fundamentally a sensor that captures a grid of color points in space and arranges them into an image, often referred to as an image array. This image array is converted into a digital signal and passed along to the hardware that does sensor fusion and scene understanding. Camera technology is fairly mature, and the processes for verifying that the camera converts the scene in front of it into the correct digital output are well understood, so it should not be an area of concern for autonomous vehicles.

LiDAR

LiDAR systems for autonomous vehicles are relatively new, with the first major player, Velodyne, only really demonstrating the capability at the 2005 DARPA Grand Challenge. LiDAR technology is evolving quickly, with the goal of making the LiDAR sensor economical and compact. With this technology shift, companies making LiDAR systems are having to adjust the ways they verify their systems.

LiDAR is a laser-light point-and-shoot method for sensing the world. A transmitter emits a pulse of light and waits for that light to bounce off an object; since the speed of light is known, the sensor can determine how far away the object is from the time that passes between sending the pulse and receiving its reflection. LiDAR units broaden their field of view either by spinning a bank of lasers around in a circle or, more recently, by using a stationary set of lasers that spread out across a field of view, called “Solid State LiDAR.” After all the light is received, the LiDAR system sends an array of direction and distance information, referred to as a “point cloud,” back to the hardware for sensor fusion and scene understanding.
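As a rough illustration of the arithmetic behind that distance calculation, here is a small sketch (hypothetical numbers) that turns a round-trip time into a range and combines it with the beam direction to form one point of a point cloud.

```python
import math

C = 299_792_458.0  # speed of light in m/s (vacuum value; close enough in air)

def range_from_round_trip(t_seconds: float) -> float:
    """The light travels out and back, so the one-way distance is c * t / 2."""
    return C * t_seconds / 2.0

def polar_to_xyz(r: float, azimuth_deg: float, elevation_deg: float):
    """Convert one beam's (range, azimuth, elevation) into an x, y, z point."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# Hypothetical return: the reflection arrives 200 ns after the pulse was sent.
r = range_from_round_trip(200e-9)                      # roughly 30 m
point = polar_to_xyz(r, azimuth_deg=15.0, elevation_deg=-2.0)
print(f"range = {r:.2f} m, point = {point}")
```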

In order to verify that the LiDAR system acts appropriately, an engineer can set up an artificial scene with predetermined objects at known distances and check the LiDAR’s results against it. More advanced test methodologies involve having another light source feed light into the LiDAR under test with a time-based pattern that represents a known field of view; the engineer then compares the LiDAR’s output with the known, simulated environment. This type of testing is called “Hardware in the Loop” (HIL) because a test system supplies a known stimulus to the hardware under test, and the feedback from that hardware goes back to the test system, making a “loop.”
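In a HIL setup like the one described, the pass/fail logic can be as simple as comparing each reported range against the range the simulator injected. A minimal sketch, with a hypothetical tolerance and hypothetical data:

```python
def check_ranges(measured, expected, tol_m=0.05):
    """Flag any beam whose reported range deviates from the injected ground truth."""
    failures = []
    for beam, (m, e) in enumerate(zip(measured, expected)):
        if abs(m - e) > tol_m:
            failures.append((beam, m, e))
    return failures

# Hypothetical test: three injected targets at 10 m, 25 m, and 40 m.
expected = [10.00, 25.00, 40.00]
measured = [10.02, 24.97, 40.31]   # the last beam is out of tolerance
for beam, m, e in check_ranges(measured, expected):
    print(f"beam {beam}: reported {m:.2f} m, expected {e:.2f} m -> FAIL")
```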

There are many more ways engineers verify the correct functionality of a LiDAR system, including stress testing the unit under various weather conditions and ensuring that it functions appropriately with the different electrical signals going to and from the unit. In all, this can be a very involved procedure, and engineers picking LiDAR systems for their self-driving cars should do their research before selecting a unit. Suppliers will provide data on the life expectancy, accuracy, and failure expectations of their units, but engineers putting LiDAR systems in vehicles must perform their own safety tests as well.

Radar

Radar has been around forever. It is similar to LiDAR in that it is a “point-and-shoot” technology, but it uses radio waves rather than laser light. Radar lends itself well to long-distance object detection but is typically not very precise.

So how do you test this thing? Well, it’s just like LiDAR, but since the RADAR technology is less expensive and better understood, some companies are already creating tools for this purpose:

Figure 4: National Instruments Vehicle Radar Test System

Again, engineers need to work with Radar suppliers to make sure the suppliers stringently test their devices, and then retest the unit themselves once it’s onboard their vehicle.

Vehicle Sensors

Vehicle sensors have been built into cars for quite some time, but only since 1993 has the International Organization for Standardization (ISO) specified that the way a sensor talks to a car is through a digital two-wire protocol developed by Robert Bosch GmbH called the “CAN bus:”

Figure 5: CAN Bus, CSS Electronics

Sensors that sit on the CAN bus are plentiful. They include accelerometers, Inertial Measurement Units (IMUs), vehicle speed sensors, wheel sensors, joint angle sensors, tire pressure sensors, and many, many more. The ISO standard (ISO 11898) is what ensures that the makers of these sensors verify them before they ship to a customer.

If you’re retrofitting a vehicle for automation, you’ll need to plug into that CAN bus and make sure you’re able to decipher its signals and send your own. After all, the vehicle must read these signals to operate appropriately. In a “Drive By Wire” (DBW) vehicle, there are no mechanical connections between the accelerator, brake, or steering wheel and the engine and wheels, only digital ones. The CAN bus is what communicates the intentions of the driver to the vehicle.
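For a retrofit, “plugging into the CAN bus” looks roughly like the sketch below, which uses the python-can library. The channel name, the message ID, and the speed scaling are all hypothetical; on a real vehicle they come from the manufacturer’s DBC file or careful reverse engineering.

```python
import can  # pip install python-can

# Hypothetical setup: a SocketCAN interface named "can0" on a Linux machine.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

WHEEL_SPEED_ID = 0x1A0          # hypothetical arbitration ID for wheel speed
KPH_PER_BIT = 0.01              # hypothetical scaling from the vehicle's DBC file

try:
    while True:
        msg = bus.recv(timeout=1.0)      # wait up to 1 s for the next frame
        if msg is None or msg.arbitration_id != WHEEL_SPEED_ID:
            continue
        # Hypothetical layout: first two bytes are a big-endian unsigned speed.
        raw = int.from_bytes(msg.data[0:2], byteorder="big")
        print(f"wheel speed ≈ {raw * KPH_PER_BIT:.2f} km/h")
finally:
    bus.shutdown()
```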

If you’re building an autonomous vehicle from the ground up, you’ll need to ensure the appropriate selection and mounting of sensors. This must also be verified by driving the vehicle or simulating a drive with HIL testing, and then analyzing the results from the sensors. Same goes for any additional sensors added to an existing vehicle.

What if one of the sensors is off?

Here’s where the engineers again need to step in. The algorithms in the sense layer must sanity-check the sensors at some predetermined interval and make adjustments if necessary. There should also be some redundancy in the sensors.

If one of the sensors malfunctions or disconnects, or if your vehicle is struck and a camera shifts, what happens? Well, if the system “self-calibrates” its sensors, that should catch some of these issues. Otherwise, the system may just need to tell the rest of the software stack that a sensor is malfunctioning, and the rest of the AV stack can decide what to do.

Engineers need to make sure that the decision on how to handle a malfunctioning sensor is correct. As with LiDAR testing, an engineer can send simulated signals representing a failed sensor to the sensor fusion hardware and see how the system responds (HIL). Even before that happens, though, the engineer can feed simulated data in software to the relevant segment of code in a development environment and see how that code responds. This method is called “Software in the Loop,” or SIL, testing because the program testing the code sends data to the software under test and gets a response back, again making a “loop.”
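A SIL test for this failure handling can be as small as a unit test that feeds frozen or missing readings into the sanity-check code and asserts that a fault is raised. The names below (WheelSpeedReading, check_sensor_health) are illustrative, not taken from any particular AV stack.

```python
import math
from dataclasses import dataclass

@dataclass
class WheelSpeedReading:            # hypothetical sensor sample
    timestamp_s: float
    speed_mps: float

def check_sensor_health(readings, max_gap_s=0.1):
    """Tiny sanity check: flag NaN values and gaps where the sensor went silent."""
    faults = []
    for prev, curr in zip(readings, readings[1:]):
        if math.isnan(curr.speed_mps):
            faults.append(("nan_value", curr.timestamp_s))
        if curr.timestamp_s - prev.timestamp_s > max_gap_s:
            faults.append(("stale_data", curr.timestamp_s))
    return faults

def test_detects_dropout():
    # Simulated stream: 50 Hz updates, then the sensor goes silent for 0.5 s.
    readings = [WheelSpeedReading(t * 0.02, 10.0) for t in range(10)]
    readings.append(WheelSpeedReading(readings[-1].timestamp_s + 0.5, 10.0))
    assert ("stale_data", readings[-1].timestamp_s) in check_sensor_health(readings)

test_detects_dropout()
print("sensor-dropout SIL test passed")
```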

All these tests are run under various conditions and a ton of data is produced. That data is categorized, labeled, and analyzed to provide a statistical determination about how well the vehicle recognized the sensor failure and how it responded.

Scene Understanding: Static or Semi-Static Objects

Ah, more software running on hardware to test! The software for scene understanding can be quite complex and can even be a “black box” to many of the engineers developing it. Regardless, it is up to these engineers to make sure it is objectively safe.

An engineer testing scene understanding will verify the software many times during development. They can even split the problem into stages, such as first checking “is there an object?”, then “what is that object?”, and finally “what does that mean for me?”

Thousands of simulations with images, LiDAR data, and Radar data can be fed in software to the scene understanding code to check that it interprets the scene correctly. Often, this requires a set of “training data” where the result is already known (that’s a dog). A large batch of data is then analyzed, and again a statistical probability that the scene understanding was correct is produced.
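That “statistical probability” usually starts with something as simple as per-class accuracy over a labeled test set. A minimal sketch with hypothetical labels and predictions:

```python
from collections import Counter

def per_class_accuracy(labels, predictions):
    """Overall and per-class accuracy of a classifier over a labeled test set."""
    total, correct = Counter(), Counter()
    for truth, guess in zip(labels, predictions):
        total[truth] += 1
        if truth == guess:
            correct[truth] += 1
    overall = sum(correct.values()) / len(labels)
    return overall, {cls: correct[cls] / n for cls, n in total.items()}

# Hypothetical ground truth vs. model output for six labeled frames.
labels      = ["pedestrian", "car", "car", "dog", "car", "pedestrian"]
predictions = ["pedestrian", "car", "car", "car", "car", "cyclist"]
overall, per_class = per_class_accuracy(labels, predictions)
print(f"overall accuracy: {overall:.2f}")   # 0.67 on this tiny hypothetical set
print(per_class)
```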

Engineers can take this one step further by simulating camera, LiDAR, and Radar signals to the sensors on the vehicle and testing if the scene understanding system got the scene correct. This is the HIL approach.

To test scene understanding, engineers need tons of images and point clouds. A single one of these takes up significant space, and a car operating in real time would fill 4 TB of storage in about an hour and a half of driving, equivalent to 250 million pages of paper, or 83 days of watching DVDs straight (source).
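A quick back-of-envelope check on those numbers, taking the figures as stated:

```python
# Rough sanity check on the data volume quoted above.
total_bytes = 4e12              # 4 TB
drive_seconds = 1.5 * 3600      # an hour and a half of driving
rate_mb_per_s = total_bytes / drive_seconds / 1e6
print(f"≈ {rate_mb_per_s:.0f} MB of sensor data per second")   # ≈ 741 MB/s
```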

Figure 6: Intel Car Data (Source)

As you can imagine, managing all this data for testing a vehicle is a big challenge. But engineers are working through it and can provide statistics on just how good their scene understanding algorithms are. This should give the public confidence.

Scene Understanding: Dynamic and Real-Time Objects

This is just like static objects, but now you need multiple back-to-back images and point clouds to understand how objects are moving through space. So not only do you need to correctly identify objects, you also need to know how they’re moving and where they will likely be next, based on physics and reasoning. This can be especially challenging.
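The simplest version of “where they will likely be next” is a constant-velocity extrapolation from two consecutive detections. Real stacks use far more sophisticated tracking (Kalman filters, learned motion models), but the sketch below, with hypothetical positions, shows the basic idea.

```python
def predict_constant_velocity(p_prev, p_curr, dt_observed, dt_ahead):
    """Extrapolate a tracked object's 2-D position assuming constant velocity."""
    vx = (p_curr[0] - p_prev[0]) / dt_observed
    vy = (p_curr[1] - p_prev[1]) / dt_observed
    return (p_curr[0] + vx * dt_ahead, p_curr[1] + vy * dt_ahead)

# Hypothetical: a pedestrian seen at (10.0, 2.0) m and, 0.1 s later, at (10.0, 1.9) m.
future = predict_constant_velocity((10.0, 2.0), (10.0, 1.9), dt_observed=0.1, dt_ahead=1.0)
print(f"predicted position in 1 s: {future}")   # (10.0, 0.9): drifting toward the lane
```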

Just as with static data, engineers must simulate dynamic environments with SIL and HIL and prove that the scene understanding made the right prediction. You now need series of images over time, and you need to test this quite stringently, because a head-on collision with another moving body can be deadly.

Luckily, engineers are figuring this one out too, but they need more and more data. For some of these challenges, the algorithms that engineers use are not fully mature, but there’s daily progress on that front. This is one the public should be keenly aware of.

Scene Understanding: Vehicle-Road Scenario

OK, so now you’re confident the robot “driver” of the vehicle sees the road correctly. What else does it need to do? Well, it needs to figure out the situation of the car in space at any given time. You as a driver do this all the time. You can easily tell if you’re going up or down a hill, if you’re in a turn or going straight, or if the roads are covered in snow or clear. More subtle things you may pick up on are being pressed into the back of your seat, pushed toward the windshield, or swayed off to the side by the Gs the vehicle is imparting on you.

A vehicle can figure out all these things through sensor fusion. It can read the linear accelerations from the IMU and tell the angle of the car and how fast it’s pitching forward and back, rolling side to side, or yawing through a turn.

Figure 7: SAE Axis System

A combination of perception and acceleration information can reveal the inclination or bank of the road, even dips and crests. Perception plus wheel speed versus vehicle speed allows the vehicle to estimate the coefficient of friction between the road and the tires (albeit, this one can be quite challenging).
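For the inclination part, a common first-order trick is to recover pitch and roll from the gravity vector measured by the IMU’s accelerometer while the car is not accelerating hard. A minimal sketch with hypothetical readings; it assumes a sensor that reads (0, 0, +g) when sitting level, and sign conventions vary between IMUs, so the datasheet has the final word.

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll (degrees) from a stationary accelerometer reading.
    Only valid when the vehicle is not accelerating hard, and assumes a sensor
    that reads (0, 0, +g) when level."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Hypothetical stationary reading on a grade: the magnitude is still ~9.81 m/s^2.
pitch, roll = pitch_roll_from_accel(-1.70, 0.00, 9.66)
print(f"pitch ≈ {pitch:.1f}°, roll ≈ {roll:.1f}°")   # ≈ 10° of pitch, 0° of roll
```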

Since we already trust the sensors, we test the ability to understand the road by feeding the software simulated data that represents certain road conditions (SIL), by sending simulated signals to the sensors that represent road conditions (HIL), and even by putting the vehicle on a rig called a chassis dynamometer and verifying the results the system spits out:

Figure 8: Meidensha Chassis Dyno (Source)

For this one, there is no ISO standard, and the Society of Automotive Engineers (SAE) has not recommended an approach to guarantee that the vehicle knows its own state. Many autonomous vehicle companies rely only on perception and GPS map information to provide it. Vehicle makers will need to get better at this in the future to ensure the safety of their vehicles, and this will become especially evident when we discuss path planning.

The Hardware that hosts the Stack

There’s another ongoing battle about the appropriate hardware to host all this software described above. Some of the many players in the game are the CPU, GPU, FPGA, ASIC, TPU, IPU, MCU, etc. They have their tradeoffs, and some of them can be loosely described by this image:

Figure 9: Silicon Alternatives (Source)

In today’s (2018) world of self-driving-car prototypes, we see most cars built using a combination of CPUs and GPUs, though in the future this will likely be some combination of the hardware contenders:

Figure 10, 11, 12: Adrian Colyer (Source)

So, what needs to be tested in hardware? Well, in the images above, you see a metric called “Latency” and a metric called “Power.” Latency is how long it takes the software on the hardware to make a decision. You want to minimize this. Power is the rate at which the hardware draws electrical energy while making that decision. You want to minimize this as well, because more power consumption means less driving range, whether it’s a gas or electric vehicle. Certain decisions are higher priority than others, as we’ll also discuss in Part 3 and Part 4. For example, you need to know immediately if there’s an emergency scenario, but you may only need to check the temperature of the outside air every couple of seconds, since temperature changes much more slowly.

You test both latency and power by giving the hardware a “load,” some task to do while you watch it perform. You measure the voltage across and the current drawn by the hardware while it runs the task, and you multiply them together to get power. You also benchmark how long each task takes to complete and log that too.

Latency can be a double-edged sword, though. You may have two pieces of hardware where one runs much faster than the other 90% of the time but slower the other 10%, while the second piece of hardware takes exactly the same time through all tests. The consistency of the latency is called determinism (the variation itself is often called jitter). What you need for a mission-critical task is a low-latency, deterministic system. You can offload non-mission-critical items to higher-latency and/or non-deterministic systems, ideally ones that consume as little power as possible.
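Putting the last two paragraphs together: a benchmark harness only needs to time many repetitions of the workload and then summarize both the latency distribution (the spread between the mean and the worst case is a rough measure of determinism) and the power draw. The sketch below uses a stand-in workload, and the voltage and current values are hypothetical; on real hardware they would come from a power analyzer or the board’s own telemetry.

```python
import statistics
import time

def benchmark(task, n_runs=1000):
    """Run a workload repeatedly and report latency statistics in milliseconds."""
    latencies_ms = []
    for _ in range(n_runs):
        start = time.perf_counter()
        task()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    latencies_ms.sort()
    return {
        "mean_ms": statistics.mean(latencies_ms),
        "p99_ms": latencies_ms[int(0.99 * (n_runs - 1))],
        "worst_ms": latencies_ms[-1],
    }

def fake_inference():
    # Stand-in for the real perception workload being benchmarked.
    sum(i * i for i in range(20_000))

stats = benchmark(fake_inference)

# Hypothetical external measurement taken while the load was running:
voltage_v, current_a = 12.0, 3.4
print(f"latency: mean {stats['mean_ms']:.2f} ms, "
      f"p99 {stats['p99_ms']:.2f} ms, worst {stats['worst_ms']:.2f} ms")
print(f"power draw under load ≈ {voltage_v * current_a:.1f} W")
```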

So, an engineer must make the right decision on the hardware selection and test it themselves to make sure they get the response they need while consuming the smallest amount of power. Lots of data again!

Conclusion

So where does this leave us? Well, it should be clear that some combination of SIL, HIL, and real-world testing is required to make sure the sensing system works appropriately. It should also be clear that doing this requires massive amounts of data, a ton of time, and a bunch of tools to help engineers navigate it all. Some of the tests are standardized; others are not. For us to be sure the vehicle senses the world correctly, we’re going to rely on this process improving over time until each element is objectively better than a human driver.

Read the Rest of the Series: How to ensure the safety of Self-Driving Cars

Part 1 — Introduction

Part 2 — Sensing

Part 3 — Planning

Part 4 — Acting

Part 5 — Conclusion

