Models play an important role in developing software for autonomous systems like self-driving cars; they are used to simulate and verify behavior, document the system, and generate code. Jonathan Sprinkle, Associate Professor of Electrical and Computer Engineering at the University of Arizona, gave a keynote on developing software for self-driving cars at the GOTO Amsterdam 2016 conference.
Sprinkle stated that software for self-driving cars, which initially was monolithic, is changing to become composable software, where components are assembled to build the needed functionality. Similarly, where previously data from sensors like radar, GPS and cameras was combined into one map using sensor fusion, the field is moving towards composable perception. This means that the car no longer assumes the entire world around it is static (like a map); instead, by adding new kinds of sensors, it perceives the things around it and can infer how those things may move.
Composable software can be modeled using processes that have inputs and outputs and feed into each other. Such process models focus on functionality, and are therefore less useful for non-functional behavior, argues Sprinkle:
While functional behaviors may be (fairly) easy to test, nonfunctional behaviors rarely compose—and these behaviors are the ones that must be guaranteed for complex cyberphysical systems that interact with humans.
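To make that distinction concrete, below is a minimal sketch of composing processes whose outputs feed the next stage's inputs. The `perceive`, `plan` and `control` stages are hypothetical stand-ins, not taken from the talk.

```python
# A minimal sketch (illustrative only) of composing processing stages as
# functions whose output dict becomes the next stage's input.

from typing import Callable, List

Stage = Callable[[dict], dict]

def compose(stages: List[Stage]) -> Stage:
    """Chain stages so each stage's output feeds the next stage."""
    def pipeline(data: dict) -> dict:
        for stage in stages:
            data = stage(data)
        return data
    return pipeline

# Hypothetical stages: perceive obstacles, plan a path, compute steering.
def perceive(data: dict) -> dict:
    data["obstacles"] = [(12.0, 1.5)]   # (x, y) in metres, stubbed sensor data
    return data

def plan(data: dict) -> dict:
    data["path"] = "shift left" if data["obstacles"] else "keep lane"
    return data

def control(data: dict) -> dict:
    data["steering_rad"] = -0.05 if data["path"] == "shift left" else 0.0
    return data

drive_step = compose([perceive, plan, control])
print(drive_step({"speed_mps": 20.0}))
```

Note that this kind of composition only captures the functional data flow; it says nothing about timing or resource use, which is exactly the non-functional behavior the quote warns about.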
A self-driving car can be considered a cyber-physical system which combines computation, communication and control: a network of interacting elements. This combination makes the system expensive to build and test, since the computation and communication requirements make it likely that adding a new component changes how other components communicate and how long computations take.
Sprinkle showed examples of how model-predictive control is used at runtime to steer a car's trajectory around an obstacle. If the calculation is too slow you will collide, since the moment of correction falls past the point where you meet the obstacle. If you use a simpler, less accurate model that can be calculated faster, you might still hit the obstacle because the resulting trajectory is not precise enough. You have to take computation time into account when calculating trajectories and deciding when to intervene, and you have to know how wrong the model you use can be in order to calculate corrections. All of this has to be done in real time.
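The timing trade-off can be illustrated with a toy back-of-the-envelope calculation (this is not the actual controller used on any vehicle; the speeds, gaps and braking rates are assumed values): while the planner runs, the car keeps closing on the obstacle, so a slow but accurate model can leave no room to react.

```python
# Toy illustration of why planner computation time must be budgeted:
# the correction computed at time t is only applied at t + t_compute,
# so the vehicle keeps moving while the optimizer runs.

def stopping_margin(gap_m: float, speed_mps: float,
                    t_compute_s: float, decel_mps2: float) -> float:
    """Distance left after reacting: gap minus travel during computation
    minus braking distance. Negative means a collision."""
    travel_during_compute = speed_mps * t_compute_s
    braking_distance = speed_mps ** 2 / (2.0 * decel_mps2)
    return gap_m - travel_during_compute - braking_distance

speed = 20.0   # m/s (about 72 km/h), assumed
gap = 45.0     # metres to the obstacle at detection, assumed
decel = 6.0    # m/s^2 maximum braking, assumed

for t_compute in (0.05, 0.5, 1.5):   # fast approximate vs. slow accurate model
    margin = stopping_margin(gap, speed, t_compute, decel)
    print(f"compute {t_compute:4.2f}s -> margin {margin:6.1f} m "
          f"({'ok' if margin > 0 else 'collision'})")
```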
Sensors currently used in self-driving cars are very expensive. We need cheaper sensors, said Sprinkle; they may not be able to do everything we need, but they can still do a lot of the things we don't like to do ourselves. Self-driving can be considered a continuum, which starts with things like self-parking and adaptive cruise control, moves through freeway driving, and goes up to fully autonomous driving under all circumstances.
InfoQ interviewed Sprinkle to get more details on how people decide to give control to autonomous systems, how to model software used in autonomous systems and the benefits that modeling brings, how test data is used to validate that the software driving a car is working properly, and techniques for writing reliable code.
InfoQ: You talked about giving control to autonomous systems. How do people decide how much and which control they want to give to such systems?
Jonathan Sprinkle: Many folks actually don’t realize how much control they are currently delegating to systems that are already around us — but not moving. For example, a modern heating system will keep our living space reasonably close to our desired set point: we control the set point, but the system manages it for us. When systems start to move, then long-term interaction is what we need to get comfortable with when we are not in control. Most of us are comfortable now with cruise control for a vehicle because we have employed it enough. However — it’s a big leap to go from cruise control to hands-off bystander in a pedestrian area. This is why most autonomous capabilities are being rolled out by task, instead of leaping out fully-formed like Athena from the head of Zeus.
InfoQ: How can you model software that is used in autonomous systems?
Sprinkle: The easiest software tasks to understand are related to planning and control. In planning, high-level objectives like a destination are transformed into other high-level concepts, like a desired route, and then that route is transformed into a desired speed and path. Along the way, these concepts are modified to account for things discovered locally (e.g., changing lanes, a crowded road), producing trajectories with desired velocities. Software models for these concepts can in some cases be construed as a UML state model, for example, where switches between driving modes are made based on locally-discovered obstacles and state. The role of a sequence model is also relevant, where messages received from various sensors trigger changes in state. These software components are typically event-triggered, meaning that a user provides desired inputs and the system responds, or a sensor detects something and the system responds. Thus it is no surprise that reactive models serve best.
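As an illustration of the event-triggered, state-model view described above, here is a small sketch of a mode switch. The mode names and events are invented for the example and are not taken from Sprinkle's system.

```python
# A hedged sketch of an event-triggered driving-mode state machine.

from enum import Enum, auto

class Mode(Enum):
    LANE_KEEPING = auto()
    LANE_CHANGE = auto()
    EMERGENCY_STOP = auto()

# (current mode, event) -> next mode; unlisted pairs keep the current mode.
TRANSITIONS = {
    (Mode.LANE_KEEPING, "obstacle_in_lane"): Mode.LANE_CHANGE,
    (Mode.LANE_CHANGE, "lane_clear"): Mode.LANE_KEEPING,
    (Mode.LANE_KEEPING, "obstacle_unavoidable"): Mode.EMERGENCY_STOP,
    (Mode.LANE_CHANGE, "obstacle_unavoidable"): Mode.EMERGENCY_STOP,
}

def react(mode: Mode, event: str) -> Mode:
    """Event-triggered: the mode only changes when a sensor event arrives."""
    return TRANSITIONS.get((mode, event), mode)

mode = Mode.LANE_KEEPING
for event in ("obstacle_in_lane", "lane_clear", "obstacle_unavoidable"):
    mode = react(mode, event)
    print(event, "->", mode.name)
```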
In the case of control, most software can be modeled as a component (functional block) where inputs are transformed into outputs. These tasks are typically time-triggered: regardless of the availability of new data from the system, the component should respond. This is necessary to ensure that all blocks can be scheduled, and that a deluge of data will not overwhelm the system and delay its actions. It also serves as a safety switch, so that if a sensor goes offline, the controller will realize it has failed and can take action. Here, block-diagram style models such as those seen in Simulink are great for providing the behaviors of the desired system.
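A minimal sketch of the time-triggered pattern follows: the loop runs at a fixed rate whether or not new sensor data arrived, and treats stale data as a failed sensor. The rate, staleness threshold and control law are illustrative assumptions, not values from a real system.

```python
# Time-triggered control block sketch with a staleness safety switch.

import time

PERIOD_S = 0.05        # 20 Hz control rate (assumed)
STALE_AFTER_S = 0.12   # treat the sensor as offline after ~120 ms of silence

# Simulated sensor that delivers two readings and then goes silent.
_readings = iter([1.2, 0.8])

def read_sensor_nonblocking():
    """Stand-in for polling a real sensor queue; None means no new data."""
    return next(_readings, None)

def control_step(measurement: float) -> float:
    """Stand-in for the functional block mapping inputs to outputs."""
    return -0.1 * measurement

start = time.monotonic()
last_measurement, last_time = 0.0, start
next_deadline = start
for _ in range(6):
    now = time.monotonic()
    fresh = read_sensor_nonblocking()
    if fresh is not None:
        last_measurement, last_time = fresh, now

    stale = (now - last_time) > STALE_AFTER_S
    command = 0.0 if stale else control_step(last_measurement)  # safe fallback
    print(f"t={now - start:4.2f}s stale={stale} command={command:+.2f}")

    next_deadline += PERIOD_S                       # run at a fixed rate,
    time.sleep(max(0.0, next_deadline - time.monotonic()))  # not on data arrival
```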
InfoQ: What are the benefits of software modeling?
Sprinkle: Models provide a tremendous ability to reason about system behavior without regard to exact inputs. We can perform checks for liveness (or deadlock), and also do scheduling of system components, by leveraging data found in the models. For developers, the use of models provides a unique way to document the system during development, especially if code generation is used to produce the final output artifacts.
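As a hedged sketch of the kind of check a model enables: if the model yields a "waits-on" dependency graph between components, a cycle in that graph points to a potential deadlock. The component names below are made up for the example.

```python
# Detect a potential deadlock as a cycle in a waits-on graph extracted
# from a (hypothetical) system model.

from typing import Dict, List

def has_cycle(deps: Dict[str, List[str]]) -> bool:
    """Depth-first search for a cycle in the waits-on graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in deps}

    def visit(node: str) -> bool:
        colour[node] = GREY
        for neighbour in deps.get(node, []):
            if colour.get(neighbour, WHITE) == GREY:
                return True                 # back edge: cycle found
            if colour.get(neighbour, WHITE) == WHITE and visit(neighbour):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in deps)

# "controller waits on planner, planner waits on perception" -- no cycle.
ok_model = {"controller": ["planner"], "planner": ["perception"], "perception": []}
# Making perception wait on the controller closes a loop -- potential deadlock.
bad_model = {**ok_model, "perception": ["controller"]}

print(has_cycle(ok_model))    # False
print(has_cycle(bad_model))   # True
```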
InfoQ: Can you give examples of how you use test data to validate that the software that drives a car is working properly?
Sprinkle: A great validation example is to check, when driving under human control, whether the autonomous controllers make reasonable decisions with respect to, say, desired velocity. In terms of verification, where we ensure that the system does not violate constraints, we typically verify that the model is correct using techniques from theory, and then we rely on code generation to ensure that the output system reflects that model.
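One way to read that validation idea, sketched below with invented log data and a hypothetical gap-keeping controller (this is not the CAT Vehicle test harness): replay logged human driving and flag moments where the controller's desired velocity diverges sharply from what the human driver actually did.

```python
# Replay a (fabricated, illustrative) driving log and compare the
# controller's desired speed against the human driver's speed.

# Each record: (timestamp_s, gap_to_lead_vehicle_m, human_speed_mps)
log = [
    (0.0, 60.0, 20.0),
    (1.0, 40.0, 18.0),
    (2.0, 25.0, 14.0),
    (3.0, 15.0,  9.0),
    (4.0,  8.0, 15.0),
]

def desired_speed(gap_m: float) -> float:
    """Hypothetical controller: keep roughly a two-second gap, 20 m/s max."""
    return min(20.0, gap_m / 2.0)

TOLERANCE_MPS = 5.0   # assumed threshold for "worth reviewing"
for t, gap, human_speed in log:
    controller_speed = desired_speed(gap)
    diff = abs(controller_speed - human_speed)
    flag = "CHECK" if diff > TOLERANCE_MPS else "ok"
    print(f"t={t:.0f}s gap={gap:4.1f}m human={human_speed:4.1f} "
          f"controller={controller_speed:4.1f} [{flag}]")
```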
InfoQ: Are there any other techniques for writing reliable code that you can recommend?
Sprinkle: Code reliability (when written by hand) requires a rigorous process and rigorous checks with regression data and desired outputs. This is why I urge folks in the area of human-in-the-loop systems to consider code synthesis from models where possible, since best practices for reliability and robustness to losing connectivity or a change in rate can be accounted for by new developments in the generated code, without needing to modify the overall logic of the system. Of course, if the logic is wrong, then no amount of coding brilliance will help!
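A hedged sketch of the regression-checking idea mentioned above: re-run a hand-written control function over recorded inputs and compare against previously approved outputs within a tolerance. The control law and recorded values are invented for illustration.

```python
# Regression check of a hand-written function against approved outputs.

import math

def steering_command(lateral_error_m: float) -> float:
    """The hand-written code under test (a toy proportional law here)."""
    return -0.3 * lateral_error_m

# Recorded inputs and the outputs approved on an earlier, trusted run.
regression_cases = [
    (0.0, 0.0),
    (0.5, -0.15),
    (-1.0, 0.30),
]

TOLERANCE = 1e-6
failures = [
    (inp, expected, steering_command(inp))
    for inp, expected in regression_cases
    if not math.isclose(steering_command(inp), expected, abs_tol=TOLERANCE)
]
print("regression passed" if not failures else f"failures: {failures}")
```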
Research on self-driving cars is done at the University of Arizona using a full-sized Ford Escape equipped with sensors and hardware. Software is simulated with domain-specific modeling before it is used in the vehicle, and data is collected during experiments and from real driving. Through this approach, it is possible to uncover design and integration issues before testing on the vehicle, and data from previous tests is used to ensure that unsafe decisions made before are not repeated. Research results are published on the CAT Vehicle website.