When control systems fail …

Mar 28, 2019 | Hot Topics

We were driving along, quite safely, or so I thought. Nothing to see here, just piloting down the middle of our lane, about to merge onto the highway. There were a few cars on the road, but none close by when the car started going nuts: beeping inside the cabin, flashing indicators in the instrument panel. Then, quiet. My husband kept driving (it had all stopped, after all) while I pulled out the manual to try to figure out what the car was telling us. Turns out, the car thought we were about to hit someone or something in front of us. But there was absolutely nothing in range, so what had happened?

For us, it was frightening but not life-threatening. Control of the car wasn’t taken away from us and we were able to proceed as expected onto the highway. But: what if we had bought the next model up in the range, with the automatic braking system?

All this was a preamble to a post on control systems and the risk we’ve all seen over the last few months when they take control from drivers and, in horrifying cases, pilots. Let’s be clear: I know no more about the Boeing story than has been published in the New York Times and Wall Street Journal, so this isn’t really about Boeing, just inspired by it.

What’s a control system? Anything that takes input, usually from a sensor, applies logic (often software) to that data and then does something in response. A toilet tank is a very simple control system: the float rises as water fills the tank and shuts the inlet valve to stop it from over-filling. Your home thermostat is an electronic system and may involve software: it senses the temperature and turns on the furnace or air conditioner to bring the ambient temperature back to the desired one. In the case of my car, a sensor thought there was something in front of us and applied its response: alerting us to danger.
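To make that sense-decide-act loop concrete, here’s a minimal sketch of what a thermostat does, in Python. Every name and number in it (the setpoint, the deadband, the furnace object) is made up for illustration, not taken from any real product.

```python
# Minimal sense-decide-act loop for a hypothetical thermostat.
# All names and numbers are illustrative, not any vendor's API.

SETPOINT_C = 20.0   # desired room temperature, degrees C
DEADBAND_C = 0.5    # tolerance so the furnace doesn't cycle constantly

def control_step(read_temperature, furnace):
    """One pass of the loop: sense the room, compare to the setpoint, act."""
    current = read_temperature()              # sense
    if current < SETPOINT_C - DEADBAND_C:     # decide: too cold?
        furnace.turn_on()                     # act: bring the temperature back up
    elif current > SETPOINT_C + DEADBAND_C:   # decide: too warm?
        furnace.turn_off()
    # inside the deadband: do nothing, we're within operating parameters
```

The toilet tank does the same thing purely mechanically: the float is both the sensor and the thing that trips the actuator.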

What can go wrong? Many things. When our car wigged out, we decided it had to be road grit giving a false positive. It hasn’t happened again so, phew. But one more permanent fix could be to add a second sensor, so the software could check that the two readings match before taking any corrective action.
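Here’s what that cross-check might look like, as a toy sketch. The sensor readings, distances and thresholds are all invented for illustration; no real braking or warning system is this simple.

```python
# Toy example: require two independent distance sensors to agree before
# acting, as a guard against one dirty or faulty sensor.
# All thresholds are invented for this sketch.

AGREEMENT_TOLERANCE_M = 1.0   # how closely the two readings must match
BRAKING_THRESHOLD_M = 5.0     # distance at which the system would intervene

def obstacle_confirmed(reading_a_m, reading_b_m):
    """Treat the obstacle as real only if both sensors roughly agree."""
    if abs(reading_a_m - reading_b_m) > AGREEMENT_TOLERANCE_M:
        return False   # sensors disagree: log it for diagnostics, don't act
    return min(reading_a_m, reading_b_m) < BRAKING_THRESHOLD_M

# Road grit fools one sensor (2 m) while the other sees open road (60 m):
print(obstacle_confirmed(2.0, 60.0))   # False -> no phantom alert or braking
print(obstacle_confirmed(4.0, 4.5))    # True  -> both agree something is close
```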

It could also be a software flaw, where a reading leads to an inappropriate action. According to the New York Times, that’s what likely happened in Boeing’s case.

“Before a meeting with more than 200 pilots and airline executives at its factory in Renton, Wash., Boeing, for the first time, publicly laid out its proposed updates to the software as well as other changes to the 737 Max that it hopes will get the plane flying again. The changes would give pilots more control over the system and make it less likely to be set off by faulty data, two issues at the center of the investigations into the crashes.”

In other words: more control for pilots and less reliance on faulty data.

How are control systems designed? Usually well before there’s a 3D CAD model, engineers define the system of inputs and desired outcomes using 0D and 1D tools. They figure out which car, plane or sub-system elements can be measured with sensors and how those elements can be actuated, if needed, to bring the system back within operating parameters. They design a system of sensors, actuators, logic controllers and software to meet the design’s overall objectives. Then, in a detailed phase, they work out the interfaces between systems and, finally, create test cases to make sure that the control system works as intended. It’s not easy or simple, and it requires both expertise and sophisticated technology.
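That last step, the test cases, might look something like this: feed the controller a scenario and check that it responds as intended. The tiny stand-in controller and the expected behaviours below are hypothetical, just to show the shape of a verification case.

```python
# Hypothetical verification cases for a tiny thermostat-style controller:
# each test feeds the controller a scenario and checks the intended response.

SETPOINT_C = 20.0
DEADBAND_C = 0.5

def furnace_command(room_temp_c):
    """Stand-in controller: decide whether the furnace should run."""
    if room_temp_c < SETPOINT_C - DEADBAND_C:
        return "on"
    if room_temp_c > SETPOINT_C + DEADBAND_C:
        return "off"
    return "hold"   # inside the deadband: leave the furnace as it is

def test_cold_room_turns_furnace_on():
    assert furnace_command(17.0) == "on"

def test_room_inside_deadband_is_left_alone():
    assert furnace_command(20.2) == "hold"

def test_warm_room_turns_furnace_off():
    assert furnace_command(22.0) == "off"
```

Real verification programs run thousands of such scenarios, including the faulty-sensor and edge cases, against the actual controller hardware and software.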

Another concern, which may not play into the Boeing situation since the 737 MAXes are relatively new, is that control systems are very specific: this sensor gives this reading, leading to this response. All of that presupposes a certain model of sensor, a range of readings and a response system that’s the same as when the control system was designed. And that’s very unlikely in an aging overall system, like an older airplane where parts have been swapped out over the years and new ones added. Since this varies from plane to plane, it’s unlikely the control system could be modified specifically for each, so it’s up to the human part of the control system to supervise and override, if necessary.

Is there a PLMish message here? Absolutely. As PLM platforms manage more and more in-service systems, they can alert developers to unforeseen combinations of sensors and control software. They can oversee control system verification and validation programs and scenarios to anticipate and manage risk. But they cannot force training or other system upgrades; that’s up to the humans in charge.

The Boeing 737 MAX situation is horrible on many levels. Sympathies, of course, to the families of those lost in the crashes, and also to the control system designers whose systems didn’t prevent them. But what concerns this PLMer is that this will set back autonomous driving and similar efforts to wrest control of complex systems from humans. If we cannot trust the control systems, how can we trust the cars? My family voted “not ready yet” when we bought our last car. When will that trust grow?

The title image, from Flickr, is of cherry blossoms because nothing else seemed respectful enough.