You build the machine to perform a particular task, and then you optimize your machine according to this metric. But its behavior is an open-ended aspect. And it’s an unknown quantity. There are behaviors that manifest themselves across different timescales. So [when you’re building it] maybe you focus on short timescales, but you can only know that long-timescale behavior once you deploy these machines.
-- Iyad Rahwan in Quanta Magazine
In Silicon Valley, we make a fairly basic widget that goes into an extremely complicated system. Though far from trivial, the engineering of the widget is a known quantity: half a million lines of code plus a number of discrete electronic components that provide input from the outside environment. The functionality of our widget is simple; the difficulty lies in how the higher-level system interacts with it (and vice versa).
This is systems engineering, and it is a daunting task. Sure, you can model it. Simulate it. Test it in a representative environment. But nothing can fully recreate the functioning system, because there are quite likely more unknown variables than known ones. So what is currently going on in the autonomous vehicle space (not where my widget goes, but the complexity is similar) is seemingly endless data collection - exponentially more than occurs at Boeing or Airbus.
I think Iyad is on to something with his re-characterization of systems engineering as a behavioral science. Perhaps we need to zoom out a bit and get a larger view of how complex systems interact with their environment. I have no insight into Google's autonomous vehicle group (Waymo), but I would be extremely curious to see how Google AI and Waymo are interacting, and whether that interaction is shortening the bug-discovery loop (that "long-timescale behavior" Iyad refers to).