Follow up on AI & Self-Driving Cars
People smarter than me, who have done more research on the topic, have written some cool things
In the time since I published my piece on open vs. closed systems, a few really interesting pieces have come to my attention that say what I said, but more completely and with more research behind them. I’m going to focus on just one, from the WSJ, by the inimitable Christopher Mims. This passage sums it up quite nicely:
Today’s AIs are quite good at things like teaching themselves to stay within lines on a highway. The next step up is rule-based learning and reasoning (i.e., what to do at a stop sign). After that, there’s knowledge-based reasoning. (Is it still a stop sign if half of it is covered by a tree branch?) And at the top is expert reasoning: the uniquely human skill of being dropped into a completely novel scenario and applying our knowledge, experience and skills to get out in one piece.
And I actually think he put it even more succinctly and accurately in this tweet:
The “phase change” he describes is what I think of as the leap from “closed” reasoning (how to behave in environments that have been well defined by the training data) to “open” reasoning (how to reason from first principles in arbitrary environments). The algorithms we use today are only capable of generating ever-increasing performance in “closed” environments, and ever-larger training datasets let us push that boundary outward. But they cannot help us on the other side of the boundary, and there’s a lot on the other side.
Take this anecdote for example:
A computer vision system kept mistaking traffic lights being transported on the highway for actual traffic lights. One wonders what would have happened had those lights been on and set to red.
In today’s paradigm, the only hope self-driving cars have of handling this situation correctly is to gather enough examples of it (traffic lights being transported on the back of a truck down the highway) to reliably train the neural network not to recognize them as active traffic lights in those circumstances.
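To make the point concrete, here’s a deliberately toy sketch (hypothetical features and labels, nothing like a real perception stack): a one-nearest-neighbor “classifier” over hand-crafted features. It illustrates the closed-system problem, in that the model can only behave sensibly on cases that resemble its training data, and its answer on a rare case flips only once that case is added to the training set.

```python
# Toy illustration of the "closed environment" problem: a 1-nearest-neighbor
# classifier over made-up binary features. All feature names and labels here
# are hypothetical, chosen only to mirror the traffic-light-on-a-truck anecdote.

def nearest_label(example, training_data):
    """Return the label of the training example closest in Hamming distance."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(training_data, key=lambda item: dist(item[0], example))[1]

# Features: (light_shaped, has_three_lenses, mounted_overhead)
train = [
    ((1, 1, 1), "traffic light"),   # ordinary signal hanging over the road
    ((0, 0, 0), "not a signal"),    # ordinary vehicle
]

# The edge case: light-shaped, three lenses, but riding on a truck bed.
edge_case = (1, 1, 0)

# Nothing in the training data covers this, so it lands on the nearest
# thing it *has* seen, an ordinary traffic light.
print(nearest_label(edge_case, train))  # → traffic light

# Only after the rare scenario is added to the training set does the
# system's behavior change.
train.append(((1, 1, 0), "not a signal"))
print(nearest_label(edge_case, train))  # → not a signal
```

The fix here isn’t any new reasoning ability; it’s just more data. Every genuinely novel scenario on the road would need the same treatment, which is the crux of the problem described below.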
In reality, though, these novel situations are impossible to completely guard against.
To compensate, makers of “self-driving” cars have in some cases taken the opposite tack, building an ultramodern version of a train: one guided not by rails but by extremely high-quality 3D maps of very specific routes. But these “self-driving cars” would presumably be even more tethered to their specific routes than their counterparts.
As before, I will note that while this phase change may seem completely insurmountable to us today, it is probably no more insurmountable-seeming than recognizing birds in images seemed when XKCD published this comic in 2014.
Some more technical reading, if you’re so inclined:
Rethinking the maturity of artificial intelligence in safety-critical settings