There’s been a lot of handwringing recently about how we don’t really understand deep neural networks. The MIT Technology Review even published an article with the sensationalist headline “The Dark Secret at the Heart of AI.” It had the subheading “No one really knows how the most advanced algorithms do what they do. That could be a problem.”
Well, sure, it could be a problem. But let’s get one important point out of the way: no one really knows how people do what they do either, yet we have them do all sorts of things every day. People drive cars. People diagnose diseases. Does it matter that we don’t know how they do it?
And no one really knows how existing large-scale software systems work either. By that I don’t mean that we don’t understand small components of these systems. I mean that no one can grasp the whole in one go and understand how it works. We can follow one piece at a time, but the interactions among pieces are extremely complex. How does Google work when you run a search query?
I am not trying to dismiss the “black box” nature of neural networks, but I think instead of being alarmist we need to think about what it is we are actually worried about and what to do about it.
It all comes down to understanding failure modes and guarding against them.
For instance, human doctors make wrong diagnoses. One way we guard against that is by getting a second opinion. It turns out we use the same technique in complex software systems: get multiple systems to compute something and act only if their outputs agree. This approach is immediately and easily applicable to neural networks.
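To make the “second opinion” idea concrete, here is a minimal sketch in Python. The two toy “models” below are hypothetical stand-ins for independently trained networks; the point is only the agreement check, which acts on a prediction solely when all models concur and otherwise defers.

```python
def agreeing_prediction(models, x):
    """Return the shared prediction if all models agree, else None."""
    predictions = [m(x) for m in models]
    if all(p == predictions[0] for p in predictions):
        return predictions[0]
    return None  # disagreement: defer to a human or a fallback system

# Toy stand-ins for independently trained classifiers.
model_a = lambda text: "spam" if "win money" in text else "ok"
model_b = lambda text: "spam" if "money" in text else "ok"

print(agreeing_prediction([model_a, model_b], "win money now"))  # both say "spam"
print(agreeing_prediction([model_a, model_b], "money talk"))     # they disagree -> None
```

In practice the models should fail independently (different architectures, different training data) for the second opinion to add real safety, just as a second doctor adds little if they trained at the same desk.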
Other failure modes include hidden biases and malicious attacks (manipulation). Again, these are no different from the failure modes of humans and of existing software systems, and we have developed mechanisms for avoiding and/or detecting them, such as statistical analysis across systems.
There is always more work to be done to improve how we detect and avoid failure in systems. So let’s do this work for neural networks, and we will be able to use them effectively. As an important aside: I am pretty sure that our work with neural networks will, down the line, lead to a better understanding both of how they work *and* of how humans work.