While the advancements and merits of present-day deep learning systems are undeniable, there is also a lot of hype, overstatement and general BS on the subject. In the race for flashy headlines and clickbait, many publications, even technical ones, tend to overestimate the present-day capabilities of the technology and understate its limitations and caveats. One amusing post on a message board dedicated to AI described a problem of analytic geometry and asked whether it could be solved with deep learning; the eventual solution had nothing to do with neural or convolutional nets, but with solving a linear system of equations. Sometimes I really think the reason people get excited about deep learning is that they imagine it is somehow a shortcut to laziness and universal basic income, which someone else will figure out and everybody will profit from.
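As an aside, here is roughly what that forum answer amounts to. The original problem wasn't quoted, so this is a hypothetical stand-in: finding the intersection of two lines, a textbook analytic geometry question that reduces to a 2x2 linear system and needs exactly zero neurons.

```python
# Hypothetical stand-in for the forum problem: where do the lines
# 2x + y = 5 and x - y = 1 intersect? Plain linear algebra
# (Cramer's rule) -- no neural net required.
a1, b1, c1 = 2.0, 1.0, 5.0   # 2x + y = 5
a2, b2, c2 = 1.0, -1.0, 1.0  # x - y = 1

det = a1 * b2 - a2 * b1        # determinant of the 2x2 system
x = (c1 * b2 - c2 * b1) / det
y = (a1 * c2 - a2 * c1) / det
print(x, y)  # 2.0 1.0 -- the intersection point
```

Ten lines, closed-form, exact. Throwing a network at this would only add training time and approximation error.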
In this post, I’ll write down some of the milestones I believe must be reached before artificial brains really start threatening 90% of the jobs.
Education: Math and Computer Literacy
Making semi-autonomous, adaptive, learning machines practical requires a lot of work. And it’s not the kind of work demanded by repetitive jobs or by the railroad revolution in the 19th-century US. No. It’s the type of incremental, oftentimes frustrating, mostly boring research work. No immediate hockey-stick returns. No IPOs. No getting rich quick. And this research has to do with math, computer science, electronics, neurology – the kind of subjects that usually aren’t cool at parties. To hit the timeline of having machines do 60-80% of the work in 20-30 years, a lot more of the active population needs to be engaged in this area of work and research. Developers, mathematicians and data scientists need to become a lot more widespread if we’re going to have a workforce capable of reaching such an ambitious objective.
Programming and IT need to be taken a lot more seriously as a core competency of functional literacy, starting from the same ages as reading and writing. The future of our civilisation is not necessarily un dolce far niente (a sweet idleness) for humans while machines do all the work. More likely, it’s a symbiotic future, where humans get to do the cooler, more challenging, more value-adding parts of jobs, while machines take care of the rest. Therefore, not being able to interface with machines effectively 20-30 years from now will be the equivalent of illiteracy. If in the ’80s and ’90s studying IT or programming made you the exception, in the 2030s and 2040s NOT having such knowledge will label you as below-average. As such, before preparing for 340-day vacations, parents should take steps to educate their children for a tomorrow where blue collar is synonymous with being unable to properly interface with machines.
And one more thing: teaching students how to translate English to code is not enough. They need to be taught analytical and critical thinking, the powers and constraints of logic and the tools of math, be it the principles of calculus which sit at the foundation of neural nets (see gradient descent) or the structures of algebra (which show you that there is fundamentally no difference between weather readings, images and stock market ticks).
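For the curious, the core idea of gradient descent fits in a few lines. This is a minimal sketch on a toy one-variable function, not a neural net: minimize f(w) = (w - 3)² by repeatedly stepping against its derivative f'(w) = 2(w - 3).

```python
# Minimal gradient descent on a toy function f(w) = (w - 3)^2.
# The same principle -- follow the negative gradient downhill --
# is what training a neural net does, just in millions of dimensions.
w = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)

for _ in range(100):
    grad = 2 * (w - 3)  # derivative of f at the current w
    w -= lr * grad      # step against the gradient

print(w)  # converges toward 3.0, the minimum of f
```

A student who understands why this loop converges (and when the step size makes it diverge) understands more of deep learning than one who has only called a library.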
The Brain-Machine Interface
If we are to interface with machines – to instruct them, to teach them and to make them work for us more efficiently – the keyboard will no longer cut it. Present-day input devices, whether touch screens, keyboards or voice recognition bots, only allow ingesting about 10-20 bytes of information per second. Compared to machine-to-machine communication, which ranges in the Gbps, that’s practically nil.
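The back-of-the-envelope arithmetic behind that claim, taking the generous end of the human estimate (20 bytes/s) against a 1 Gbps machine link:

```python
# Rough ratio between human input bandwidth and a machine-to-machine
# link, using the figures above (20 bytes/s is the generous end).
human_bps = 20 * 8               # ~20 bytes/s expressed in bits/s
machine_bps = 1_000_000_000      # 1 Gbps

ratio = machine_bps / human_bps
print(f"machines exchange data ~{ratio:,.0f}x faster than we can type")
```

Six to seven orders of magnitude – that is the gap any brain-machine interface has to start closing.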
The best depiction of what we imagine as a solution to this problem is what the half-visionary, half-conman Elon Musk labelled the neural lace. It’s a fancier, more elegant name for the cable that Matrix characters jammed into the backs of their heads. Basically, it needs to be a higher-bandwidth interface between the biological brain and the digital one.
Of course, we are not yet sure where it would be attached, how it would be installed, what the dangers would be and how often you’d need lace-replacement surgery. What we can say is that both deep learning and our symbiosis with machines in general would benefit immensely from even an interface that carries tens of kilobytes per second.
Purpose-Built Physical Brain
I’m not talking about brains-in-jars, funny as that may be. I’m talking about the fact that we are currently using general-purpose processors (either CPUs or GPUs) for solving problems related to deep learning.
While GPUs undoubtedly give a boost to training neural nets, here are a few of their shortcomings:
1. They keep data storage (memory) and processing (the processor) separate, which makes the system spend roughly 60-80% of its energy just moving data between processor registers and memory.
2. They only allow performing operations between linear structures (vectors and matrices) in one processing heartbeat.
3. They have relatively low physical parallel connectivity between processing cores.
4. They don’t change their circuitry (what is connected to what) over time. In other words, they lack the equivalent of neuroplasticity. Let’s call it circuit-plasticity: the ability of a machine to change the state of its components, together with the links/connections between said components, as a result of executing its program.
One of the solutions to (1) is the memristor, an electronic component which can both store a state (like memory) and perform operations on stored data (like a processor) at the same time. You know, kind of like neurons tend to do both. This would allow us to simulate neural architectures with more speed and more fidelity to the natural model.
Of course, that leaves us with the challenge of (2), (3) and (4) – how can we make a physical machine that rewires itself over time? Well, maybe 3D printing has something to do with it.
So let’s say we have these integrated circuits with networks of millions or even billions of memristors. For the time being, let’s say we can’t imagine how they could rewire themselves at runtime. The next best thing would be a piece of software which can automate the physical design and the 3D printing of a memristor integrated circuit that fits your design of a neural network. The result would be a microchip with a network of components which are at the same time processor and memory. Such a structure would more closely imitate the architecture of the brain, although still lacking property (4): plasticity.
My point is that the next best thing would be a cheap, repeatable and widely available process for printing memristor integrated circuits which are purpose-built for speech, vision, abstract thinking, data indexing or analysis. The turning point will come when printing one such component falls in the $10-200 range, while the hardware needed falls in the $200-5,000 range. While this may sound a bit far-fetched, do a search for “nano 3D printing”; the existing technology is in its infancy, but it clearly demonstrates the potential.
Being the role model
Have you ever noticed that in 9 cases out of 10 the super-AI depicted in sci-fi movies tends to misbehave, become malevolent and try to wipe out the human race? Maybe this is because a movie about man and machine living in harmony would be more boring than that time Jennifer Lopez thought she was an actress. But maybe there’s a deeper reason.
I’m by no means a Bible-reading fellow, but I do believe that a creator inevitably bestows part of its nature upon its creation. And I do believe that deep down inside we are afraid of having a moral replica of ourselves, just with more analytic, storage and cognitive power. I’m not even talking about the sort of morality that would turn a super-computer into a Skynet starting a nuclear war. No, I don’t like exaggerations. I’m just talking about having the sort of bias that turned Microsoft’s Tay into a racist asshole within 12 hours baked into everyday automation – maybe when filing your taxes, requesting a loan or passing through customs. Here’s a small hint of what could go wrong.
AI is dangerous not so much through the power of its self-awareness and thirst for human blood, but through inheriting our weaknesses, our biases, our lack of perspective. Through being fed limited, truncated, partial, incomplete or incorrect data. By humans. In a way, the other danger of AI – the one that has nothing to do with job displacement – is the same as being amused when your kid learns to swear, then suddenly being ashamed and faking surprise when they do it in public.
Before worrying about the Matrix or Skynet, we need to worry about the responsibility we take on when training a semi-autonomous entity we will delegate some of our decisions and actions to. This may be as simple as making sure we present a cross-cultural array of faces when training a face recognition system, or as complex as trying to define the morality of the trolley problem in examples and code.
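The simple end of that responsibility can literally be a few lines of code. Here is a minimal sketch – with made-up group labels and an arbitrary 5% threshold, purely for illustration – of checking how balanced a training set is before feeding it to a face recognition model:

```python
from collections import Counter

# Hypothetical metadata for a face recognition training set; the
# group labels and the 5% threshold are invented for illustration.
training_faces = ["group_a"] * 9000 + ["group_b"] * 800 + ["group_c"] * 200

counts = Counter(training_faces)
total = sum(counts.values())
for group, n in counts.most_common():
    share = n / total
    flag = "  <-- underrepresented, fix before training" if share < 0.05 else ""
    print(f"{group}: {share:.1%}{flag}")
```

A check like this won’t solve the trolley problem, but it would have caught a good share of the embarrassing biases that have already shipped in production systems.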