Artificial Intelligence (AI) is like fusion: it’s always 30 years away. The dream of building computers that are as intelligent as we are (if not more so) is still a long way off, but there have been some recent changes that make several experts in the field more bullish that maybe this time, it really is just another 30 years away.
The two biggest technological changes are computing power and big data technologies. As discussed in my last blog on Machine Learning (ML), ML has been around for 30 years as a branch of AI itself, but it is only now starting to get mainstream traction, as training the algorithms on huge data sets becomes viable. This renaissance in ML has given us Google’s driverless car, IBM’s Watson, which won Jeopardy! against the world’s best human players, and the iPhone’s digital personal assistant, Siri.
However, AI has the potential to take all these advances much further. If ML gives machines the ability to act intelligently by using statistical algorithms, AI will enable machines to actually become intelligent, learning and understanding data by themselves.
The One Algorithm Hypothesis
Andrew Ng, one of the world’s leading experts in AI (you can take his Stanford University course on Machine Learning here), was skeptical about the potential of AI until a few years ago, when he heard about the “one algorithm” hypothesis. Previously, AI experts and scientists assumed the brain was a highly complex system of individual sub-systems: the part of the brain for seeing was different from the part of the brain for hearing, and each part ran its own specialized ‘algorithm’ for certain tasks, which together made up the brain.
However, experiments on mice told a different story. When researchers cut the optic nerve of a mouse and connected it to the part of the brain responsible for hearing, within weeks the mouse could see again. The brain had essentially reprogrammed itself so that the part responsible for hearing was able to see. This is the basis of the one algorithm hypothesis: that our brains are just one ‘algorithm’ repeated across the entire brain, with each part trained to solve a different problem (like seeing or hearing).
If this hypothesis proves correct, as some experiments indicate, then instead of computer scientists like Andrew Ng having to spend their entire lives figuring out the thousands of different algorithms the brain uses for all its different functions (as was previously assumed), they just need to discover the one algorithm that can be trained for each function, like speech, hearing, seeing or movement. This would be a much easier problem to solve, and it creates the possibility that within our lifetimes we could actually simulate a human brain on computers, finally giving us the AI science fiction has imagined.
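The core idea can be illustrated with a toy sketch: one learning algorithm that, fed different training data, learns to do entirely different things. Here the same perceptron training code (a deliberately simple stand-in for whatever the brain’s real algorithm might be) learns an AND gate from one data set and an OR gate from another:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Train a single-layer perceptron on (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    def predict(x1, x2):
        return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
    return predict

# The *same* algorithm, trained on different data, learns different functions:
and_gate = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
or_gate  = train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])

print(and_gate(1, 1), and_gate(1, 0))  # 1 0
print(or_gate(1, 0), or_gate(0, 0))    # 1 0
```

Nothing about the training loop is specific to either task; only the data differs, which is the hypothesis in miniature.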
Real Intelligence vs. Simulated Intelligence
ML is an extremely powerful technology in its own right, one that gives the impression that machines are really intelligent: driving cars, responding to voice commands on our phones, or winning Jeopardy!. The reality is that the algorithms used to make computers intelligent today are actually ‘dumb’. They are trained on data to do very specific tasks, but don’t adapt well to new tasks or inputs without humans around to help. At most, they simulate the act of being intelligent, like driving a car, but don’t have the intelligence to adapt by themselves in new situations, or to invent and design the car they’re driving.
For example, high-frequency trading algorithms use ML to predict changes and make profitable trades on the stock market for investment banks. However, when a new situation is encountered, like the ‘flash crash’ of 2010, the algorithms have no intelligence with which to understand what’s going on, and humans still need to step in to correct the mistakes.
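The brittleness is easy to demonstrate with a toy model (far simpler than any real trading system, and purely illustrative): fit a trend to a steadily rising price history, and the model will confidently predict a further rise even while a crash is underway, because a ‘crash’ is simply not a concept it has.

```python
def fit_line(ys):
    """Ordinary least-squares fit of y = a*t + b over t = 0..n-1."""
    n = len(ys)
    t_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    a = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(ys)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    b = y_mean - a * t_mean
    return a, b

# The model is 'trained' on a steadily rising market...
history = [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]
a, b = fit_line(history)

# ...so when a crash hits at t = 10, it still extrapolates the old trend.
crash_price = 80
predicted = a * 10 + b
print(predicted)    # 110.0 -- the model keeps predicting a rise
print(crash_price)  # 80    -- what actually happened
```

A human trader brings context the model lacks; the model can only replay the pattern it was fitted to.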
The reason for this is that, as humans, we have a conceptual model of the world: we aren’t just trained to trade stocks or drive cars, but have an understanding of human behavior, economics, the laws of physics, and what most people would call plain common sense. This means that when we encounter a new situation, like a stock crash or a new condition on the road, we’re able to apply our ‘instinctive’ model of the world, and experience from other disciplines, to make smart guesses about what we should do next.
In order for AIs to be as intelligent as humans, they too will need a conceptual model of the world, with knowledge across a whole range of areas, so they can make smart guesses about what action to take when they encounter new situations. This will require not just discovering the ‘one algorithm’ our brains use and simulating it, but also teaching the algorithm like a newborn human, so that it has the knowledge and ‘memories’ it needs to understand the world like a human.
Until ‘real’ intelligent AI becomes a reality, humans will have to fill the gap in the short term, supervising the ML algorithms and applying our common sense and model of the world to adapt the algorithms to new situations. Computers, however, will always be better than us at quickly processing and finding information in large data sets.
The fact of the matter is that humans are terrible at making decisions when faced with large amounts of constantly changing data. Our brains easily get overwhelmed, which is why many people talk about ‘Information Overload’. I once heard that we now receive more new information in one day than a person 200 years ago did in a lifetime. We are surrounded by adverts, TV, radio and feeds, be they Twitter feeds, Facebook feeds, online news, web pages, and my personal hate, email. Our world has changed, but our brains are pretty much the same as they were 100,000 years ago; evolution, compared to technology, is painfully slow. In fact, some people have noted a rise in autism over the past 10 years, which they speculate is the brain trying to evolve to be better at information processing. It’s no secret that a lot of Silicon Valley is on the autistic spectrum, and that the ‘nerds’ are quickly becoming the powerful elite of today’s modern world. The ability to process large amounts of information quickly is a competitive advantage, and unfortunately you’re at the mercy of your genes and IQ.
In the short term, AI will be used to augment our intelligence, making any of us smarter, while still relying on us to provide the creativity and understanding computers will lack for a while. This is called ‘Augmented Intelligence’: software enables us to process more information and gain insights and understanding more quickly, enabling us as humans to make smarter decisions.
AI’s Impact on the Enterprise
By far the leader in this space is Google, which has more data, and probably more data scientists working on this field, than any other company right now. If you want to see the future of AI, just look at Google’s grand vision: a brain for the internet that will one day be able to answer questions about almost anything as a human would (quite literally like the on-board computer on Star Trek).
You could imagine this concept being applied to the boardroom of any Enterprise, not just using public data as Google does, but also crawling and learning from all the private data an Enterprise holds, so that when the Executive team needs to make a decision, they can get all the information they need to make a smart decision from both public and private data.
In the short term (i.e. < 10 years), however, the majority of Enterprise software will focus on acting intelligent, using ML, and on augmenting our own intelligence so we can act smarter and make better decisions. Early examples of this are the social media feeds that selectively filter posts so we only see the ones we’re interested in, reducing today’s information overload while (hopefully!) giving us only the information we need. Another example is the analytics tools Enterprises use to find correlations in their data so they can make smarter decisions.
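At its simplest, that kind of feed filtering is just scoring each post against a model of the user’s interests and dropping anything below a threshold. Real feeds use learned models; this sketch fakes the model with hand-picked keyword weights, purely to show the shape of the idea:

```python
# Toy stand-in for a learned interest model: hand-picked topic weights.
interests = {"machine learning": 2.0, "ai": 1.5, "big data": 1.0}

def score(post):
    """Sum the weights of every followed topic mentioned in the post."""
    text = post.lower()
    return sum(weight for topic, weight in interests.items() if topic in text)

posts = [
    "New course on machine learning and AI announced",
    "Celebrity gossip roundup for the week",
    "How big data is changing the Enterprise",
]

# Keep only posts that match at least one interest.
filtered = [p for p in posts if score(p) > 0]
print(filtered)  # the gossip post is dropped
```

The point is the pipeline, not the scoring function: swap the keyword weights for a trained classifier and you have the skeleton of the feeds described above.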
The big question, however, is this: if computers become as intelligent as us, if not more so, is there any need for humans at all to run an Enterprise in the future? Could the ultimate, most competitive Enterprise be one that creates and grows by itself, with humans merely acting as shareholders in the company? I’m sure someone will try to create this type of Enterprise, and it will eventually come down to whether our creativity and willingness to take risks against all better judgement will be our competitive advantage against the machines, as portrayed in many a Hollywood movie.
I suspect, however, that if the one algorithm hypothesis proves right, humans will not have a monopoly on creativity and foolhardiness as computers become more intelligent, and we could very well end up living in a world entirely run by machines.
In my next blog I will investigate the impact of these trends on jobs and the future of work. To get notified when it’s posted, please follow this blog or my Twitter feed.