It's Time To Demystify Machine Learning
The hype machine is cranked up to 11 on the topic of machine learning (sometimes called artificial intelligence, though I don’t call it that, because it is not really intelligence and there’s nothing artificial about it). Machine learning will either empower the world or take it over, depending on what you read. But before you get swept away by the gust of hot air coming from the technology industry, it’s worth pausing to put things into perspective. Maybe just explaining it in reasonable terms will help.
Shortly after the first caveman figured out how to make fire, the second caveman wanted to learn how to make fire, too. However, he didn’t — and couldn’t — check out a book from the local library or take a three-credit college class. Instead, he watched caveman one make fire, tried to do it himself, failed, was corrected and then did it again until he got it right. Fundamentally, this is how all humans have ever learned anything — by watching, trying, failing, correcting and repeating.
Think about this from a modern perspective. If you were to drop your phone and crack the screen, you would probably go directly to YouTube and search “how to replace an iPhone screen.” After watching the video, if you felt the job might be within your capability, you might go to Amazon and order a replacement screen kit (though you should probably order two, because you are going to mess up one of them). When the box arrives, you go back to YouTube, watch the same video again and try to match what the person onscreen is doing. If you succeed, you’ve now completed a very technical task that you never learned in school or took a class to do. And you probably didn’t think you’d be doing it when you got up that morning.
This process is how we’ve always gone about learning something, and it accounts for the popularity of social channels like YouTube, which is mostly humans watching other humans do things and relaying their experiences and lessons.
But curiously, this is not how we’ve been using computers. What we’ve been doing instead is collecting data, storing it and then writing programs to process that data into information. The early history of computing was about automating manual tasks: gathering all of the data (hence “big data”) into warehouses, lakes and pools, and then studying it with software we call analytics tools. The trouble is that analytics software doesn’t really analyze anything. It just slices and dices the data and displays it on a screen or in a report, leaving some human to figure out what it means.
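To make that concrete, here is a minimal sketch, in Python, of what a typical analytics tool boils down to under the hood. The sales figures, column names and groupings are all hypothetical, invented for illustration; the point is that the software only aggregates and displays.

```python
# A minimal sketch of traditional "analytics": slice, aggregate, display.
# The data, column names and groupings here are hypothetical.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [1200, 800, 1500, 700, 900],
})

# Group, sum and pivot into a report. No pattern-finding happens here;
# a human still has to look at the table and decide what it means.
report = sales.pivot_table(index="region", columns="product",
                           values="revenue", aggfunc="sum")
print(report)
```

However elaborate the dashboard on top, the machine in this model never forms any notion of what the numbers mean.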
Fortunately, that era is largely over, because the goal now is to harness knowledge from data rather than merely to process it.
What we are doing now is teaching computers to learn the same way we do. We send sets of data to very powerful machine learning software built into cloud platforms from the likes of Google and IBM, and we ask the machine to figure out what the patterns are and what the data means.
Of course, the machine gets it wrong at first, but then the task is to correct the model and run it again. Over many iterations, the model gets better and better, much as a pixelated photograph becomes sharper each time more data is sent to fill it in. Eventually, you can send huge amounts of data through the model and it will respond rapidly with real insight. The objective is a “human assist”: a system that chews through lots of data very quickly and then advises its human operators about the insights gleaned by repeatedly applying a model shaped by the data itself.
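For readers who want to see the loop itself, here is a minimal sketch in Python of that try-fail-correct-repeat cycle: fitting a straight line to noisy points by gradient descent. The data, learning rate and step count are hypothetical choices made for illustration, not anyone’s production system.

```python
# A minimal sketch of the "try, fail, correct, repeat" loop at the heart
# of machine learning: fitting a line to noisy data by gradient descent.
# The data, learning rate and iteration count are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)   # hidden pattern: y ~ 3x + 2

w, b = 0.0, 0.0   # the model starts out knowing nothing
lr = 0.01         # how big a correction to make after each failure

for step in range(5000):
    y_hat = w * x + b             # try: predict
    error = y_hat - y             # fail: measure how wrong we were
    w -= lr * np.mean(error * x)  # correct: nudge each parameter
    b -= lr * np.mean(error)      #          toward a smaller error
                                  # repeat: 5,000 times in all

print(f"learned y = {w:.2f}x + {b:.2f}")  # close to the hidden 3x + 2
```

Every model, however large, is at its core this same cycle: guess, measure the error, adjust, repeat.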
This idea that computers have cognitive power is relatively new in human history. Human cognitive power has increased somewhat since year zero as we live longer and have better schooling. But starting around 1950, the amount of computer cognitive power started coming up from the zero mark. By the year 2008, it was estimated that the total amount of cognitive power in a computer was roughly equal to 50% of the brain of a mouse — not the one attached to your computer, but the rodent.
There was also an interesting moment back in 2010, when AI expert Ray Kurzweil predicted that by 2020 the total amount of computer cognitive power would roughly equal one human brain. And some 25 years beyond that (depending on what you read) lies a moment ominously referred to as the “singularity,” when total computing cognitive power would roughly equal that of all human brains combined. Of course, that is a pure-play concept that doesn’t consider edge cases like electronically assisted human brains.
If you plot the rise of computing cognitive power on a timeline starting at the beginning of civilization, the climb is so recent and so steep that it appears as a vertical wall. We are only beginning to figure out what this means for humanity, and that is what scares so many people about the topic. But it’s important to understand what machine learning is before you can understand what it isn’t, which will be the topic of my next post.