The Basic Stuff on Machine Learning


By now, anyone who reads virtually any trade magazine has heard incessantly about how machine learning is going to transform their industry in profound ways. Marketers will be able to read potential customers' minds, farms will produce unprecedented yields, doctors will be able to stop diseases before they take hold. And of course, we've all heard how machine learning will eventually take our jobs. It may well be said of machine learning that never have so many wild predictions been made about something about which the public knows so little. So what exactly is machine learning? What can we reasonably expect over the next ten years? And, of course, the question that has been plaguing us all: will the machines rise up and destroy us?


Will the real Pinocchio please stand?

Machine learning, at its essence, is a form of AI in which computers learn to perform tasks without being explicitly programmed to do so. It's divided into two basic categories of algorithms. The first is supervised learning, which involves "training" the algorithm on data so that it learns to recognize certain patterns and can then categorize or predict. For example, if you wanted to write a program that recognized photos of Pinocchio, you might feed it a thousand pictures, 500 with Pinocchio and 500 with other random characters, labeling each one "Pinocchio" or "Not Pinocchio". With each labeled example, it would learn a little bit more, eventually being able to distinguish Pinocchio from other long-nosed characters like Scrooge or Gonzo from The Muppets. This is similar to how you might teach a toddler to recognize categories of things; the difference, of course, is that the algorithm can ingest training data much more quickly. We see applications of this all over the place, from voice recognition to the kinds of fraud detection that rely on pattern recognition, with regression, support vector machines, naive Bayes and decision trees being popular algorithm types.
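To make this a bit more concrete, here is a minimal sketch of supervised learning in Python, using scikit-learn as one common (assumed) choice. The "photos" have been reduced to made-up feature pairs purely for illustration; a real image classifier would learn from far richer features.

```python
# A minimal supervised-learning sketch with scikit-learn.
# Features and labels below are illustrative stand-ins, not real image data.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Pretend each picture has been boiled down to two numbers:
# [nose_length_cm, height_cm]. Label 1 = Pinocchio, 0 = not Pinocchio.
features = [
    [12.0, 100.0], [11.5, 98.0], [13.0, 102.0],   # Pinocchio-like
    [4.0, 175.0],  [5.5, 160.0], [3.5, 180.0],    # other characters
]
labels = [1, 1, 1, 0, 0, 0]

# Hold some examples back to check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.33, random_state=42
)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)            # "training" on labeled observations

print(model.predict([[12.5, 101.0]]))  # likely [1]: looks like Pinocchio
print(model.score(X_test, y_test))     # accuracy on the held-out examples
```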


A second category of machine learning is known as "unsupervised learning". It's a bit more esoteric, and it's sometimes easier to understand by first explaining how it differs from supervised learning. If supervised learning is like explaining to someone how to navigate from LA to NYC and then turning them loose to make the trip, unsupervised learning is more like a Lewis and Clark expedition, where you send them out to map the landscape. Unsupervised learning algorithms such as k-means are often used to group data according to similarities. When your 2-year-old goes rummaging through your kitchen, they are engaging in unsupervised learning as they make sense of a host of shiny new objects.
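Here is the unsupervised counterpart as a rough sketch, again assuming scikit-learn and the same made-up stand-in features. Notice that no labels are handed to the algorithm this time; k-means is simply asked to sort the points into groups on its own.

```python
# A minimal unsupervised-learning sketch: k-means clustering with scikit-learn.
# The points are illustrative; no labels are given to the algorithm.
from sklearn.cluster import KMeans

# The same kind of stand-in features as before: [nose_length_cm, height_cm],
# but this time without telling the algorithm who is who.
points = [
    [12.0, 100.0], [11.5, 98.0], [13.0, 102.0],
    [4.0, 175.0],  [5.5, 160.0], [3.5, 180.0],
]

# Ask k-means to organize the points into two groups by similarity.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print(kmeans.labels_)           # e.g. [1 1 1 0 0 0] -- two discovered groups
print(kmeans.cluster_centers_)  # the "center" of each group it found
```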


Will the long-nosed talking wooden puppet please stand?

Another way to think about the difference between supervised and unsupervised learning is that in supervised learning, you already know what the data is and you're telling the application as much. Back to the Pinocchio example: you already know what our wooden friend looks like, and you've labeled the data accordingly. In unsupervised learning, the data isn't labeled; you're turning the algorithm loose, in a sense, and letting it swim around in a dataset until it starts to make sense of it. So if you gave it 1,000 pictures of characters, 500 of them different representations of Pinocchio and 500 of them other random characters, none of whom appeared more than once, and asked the algorithm to group the pictures, the percentage of Pinocchio photos correctly lumped into the same group would demonstrate the success of your algorithm.
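Continuing the k-means sketch above, one rough way to score such a grouping might look like the following. The true labels are used only after the fact to check the result; the algorithm itself never sees them.

```python
# Rough scoring of the clustering above: the true labels are used only to
# evaluate the result, never shown to the algorithm during clustering.
true_labels = [1, 1, 1, 0, 0, 0]   # which points really are Pinocchio
pinocchio_clusters = [c for c, t in zip(kmeans.labels_, true_labels) if t == 1]

# What fraction of the Pinocchio points fell into their most common cluster?
majority = max(set(pinocchio_clusters), key=pinocchio_clusters.count)
purity = pinocchio_clusters.count(majority) / len(pinocchio_clusters)
print(f"{purity:.0%} of Pinocchio photos were lumped into the same group")
```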


From this starting point, machine learning branches off into more complex and very exciting territory that combines and builds on various aspects of supervised and unsupervised learning. Deep learning, for instance, attempts to mimic the human brain by layering one abstraction on top of another. Think of, for example, how a child first learns to distinguish between an animal and a stuffed animal, and later learns to distinguish between a dog and a cat, a mammal and a reptile, and so on to more complex classifications (which eventually form the basis of more complex decisions, like whether or not to pet that dog that's foaming at the mouth).
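As a loose illustration of that layering idea, here is a sketch of a small stack of neural-network layers in Python using Keras (an assumed choice; any deep learning framework would work, and the layer sizes here are arbitrary). Each layer builds a slightly more abstract representation on top of the one below it.

```python
# A sketch of "layered abstraction": a small stack of neural-network layers.
# Framework choice (Keras) and layer sizes are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64,)),              # raw input features (e.g. pixels)
    layers.Dense(32, activation="relu"),   # first layer: simple patterns
    layers.Dense(16, activation="relu"),   # next layer: combinations of patterns
    layers.Dense(1, activation="sigmoid"), # final layer: Pinocchio or not
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()  # shows the stacked layers, one abstraction on top of another
```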


Will the machines rise up and destroy us?

With the likes of Stephen Hawking, Elon Musk and Mark Zuckerberg heatedly debating this, I’m not going to offer any opinion on the matter. Maybe they will, or perhaps they won’t. But, it is informative to look at where we currently stand in our progress toward intelligent machines (which may or may not rule the world).


Arend Hintze of Michigan State came up with four classifications of AI: 'reactive machines', where algorithms react to the data they're being fed but have no ability to contextualize; 'limited memory', where past experiences begin to inform decisions; 'theory of mind', where computers start to recognize that others have beliefs, desires and intentions; and 'self-awareness', where computers become sentient beings, like C-3PO, HAL or, heaven forbid, Skynet.


So where are we today? As far as we know, we're still in the "limited memory" stage. How long will it be until we cross the next chasm? It's impossible to say, but things are progressing fast. Arguably, the Turing Test was passed a few years ago by the Eugene Goostman program (though this claim has been heatedly contested). What is incontestable, however, is that corporations and governments are pouring billions of dollars into research on how this technology can be applied to just about every sector of business and life. Clearly, we are about to see some pretty amazing things from machine learning. Expect that before too long, machine learning algorithms will be able not just to recognize Pinocchio, but also to spot whether he's lying even before his nose grows!