Week 31 - What I Need to Know about the Difference between Artificial Intelligence, Machine Learning, and Deep Learning

If you're actually interested in learning about this topic, check out this article from Nvidia that explains the exact same thing, but better.

Ah, artificial intelligence. The phrase that brings to mind immense potential, futuristic technology, and Skynet. At least, that's what most people think. Artificial intelligence is, at its core, self-defining: it is a system that allows computers to learn from experience and complete tasks that humans can.

While we've been hearing AI thrown around a lot lately, it's not a very new term. It was coined in 1956 by John McCarthy, building on earlier ideas explored by Alan Turing. We're only hearing so much about it now because of the dramatic increases in computing power and storage we've achieved in the last 15 years.

As with most new technology, the military is usually trying to figure it out first. Take the Defense Advanced Research Projects Agency (DARPA), for example: it was building digital assistants in the early 2000s, before anyone even knew what an iPhone was.

Since we're talking about artificial intelligence, it made sense to base it on actual intelligence. And the best way for us to conceptualize how intelligence works? Look at the brain. Between 1950 and 1970, researchers were looking heavily into how to make machines process information the way our brains do - in a neural network-like pattern. While the technology of the time couldn't handle those networks, the idea did spur the push into machine learning that carried research through the early 2000s.

Today, we're looking at how to apply multiple layers of neural networks in computers in order to achieve deep learning machines, ones that can take vast amounts of data and learn from it.

So why is AI important?

"AI is going to take American jobs!"

"Machines are going to kill us all!"

"People are better than machines!"

You've probably heard at least one, if not all of these, at some point in your life. But let's look at what AI can actually do and what it can't/won't.

  1. AI is great at doing repetitive, high-volume tasks without slowing down. A human touch is still needed though to make sure these tasks are being performed correctly.

  2. AI supplements current products. Take any technology out there, whether it be a chatbot, digital assistant, or thermostat - AI can help make it better. AI is able to handle vast amounts of data that is generated by these products each day and make sense of it.

  3. AI can learn on the fly. As long as you continue to give an AI system accurate and large amounts of data, it can improve itself and become more accurate at classifying, identifying, or solving some problem.

  4. AI can be incredibly accurate through the help of deep neural networks, which is only possible with modern technology.

  5. AI gets the most out of your data. When every agency out there claims to be "data-driven", they had better be using some sort of AI to make that statement worth it.

Yeah, great, sure - this all sounds good, but what does it actually look like? What are some tangible instances of specific industries using AI?

Well, in health care, we have AI programs that can help read X-rays, remind patients to take their medicine, or tell them that they need to drink more water.

In retail, we can now have personalized shopping experiences with recommendations that would actually interest us. Plus, on a physical level, AI can help manage inventory in a warehouse.

Or even in sports, where AI applications can help record and analyze different plays and give relevant and useful data to coaches and players based on what it found.

Limitations of AI

Even though we've come a long way since 1956, AI is still fresh from the womb.

Probably won't ever use that one again.

Anyway - AI is still very young and is nowhere near the JARVIS (Marvel) or R2-D2 (Star Wars) that we see in movies. When it comes to real-life applications of AI, it is all very, very specialized. Each AI application is built to perform a specific task, and nothing else.

Plus, even if you built it to execute a task perfectly, the data you give it has to be accurate as well. Otherwise, everything will be for naught and the result you get at the end will be worthless.

How it all works

So, here's what we know so far. AI uses:

  • Machine learning (see Day 2 for more info)

  • Deep learning (see Day 3 for more info)

  • Neural networks

  • Natural language processing (NLP) - how computers understand human language and translate it into a format they can work with (see the toy sketch after this list).
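To make that last bullet a bit more concrete, here's a toy sketch in plain Python (my own example, not from any particular NLP library) of the most basic version of the "words to numbers" idea - a bag-of-words count. Real NLP systems are far more sophisticated, but the goal is the same: turn text into something a machine can compute with.

```python
# Toy bag-of-words: turn sentences into vectors of word counts.
# Real NLP pipelines (tokenizers, embeddings, language models) go far
# beyond this; it only illustrates the "text -> numbers" step.

sentences = [
    "turn off the lights",
    "lock the door and turn off the lights",
]

# Build a vocabulary: every unique word gets an index.
vocab = {}
for sentence in sentences:
    for word in sentence.split():
        vocab.setdefault(word, len(vocab))

def to_vector(sentence):
    """Count how many times each vocabulary word appears in the sentence."""
    counts = [0] * len(vocab)
    for word in sentence.split():
        counts[vocab[word]] += 1
    return counts

for sentence in sentences:
    print(sentence, "->", to_vector(sentence))
```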

But AI makes use of so many more areas and technologies besides the ones listed above. For example, through the help of modern Graphics Processing Units (GPUs), computers are able to process much larger amounts of data. Or take the Internet of Things (IoT) - think of your phone being connected to your house, letting you start your dishwasher, turn off the lights, and lock the door. With so many devices being connected, there's a lot of data flying around, ready to be used and made sense of. AI can help with that.

In the end, AI is here to help us understand the mess we put into it. All that unorganized data can now efficiently be sorted, processed, and formatted in a way that improves human lives, production efficiency, and more. Or it will just kill us all one day. Could go either way.

When it comes to commonly used buzzwords that are generously thrown around, mostly by people who don't understand the first thing about them, machine learning and artificial intelligence (AI) are at the top of the list.

Doing a quick Google search brings up various articles, videos, ads, and snippets all trying to make sense of it (including the article linked above).

The problem is that everyone uses these words but never actually defines them. In most cases we probably think of machine learning, AI, and deep learning as synonyms, and people just use whichever one sounds coolest at the moment.

In this first part, we're just going to focus on machine learning. As we get to deep learning and AI later on, you will start to see the differences - some slight, some drastic - between all three. Or, just skip to the TL;DR.

Definition - Machine Learning

Depending on who you ask, the definition of machine learning will differ. It's not one of those terms where you can just go to Merriam-Webster and copy and paste a definition.

“Machine Learning is the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.” - Techemergence

I think this definition does a good job of explaining machine learning to non-technical people. Techemergence used a relatively straightforward method of arriving at this definition - finding various definitions of ML from reputable sources and sending them all to other ML experts to evaluate and give their own. These reputable sources include Nvidia, Stanford, Carnegie Mellon, and more.

Also, I thought it was funny that the above Nvidia link goes to a blog post that uses the exact same title as this one. It's a good thing I'm not trying to make an SEO play here.

Learning Concepts

There are generally 3 basic concepts when it comes to machines actually "learning":

  1. Representation - like sets of rules, decision trees, neural networks, graphical models, etc.

  2. Evaluation - accuracy/error rate, likelihood, probability, margin, cost, etc.

  3. Optimization - search algorithms, linear/quadratic programming, gradients, etc.

In the end, the purpose of machine learning is to be able to make connections between various data points that it has never seen before. So it has to use methods that allow it to make some generalizations to account for all the unknowns.
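To make those three concepts concrete, here's a minimal sketch (my own toy example, assuming only NumPy) where the representation is a simple linear model, the optimization is plain gradient descent on the logistic loss, and the evaluation is classification accuracy.

```python
import numpy as np

# Toy data: two clusters of 2-D points, labeled 0 and 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Representation: a linear model squashed through a sigmoid.
w, b = np.zeros(2), 0.0

def predict_proba(X):
    return 1 / (1 + np.exp(-(X @ w + b)))

# Optimization: gradient descent on the logistic (cross-entropy) loss.
for _ in range(500):
    p = predict_proba(X)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# Evaluation: accuracy on the data we trained on.
accuracy = np.mean((predict_proba(X) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Swap the linear model for a decision tree and you change the representation; swap accuracy for log loss and you change the evaluation; swap gradient descent for a tree-growing search and you change the optimization.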

Seeing is Believing

Many people, myself included, need to see something in action in order to better understand it. Machine learning is a complicated topic and talking about all the intricacies can be mind-numbing. So, let's use some pictures to help us understand what we're reading.

1. Decision Tree Model
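If you'd rather see a decision tree as code than as a picture, here's a minimal sketch (assuming scikit-learn, which isn't something the original example requires) that trains a tiny tree and prints the if/else rules it learned.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small decision tree on the classic iris flower dataset.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# A decision tree is literally a set of nested if/else splits on features.
print(export_text(tree, feature_names=list(iris.feature_names)))
```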

2. Neural Networks

With neural networks, I always think of how Marvel showed Jarvis in Avengers: Age of Ultron (mostly because it just looks cool). Couldn't really tell you if it's an accurate representation of actual neural networks, but I'd like to think so.

While this isn't necessarily what you should expect to see in the AI/ML field, it at least shows you that everything in the network is connected in some way (just like your brain).
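For a rough idea of what "everything is connected" means in code, here's a tiny forward pass (my own NumPy sketch, not a trained model): every unit in one layer connects to every unit in the next through a weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected network: 4 inputs -> 8 hidden units -> 3 outputs.
# Each weight matrix links EVERY unit in one layer to EVERY unit in the next.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    hidden = np.maximum(0, x @ W1)  # ReLU activation in the hidden layer
    return hidden @ W2              # raw scores for 3 output classes

x = rng.normal(size=4)  # one made-up input with 4 features
print(forward(x))
```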

But what is Machine Learning actually doing?

Due to the varied routes you can take when using machine learning, everything you do will be based on the problem you're trying to solve. That's why there is no single "best" algorithm to go with.

The best way to really push existing algorithms to their limits and find new issues is to test them in the field. Doing so lets researchers see what an algorithm can or can't do and what it's best at finding, which in turn helps systems perform better and connect dots between data points that humans would never have been able to connect before. Again, the purpose of machine learning is to take a ton of data, parse it, and then learn or make some prediction based on all that data.

An important warning about ML though - don't just associate it with automation. This isn't some industrial boom that is going to replace a bunch of factory workers and make jobs more efficient. It's about being able to handle oceans of data and actually make sense of it in a way that no human ever could.

The Issues

If you give someone bad directions, they will probably get lost. If you give a computer bad data, you'll probably get the wrong result. Simple as that.

Since machine learning relies upon parsing all this data, you need to make sure that the data it's working with is pretty good. Otherwise, the result you get might not be a very reliable one. Generally, machine learning has two data validity issues:

  1. Overfitting - the model fits the sample data too closely and can't make generalized predictions in the real world; instead it picks up on random noise when applied to real-world data

  2. Dimensionality - as the number of features (dimensions) grows, the data becomes sparse and harder to generalize from, in ways we have trouble conceptualizing

When researchers feed in their sample data, they need to hold a portion of it back from the algorithm. That separate holdout set can then be used later to validate a specific model.
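Here's what that looks like in practice - a hedged sketch assuming scikit-learn: hold out part of the data, train on the rest, and compare the two scores. A big gap between training and test accuracy is the classic symptom of overfitting.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Keep 30% of the data completely outside the training process.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# An unconstrained tree can memorize the training set almost perfectly.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Compare performance on seen vs. held-out data.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```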

Let's Recap

So, I just blasted through this article and tried to give a pretty quick definition of machine learning. In the end, I really like how this Nvidia article conceptualized it - AI is this large entity, with machine learning being a subset of it, and then deep learning a subset of machine learning.

But when it comes to Machine Learning:

  • You need large amounts of accurate data

  • Simplicity doesn't mean accuracy

  • Experimental data should be used over observational

  • You are working to predict the outcome of certain actions

  • Keep some data sets separate for future validation


If you're able to read, then you probably figured out that there is some connection between Machine Learning (ML) and Deep Learning (DL). If you can't, then I hope all this scribbling looks clean.

When it comes down to it, DL is really just a more specific method of ML. Remember those neural networks we talked about before in ML? Well, stack a shit-ton of those layers on top of each other, pass data through all of them, and you've got yourself a deep learning network.
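As a rough sketch of what "stacking layers" looks like (assuming PyTorch here purely for illustration), a deep network is just data flowing through one layer after another:

```python
import torch
import torch.nn as nn

# A deep, fully connected stack: data passes through every layer in order.
deep_net = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),  # e.g., scores for 10 classes
)

x = torch.randn(1, 784)   # one made-up flattened 28x28 image
print(deep_net(x).shape)  # torch.Size([1, 10])
```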

DL is used to help classify images, text, and even sounds, and has practical uses in basically every field. Plus, it can even be more accurate than humans!

But deep learning (even artificial intelligence, for that matter) is not some new concept. The ideas behind DL have been around for decades, and while they made sense, the technology and data of the time were not advanced enough to provide practical and efficient uses.

DL requires enormous amounts of labeled data, simply because it needs to learn from all those wide and varied images, text, and sounds. Without oceans of data, the neural networks would not achieve high levels of accuracy.

But with great data comes the need for great computational power - or at least, that's what it takes to make use of it all.

When a computer is trying to sort millions of images and understand what they are, you're going to need to have some serious hardware backing it up. Until recently, it hasn't been very efficient to use DL networks. But with the parallel architecture provided by modern-day GPUs, we now have the ability to efficiently sort and understand all this data.

Great, so now we are able to create and implement relatively complex deep learning networks. So what do we do with them? Well, take any modern-day technological breakthrough and try to connect the purpose of deep learning to it (process large amounts of data in order to classify it, becoming more accurate as more data comes in). In the automotive industry, we can use deep learning to better recognize stop signs, lights, pedestrians, animals, bikers, and everything else that we as humans recognize naturally. In the medical field, we could use deep learning to better identify cancer cells. Or in the military, where we can use deep learning to distinguish dangerous zones from safe ones. The applications of DL are enormous!

To recap so far, deep learning is a subset of machine learning that specifically uses many layers of neural networks to process and classify large amounts of data (images, text, audio, etc.).

The most common type of deep neural network that you'll hear about is the convolutional neural network, or CNN. Using CNNs eliminates the need for humans to manually pull features out of the data (i.e., feature extraction). A CNN uses layers to dissect an image, passing it through possibly dozens, if not hundreds, of layers to identify it, with each layer performing a specific purpose.
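As a toy illustration of that layered structure (again assuming PyTorch, with made-up sizes for a 28x28 grayscale image), each convolutional layer learns its own filters straight from the pixels, which is why no manual feature extraction is needed:

```python
import torch
import torch.nn as nn

# A tiny CNN: convolution layers learn their own filters (features),
# pooling layers shrink the image, and a final linear layer classifies it.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),   # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),   # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),  # scores for 10 possible classes
)

x = torch.randn(1, 1, 28, 28)  # one made-up grayscale image
print(cnn(x).shape)            # torch.Size([1, 10])
```

A real image-recognition CNN stacks far more of these layers; this one only has two convolutional stages, but the idea is the same.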

So, where machine learning requires features to be manually extracted from data, deep learning does it automatically. And where machine learning eventually plateaus in terms of performance, deep learning improves as you provide more data to the system.


Too Long; Didn't Read

Artificial intelligence, machine learning, and deep learning. Three very important, very distinct, but also very interconnected technologies.

We can look at AI as the overall parent of everything. It is the generic term for systems that can handle vast amounts of data, process it, find patterns and solutions, and then improve themselves based on that data.

But machine learning is how it's actually able to process that data and complete tasks that humans have been doing since the beginning of time.

And deep learning is how it's actually able to learn from all this data (through the help of layered neural networks).

Or, we can just define them right here:

Artificial Intelligence: a system that "makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks." - SAS

Machine Learning: "the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.” - Techemergence

Deep Learning: "Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example." - MathWorks
