Making Sense out of Machine Learning and Deep Learning

Today, artificial intelligence (AI) is mainly used as a generic term for all forms of compute-based intelligence. It can loosely apply to any system that imitates human learning and decision-making processes in responding to input, analyzing data, recognizing patterns, or developing strategies. The phrases machine learning (ML) and deep learning (DL) better describe the reality of present-day intelligent computing systems and the problems they can solve for developers and end users.

Machine Learning vs Deep Learning

Today’s state-of-the-art ML and DL systems can adjust how they operate after continuous exposure to data and other input. While related in nature, subtle differences separate these two fields of computer science.


Machine Learning (ML) refers to a system that can actively learn for itself, rather than just passively being given information to process.  The computer system is coded to respond to input more like a human by using algorithms that analyze data in search of patterns or structures. ML algorithms are designed to improve performance over time as they are exposed to more data.


When a human recognizes something, that recognition is instantaneous. To help imitate this process, machine learning algorithms use neural networks. Like the human learning process, neural network computing classifies data (such as a massive set of photos) based on elements it recognizes within each image. The rate of correct classification can improve over time through “expert” human feedback, which helps the system distinguish correct decisions from incorrect ones and pushes it toward greater efficiency and accuracy. The neural network algorithm then adjusts its future decisions based on the feedback received, training the network to produce the desired output in a way that mimics human recognition.


1. A neural network could state:

   a. A shape is a recurring element.
   b. These are possible shapes:

   [Image: machine intelligence example 1]

2. The algorithm then applies this learning to the data by finding and categorizing the defined elements.

   [Image: machine intelligence example 2]

3. This process can improve over time with the help of human feedback: the neural network algorithm adjusts its future decisions based on the feedback received, which leads to more accurate classification. A short code sketch of this feedback loop follows the example.

   For example, if the human feedback was that “each shape has multiple variations,” the algorithm may organize its results as follows:

   [Image: machine intelligence example 3]
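
To make this feedback loop concrete, the short Python sketch below trains a small neural network classifier on toy “shape” data and shows how its accuracy can improve as more labeled (“expert”) examples arrive. It is purely illustrative: the features, the scikit-learn MLPClassifier, and the data sizes are assumptions chosen for brevity, not the algorithm behind any product mentioned in this article.

    # Toy sketch: a small neural network classifier improves as it receives
    # more labeled ("expert feedback") examples. Features and sizes are
    # illustrative assumptions only.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def make_shapes(n):
        """Generate noisy (area, perimeter) features for three shape classes:
        0 = circle, 1 = square, 2 = triangle."""
        labels = rng.integers(0, 3, size=n)
        area = labels + 0.6 * rng.standard_normal(n)
        perimeter = 2 * labels + 1.0 * rng.standard_normal(n)
        return np.column_stack([area, perimeter]), labels

    X_test, y_test = make_shapes(500)

    # Round 1: train on a small batch of human-labeled shapes.
    X_small, y_small = make_shapes(30)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X_small, y_small)
    print("accuracy after 30 labeled examples  :", clf.score(X_test, y_test))

    # Round 2: more "expert feedback" arrives as additional labeled examples;
    # retraining on the larger set typically improves accuracy.
    X_large, y_large = make_shapes(3000)
    clf.fit(X_large, y_large)
    print("accuracy after 3000 labeled examples:", clf.score(X_test, y_test))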

For a real-world example of this kind of expert feedback, Google hired professional photographers and documentarians to help train the neural network algorithm behind its intelligent camera, Clips. Their feedback helped the camera become better not only at the technical aspects of digital photography, but also at anticipating the more abstract qualities of a memorable moment.1

Deep Learning (DL) is a subset of machine learning that goes even further to solve problems, inspired by how the human brain recognizes and recalls information without outside expert input to guide the process. DL applications need access to massive amounts of data from which to learn. DL algorithms use deep neural networks to access, explore, and analyze vast sets of information, such as all the music files on Spotify or Pandora, in order to make ongoing music suggestions based on the tastes of a specific user.


1. A deep neural network, used by deep learning algorithms, seeks out vast sets of information to analyze.

   [Image: deep learning example 1]

2. Upon processing this information, the deep neural network develops new classifications of its own, such as:

   a. Shapes can have different colors.
   b. Shapes can have different thicknesses.
   c. There are 2,000 different shapes in total.

3. This results in highly precise outcomes that continue to become more accurate over time. A code sketch of such a multi-layer network follows this example.
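
The “deep” in deep learning refers to the number of stacked layers in the network. The sketch below shows what such a multi-layer network looks like in code, written with PyTorch purely as an illustrative framework choice (the article does not name one): raw pixel values go in one end, and scores for richer, combined categories (shape, color, thickness) come out the other. The layer sizes, the eight categories, and the random stand-in data are assumptions made for brevity.

    # Toy sketch of a deep neural network: several stacked hidden layers map
    # raw inputs to richer, combined categories with no hand-written rules.
    import torch
    from torch import nn

    # Raw input: a flattened 16x16 RGB patch (16 * 16 * 3 = 768 values).
    # Output: scores for 8 hypothetical categories such as "thick red square".
    model = nn.Sequential(
        nn.Linear(768, 256), nn.ReLU(),   # hidden layer 1
        nn.Linear(256, 128), nn.ReLU(),   # hidden layer 2
        nn.Linear(128, 64), nn.ReLU(),    # hidden layer 3
        nn.Linear(64, 8),                 # output scores
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on a random batch, standing in for real image data.
    images = torch.randn(32, 768)          # 32 raw patches
    labels = torch.randint(0, 8, (32,))    # their (hypothetical) categories
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print("training loss on this batch:", loss.item())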


The primary distinguishing factor between DL and ML is the representation of data. In the Google Clips camera example of ML above, input from professional photographers was needed to train the system. In a DL system, by contrast, experts are not required for precise feature identification. The input—whether an image, a news article, or a song—is evaluated in its raw or untagged form with minimal transformation. This unsupervised training process is sometimes called representation learning. During training, the DL algorithm progressively learns from the data to improve the accuracy of the conclusions it draws, a step known as inference.
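
The sketch below illustrates one common form of representation learning under the same illustrative assumptions as the previous examples (PyTorch, arbitrary sizes, random stand-in data): an autoencoder learns a compact representation of raw, untagged input simply by trying to reconstruct it, with no expert-provided labels, and the trained encoder is then applied to new data at inference time.

    # Toy sketch of unsupervised representation learning: an autoencoder is
    # trained only to reconstruct raw, unlabeled input; its narrow middle
    # layer becomes a learned representation (no expert labels involved).
    import torch
    from torch import nn

    encoder = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 16))
    decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 768))
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()

    raw_batch = torch.randn(64, 768)       # raw, untagged inputs

    # Training: progressively learn to reconstruct the raw data.
    for step in range(200):
        codes = encoder(raw_batch)         # 16-number learned representation
        reconstruction = decoder(codes)
        loss = loss_fn(reconstruction, raw_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Inference: apply the trained encoder to new, unseen data.
    with torch.no_grad():
        new_item = torch.randn(1, 768)
        representation = encoder(new_item)
    print("learned representation (first 4 values):",
          representation.squeeze()[:4].tolist())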


Common examples of DL applied today include:

  • Autonomous driving: Combining deep data (maps, satellite traffic images, weather reports, a user’s accumulated preferences), real-time sensor input from the environment (a deer in the road, a swerving driver), and compute power to make decisions (slow down, turn steering wheel).
  • Medical: Cancer research, such as learning to detect melanoma in photos.2
  • Smart Home: Smart speakers using intelligent personal assistants and voice-recognition algorithms to comprehend and respond to a unique user’s verbal requests.
  • Entertainment: Analyzing a vast library of film/TV data (genre, actors, directors, reviews) against accumulating user tastes and preferences.
  • Dining: Intuitive restaurant recommendations based on a user’s physical location, critical reviews, reservation availability, defined preferences, and past behavior.

Smart Jargon: Understanding ML and DL

  • Neural Network: A computing system, inspired by the human brain and nervous system, made up of interconnected nodes (simple computational units) organized in layers. Different layers typically perform different types of calculations on the data (see the short code sketch after this list).
  • Deep Neural Network: A neural network with many hidden layers, which allows it to learn from vaster, more complex datasets than shallower neural networks.
  • Training: The correction and adjustment process by which an algorithm learns to evaluate data with greater speed and accuracy. Training can be supervised (applying human input) or unsupervised (making accurate self-corrections based solely on exposure to raw data).
  • Inference: Making fast, predictive conclusions based on a combination of new data and applied, cumulative training.
  • Representation: Automatically detecting features or classifications from raw data, which allows a system to both learn and perform specific tasks.
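
For readers who find code clearer than prose, the toy snippet below maps a few of these terms onto plain Python with NumPy. The layer sizes and random weights are placeholders, not a trained model.

    # A "neural network": layers of nodes, where each layer applies a
    # calculation (a weighted sum plus a nonlinearity) to the previous
    # layer's output. Sizes and weights here are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # input layer -> hidden layer (8 nodes)
    W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)   # hidden layer -> output layer (3 nodes)

    def forward(x):
        hidden = np.maximum(0.0, x @ W1 + b1)   # hidden layer calculation (ReLU)
        return hidden @ W2 + b2                 # output layer calculation

    # "Inference": running new data through the layers to get predictions.
    # A real system would first adjust W1, b1, W2, b2 during "training";
    # here the weights are random placeholders.
    new_input = rng.standard_normal(4)
    print("output scores:", forward(new_input))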

AMD and Machine Learning

Intelligent applications that respond with human-like reflexes require an enormous amount of computer processing power. AMD’s main contributions to ML and DL systems come from delivering high-performance compute (both CPUs and GPUs) with an open ecosystem for software development. ML and DL applications rely on computer hardware that can support the highest processing capabilities (speed, capacity, and organization) to simultaneously manage complex data sets from multiple input streams.

For example, in an autonomous driving scenario, the DL algorithm might be required to recognize an upcoming traffic light changing from green to yellow, nearby pedestrian movement, and water on the pavement from a rainstorm, among a variety of other real-time variables, as well as basic vehicle operations. A trained human driver may take these coordinating reactions for granted. However, to simulate the human brain’s capabilities, the autonomous driving algorithm needs efficient and accelerated processing to make its complex decisions with sufficient speed and high accuracy for the safety of passengers and others around them.

The performance of AMD hardware and associated software also offers great benefits to the process of developing and testing ML and DL systems. Today, a computing platform built with the latest AMD technologies (AMD EPYC™ CPUs and Radeon Instinct™ GPUs) can be used to develop and test a new intelligent application in days or weeks, a process that used to take years.

The Power and Freedom to Go Further

Machine learning patents grew at a rate of 34% between 2013 and 2017.3 While much work is currently being done in this area, the industry is still in the formative stages of helping machines learn more efficiently from different types of data. Intelligent systems featuring ML and DL offer enormous potential for computing that mimics human recall, pattern matching, and data association with speed and accuracy. A path and a platform have been established, and new breakthroughs are not far behind.

Footnotes
  1. Josh Lovejoy, "The UX of AI," Google Design, January 25, 2018, accessed April 23, 2018.
  2. Susan Scutti, "'Automated dermatologist' detects skin cancer with expert accuracy," CNN, January 26, 2017.
  3. Louis Columbus, "Roundup Of Machine Learning Forecasts And Market Estimates, 2018," Forbes, February 18, 2018.