DeepMind – an artificial intelligence (AI) company that Google acquired in 2014 – recently garnered headlines with a groundbreaking AI achievement. DeepMind created a machine learning program, called AlphaGo, that crushed one of the world’s top players, Lee Sedol, at an ancient strategic board game called “Go.”

Go, a game of profound esteem in East Asia, consists of two players moving black and white stones around a square board. The game takes a lifetime to master, is considered exponentially more complex than chess, and is regarded as a test of more than mental acumen – “It’s also psychology, philosophy – it’s art.”
Certain intangible qualities, such as wisdom and intuition, which all masters of Go acquire through years of intense training, make it extremely difficult for an AI system to beat a human master. In fact, only a few years ago, experts predicted that it would take decades for an AI program to defeat a Go master. AlphaGo’s success, however, highlights the power and potential capability of a specific type of machine learning system: deep learning.
Deep Learning: What It Is & Why It Matters
While still a tech buzzword, deep learning is slowly but surely gaining traction in mainstream media. Unlike IoT, the cloud, and big data, deep learning hasn’t yet been called out on Gartner’s hype cycle. While knowledge of deep learning may still live predominantly within the engineering crowd, it will soon ascend through management chains and into the modern zeitgeist.
Deep learning is a branch of machine learning that specializes in building and teaching artificial neural networks how to carry out complicated tasks. Major tech companies, which are all racing to add machine learning capabilities to their cloud software platforms, recognize that the real power of deep learning is revealed when a neural network is trained to parse massive amounts of data and recognize patterns. Such feats have only recently become possible, thanks to the power of modern high-end gaming GPUs and inventive mathematical tricks and algorithms.
Neural Networks, Defined
Artificial neural networks are at the heart of deep learning. You likely interact with an artificial neural network on a daily basis by simply asking Siri a question, or receiving a “did you mean…?” suggestion for a search term on Google.
Artificial neural networks get their name because they attempt to mimic how the human brain functions. The brain receives input stimuli (e.g., what we see) and outputs our interpretation of those stimuli (e.g., how we think about, conceive of, and react to what we sense).
Our brains pass stimuli through a hierarchy of neuronal levels, each of which creates an interpretation of the input. These interpretations are passed up the hierarchy until a judgment or action is made. But the most amazing part of this network of interpretations is that it can “rewire” itself as the brain receives “feedback” from the outcomes of each interpretation.
For example, a child may see steaming food after it is taken off of a grill. The input of “grill + food” produces an output of “eat now.” If the child eats the hot food and burns his or her tongue, the response to the stimulus “grill + food” can be rewired so that it produces an output of “caution.”
Replicating the Biological: the Power of Artificial Neural Networks
To artificially replicate the thought process in the above “grill + food” scenario, machine learning scientists create artificial neural networks, which can be thought of as a series of black boxes that house algorithms. These black boxes are arranged in a manner inspired by how we believe biological neural networks process input stimuli. Vectors of numbers, which can represent anything from pixels in an image to foot traffic at a retailer, are then fed into the black boxes as “inputs.”
As the vectors of numbers pass through each black box, a discrete transformation of the input occurs, and the result is propagated through a network of mathematical equations that further manipulate the data.
When the results do not match what the programmer desires, the system is fed more input data, allowing it to “try again” by tweaking each black box and “rewiring” the network until it produces a desirable output. This happens again and again until the model correctly evaluates real-world outcomes from a set of input stimuli.
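The “black boxes” described above can be sketched in a few lines of code. This is a minimal illustration, not ShopperTrak’s actual model: each box is assumed to be a linear transformation followed by a nonlinearity, and all sizes and values here are invented for the example.

```python
import numpy as np

def layer(x, weights, bias):
    """One 'black box': a linear transform followed by a nonlinearity."""
    return np.tanh(weights @ x + bias)

rng = np.random.default_rng(0)

# An input vector -- could represent pixels, or daily foot-traffic figures.
x = rng.normal(size=4)

# Two stacked black boxes form a tiny network.
w1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
w2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

hidden = layer(x, w1, b1)       # first discrete transformation
output = layer(hidden, w2, b2)  # propagated through the next box
print(output.shape)  # (1,)
```

Stacking more such layers gives the “hierarchy of neuronal levels” the brain analogy describes; “rewiring” corresponds to changing the weights and biases.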
Neural Networks at Work, a ShopperTrak Example
The best way to illustrate how ShopperTrak uses deep learning is to pose a hypothetical foot traffic scenario:
After creating a series of neural networks, we “input” that traffic in 2015 was, for example, 450 people per store. We can also input that the weather was sunny and warm, and that nationwide traffic held steady from 2014.
Then, we get a prediction. If the network predicts that foot traffic will be 800 when it was actually 400, we use a technique called backpropagation to go back through all the steps and adjust the weights to achieve a more accurate result the next time around. This process is repeated millions of times, and each time our system gets a bit smarter about how to predict traffic.
Passing inputs through neural networks, adjusting weights, and rewiring via backpropagation is the process at the heart of modern deep learning.
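The predict-compare-adjust loop above can be shown with a toy example. This is a sketch only: a single-weight linear model trained by gradient descent on invented traffic numbers, standing in for the far larger networks and datasets a real system would use.

```python
import numpy as np

# Invented data: lagged traffic (input) and actual next-period traffic (target).
x = np.array([450.0, 500.0, 400.0, 480.0])
y = np.array([440.0, 510.0, 390.0, 470.0])

w = 2.0    # a deliberately bad starting weight: predicts ~800 when truth is ~400
lr = 1e-6  # learning rate

for step in range(10_000):
    pred = w * x                   # forward pass: the prediction
    error = pred - y               # how far off we were
    grad = 2 * np.mean(error * x)  # gradient of mean squared error
    w -= lr * grad                 # "adjust weights": the backpropagation step

print(round(w, 2))  # 0.99 -- predictions now track actual traffic closely
```

Each pass through the loop is one round of “try again”: the weight is nudged in the direction that shrinks the gap between prediction and reality.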
The Role of Intuition in Prediction
In the above scenario, the act of “adjusting weights” hints at the idea that deep learning systems attempt to replicate certain uniquely human qualities: namely, the aforementioned wisdom and intuition that Go masters develop over a lifetime.
When AlphaGo beat Lee Sedol, it rattled the AI community because the victory demonstrated that a computer leveraging deep learning could “learn,” or adopt, qualities considered intrinsically human. Indeed, one could argue that human intuition – the thoughts or actions that result from years of experience and earned wisdom – is exactly what AlphaGo has “learned.”
Intuition at ShopperTrak
Industry veteran and ShopperTrak founder Bill Martin helped create an entirely new industry in retail foot traffic counting and analysis. Through his 20 years of experience, Martin developed an intuitive ability to understand, analyze, and predict foot traffic trends across the industry. When he retired earlier this year, he took his intuition with him. Luckily, we have spent years building a deep learning model that encodes that experience and predicts traffic trends.
To do this, we used ShopperTrak’s unmatched data set of several billion traffic counts across tens of thousands of sites to train a deep learning model that better estimates traffic. In doing so, we leveraged two components of modern deep learning that are critical to success:
- An extremely large training data set (unique to ShopperTrak)
- Powerful modern computing
These two criteria enabled ShopperTrak to replicate the wisdom of its industry’s founder. We now crunch data sets at a rate that was impossible just a few years ago by using the most sophisticated gaming GPUs available, and we are on the bleeding edge of techniques that go far beyond the basic statistics and linear regressions of the past. These statistical techniques are nodes within the hierarchy of our neural networks, and they allow us to quickly test and tweak our programs using cutting-edge deep learning maneuvers.
Our model’s ability to predict traffic based merely on lagged same-store traffic is illustrated below. By including inputs such as organization average traffic and square footage, drawn from ShopperTrak’s unprecedented traffic dataset, we get a very accurate prediction of future traffic trends. And when we add in additional event data – such as promotions and campaigns – we produce an even more accurate model.
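To make the notion of “adding inputs” concrete, here is a sketch of assembling such features into a single input vector. The feature names and values are hypothetical examples, not ShopperTrak’s actual schema.

```python
import numpy as np

# Hypothetical features of the kinds mentioned above.
features = {
    "lagged_same_store_traffic": 450.0,  # last period's count
    "organization_avg_traffic": 520.0,
    "square_footage": 12000.0,
    "promotion_running": 1.0,            # event data: 1 if a campaign is live
}

x = np.array(list(features.values()))

# Normalizing keeps large-magnitude features (square footage) from
# drowning out small ones (a binary promotion flag) during training.
x_norm = (x - x.mean()) / x.std()
print(x_norm.shape)  # (4,)
```

Each additional feature widens this vector, giving the network more signal to learn from.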
Taking it one step further, ShopperTrak’s Professional Services team can provide advisory services that put our deep learning expertise to work on a retailer’s traffic data.
Interested in learning more? Contact us at email@example.com.