Google’s Future: Neural Networks, Deep Learning & Artificial Intelligence

Let’s face it: Google have built driverless cars and are even working on nanotechnology that can detect early stages of disease in the human bloodstream. This does not sound like the work of a search company to me. You could easily argue that Google are not a search company anymore; search is just a part of what they do. Larry Page commented:

“We don’t always produce what people want. That’s what we work on really hard. It’s really difficult. To do that you have to be smart, you have to understand everything in the world, you have to understand the query. What we’re trying to do is artificial intelligence…the ultimate search engine would be smart. And so we work to get closer and closer to that.”

Google’s technology utilises lots of isolated AI mechanics that work to support the search engine. Google Translate is one example of these mechanics.

The Conventional Model

The conventional model of developing programs involves a lot of human input. A simple example would be a calculator. An input is fed into the program, the input is computed using specific parameters set by the programmers, and an output is produced. So 1 + 1 is the input; the calculator runs its parameters, in this case an algorithm, and outputs 2. If the program needs to change, a human alters the algorithm. While computers are capable of processing incredible amounts of information, they are not intelligent while they adhere to this model. They can only do exactly as they are told. Machine learning is the next step.
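As a minimal illustration of the conventional model, here is a calculator whose rules are fixed entirely by the programmer. The only way its behaviour changes is if a human edits the code:

```python
# A conventional program: every rule is hand-written by the programmer.
# Changing its behaviour means a human altering the code itself.

def calculate(a, operator, b):
    """Apply one of a fixed set of hand-coded rules to the input."""
    if operator == "+":
        return a + b
    if operator == "-":
        return a - b
    if operator == "*":
        return a * b
    raise ValueError(f"No rule for operator: {operator}")

print(calculate(1, "+", 1))  # input: 1 + 1, output: 2
```

The program never deviates: identical input, identical output, every time.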

Machine Learning

Basic artificial intelligence has been around for a while. Machine learning is an early, simple but powerful example of AI. Machine learning does not require the programmers to set the parameters, so you cannot accurately predict the output. In Rand Fishkin’s Whiteboard Friday, he mentions how Moz use simple machine learning mechanics to try to recreate Google’s PageRank. They feed inputs into a program, just like the conventional model. In Moz’s case, these inputs are all of the page metrics that they have identified, or can confidently assume, Google uses as signals in its search algorithm. The program then uses the Google search results as a baseline and creates the algorithm as best it can from its inputs. It then outputs the resulting “MozRank”. Programs that use machine learning are still tweaked by a human; changing the different inputs alters the results. Moz use a minimum sample of 10,000 in their MozRank program.
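A toy sketch of that approach might look like this. The metric names, numbers and learning rule below are entirely hypothetical (simple gradient descent, not whatever Moz actually use); the point is that the weights are learned from observed rankings rather than hand-coded:

```python
# Hypothetical sketch: learn weights that map page metrics to an observed
# rank score, instead of a programmer setting the weights by hand.

def learn_weights(samples, targets, lr=0.01, epochs=2000):
    """Fit one weight per metric by gradient descent on squared error."""
    n_features = len(samples[0])
    weights = [0.0] * n_features
    for _ in range(epochs):
        for metrics, target in zip(samples, targets):
            prediction = sum(w * m for w, m in zip(weights, metrics))
            error = prediction - target
            weights = [w - lr * error * m for w, m in zip(weights, metrics)]
    return weights

# Made-up metrics per page: [link_count, content_score];
# targets: the rank scores we observed in the search results.
samples = [[1.0, 0.5], [2.0, 1.0], [3.0, 1.5], [4.0, 2.0]]
targets = [2.0, 4.0, 6.0, 8.0]

weights = learn_weights(samples, targets)
# Score a new, unseen page using the learned weights.
score = sum(w * m for w, m in zip(weights, [5.0, 2.5]))
```

Feed in a different sample of pages and rankings and the learned weights, and therefore the outputs, change, which is exactly why the output cannot be predicted in advance.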

Google Translate also works by machine learning. Originally, thousands of translated documents were fed into the system as its sample. There is no definite output like with a calculator. When you input a phrase to be translated, the program cross-references its sample of translated documents and tries to reconstruct the phrase in another language as accurately as it can. The simplest way to make this more accurate is to increase the reference sample so that the program has greater ability to cross-reference.
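As a very loose illustration of that cross-referencing idea (not Google’s actual system), imagine a tiny phrase table built from a sample of translated documents. Translating a new phrase means matching the longest phrases already seen in the sample:

```python
# Illustrative sketch only: a phrase table built from a (hypothetical)
# sample of translated documents. A bigger sample = more phrases to
# cross-reference = more accurate output.

sample_pairs = [
    ("good morning", "bonjour"),
    ("thank you", "merci"),
    ("the cat", "le chat"),
]
phrase_table = dict(sample_pairs)

def translate(sentence):
    """Greedily match the longest known phrase at each position."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest span first, shrinking until a match is found.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i = j
                break
        else:
            out.append(words[i])  # unknown word: pass it through unchanged
            i += 1
    return " ".join(out)

print(translate("good morning the cat"))  # -> "bonjour le chat"
```

Note that there is no single “correct answer” coded anywhere; the quality of the output depends entirely on the size and coverage of the sample.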

Deep Learning & Neural Networks

Deep learning aims to take machine learning to the next level by letting the program choose its own inputs. This would be the first step towards a true artificial intelligence, and it is where the research of Dr. Geoffrey Hinton gives Google an advantage. Dr. Hinton is a pioneer in neural network systems: artificial programs that simulate the way the human brain operates. Dr. Hinton has the program work on a four-plane model. Each plane contains a number of “neural functions”. These planes are basically complex groups of if statements, and the results of each plane create the “neural functions” of the next plane.

An extremely simple explanation of the four planes is:

  1. Concept plane – Inputs are fed into the program.
  2. Pattern plane – Patterns are detected in the data.
  3. Prime plane – Decides how patterns should be treated.
  4. Action plane – The system is altered and performance analysed.

As a result of the analysis, new inputs are selected and the process repeats. This removes the need for human beings to set inputs. The technology is starting to move out of its infancy, and Dr. Hinton has had promising early results:

“They got dramatic results,” says Hinton. “Their very first results were about as good as the state of the art that had been fine-tuned for 30 years, and it was clear that if we could get results that good on the first serious try, we were going to end up getting much better results.”

“It was as if a person could suddenly cram in, say, the equivalent of five hours of skiing practice in ten minutes.” This is because of the incredible speeds at which the computer can process information.
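The layered structure described above can be sketched very loosely in code. This is just an illustration of planes feeding their results into the next plane, not Dr. Hinton’s actual architecture, and all the weights here are arbitrary:

```python
import math

# Loose illustration of stacked "planes": each layer's outputs become
# the inputs of the next layer. Weights below are arbitrary examples.

def plane(inputs, weights, biases):
    """One plane: weighted sums squashed through a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

def forward(inputs, planes):
    """Feed the result of each plane into the next, as described above."""
    activation = inputs
    for weights, biases in planes:
        activation = plane(activation, weights, biases)
    return activation

# Four hypothetical planes, shapes 2 -> 3 -> 3 -> 2 -> 1.
planes = [
    ([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]], [0.0, 0.1, -0.1]),
    ([[0.2, 0.7, -0.5], [0.9, -0.1, 0.3], [0.4, 0.4, 0.4]], [0.0, 0.0, 0.0]),
    ([[0.6, -0.6, 0.2], [-0.2, 0.5, 0.5]], [0.1, -0.1]),
    ([[1.0, -1.0]], [0.0]),
]
output = forward([1.0, 0.5], planes)  # a single score between 0 and 1
```

In a real system the weights would be adjusted automatically based on how the output compares with reality, which is the analysis-and-repeat loop described above.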

It’s about Intuition

This is really exciting stuff because it means that we have created an artificial program that mimics our own neurological functions. Because the program processes information so quickly and can readjust its own parameters so quickly, it shows a kind of intuition. In the program’s case, this is called “unsupervised learning”. Hinton explains it like this:

“Think about little kids, when they learn to recognize cows,” says Hinton. “It’s not like they had a million different images and their mothers are labelling the cows. They just learn what cows are by looking around, and eventually, they say, ‘What’s that?’ and their mother says, ‘That’s a cow’ and then they’ve got it. This works much more like that.”
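A bare-bones sketch in the spirit of that cow example (all numbers made up): the program below groups similar measurements with no labels at all, so a single labelled example afterwards, the “that’s a cow” moment, names a whole group at once:

```python
# Unsupervised learning in miniature: group unlabelled points with a
# bare-bones 1-D k-means. No one ever tells the program what the groups are.

def kmeans_1d(points, k=2, iterations=20):
    """Cluster 1-D points around k centres."""
    centres = points[:k]  # naive initialisation: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

# Hypothetical animal heights in metres: two obvious groups, never labelled.
heights = [1.4, 1.5, 1.45, 0.3, 0.35, 0.25]
centres, clusters = kmeans_1d(heights)
# Pointing at one member ("that's a cow") now labels its whole cluster.
```

The structure is discovered from the data itself; the label arrives afterwards, just as in Hinton’s example.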

Skynet: The Future of Search

This could have huge ramifications for search, industry and anything that uses computing technology. We could see the first real AI programs in our lifetime. I’ll focus specifically on search, as this is my area of expertise. While Google currently keep their cards close to their chest with regard to organic search, they at least offer some transparency. People like Matt Cutts interact with the SEO community and can shed some light on the dos and don’ts. For example, a Google engineer could say, “Page speed is something that you should spend some time considering.” We don’t know everything from that statement, but we know enough: we can test samples to determine where the acceptable benchmark is from a UX and algorithmic point of view. When the program is constantly re-picking inputs, altering existing ones and creating new ones, that statement becomes “It makes sense that page speed could be something to consider.”

It may seem like a small alteration, but it’s a game changer. The best thing that we can do as inbound marketers is acknowledge and accept that this change is coming. We are going to lose data, and our predictions will not be as accurate, but this could be a positive step for online marketing as an industry. Why? Because the general methods of content creation will remain the same; it will simply force marketers to produce relevant, useful and interesting content for people, not for machines. If Google’s track record is anything to go by, the priorities of a search engine powered by AI would be to serve the most relevant content possible to its human users.