1950 - 1980

Artificial intelligence.

AI has been around for some time and has been a field of computer science since the 1950s. We can define Artificial Intelligence (AI) as a computer program that generates intelligent output without explicit human input. Traditionally, AI algorithms were rule-based engines: large programs, written by hand and limited by the domain knowledge of the programmer. An example of this would be a virtual chess player. These programs had hardcoded strategies written by experienced chess players and could be defeated easily if you knew which chess strategies were available to the program.

Rule engines like this are still widely used today and can work really well. Many programs contain some sort of rule engine, so many programs could be said to contain AI algorithms. AI becomes more advanced, and less dependent on the programmer's domain knowledge, when it builds its own rules based on historical data. This is where you enter the domain of Machine Learning.
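To make the idea concrete, here is a minimal sketch of a rule-based engine: a handful of hand-written rules that flag a message as spam. The rules and thresholds are invented for illustration, and they show the limitation described above: the engine knows only what its programmer wrote down.

```python
# A hand-coded rule engine: every rule encodes the programmer's own
# domain knowledge, and the program can never go beyond these rules.
# All rules and thresholds here are hypothetical, for illustration only.

def is_spam(message: str) -> bool:
    """Return True if any hand-written rule fires on the message."""
    rules = [
        lambda m: "free money" in m.lower(),       # rule 1: suspicious phrase
        lambda m: m.isupper() and len(m) > 10,     # rule 2: shouting
        lambda m: m.lower().count("winner") >= 2,  # rule 3: repeated keyword
    ]
    return any(rule(message) for rule in rules)

print(is_spam("FREE MONEY, CLICK NOW!"))  # True: rules 1 and 2 fire
print(is_spam("Lunch at noon?"))          # False: no rule fires
```

Just like the chess example, anyone who knows these three rules can craft a spam message that slips past all of them.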

1980 - 2010

Machine learning.

As shown in the figure, Machine Learning (ML) is part of AI. ML is the subset of AI algorithms that use statistics to learn from historical data: a computer program learns to do a task by statistically analyzing examples of how the task is done. Classic ML algorithms often rely on a simple mathematical function or operation that they perform repeatedly on the data, aggregating the results. This way of learning has produced great applications such as automatic spam filters, sentiment analysis, and product recommendations. However, these classic algorithms are often quite static: little about the math can be changed, and if the math does not work well on your data, there is not a lot you can do about it. When it does work well with your data, classic ML algorithms can perform really well, and they have the great advantage of being easy and fast to deploy, requiring little training time on today's computers.
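The pattern of "a simple operation performed repeatedly, aggregating the results" can be sketched with a nearest-centroid classifier, one of the simplest classic ML algorithms. It learns by aggregating one statistic (the mean) per class and predicts by comparing distances. The toy data below is invented for illustration.

```python
# A classic ML sketch: nearest-centroid classification. "Learning" is
# nothing more than averaging the examples of each class; prediction
# picks the class whose average is closest. The toy data is hypothetical.

def fit(examples):
    """examples: list of (features, label). Returns per-class mean vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist(label):
        return sum((c - x) ** 2 for c, x in zip(centroids[label], features))
    return min(centroids, key=dist)

train = [([1.0, 1.0], "ham"), ([1.2, 0.8], "ham"),
         ([5.0, 5.0], "spam"), ([4.8, 5.2], "spam")]
model = fit(train)
print(predict(model, [1.1, 0.9]))  # ham
print(predict(model, [5.1, 4.9]))  # spam
```

Note how static the algorithm is: if class averages and distances do not describe your data well, there is nothing to tune, which is exactly the limitation mentioned above.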

2010 - now

Deep learning.

When classic ML does not suffice, Deep Learning (DL) offers many opportunities to solve your problem. DL is the subset of ML that uses Deep Neural Networks (DNNs) to learn from historical data. Without going into too much detail: DNNs consist of layers of mathematical units called neurons. Each neuron performs a simple operation on the information coming in from the previous layer and passes the result on to the next layer. During learning, the network starts off performing random operations on the input data, producing random answers at the end of the network. By telling the network whether its answer is correct, it learns over time to perform the specific operations that produce the desired output.
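This learning loop can be sketched with a single neuron, the smallest building block of such a network. Starting from random weights, it learns the logical AND function purely from feedback: after every wrong answer, its weights are nudged toward the correct one. (This is a perceptron; the learning rate and epoch count are arbitrary choices for illustration.)

```python
import random

# One neuron learning from feedback: random start, then repeated
# predict / compare / adjust. The learning rate (0.1) and number of
# epochs (20) are arbitrary illustrative choices.

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # random starting weights
bias = random.uniform(-1, 1)

def neuron(inputs):
    """A neuron: weighted sum of inputs plus bias, then a step activation."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Examples of the task (logical AND) with the desired result for each.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(20):  # repeat: predict, compare with the desired result, adjust
    for inputs, target in examples:
        error = target - neuron(inputs)      # feedback: was the answer correct?
        for i, x in enumerate(inputs):
            weights[i] += 0.1 * error * x    # nudge weights toward the answer
        bias += 0.1 * error

print([neuron(x) for x, _ in examples])  # [0, 0, 0, 1]
```

A deep network is, in essence, many of these neurons stacked in layers and trained with a more sophisticated version of the same feedback loop.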

The main difference from the aforementioned classic ML is that the architecture of this network can be changed to fit your data really well. For example, you can use layers that work especially well on image or time-series data, or you can add more layers or neurons to enable a more complex series of operations.
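The "add more layers" idea can be sketched as stacking fully connected layers, each applying a weighted sum per neuron followed by a nonlinearity. The weights below are random (in practice they would be learned), and the layer sizes are arbitrary choices for illustration.

```python
import math
import random

# Stacking layers: the output of one layer is the input of the next.
# Changing the number of layers or neurons changes the architecture,
# which is the flexibility classic ML lacks. Sizes here are arbitrary.

random.seed(1)

def dense_layer(n_inputs, n_neurons):
    """Random weights and a bias for each neuron in one fully connected layer."""
    return [([random.uniform(-1, 1) for _ in range(n_inputs)],
             random.uniform(-1, 1)) for _ in range(n_neurons)]

def forward(layer, inputs):
    """Each neuron: weighted sum plus bias, then a tanh nonlinearity."""
    return [math.tanh(b + sum(w * x for w, x in zip(ws, inputs)))
            for ws, b in layer]

hidden = dense_layer(n_inputs=3, n_neurons=4)  # first layer: 3 inputs -> 4 neurons
output = dense_layer(n_inputs=4, n_neurons=2)  # second layer: 4 inputs -> 2 neurons

result = forward(output, forward(hidden, [0.5, -0.2, 0.1]))
print(len(result))  # 2: one value per neuron in the final layer
```

Adding a layer is just one more `forward` call in the chain; swapping `dense_layer` for a layer type suited to images or time series changes the architecture without changing the overall learning idea.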

Newly invented layer types, advanced DL architectures, and ever-increasing computing power have enabled the latest AI revolution. These days we can solve many complex problems by applying this DL technology. To learn more about fields in industry where this technology can be used, visit the industry page.