A Master Machine Learning Algorithm?

It's all about extracting knowledge from data, and there's hope that this knowledge is understandable and computable. I've worked with machine learning for over 25 years, and I've learned the lesson: working in the data salt mines makes us break a sweat (too) often.

A short story of being unlucky

The idea has a long tradition: computerized systems can be like people, and there is a strong relation between algorithms and life.

First…if systems replicate all our knowledge and expertise…they are intelligent. This top-down expert-system thinking "died".

Then…Artificial Life, in contrast, named a discipline that examines systems related to life, its processes and its solutions. Would genetic programming let intelligent artificial creatures emerge? It did not…

Now…because we have neurons, intelligent machines need to have them too…our brain has an enormous capacity…so to build AIs we only need to combine massive inherent parallelism, massive data management and deep neural nets…?

All these phases have brought us a fantastic technology stack for machine learning…but we are still waiting for the breakthrough.

The race for the single algorithm

This race is motivated by the idea that if we understand how our brain learns, we can develop learning machines…and the belief of the schools that imitate biology seems to be that the way brains learn can be captured in a single algorithm.

The connectionists believe that all our knowledge is encoded in neural nets and that "backpropagation" is the "master algorithm".
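
To make "backpropagation" a bit more concrete, here is a minimal sketch of the idea, not tied to any particular framework: a tiny two-layer network whose weights are adjusted by pushing the output error backwards through the layers. The network size, learning rate and the XOR toy data are illustrative choices of mine, not anything from the article.

```python
import numpy as np

# Minimal backpropagation sketch: a 2-8-1 network with sigmoid units and a
# squared-error loss, trained on the XOR toy problem (illustrative choices).
rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.mean((out - y) ** 2))  # the squared error typically drops close to zero
```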

The evolutionaries take the ideas of genetic programming further.

But there are schools that find that imitating evolution or the brain will not lead to a master algorithm. In their view, it's better to solve the problem using principles from logic, statistics and computer science.

The Bayesians think that creating the master algorithm boils down to implementing the mathematical rules of Bayes' theorem.
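
The rule in question is compact. For a hypothesis H and observed data D, Bayes' theorem tells us how a prior belief P(H) is turned into a posterior belief once the data has been seen:

```latex
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
```

Learning, in this view, is just repeated application of this update as data arrives; the hard part is keeping it computationally tractable for large hypothesis spaces.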

The symbolists, closest to "classic" knowledge-based AI thinking, believe in general-purpose learning by extracting and combining rules that can be evaluated by inference engines.

The analogisers work on forms of analogical reasoning based on massive data.
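
The archetype of reasoning by analogy is the nearest-neighbour rule: a new case simply gets the label of the stored cases it most resembles. The little helper below is my own minimal illustration of that idea, not code from any of these schools.

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Label x_new by majority vote of its k nearest stored examples."""
    distances = np.linalg.norm(X_train - x_new, axis=1)   # similarity = closeness
    nearest = np.argsort(distances)[:k]                   # the k most similar cases
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```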

One size doesn't fit all?

From experience in practical machine learning projects, I'm infected by the multi-strategy, multi-method approach and by the combination of modeling and machine learning.

This doesn't mean that I believe a master machine learning algorithm is impossible…but why should we build machines that think like us, if they could possibly extract things that we don't recognize…helping us to gain fundamentally new insights?

Applying machine learning to the analysis, prediction and control of industrial processes, we've used fuzzy-logic-based methods to reduce the problem to a size that makes it fit for automation, and we've identified the parameters of the models for constant recalibration…with a set of methods from statistics and neural nets…
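
The post doesn't carry the project details, but a rough sketch of that kind of combination might look as follows: a fuzzy membership function compresses a raw measurement into a few linguistic grades, and a rolling least-squares fit keeps the parameters of a simple linear process model recalibrated. The membership ranges, the linear model and the numbers are illustrative assumptions of mine, not the actual project code.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    return np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzify(temperature):
    """Reduce a raw temperature reading to a few linguistic grades (made-up ranges)."""
    return {
        "low":    triangular(temperature, 0.0, 20.0, 40.0),
        "medium": triangular(temperature, 30.0, 50.0, 70.0),
        "high":   triangular(temperature, 60.0, 80.0, 100.0),
    }

def recalibrate(inputs, outputs):
    """Re-fit the parameters of y = a*x + b by least squares on the latest data window."""
    A = np.column_stack([inputs, np.ones_like(inputs)])
    (a, b), *_ = np.linalg.lstsq(A, outputs, rcond=None)
    return a, b

# Toy usage with made-up readings
print(fuzzify(65.0))                  # partly "medium", partly "high"
x = np.array([10.0, 20.0, 30.0, 40.0])
y = 2.0 * x + 5.0                     # pretend process response
print(recalibrate(x, y))              # recovers roughly (2.0, 5.0)
```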

We should not think like machines so that machines can think like us…but should machines think like us in order to replace us, or think differently to augment our capabilities?

This post was inspired by an article by Pedro Domingos in the April 2016 issue of the Wired UK edition.