A Master Machine Learning Algorithm?

It's all about extracting knowledge from data. And there's hope that this knowledge is understandable and computational. I've worked with machine learning for over 25 years and I've learned the lesson: working in the data salt mines makes us break a sweat (too) often.

A short story of being unlucky

The idea has a long tradition: computerized systems resemble people, and there's a strong relation between algorithms and life.

First…if systems replicate all our knowledge and expertise…they are intelligent. This top-down expert-system thinking "died".

Then…Artificial Life, in contrast, named a discipline that examines systems related to life, its processes and solutions. Would genetic programming bring forth intelligent artificial creatures? It did not…

Now…because we have neurons, intelligent machines need to have them too…our brain has an enormous capacity…to make AIs we only need to combine massive inherent parallelism, massive data management and deep neural nets…?

All these phases have brought us a fantastic technology stack for machine learning…but we are still waiting for the breakthrough.

The race for the single algorithm

This race is motivated by the idea that if we understand how our brain learns, we can develop learning machines…and the schools that imitate biology seem to believe that the way brains learn can be captured in a single algorithm.

The connectionists believe that all our knowledge is encoded in neural nets and that "backpropagation" is the "master algorithm".

The evolutionaries develop the ideas of genetic programming further.

But there are schools that find that imitating evolution or the brain will not lead to a master algorithm. They consider it better to solve the problem using principles from logic, statistics and computer science.

The Bayesians think creating the master algorithm boils down to implementing the mathematical rules of Bayes' theorem.

The symbolists, closest to "classic" knowledge-based AI thinking, believe in general-purpose learning through extracting and combining rules that can be evaluated by inference engines.

The analogizers work on forms of analogical reasoning based on massive data.
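Of these schools, the Bayesian one is the easiest to show in a few lines. The following is a minimal sketch, not any particular school's actual system; the spam-filter scenario and all numbers are hypothetical, chosen only to show Bayes' theorem acting as a learning rule that updates a belief from data.

```python
# Bayes' theorem as a learning rule: P(H|D) = P(D|H) * P(H) / P(D).
# Hypothetical example: update the belief that a message is spam
# after observing that it contains the word "offer".

def bayes_update(prior, likelihood, evidence):
    """Posterior P(H|D) from prior P(H), likelihood P(D|H), evidence P(D)."""
    return likelihood * prior / evidence

p_spam = 0.2                 # prior: 20% of all messages are spam
p_word_given_spam = 0.6      # "offer" appears in 60% of spam
p_word_given_ham = 0.05      # ...and in 5% of legitimate mail

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

posterior = bayes_update(p_spam, p_word_given_spam, p_word)
print(posterior)  # → 0.75: one observed word lifts the belief from 20% to 75%
```

Chaining such updates over many observations is, in essence, what the Bayesians propose to scale up into a general learner.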

One size doesn't fit all?

Through experience from practical machine learning projects, I've been won over by the multi-strategy and multi-method approach and by the combination of modeling and machine learning.

This doesn't mean that I believe a master machine learning algorithm is impossible…but why should we make machines that think like us, if they could possibly extract things that we don't recognize…helping us to gain fundamentally new insights?

Applying machine learning to the analysis, prediction and control of industrial processes, we've used fuzzy-logic-based methods to reduce the problem to a size that makes it fit for automation, and identified model parameters for constant recalibration…with a set of methods from statistics and neural nets…
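To give a flavor of the fuzzy-logic part, here is a minimal sketch of how a continuous process reading can be reduced to a handful of fuzzy states and a crisp control action. The variable, the membership ranges and the rule base are all hypothetical illustrations, not the actual industrial models mentioned above.

```python
# Sketch of fuzzy-logic problem reduction: a continuous temperature
# reading is mapped onto three fuzzy states, and a tiny rule base
# produces a crisp heater adjustment by a membership-weighted average.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def control_adjustment(temp):
    # Fuzzify: degree to which the temperature is "low", "ok", "high"
    # (ranges are made up for illustration).
    low = triangular(temp, 0, 20, 50)
    ok = triangular(temp, 30, 60, 90)
    high = triangular(temp, 70, 100, 130)
    # Rule base: each fuzzy state maps to a crisp heater adjustment.
    rules = [(low, +10.0), (ok, 0.0), (high, -10.0)]
    # Defuzzify with a membership-weighted average.
    total = sum(m for m, _ in rules)
    return sum(m * a for m, a in rules) / total if total else 0.0

print(control_adjustment(20))   # → 10.0 (clearly "low": heat up)
print(control_adjustment(60))   # → 0.0  (clearly "ok": leave alone)
print(control_adjustment(100))  # → -10.0 (clearly "high": cool down)
```

The point of the reduction is that once the process is described by a few such states and rules, the remaining work is identifying and recalibrating the parameters, which is where the statistical and neural-net methods come in.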

We shall not think like machines so that machines can think like us…but shall machines think like us in order to replace us, or think differently to augment our capabilities?

This post was inspired by an article by Pedro Domingos in the April 2016 issue of Wired UK.

An Innovator's Reputation


When I present an innovation, I trust that it's good. No, I trust that it provides differential advantages over competing offers. I trust that it'll be accepted, at least by those whose attention I want. Maybe I even trust that they will remain loyal to me and my innovation. I also trust that my team and I are capable of shipping the required quality and of responding appropriately to future needs. I trust that we will tell the right stories to stretch and expand to new buyers. I trust that we have found the right position on the value/price map. I trust that I know the competitive arena well enough...I trust that we will control our costs and financial obligations, and not the other way around...

This is quite a lot of trust. Too much to sustain under competition?


When I've developed my innovation atop a partner's products and consequently act as a reseller of their products, I trust that the originators have considered all the aspects above (I'll check all factors through my lens). I trust that they're interested in my market analysis and experience. I have confidence in my own abilities as a reselling partner, but I need to trust that they won't let me develop a market and then change the licensing and reselling agreements…to my disadvantage.

This is quite a lot of trust. Too much to sustain under cooperation?

Cooperate to become competitive?

Suppose I've been lucky and introduced a bestselling innovation for a specific market, and I offer my know-how, my technology and my special marketing knowledge…to other developers? I empower them to become creative copiers. What do I trust? I trust that they will help me stretch and expand into markets that I can't reach alone. I trust that I can continue to determine the rules of the game…that I'm bursting with ideas and that the pioneer always keeps an advantage over the creative copier.

Does this describe the paradox that businesses may be competitive and cooperative simultaneously?

It's not an easy game, but an innovator's reputation rises with the reputation of the technology they use and with the reputation of the innovations that are built atop it.

It's important that you understand the game of Open Innovation very well…its options and market risks...