but I do not know how…
What Do You Think About Machines That Think? is the 2015 Edge question for the brightest minds. (Maybe) in reply to Stephen Hawking's warning: "The development of full AI could spell the end of the human race".
The first contributors and responses can be found here. I walked through the list and about a third of the contributions…
My view is shaped by 25 years of practical project experience with AI tools - particularly machine learning. Here it is, in a concise form:
I think the real questions behind it are:
"… About Machines That Think Like Us?" Us mathematics professors, truck drivers, traders…
"Is building a thinking machine possible?" And if yes: "How far from thinking are the machines we can build in the near future?"
"Should thinking machines be built at all?"
Machines can think
I have no doubt that thinking machines are possible. But their thinking may differ, depending on the underlying matter. We know from ourselves that the right configuration of chemicals can perform thinking. Although our thinking has the Gödel weakness of self-reference… we think that we think.
A short history of being unlucky - the future that never happened:
The idea has a long tradition: that computerized systems can be like people, and that there is a strong relation between algorithms and life.
First… if systems replicate all our knowledge and expertise, they're intelligent. This top-down expert-system thinking "died".
Then… Artificial Life, in contrast, named a discipline that examines systems related to life, its processes and solutions. Would intelligent artificial creatures emerge from genetic programming? They did not.
Now… because we have neurons, intelligent machines need to have them too. Our brain has an enormous capacity - to make AIs we only need to combine massive inherent parallelism, massive data management and deep neural nets?
All these phases have brought us a fantastic technology stack - multi-paradigm programming languages, generic optimization techniques, symbolic computation, computer vision…
But, IMO, our objects of desire, universal machines that think like us, are far away.
The smart connected machine?
I'm a good car driver. But I can imagine machines that drive much better - provided they see, and consequently anticipate, much better; react rationally when a moose breaks out of the woods unexpectedly; and constantly interact with other cars to optimize traffic…
Thinking and decision making are not the same, but wouldn't we consider a car that drives much better than we do intelligent?
How much autonomous intelligence does a car need to collaborate perfectly with others? Not much.
The super-intelligent loner?
By education, I'm infected by logic, the object level of mathematics. It takes ten axioms and a few fundamental rules to create the rich sentences of logic symbols. This is the layer of maths that works. If you want to know whether the results make any sense, you need models (and semantic functions).
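The split between the symbol layer and the model layer can be made concrete. A minimal sketch in Python (an illustration of classical propositional semantics, not the author's setup): a formula is valid exactly when it holds in every model, i.e. under every truth assignment.

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

def is_tautology(formula, num_vars):
    # Semantic check: evaluate the formula in every model
    # (every assignment of True/False to its variables).
    return all(formula(*model)
               for model in product([True, False], repeat=num_vars))

# Modus ponens as a formula: ((p -> q) and p) -> q
modus_ponens = lambda p, q: implies(implies(p, q) and p, q)
print(is_tautology(modus_ponens, 2))   # holds in all four models

# A plain implication is not valid: p=True, q=False refutes it.
print(is_tautology(lambda p, q: implies(p, q), 2))
```

The brute-force model check works here because propositional logic has finitely many models per formula; for richer logics, this is exactly where proof checking and model theory part ways.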
We know that computers are great at verifying that mathematical proofs work. But because the intellectual process of creating new mathematical knowledge is also characterized by systematic steps, computers can also do pure mathematics.
Wouldn't an autonomous maths computer be seen as a prototype for super-intelligence? IMO, yes.
Does it need (much) interaction? IMO, no.
If it does maths that helps it do better maths, it cannot even be falsified…
The ultrafast and feverish trader
Stock quotes are always to the cent, but real prices can go to fractions of that. No sane person would make decisions based on a few thousandths of a cent every millisecond - computers do not get bored. They do exactly that in high-frequency (algorithmic) trading… what if they could think?
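Why those thousandths of a cent matter at all: at machine speed, the tiny margins compound. A back-of-the-envelope sketch in Python - all numbers (margin, trade size, trade rate) are assumed for illustration, not real market figures:

```python
# Hypothetical numbers: a margin of a few thousandths of a cent per share,
# captured many times per second, adds up over a trading session.
margin_per_share = 0.00003       # dollars, i.e. 3 thousandths of a cent (assumed)
shares_per_trade = 1_000         # assumed trade size
trades_per_second = 100          # assumed trade rate
seconds_per_session = 6.5 * 3600 # one 6.5-hour US trading day

daily_total = (margin_per_share * shares_per_trade
               * trades_per_second * seconds_per_session)
print(round(daily_total, 2))     # roughly $70,000 from sub-cent fractions
```

No single trade would interest a human, which is the point: the decision loop only pays off at a frequency no person can sustain.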
Would a thinking trading machine squeeze every fraction of a cent out of every market? And if so, would we accept it as intelligent? IMO, no.
I don't make any forecasts, but I think the maths-professor machine will be an early result that is accepted as a thinking machine.
Summarizing: I'm in the camp of those who believe that machines that think can complement us, doing things better for a better society. But it depends on what they are supposed to think about.
What I fear: that we try to teach people to behave like machines - if we think like machines, it will be easier for machines to think like us.