Product Lives, Inner and Outer Struggles for Existence

Most innovations are about better products / services, better work, better market implementation and better selling.

I concentrate on innovations that transform knowledge into margins, emphasizing quantitative theories, methods and implementations. And we assume selling gets better if we understand the types of our innovation better.

Innovations support product / service life cycles, their building processes and implementation environments. This influences, often one to one, the structure of the innovation itself: its objects, functions, configurations, operations, purpose, methodologies and technologies.

Maximizing its success means turning dangers and opportunities into a positive contribution - optimizing market risk.

In a general scope you may need to deal with paradigms like Duality - split the known and unknown, find measurable Boundaries between the two, Optimization that is only possible in the known, Evolution views that help to optimize, Superposition that is about true randomness, Game Theory that is a great paradigm for systemic issues…and not to forget Money - it's a product itself, but you cannot optimize market risk without thinking of money.

LinearFlow, MultiFlow and AntiFlow build our workflow triangle.

LinearFlow is the classical structure of innovations. It features a single object that is to be built to pursue a desire, an objective, a requirement…a change. It confronts external forces, but its inner structure is quite clear. It's causal, real and linear. The innovation leads to an irreversible change of the object, which is fit for a purpose, has its quality and is prepared to materialize as a price at a market.

Its underlying object may have started as an idea, a commodity or a rough material. LinearFlow describes a product life story.

Think of a financial instrument, say, an exotic option. A trader has an idea of an attractive investment profile and asks the quantitative analyst: "structure me this". The quant chooses the model for the underlying, calibrates the model towards prices of liquid options and uses the parameters to value the new option. After a positive assessment the new option will be traded, and the process-through valuation becomes important.
Pricing may become very complicated when strict regulatory forces come into play...

Or the housing for a simple gear. The production manager has the idea to build the housing by "folding" a metal sheet instead of milling it from a solid metal block. The mechanical engineer chooses the elasto-plastic model for the material, calibrates the model towards standardized material parameters and uses the parameters to calculate the bending properties…

Great. But wait a little; both may find: the model is correct, the fit of the calibrated model to the data was so good, but the results were so bad. Why? Because neither calibrated to data that are informative and current enough. Both need to acquire such data during the trading / bending process - and perform adaptive recalibration.

This is intrinsic to many LinearFlow, process-oriented innovations. You create something new, and therefore you do not have enough informative data in advance.
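
The adaptive recalibration idea can be sketched as a toy (the linear model, data and window size here are hypothetical, purely for illustration): instead of calibrating once to stale data, refit the parameter on a sliding window of the most recent observations gathered during the process.

```python
# Toy adaptive recalibration: refit a in the hypothetical model y = a*x
# on a sliding window of the latest observations.

def calibrate(xs, ys):
    """Least-squares fit of a in y = a*x over the given observations."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

def adaptive_recalibration(stream, window=3):
    """Yield a freshly calibrated parameter after each new observation."""
    xs, ys = [], []
    for x, y in stream:
        xs.append(x)
        ys.append(y)
        yield calibrate(xs[-window:], ys[-window:])

# The process drifts: early data follow y = 2x, later data y = 3x.
data = [(1, 2), (2, 4), (3, 6), (1, 3), (2, 6), (3, 9)]
params = list(adaptive_recalibration(data))
```

A one-shot calibration would be stuck at the early regime; the windowed refit tracks the drift and ends at the new parameter.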

MultiFlow objects struggle with their inner affairs, their interplay and interaction with other process objects and environments. As they are transformed while moving through the system, they look more for optimal processes than for external barriers, constraints or limitations. They organize their inner life.
MultiFlow innovations often offer "open results"...configuration modeling, across-scenario simulations…
The objects may change or remain the same at the end of the workflow. But they will have changed their roles...

Think of a portfolio of financial instruments and their risk management, or the gear production and its quality management…

AntiFlow fights the existence of the innovation itself. In AntiFlow innovations there's no requirement for causality, no constant reality, no time constraints, and the objects (agents) remain as they ever were.

Think of systemic risk analytics - exposing the risk of collapse of an entire financial system, market or even economy. Risk imposed by interlinkages, interdependencies…An innovative approach is DebtRank, where those banks represent the greatest risk that would cause the widest spread of economic distress (if they failed). It uses the paradigm of evolution and analogies between the "food web" (what eats what) and debt webs.

Next, I'll move to Matter Types, but I'm sure I'll return to structure aspects.

How Innovations Change Our Lives

Moving on to the fourth leaf of our five-leaf innovation type clover, let's talk about structure. We want to make better things in better ways…By better things, we do not mean things of yesterday, made faster...or cheaper? And by better ways we do not mean industrial work alone…it needs lab work. In lab work we strive for a breakthrough, proposing non-obvious approaches or solutions. It's difficult to do factory and lab work simultaneously.

A story is driven by a plot. It has characters. The analogue in innovation is workflow (functions, operations) and agents (objects, actors, actions)…

It helps to understand the value of an innovation by recognizing whether it's workflow or agent driven. Innovations often apply "What If" thinking. In agent-oriented systems this is realized as design, engineering, simulation tool sets or systems…testing change: refinement, extension, configuration, integration, emergence (change from lower to higher levels)…Workflow-dominated systems manage (control)…their "What If" thinking is internal.

Structure Types

LinearFlow - systems that represent how (single) objects are transformed as they "move" through a system whose processes are sequential. Its steps are often divided into (symbolic) preprocessor-processor-postprocessor modules. Preprocessors prepare the input for better processing and postprocessors help to interpret results.
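
The preprocessor-processor-postprocessor split can be sketched as three plain functions composed into one linear flow (the stages here are deliberately trivial):

```python
# A minimal LinearFlow: prepare input, process it, interpret the result.

def preprocess(raw):
    """Preprocessor: parse and clean the raw records."""
    return [float(x) for x in raw if x.strip()]

def process(values):
    """Processor: the core step (here, a trivial aggregation)."""
    return {"n": len(values), "mean": sum(values) / len(values)}

def postprocess(result):
    """Postprocessor: help interpret the result for a report."""
    return f"{result['n']} samples, mean = {result['mean']:.2f}"

def linear_flow(raw):
    return postprocess(process(preprocess(raw)))

report = linear_flow(["1.0", "2.0", " ", "3.0"])
```

The point of the split is that each module can be replaced independently - a different preprocessor for a different input format, a different postprocessor for a different audience.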

They usually represent linear information flows, control flows, money flows...or flows of physical…objects. Not surprisingly they are the choice for representing the characteristic flow processes (metallurgical, chemical, energy production…). The challenges lie in the processes (they may be complicated).

Systems representing linear flows often need (statistical) data analysis, modeling, parameter identification, optimization…consequently powerful computing. The flow linearity suggests: one system one integrated workflow (process-through integration of evaluation and data management is indispensable). They often organize scheduled tasks that are processed automatically plus interaction where required.

MultiFlow - systems that represent how multiple objects are transformed as they move through multiple processes, sequentially and in parallel.

Prototypes are flexible discrete manufacturing systems…Their challenges lie in the processes (they may be complicated, even programmable), and the optimization of process utilization, work in process and time in process…Their flexibility may come from the process flexibility or the routing flexibility.

In their local modules, MultiFlow-supporting systems may not be so different from LinearFlow-supporting systems, but they need a services and communication layer that organizes them. They're usually integrated, but not too tightly, and interactivity is more important.

AntiFlow - systems composed of interacting intelligent objects (agents). They interact with each other and with an environment. Intelligence may include logic-based, rule-based, functional, procedural...methodic and algorithmic capabilities. They're often represented as a set of techniques from mathematics, engineering, the sciences…

They usually seek explanatory insight into the collective behavior of agents, typically in natural systems. More generally, the theory of complex systems is about how relationships between parts give rise to the collective behavior of a system and how the system interacts with an environment.

Their homes are networks, where flows are undirected and feedback loops matter.

Motivation to study complex systems comes from physics…but it is seen as an indispensable approach to the social and economic sciences, where (irrational) human behavior introduces the complexity. Econophysics, a prototypical field, applies the theory of complex physical systems to economics.

This is the place to warn of the tightly-coupled complex systems trap: complex systems can have unintended consequences, and tight coupling means, in most cases, that there is not enough time to react to them.

Complex technical systems, say, chemical reactors, need to cope with the tightly coupled complexity, but why should a bank?

Structure and Matter are intimately related. In Linear- and MultiFlow systems the objects change. They require an orientation towards quantifiable objectives, whilst AntiFlow systems strive for insight (AntiFlow objects remain the same). AntiFlow systems look into systemic behavior, whilst Linear- and MultiFlow systems systematize…Analyzing systemic risk is completely different from exposing enterprise or portfolio risk. And so different are the models, methods and solutions…the innovation.

In the next post I'll give a more practical view of the Structure Types…

UX - The Appeal of an Innovation

Front-ends are results of a deployment principle and the interface of the inner world of the innovation to its actors (users). They determine how the users experience the innovation (UX - User Experience).

Time types are quite self-evident. You've introduced a blazingly fast engine, a real-time system or an off-time "computer-aided…" system. They're quite easy to level. The same Innovation Mesh principles are relevant to time types.

Choices you make in reality and style types will help you to check later whether they are consistent with the most important choices of structure and matter types. If you have not much to say, does it make much difference how real it is, or how it is said?!

Style Types

Documentary - a widely used document-centered system is Excel. So-called notebook front-ends, supporting documents that look like textbooks but behave like programs, have been introduced with the Wolfram Notebook Interface, IPython Notebook Interface…Notebook front-ends are built for interactive computing. They're usually cell-based, combining text, input and output cells…and run on desktops or the web. Not surprisingly they are widely used in science, research and technical computing.

Literary - referring to systems with linguistic interfaces. They are used to organize question and answer sessions (Google…), but in "quantitative programming" we are especially interested in literate (declarative) programming that may lead to the support of natural language programming. In literate programming, methodologies and implementation details become invisible…algorithms may be automatically selected, precision automatically controlled…languages may support multiple programming styles, like procedural, logic, functional, object-oriented…they support development in an explorative, experimental and evolutionary fashion. Literary interfaces represent a kind of "high-art" sensibility.

Graphical - referring to user experience by point&click interactions controlled by a mouse, leap motion…With the emergence of smart devices the style has changed. After a time when web browsers were promised as powerful client operating systems, apps have introduced user interfaces that are more about getting than searching…But there's more: visual programming is a style where program elements are manipulated graphically (rather than textually). They may be icon-based, form-based, graph-based…

There's a paradox with GUIs: skeuomorphs. GUIs of programs are skeuomorphs if they emulate objects of the physical world (like physical audio equipment…). New media give you the opportunity to design new interaction paradigms:

Theatrical - interfaces that work with qualities of a theater…with actors, scenes…They're event driven. You have them in computer games, but they're also improving instructions, guides, education…in general, they can be a great medium for study, skill gain…

This was the simpler part, without thinking too much about the conventions, obligatory workflows, requirements, positioning, technology...I can say:

UnRisk Capital Manager is Realistic, Real Time, Documentary…


UnRisk Quant is Idealistic, Off-Time, Literary…

although both utilize quite the same platforms from the UnRisk technology stack.

Next, I'll dive into the essence of innovations, Structure and Matter - about workflow, purpose and realization types.

How Far Does an Innovation Disrupt Our Beliefs

The innovative spiral is driven by cycles of exemplary concretion, abstraction and reconcretion. In the rather new philosophical movement of speculative realism, Quentin Meillassoux says (paraphrased): if we do not know more properties of a real-world behavior than are represented in our models, our models ARE reality.

We often think physical modeling, in contrast to, say, social or economic modeling, is "real". But this is not always true. The prediction of physical effects is not as easy as we often think. The theory of complex systems comes into play…It matters what context we write into our models.

"Invert, always invert" (to solve a problem), said the famous mathematician Jacobi. But inverse problems are ill-posed. To solve a real problem, you often need to ride the waves of a real behavior and use tricky inversion techniques to recalibrate your models…this is required from metal forming to pricing derivatives…
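
One standard trick for taming such ill-posed inversions is Tikhonov regularization. A minimal sketch (the nearly singular system here is made up for illustration): instead of inverting directly, solve the penalized least-squares problem, which stabilizes the solution.

```python
# Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2,
# i.e. solve (A^T A + lam * I) x = A^T b.
import numpy as np

def tikhonov_solve(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# A nearly singular system: plain inversion would amplify noise wildly.
A = np.array([[1.0, 1.0],
              [1.0, 1.000001]])
b = np.array([2.0, 2.000001])   # consistent with the solution x = (1, 1)
x = tikhonov_solve(A, b, lam=1e-6)
```

The penalty weight `lam` trades fidelity to the data against stability of the parameters - exactly the dial one turns when recalibrating against noisy process data.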

In quantitative fields, programming is essential. Constructor theory, a new fundamental theory of physics, tells us: there's no such thing as an abstract program. It needs to run on something physical.

In a bird's-eye view, we see that in low-tech societies a few people took the lead and explained to the rest what is "true". Technology may help to explore and use helpful information without taking a special lead.

This is why I find low-abstraction / high-abstraction type definitions very helpful in assessing an innovation.

Reality Types

Factualism - systems that refer to facts. Curated data, knowledge, proven algorithms…they are often represented in (library) tools, engines, platforms, servers…the top reference is Wolfram|Alpha, the computational knowledge engine.

Realism - systems that refer to real-world and real-life problems…they're usually solvers driven by implementation, efficiency....They can go deep or offer a broad coverage. Calculate stress, heat, mechanism dynamics, elasto-plastic behavior, fluid dynamics, magnetism, ray-tracing…or all of these…control functional complexes or complex flow processes…manage risk and return dynamics…for different sectors or cross-sectorally…
They usually combine evaluation and data management, relying on intelligent combinations of (mathematical) modeling and data science…

It's the subject of a later post, but I want to mention it here. IMO, it is required for these kinds of real-world problem solvers to organize their objects, models and methods (solvers) strictly orthogonally. This is one of the conventions I suggest in The Innovation Mesh.

It's so simple, but often suppressed: in many fields you need multi-model and multi-method approaches - what's the use of a complicated model whose extra information gets lost in the numerical jungle?

Idealism - systems that support the problem-solving process itself. They usually offer a programming environment with language constructs that provide abstraction, in the sense of language cascades (one developed atop the other). Examples are Modelica or the Wolfram Language, in combination sold as SystemModeler.

Fantasy - systems that provide information and knowledge in a virtual, speculative "world" and with special effects. They often represent results of design and analysis in creative ways. In design, arts, entertainment...but increasingly important in science.

I often read: those systems will replace the "scientific method" (in the exabyte age). I could not disagree more. They're extending the expressiveness. And this is a lot.

In the near future, I'll come to Style Types.

What's Real-Time?

Before I delve into the types, starting with time types, I want to say a little more about my working principle. It's strictly prototyping oriented...the bottom up fashion.

When I was inspired to build The Innovation Mesh for innovations in quantitative fields I built a rough scheme and backtested it by walking UnRisk through it.

Then I decided to disclose the ideas and a scheme. This is what I'll do in the near future. But it's still a prototype and it will be refined, if not reinvented, with growing practical experience. I plan to provide releases.

Remember, the types shall enable the innovator to find a short description of what her innovation does and how it is positioned. Something that wraps the mind of investors, actors, clients, partners, marketers…Types are a way of cataloguing innovations related to the expectations of actors (those who use, enrich, integrate, market…).

The Innovation Mesh is a diagnosis tool. Although it is for quant fields it's not quantified, but it suggests some conventions, rules, workflows and even licensing schemes.

Time Types

First I thought it should be session-related, but then I decided on runtime: how fast will the innovation work relative to reality, and how well is it synchronized with it?

High Performance Computing - mostly compact solutions or tools that need to beat real time significantly, say, to anticipate anomalies in a real behavior early, or to be applied in scenarios...

Real-Time Computing - these are usually built to control real behavior (synchronously). These systems are often embedded into products, processes, trading environments…they're online.

Off-Time Computing - these usually work offline from real systems, supporting complex workflows and tasks by analysis, predictive modeling, simulation, optimization…

Time types influence other types more than we may think now.

At a quick glance, engineering is characterized by definition-and-verification and management by plan-and-control. This suggests that engineering tasks are usually offline, whilst management usually needs systems that are linked to real systems.

Production engineering is about building the right factory for the products to be made, whilst production management is about controlling the parts moving through the factory.

In the details, engineering and management tools may need fast computing in their internal engines, but the overall time types are different.

Fast computing is often used millions of times to value one object. It needs to be blazingly fast and accurate…think of the valuation of a portfolio of derivatives across scenarios.
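
To show why the instrument-times-scenario grid multiplies the cost, here is a toy sketch (all numbers and the linear delta approximation are hypothetical) that values every instrument under every scenario in a single vectorized product:

```python
# Toy scenario valuation: value[i, s] = base[i] + delta[i] * shock[s]
# (a linear sensitivity approximation, not a real pricing model).
import numpy as np

base = np.array([100.0, 50.0, 75.0])        # per-instrument base values
delta = np.array([2.0, -1.0, 0.5])          # sensitivities to one risk factor
shocks = np.array([-1.0, 0.0, 1.0, 2.0])    # scenario moves of the factor

values = base[:, None] + delta[:, None] * shocks[None, :]   # (3, 4) grid
portfolio = values.sum(axis=0)              # portfolio value per scenario
```

With thousands of instruments and scenarios the grid explodes, which is exactly why the single-instrument valuation at its core must be blazingly fast.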

Another view into systems is their purpose: analysis, prediction or control. Control systems usually have real-time requirements, whilst analysis systems are usually offline. You gather information in order to extract (computational) models…if the data acquisition is semi-online, you have real-time requirements…

These things are important if you apply an intelligent combination of, say, modeling and machine learning techniques…but more in an internal view. I'll come back to this in future posts.

However, with an unclear assessment of these types you may run into the risky horror of operational risk.

What's Real-Time?

In earlier times of computer science it was assumed that real-time computing needs real-time operating systems, real-time networks and real-time programming languages (in short, everything had time stamps - a scheduling policy with priorities, synchronized requests and suspends...).

In the meantime computing became so fast that I simplify to "meet all required deadlines". But real-time computing is not the same as high performance computing. Meeting the deadlines needs to consider loads.

In a real-time system, such as the FTSE 100 Index, it would be horrible if the data arrived delayed for its applications.

Consequently, real-time systems are often stand-alone.

Real-time periods to deadlines may be very long (climate) or medium (a blast furnace process) or very short (an ABS system).

Simulation is the imitation of a real-world behavior. We use it to gain insight, but also when observing the real behavior would take too long, or the real system cannot be engaged…In any case it's beneficial if the simulation time is below the real time. By their character they're still slow (offline).

Reality Types? That's what comes up next.

Tame My Own Enemy

that is: being happy with what I've achieved. Systematization helps to tame this enemy...but not to innovate.

Simple quantitative decision support

When I worked in factory automation, make-or-inlicense decisions were vital. To support them, I created my individual criteria "catalogues", and when the decision was to inlicense, the catalogue was refined to assess various offerings.

The criteria were built hierarchically and weighted, summing up to 100. Each leaf criterion was scored from 0 to 10, enabling a total score of max 1000 (with hierarchical group scores).
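
The scheme can be sketched as follows (the criteria names, weights and scores are invented for illustration): leaf weights sum to 100, each leaf is scored 0-10, so the maximum total is 1000.

```python
# Hierarchical weighted scoring: groups hold leaf criteria,
# leaf weights across the whole tree sum to 100, scores are 0-10.

criteria = {
    "technology": {"weight": 40, "children": {
        "programmability": {"weight": 25, "score": 8},
        "performance":     {"weight": 15, "score": 6},
    }},
    "commercial": {"weight": 60, "children": {
        "cost_of_ownership": {"weight": 35, "score": 7},
        "vendor_support":    {"weight": 25, "score": 9},
    }},
}

def total_score(tree):
    """Sum weight * score over all leaf criteria (max 1000)."""
    return sum(
        child["weight"] * child["score"]
        for group in tree.values()
        for child in group["children"].values()
    )

score = total_score(criteria)
```

Group weights simply equal the sums of their children's weights, which gives the hierarchical group scores mentioned above for free.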

The weaknesses of this simple method: the criteria and weights are inevitably the result of individual preferences, and the monetary aspects were scored from normalized cost of ownership. Although I applied a few tricks to cross-check weights…

The assessment was two-dimensional: scores vs cost.

Solution and development system in one

When I assessed, say, complex Computer Aided Engineering/Manufacturing systems, I always gave programmability a higher weight. I found visual manipulation of things indispensable, but concrete, whilst programming allows for parametrization, configuration…This was verified when "parametric element design" and "domain specific programming" became popular in our field.

They need to be in one system. This is still right. Especially if each interaction is replicated by a piece of code and the programming style is symbolic, literal, domain-specific, task-oriented…I'll come to this later when refining the idea of types and The Innovation Mesh.

At that time, I had no idea of (real) option valuation and value at risk. Although, in retrospect, the decisions were not too bad, I could have done better.

What was good: we never forgot to assess our own developments in relation to the best systems in the market. This is indispensable if you want to outlicense (sell) them, anyway.

The many options of symbolic computation

When I launched my own business based on symbolic computation...I saw so many opportunities and options that I concentrated on structuring. What is technology, what is a solution, and how can we tie things together in industry-scale projects? At that time I did not systematize assessments much. I was too sure that finding new ways would convince enough?!

At the beginning of a programming project you look into the problem and detect whether its decomposition should be data-, object- or function-oriented. In quantitative fields it's dominantly functional. But in symbolic languages symbols are organized as pieces of data, so complexly nested functions can be built whose arguments can be anything (formulae, graphs, geometrical objects, movies…even programs).

This is very important for understanding the types of quantitative systems. It enables programming in the bottom-up style...a kind of computational knowledge-based programming…describing things in computational form. On the other hand, the language constructs need to be implemented by computational engines…and this distinguishes algorithmic from axiomatic approaches.
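
A toy illustration of "symbols as data" (not a real computer algebra system): the formula is a nested tuple that a program can transform - here, symbolic differentiation - and then evaluate.

```python
# Expressions are nested tuples: ("*", "x", "x") means x * x.
# A program can rewrite these trees just like any other data.

def diff(expr, var):
    """Differentiate an expression tree with respect to var."""
    if expr == var:
        return 1
    if not isinstance(expr, tuple):          # a constant
        return 0
    op, a, b = expr
    if op == "+":
        return ("+", diff(a, var), diff(b, var))
    if op == "*":                            # product rule
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(op)

def evaluate(expr, env):
    if isinstance(expr, str):                # a symbol
        return env[expr]
    if not isinstance(expr, tuple):          # a constant
        return expr
    op, a, b = expr
    x, y = evaluate(a, env), evaluate(b, env)
    return x + y if op == "+" else x * y

expr = ("*", "x", "x")                       # x * x
d = diff(expr, "x")                          # symbolic derivative tree
slope = evaluate(d, {"x": 3})                # d/dx x^2 at x = 3
```

Because the derivative is itself just data, it could in turn be simplified, plotted, or fed into yet another program - the nesting the paragraph above describes.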

BTW, a clever programmer is able to program a story: functional goats, wolves and lions

I simply underestimated that there are a lot of new approaches in one set up. And the eager seller - stony buyer principle came into action.

There will be more about structure and style types in the future.

My life as a project evaluator

In the mid-90s I was invited by the European Commission to join an expert pool evaluating and reviewing innovation projects in their framework programs.

In total, I've evaluated about 500 projects, often in two-step procedures. The coverage was broad, from new materials to complex automation systems, including fields I'd never worked in.

Evaluation teams consisted of an EC officer and 3 independent experts (from professors to practitioners). A systematized evaluation scheme was vital, and I've influenced building some…especially when our team stabilized.

In the first phase (introduction) it was just evaluation (go or stop). Twenty teams evaluated, say, 40 projects each (in ten days). An assessment scheme was provided by the EC, but, clearly, different teams showed different assessment profiles, and I managed to introduce a kind of "isoline" normalization that worked for a fairer overall evaluation.
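
One simple way to make teams with different assessment profiles comparable (a plain z-score sketch, not necessarily the original "isoline" scheme; the score lists are invented) is to normalize each team's raw scores by that team's own mean and spread:

```python
# Per-team normalization: a harsh team and a mild team with the same
# internal ranking end up on the same common scale.
import statistics

def normalize(team_scores):
    mu = statistics.mean(team_scores)
    sd = statistics.stdev(team_scores)
    return [(s - mu) / sd for s in team_scores]

harsh_team = [40, 50, 60]     # scores clustered low
mild_team = [70, 80, 90]      # same spread, shifted high

zs_harsh = normalize(harsh_team)
zs_mild = normalize(mild_team)
```

After normalization, a project ranked top by a harsh team is no longer penalized against one ranked top by a mild team.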

In the second phase (demonstration) evaluation, we recommended changes, and in reviews we helped to fix problems and adapt approaches technically and economically.

This was the first time I thought of innovation types and a kind of benchmark - due to the need to put enthusiastic innovators out of their comfort zone: "the idea is compelling, but it will not work or sell".

This turned my view on categories and criteria although I didn't see The Innovation Mesh then.

I evaluated for 10 years, with about 10% of my working time.

Tame my own worst enemy

And of course, I applied what I've learned to my own projects and those with distinguished partners. There are best selling technologies, platforms, solutions, development environments…What does best selling mean to us?

We disrupted and reinvented ourselves repeatedly.

It works with UnRisk

To market an all-new product you do it again and again. This is the technical side of success. It's the fruit of deep analysis and decisions about what works and what doesn't, in a market that is on one hand innovative itself, on the other hand more regulated and standardized than others. Works, doesn't work. It's industrial work, not lab work.

We walked through the matter, structure and style types on and on, and we still do, because we provide new releases nearly every half year.

The most difficult thing is to resist copying major players…those with more awards…
Even more difficult to understand that we don't work to get picked, but to change things where it's required.

In the near future I will start with the time types...

What I Do and How I Work

I want to share my forty years' experience as a technology developer, business developer and evaluator of many projects for the European Commission…all characterized by one word: innovation.

I'll concentrate on innovations in quantitative fields, where computational models are a dominant part.

Able to create the classics of the future?

When an innovator invites me for an introductory meeting, I ask her to tell me the story of her innovation. I listen. I ask for technologies. A presentation. Business plans.

At first I'm not applying any mesh. I'm trying to understand the river of the innovation...where it comes from and, more importantly, where it shall go. Will it work? Will it sell? Will it even be able to create a classic for the future?

When I like it, I offer something that might not be common in innovation advisory:

Will it most probably work and sell?

I'll invest a day or two to run the information through an analysis as deeply as required to roughly identify the opportunities, barriers and even dangers and understand whether there's a chance to optimize the market risk.

Usually, I do not charge for this phase.

If the innovator and myself agree to continue, I'll run the information through a deep analysis.

Things that matter for those who care?

Over forty years in innovation I've used various methodologies and tools to assess projects.

The Innovation Mesh, inspired by Shawn Coyne's great blog, is a culmination...assessing whether an innovation based on quantitative programming (writing computational stuff) will work and sell. I understand "work" as doing things that matter and "sell" as for those who care.

The Innovation Mesh adds more emotional criteria than other methodologies and tools I ever have used before. It's built to assess and fix problems taking the view of the actors, clients, market segments...

It probably cannot inspire an original creation, but it helps to overcome the eager seller - stony buyer principle, to understand how to simplify, focus and impute, and how to fix a methodological or technological problem. And how to find the right pricing and licensing scheme.

I call The Innovation Mesh a tool. It's a technique that I use. I'll refine it and hope to be able to describe it in a way that it can be re-used.

What follows next is a description of it, enabling you to apply it. But I'll also be pleased to share my experience in a project. Remember, my trick is to represent the project.

The Beauty of Code - The Code of Beauty

I just returned from my cross country skiing vacation. Optimizing risk was easy this time. Warm weather reduced the snow so drastically that only a one-kilometer skating track remained, conserved by the organizers of the Nordic Combined World Cup event that took place the previous weekend. The track was easy, with only little volatility. I was happy that I could run at all, and I concentrated on stabilizing my technique...

The post title is the subtitle of Vikram Chandra's book Geek Sublime.

I recalled that I read Chandra's first great novel, Red Earth and Pouring Rain, years ago (in German) when I found his new book among the Wired best science books of 2014, and of course I will buy this one.

Coders, as writers, search for elegance and style…but there's more. Programming, as writing, is about language and its semantics - in programming the semantics is operational, but also in literature something must happen…And it is vital to think about everything in advance.

In what we can learn from great book editors and authors I've presented my motivation to assess the value of innovation projects by The Innovation Mesh. And not so surprisingly, it works well applied to projects in quantitative fields that always involve programming.

In future posts, I'll delve into the special types (time, reality, structure, style, matter).

Cross Country Skiing and Optimal Risk

Tomorrow I'm heading to Ramasau Dachstein for a cross country skiing week. It's a top destination for Nordic ski sports, with skating tracks where world championships took place. They're most exciting but (at my age) it's indispensable to find the optimal risk using them.

I need to select duration and speed to have fun and conserve/improve my fitness, but avoid hazardous actions. Their profiles look like a smoothed trajectory of, say, a stock price time series. Ups and downs follow quite frequently - arrhythmically.

So I need to decide: how fast do I go down to get enough momentum, saving energy when skating up the following uphill section, without falling? And how fast do I skate uphill to reserve enough power to make the final step powerful enough to get the right speed for the following downhill part?

And from time to time, I'll take the option to run at the easy tracks to backtest, stabilize and automate my technique...

Running the same tracks more often, you become more experienced and find the optimum more easily.

I'll not be able to post during the next week, but I'll have time to think in depth about future challenges and opportunities related to innovation…

Cyber-Physical Systems

Three lessons for the modern (consumer) product design…through the lens of a non-designer…rough thoughts, quickly cooked.

Unite the digital and physical - some cars are supported by millions of lines of code already. But they can become even smarter, supporting drivers by "seeing" and reacting better…before they take over completely. Why not bicycles and many other things…

Unite new looks and utility - technologies enable us to create personalized, adaptive arrangements for various life situations…emphasize the beauty of light - not lamps…There are many arrangements in our daily life that become beautiful because of a great visual, audio…functionality.

Put performance into the detail - we can put little computers into many things. They can have clever operating systems and programming environments. What's possible in an iPad should be possible in a smart kitchen aid…

You can prototype the future by taking existing technology and linking it to equipment you want to arrange into something new.

But, there's also modeling and simulation software for cyber-physical systems - the SystemModeler for example.


A little about the rise of silicon…

I've read the term "cognitize" in The Future of AI, the Nov-14 issue of WIRED magazine.

I've worked in AI for nearly 30 years now. At the frog level first, the bird level then.

A short story of being unlucky?

AI is a story about promises that were not fulfilled…Is the AI community ready to leave the bully phase for a utility phase?

Computerized Systems are People?

The idea that computerized systems are people has a long tradition. Programs were tested on whether they behave like a person. The ideas were promoted that there's a strong relation between algorithms and life, and that computerized systems need all of our expertise to become intelligent…it was the expert system thinking.

It's easier to automate a university professor than a caterpillar driver…the AI community said in the 80s.
The expert system thinking was strictly top down. And it "died" because of its false promises.

Artificial Life

Christopher Langton, of the Santa Fe Institute, named the discipline that examines systems related to life, its processes and evolution, Artificial Life. The AL community applied genetic programming - a great technique for optimization and other uses - cellular automata, agent-based systems...But the "creatures" that were created were not very intelligent.

We can create many, sufficiently intelligent, collaborating systems by fast evolution…the AI community said in the 90s.

Thinking like humans?

Now, companies such as Google, Amazon…want to create a channel between people and algorithms.

Our brain has an enormous capacity - so we just need to rebuild it by

Massive inherent parallelism - the new hybrid CPU/GPU muscles able to replicate powerful ANNs?
Massive data - for learning from examples
Better algorithms - ANNs have an enormous combinatorial complexity, so they need to be structured
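
The three ingredients above can be illustrated, at toy scale, with a minimal feedforward network trained by backpropagation. This is a sketch only - the network size, learning rate and the XOR task are arbitrary illustration choices, not a claim about how large ANNs are actually structured:

```python
import numpy as np

# A tiny 2-4-1 feedforward net learning XOR -- a toy sketch of
# "learning from examples", not a recipe for deep nets.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # hidden layer weights (4 units, arbitrary)
W2 = rng.normal(size=(4, 1))   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1)                     # forward pass
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backpropagation of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * (h.T @ d_out)               # learning rate 1.0, arbitrary
    W1 -= 1.0 * (X.T @ d_h)

print(losses[0], losses[-1])                # loss shrinks over training
```

The combinatorial-complexity point above shows up even here: without the hidden layer, no amount of data makes XOR learnable.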

To make AIs collaborative, make them consciousness-free

AI that is driven by these technologies in large nets will cognitize things, as they were once electrified. It will transform the internet. Our thinking will be extended with some extra intelligence.

As in freestyle chess, where players use chess programs, people and systems will do tasks together.

I have written about the Good Use of Computers, starting with Polanyi's paradox and advocating the use of computers in difficult situations. IMO, this should be true for AI.

We can learn how to manage those difficulties and even learn more about intelligence. But in such a kind of co-evolution AI must be consciousness-free.

AI will always think differently about food, clothes, arts, materials…

Make knowledge computational and behavior quantifiable

I talk about AI as a set of techniques, from mathematics, engineering, science…not a post-human species.

And I believe in the intelligent combination of modeling, adaptive calibration, simulation…with an intelligent identification of features. On the individual, as well as on the systemic level. The storm of parallelism, bigger data and deeper ANNs alone will not be able to replicate complex real behavior, IMO.

We need to continue making knowledge computational and behavior quantifiable, but even more important find new interaction paradigms between intelligent "Silicon" and us.

But however it goes, the impact on the political, socio-economic and cultural systems will be enormous. And not to forget business…innovation and marketing.

A Machine Future - Good or Bad?

I have dealt with automation my whole business life: first manual work, then brain work. So, I'm biased and not the right person to predict the impact of the rise of the machines.

I better let Nouriel Roubini speak: Rise of the Machines: Downfall of the Economy?

Will the industrialized and wired ages inevitably lead to the age of independent AIs - digital and physical things united, replacing not only human work, but humans?

I agree it's a big challenge.

In one of my next posts I will put some of my raw thoughts into a pan and cook them to: "the rise of silicon"...what if things will be "cognitized" as they were electrified?…must machines remain consciousness-free?…is there a good use of the AIs?...

I found the link at MarginalRevolution.

The Systemic Risk of a Marginal Cost Regime

After writing the Uberization post yesterday, the risk of marginal cost (more a danger) came to my mind.

In discrete manufacturing, we often thought about what the next part will cost and how to increase productivity to drop it. Along with this, makers thought of achieving market leadership by establishing a marginal cost pricing regime. As a result of this race, the market became quite unstable.

The downward spiral in a marginal cost regime

It is not so easy to understand marginal cost. It does not include the past development, design, set-up... costs of the products and their manufacturing systems. Only the direct cost at risk of the next item is calculated. But what implications will this have for modernization, improvements, flexibility...? Will fixed costs only be allowed if they fit into the marginal cost regime?

What will the next e-book cost, what the next digital copy of a John Coltrane concert, what the next use license of a piece of software?

Fair prices?

In a competitive, undifferentiated market the price will generally be lowered until it is just above marginal cost. Especially in a market segment that promises "fair prices".
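
That race to just above marginal cost can be sketched as a toy simulation. All numbers here are made up for illustration - two hypothetical undifferentiated sellers who keep undercutting each other by a fixed step:

```python
# Toy undercutting race in an undifferentiated market -- illustrative only.
marginal_cost = 2.0     # assumed direct cost of the next item
price = 10.0            # assumed starting market price
undercut = 0.05         # each round the current price is undercut by this step

rounds = 0
while price - undercut > marginal_cost:
    price -= undercut   # a competitor undercuts the prevailing price
    rounds += 1

# The race stops only once the next undercut would dip below marginal cost,
# leaving the price "just above" it -- with no contribution to fixed costs.
print(round(price, 2), rounds)
```

The point of the sketch: nothing in the loop rewards past development or set-up cost, which is exactly why a marginal cost regime starves modernization.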

The only defense against the race to marginal cost is differentiation, products that have no substitute…radical innovation.

Financial instruments a prototype?

What will the next financial instrument cost? The values of financial instruments are calculated under the fair price assumption. How they are traded depends on Value at Risk calculations across various risk factors. A newer one is counterparty risk. To minimize it, prices shall be adjusted by Credit/Funding/Debit Value Adjustment (CVA/FVA/DVA - the "xVA") calculations. Regulatory bodies have further concluded that the centralization of counterparties (clearing) will help to reduce risk?!

Let us assume the calculation of xVAs is widely used to organize a perfect collateral management with central counterparties, and the risk is optimized in a way that the marginal costs intuitively approach zero.

So, if the regulators force market participants to apply xVA calculations not only in risk management, but also in pricing, will this have similar effects as above? With the expectation of frozen counterparty relations, counter-business clearing out to zero nettings..?
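
For readers unfamiliar with the mechanics: in its textbook form, unilateral CVA is roughly the discounted expected exposure weighted by the counterparty's default probability. A minimal sketch follows - the flat hazard rate, recovery, discount rate and exposure profile are all made-up illustration inputs, not market data:

```python
import numpy as np

# Textbook unilateral CVA sketch:
#   CVA ~ (1 - R) * sum_i DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i)
# All inputs below are illustrative assumptions.
recovery = 0.4                        # assumed recovery rate R
hazard = 0.02                         # assumed flat hazard rate
r = 0.01                              # assumed flat discount rate

t = np.linspace(0.5, 5.0, 10)         # semi-annual grid out to 5 years
ee = 1_000_000 * np.exp(-0.1 * t)     # hypothetical expected exposure profile

df = np.exp(-r * t)                   # discount factors
surv = np.exp(-hazard * np.concatenate(([0.0], t)))
pd_incr = surv[:-1] - surv[1:]        # marginal default probability per bucket

cva = (1 - recovery) * np.sum(df * ee * pd_incr)
print(round(cva, 2))                  # the price adjustment for counterparty risk
```

Even this toy version shows the mechanism in the text: perfect collateralization drives the exposure profile `ee` toward zero, and the adjustment - the marginal cost of the counterparty relation - goes with it.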

Industries whose intuitive marginal costs of products approach zero are inherently unstable.
And if a few market participants figure out how to become the champion-in-all-arenas, the efficient market hypothesis is (definitely) broken.

Is Uberization Innovative?

Tying loose ends of things like streets, rails, cables…together creates wide infrastructures. Infrastructures enable (economic) activities.


Most lend themselves to a traditional principle of sharing: trains, buses, telephones…Sharing of infrastructure, resources, input (not output).

We can also understand shops, schools, hotels, bars…as objects of this kind of sharing, but let us concentrate on sharing in operational environments based on traditional "networks". Their problem is usually the "final distance": how to get to the remote hidden place, or into the little peripheral street…at low cost.

In electronic communication we have crossed all the severe boundaries that were left. With a change of medium: from cables to air.

But traveling? Of course, we have taxis, car rental…

The rise of Uber

It's a ride-sharing service that has established a fast-growing transport network. But their network doesn't contain any physical objects at all. They coordinate drivers and passengers, based on sophisticated software. Their business model depends on the connection of riders to drivers. The basic idea: use driver/car pairs in their downtime and let clients pay for it in a marginal cost regime.

They use available methodologies and technologies of ubiquitous computing to manage a "one-click" come-together and its administration.

Risk? What will happen in a future of driverless cars - will they still be in business and just cheaper?

On the other hand, why only rides? "Uberization" would work with every service. But there are others who (also) know a lot about clients, providers, second sources.... Amazon comes to my mind.

Innovation types

I selected this example to show that new power can be built atop existing things. It's the idea, and the passion to do it, that makes the difference. But new power needs new values - self-organization (replacing institutionalism), transparency, a nowist and maker culture…

The Uber innovation is of the matter type: sharing-in transportation-atop ubiquitous computing services…its structure type is peer-to-peer connectivity with simple interaction (style type)…its reality type may be dominated by: make it real and grow and look what the rules say (some speculative aspects facing challenges of regulatory bodies…).

There are other matter types beyond Sharing - Funding, Producing, Distributing..

New power with Funding/Producing/Distributing?

Finance an innovation by selling Futures (contracting the obligation to buy your innovation at privileged conditions) and buying Options (contracting the right for the production of a certain volume of your innovation for a fixed price from an outsourcing partner).

This is what is usual at the mass commodity markets. They do not "invest", they clear input and output…and have special margin arrangements with the clearing houses.

You need lower investments, get more information about your markets…

And there are methodologies and technologies that quantify the fair value of the futures and options.
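
Those methodologies are standard: cost-of-carry gives the fair futures price, and a model such as Black-Scholes values a vanilla option. A minimal sketch with purely illustrative numbers (ignoring dividends, storage and credit effects):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def futures_fair_value(spot, r, t):
    """Cost-of-carry fair value of a futures contract (no dividends/storage)."""
    return spot * exp(r * t)

def bs_call(spot, strike, r, sigma, t):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# Illustrative numbers only: spot 100, 1y horizon, 1% rate, 20% volatility
print(round(futures_fair_value(100.0, 0.01, 1.0), 2))   # ~101.01
print(round(bs_call(100.0, 100.0, 0.01, 0.2, 1.0), 2))  # roughly 8.4
```

In the funding scheme above, the futures price quantifies the privileged buy-side conditions and the option price quantifies what the outsourcing right is worth - so neither side needs to "invest" blindly.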

Who does it?

Is Uberization innovative? IMO, yes, but incremental. It can be copied.

Edit: this post was inspired by the Dec-14 issue of the HBR magazine. "Uberization" I found here (Six Pixels of Separation).

We Had Unmarketing - Now We Have Unselling

A few days ago, I wrote about Unmarketing. The same authors wrote UnSelling, a book about the bigger picture of sales.

You need a lot of experience and in-depth knowledge to get the bigger picture. Especially in innovation selling, you need to unlearn the usual rules and conventional wisdom taught in sales organizations.

Replacing solution selling by insight selling?

means upending clients' approaches to their business and not being afraid to push them out of their comfort zones - but remember, not too much.

Think of the fictive position of a Chief Executive Buyer. They research solutions, rank options, set requirements, analyze the possible cost frame…before they have talked to any seller.

How do they become aware that you have something (radically) new?

Educate the change agents

The insight-based sales ladder would be awareness-education-action (simplified), but in innovation selling awareness already needs education. So, instead of looking for the CEB, look for those who want to manage a change in a sector where your innovation makes the difference.

To find them, offer educational events…that give full explanations of your new approaches, models, methods and critical implementations... Not training courses…but insight courses.

But remember, education takes time and passion!

Made to measure

Talking to change agents is easier if your offerings can serve users individually but are based on generic technologies (a made-to-measure system approach). That will help remove barriers that might lead to long decision procedures.

About Possible and Impossible Tasks

More about the constructor theory through the lens of an innovation marketer.

Universal systems are programmable

The side of the constructor theory that interests me most: there's no such thing as an abstract program - it needs matter and a universal system to run on. A system is universal if it is solid enough to store...and fluid enough to transform…Computers, obviously able to store data and transform signals, are the prototypes of universal systems.

But, IMO, also our money system is universal: it's able to store values and manage economic transformations…

Think it, build it, market it

You can have great products on your "drawing boards", great process recipes, operation plans...but do they exist before they are built? No. Even more, there are no goods before they materialize as prices at a concrete market in a concrete sales contract.

Technical and economic feasibility

To assess whether a task is technically or economically feasible, we usually make predictions. But in the sense of constructor theory, we need to switch to the mode of explanation - the task is possible or impossible. And my plea is: go beyond the technical explanation, try to verify the economic feasibility.

Tasks, objects, information, reality

Suppose you seek perfection of the rolling process at your hot rolling mill. You've developed a great predictive model of your rolling process, based on elasto-plastic and thermodynamic theories and deep metallurgical transformation recipes, you know the ingredients…you've calibrated your models to concrete historic rolling data and the fit was so good…but the result is suddenly out of your ambitious tolerances.

The reason? You may have rolled a material with unexpected elasto-plastic properties whose concrete parameter values can only be acquired during the process. You need other information to derive them (a kind of verified soft sensing)…in order to re-calibrate your models adaptively.

The same things usually happen in economic processes, where the models are not dominantly physical…but stochastic. Even more, the unexpected behavior is systemic.
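
The adaptive re-calibration step can be sketched in the simplest possible setting: a linear stand-in for the process model, fitted by least squares, with a material parameter that drifts. All numbers here are invented for illustration; a real mill model is of course far richer:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)   # stand-in process variable (e.g. strain)

def calibrate(x, y):
    """Least-squares fit of y ~ a*x + b -- a stand-in for model calibration."""
    a, b = np.polyfit(x, y, 1)  # polyfit returns highest-degree coefficient first
    return a, b

# Historic data generated by the 'old' material parameters (a=2.0, b=0.5)
y_hist = 2.0 * x + 0.5 + rng.normal(scale=0.02, size=x.size)
a0, b0 = calibrate(x, y_hist)

# New process data: the material behaves differently (slope drifted to 2.6)
y_new = 2.6 * x + 0.5 + rng.normal(scale=0.02, size=x.size)
a1, b1 = calibrate(x, y_new)    # adaptive re-calibration on fresh data

print(round(a0, 2), round(a1, 2))   # old vs. re-calibrated parameter
```

The historic fit is excellent until the material changes; only re-fitting on in-process data (the "soft sensing" above) recovers the new parameter.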

and marketability

In the rolling example, you might need to ride the material property waves and in an investment example price and volume waves.

What we can learn from the constructor theory: switch from prediction to explanation.

Remember, the constructor of your new robot generation lies in its robot control…try to explain their fitness for purpose, coverage, precision, robustness…in depth and try to explain to yourself what the optimal market risk is.

Explanations are easier, if your knowledge is computational, your programming contains symbolic computation techniques, you combine analytical, statistical and data driven methods intelligently…

Behavior Engineering

How do innovators successfully create things that people can't stop using?

No, it's not about getting picked by many. This would be against my choose-yourself plea. It's about getting early adopters to come back and use your innovation in their daily business.

In short, you need to change their behavior…but not too much. Otherwise you may fall into the eager seller - stony buyer trap, where sellers overestimate the value of their offerings threefold and buyers value what they have threefold as well.

Create behavior-building Innovation?

Remember when drawing boards were replaced by Computer Aided Design? How many moved to 3D Geometric Modelers in one step? Only a few. And to me this is the prototype for quantitative systems. BTW, I fell into this trap when marketing symbolic computation systems…25 years ago.
I was so excited by the exact solutions, declarative programming, computational documents…but it took so many years to sell it outside academic institutions.

I did not study the behavior of problem solvers enough.


is the title of this book, talking about what drives behavior and what this means for acceptance and love.

To get actors hooked to your innovations, you need to

SUPPORT ROUTINE - detect unconscious, regular actions that are successful and support them

IDENTIFY UNDERLYING EMOTIONS - understand what actions are made with confidence and which ones create uncertainty - help overcoming the fear

REMOVE ANY BARRIER FROM USING IT - offer simple set-up, licensing and pricing schemes and make use easy

MAKE THINGS THAT MATTER - focus on the most valuable features

LET THEM ADD THINGS THAT MATTER - if actors add data, models, documents, templates, programs…they enjoy enriching your innovation

STRENGTHEN THEIR ROLE - support them to do the difficult work and help them to show why their results are better

EDUCATE REGULAR USERS - they're the ones who care and tell your story…

The Innovation Mesh will help you…whether you want to serve managers, engineers, researchers, developers, educators, designers, automators…in engineering, life sciences, business and finance, information and communication…with modeling, simulation, features analytics, parameter identification…with quick solvers or integrated systems…managed in short or long sessions…embedded or interactive...based on technology stacks, development environments, tools, solutions…utilizing a control or a desktop…or changing the (working) behavior and habits of important people.


This is the book: UnMarketing. It's about the bigger picture of marketing. A stop sign to high-pressure marketing (and sales).

To get a bigger picture you need a deeper knowledge.


IMO, it's the change from place-price-promotion to access-value-education marketing. Especially, education means individualization…you can't educate by just pushing out your messages…it needs interaction, relationships...

Innovation marketers know: the programmability of marketing (and sales) needs deep access, value and education for the optimization of market risk. They help to find the options…

Make things that let the other person feel engaged, and explain every detail you have mapped into your system to enable this. But also explain your innovation types, methodologies and technologies and why you have chosen them. Explain your innovation mesh.


Yesterday, I posted about unstrategizing. Surprised about my own heart, I continue...remember, I'm not a social and economic scientist and not a management theorist. But I read and have experienced...

I look at these things through the lens of a business developing mathematician and innovation marketer at heart.

Traditional companies are "incremental". Strangely, only a few C-level members seem to tackle the challenge of innovation. They're trained for operational efficiency. Even in a crisis, few organize a bottom-up renewal.


I grew up in organizations where strategies were built at the top, big leaders controlled little leaders, and team members competed for promotion…Tasks were assigned, rules defined actions. It was the perfect form of "plan-and-control": a pyramid. Only little space for change.

In an organizational pyramid, yesterday outweighs tomorrow. In a pyramid you can't enhance innovation, agility or engagement.

It is indispensable to reshape the organizational form.


Traditional managers want conformance to specifications, rules, deadlines, budgets, standards and strategies! They declare "controlism" the driving force of the organization. They hate failures and would never agree to "gain from disorder".

Make no mistake, control is important - but freedom is important as well.

Management needs to deal with the known and unknown, ruled and chaotic, (little) losses for (bigger) gains…


Bureaucracy is the formal representation of the pyramid and the regime of conformance.

Bureaucracy must die.

This part is inspired by Gary Hamel's blog post in MIXMASHUP.

Change the organization

If we want to change the underlying form-and-ideology of management that causes the major problems, we may want to learn a little from the paradigms of modern risk management…and rely on technology stacks, know how packages, constructive learning arrangements…that destroy cold-blooded bureaucracy.

When knowledge and skills spread, management fades

Strategy Must Die?

I'm afraid I still use the S-word myself now and then. It becomes an excuse for not-doing...the idea of an idea…

In the trap of strategic planning I wrote a plea for simple strategies. But, after some more thinking, I'm not so sure any longer. I've mentioned a few key success factors - most of them natural, although not applied often…Success factors and strategies are not the same.

A strategy for climate change…get used to it? A strategy to innovate (more, better)?

A framework of actions

It's not just that it is in constant overuse - it's intrinsically misleading. IMO, strategy is meant as a framework of actions to win - not the actions themselves.

Ant strategy

Is unconditioned collaboration…already a strategy? Then ants strategize perfectly. But wait a little: they've got the perfect intelligence level to collaborate algorithmically. What if they had a higher IQ…would they still follow this intrinsic (instinctive) "strategy"?

To avoid misunderstandings: military powers need strategies (and tactics and plans).

But business, innovation…?

Application Frameworks

One of the arguments for object-oriented programming styles was the ability to build application frameworks. Whilst "libraries" were used to build applications in a bottom-up fashion, application frameworks set the standard structure of an application. The framework was completed with instantiations.

AFs were great for GUIs, database applications…

A framework can guide you when you know exactly what you want to deliver. It is adequate if you want performance, not resilience. And this is the problem.

Strategic lies

Why do we need a strategy? Is it the belief that only things we understand now will be successful? Do we rather want a lie that everybody believes than an idea that needs radical experimentation? Are we architects of our own future?


To make a system immune against prediction errors and stronger with added stress (antifragile) requires expecting the uncertain and, consequently, optionality. To learn from turbulence, you need to prototype the future, abandon assumptions, provoke the imperfect…

A framework is great for work industrialization but innovation needs freedom. Innovators need to understand their innovation types, have a compass, ride the waves of market expectations….and choose themselves.

No, the antistrategy is not a strategy.

The Antikythera Mechanism - A Radical Innovation

Years ago, I've been to the beautiful Greek island Kythera. It was a great stay with hiking, swimming, reading… It's one of the islands influenced by centuries of coexistence of different cultures.

Recalling Kythera, the Antikythera Mechanism came to my mind. It's an astronomical calculator…that contains many mysteries. Who made it? Is it the only thing of its kind?…there's more here.

How sophisticated it is can be seen through a Lego model.

In ancient Greece, they made more mechanisms of this kind, but they have remained unreported…the mechanism does not look like a one-genius-unique-device. The manufacturing technology looks surprisingly "industrialized"?

What is it that drives

Radical innovation?

Is it the revolution of heroes or the heroes of revolution? A coincidence of talented people at a time or people's motivation to join a revolution and share their best work?

The latter.

Technology revolutions make heroes 

Paradigm shifts expose the work of outsiders and original thinkers attracted by the rapid change and growth opportunities. This is true for radical innovations in technology development.

In quant theories, we need to "lift" methodological concerns to more abstraction and modeling, and to a meta-level of domain engineering.

How to integrate domain and requirements engineering and software design? How do we achieve abstraction and modeling, handle more general issues of documentation and the required pragmatics?

By programming in a symbolic domain-specific language, applying a document-centered system design and linking technologies that enable the integration of proprietary algorithms… These enable the development of individual complex solutions whilst driving generic technologies.