Tame My Own Enemy

that is: being happy with what I've achieved. Systematization helps to tame this enemy...but not to innovate.

Simple quantitative decision support

When I worked in factory automation, make-or-in-license decisions were vital. To support them, I created my own criteria "catalogues", and when the decision was to in-license, the catalogue was refined to assess the various offerings.

The criteria were organized hierarchically, with weights summing to 100. Each leaf criterion was scored from 0 to 10, giving a maximum total score of 1000 (with scores aggregated per group along the hierarchy).

The weaknesses of this simple method: the criteria and weights are inevitably the result of individual preferences, and the monetary aspects were scored from a normalized cost of ownership. Although I applied a few tricks to cross-check the weights…

The assessment was two-dimensional: scores vs cost.
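
A minimal sketch of how such a catalogue can be computed (criteria names, weights and scores are illustrative assumptions, not the original catalogues):

```python
# Minimal sketch of a hierarchical, weighted criteria catalogue.
# Names, weights and scores are illustrative, not the original catalogue.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Criterion:
    name: str
    weight: float                      # leaf weights sum to 100
    score: Optional[float] = None      # leaf score, 0..10
    children: List["Criterion"] = field(default_factory=list)

    def total(self) -> float:
        """Weighted score: leaves contribute weight * score, groups sum their children."""
        if not self.children:
            return self.weight * self.score
        return sum(child.total() for child in self.children)

# One offering assessed against the catalogue (maximum total = 100 * 10 = 1000)
catalogue = Criterion("CAE system", 0, children=[
    Criterion("Functionality", 0, children=[
        Criterion("Geometric modelling", 25, score=7),
        Criterion("Programmability",     30, score=9),   # deliberately high weight
    ]),
    Criterion("Integration", 0, children=[
        Criterion("Data exchange",       20, score=6),
        Criterion("Shop-floor coupling", 25, score=5),
    ]),
])

score = catalogue.total()        # 0..1000
normalized_cost = 0.72           # cost of ownership, normalized to the cheapest offer
print(f"score = {score:.0f} / 1000, cost index = {normalized_cost}")
# Each offering becomes a point (score, cost) for the two-dimensional assessment.
```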

Solution and development system in one

When I assessed, say, complex Computer Aided Engineering/Manufacturing systems, I always gave programmability a higher weight. I found visual manipulation indispensable, but concrete, whereas programming allows for parametrization, configuration…This was confirmed when "parametric element design" and "domain-specific programming" became popular in our field.

Visual manipulation and programming need to be in one system. This is still true, especially if each interaction is replicated by a piece of code and the programming style is symbolic, literal, domain-specific, task-oriented…I will come back to this later when refining the idea of types and The Innovation Mesh.
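
A minimal sketch of that principle, with hypothetical action names: every interactive step also emits its code equivalent, so a session becomes a re-runnable, parametrizable program:

```python
# Minimal sketch: every interactive action is replicated by a piece of code.
# The action names (extrude, fillet) and the journal format are hypothetical.

journal = []   # the code equivalent of the session

def record(call):
    journal.append(call)

def extrude(profile, depth):
    record(f"extrude({profile!r}, depth={depth})")
    print(f"extruding {profile} by {depth}")     # stand-in for the real geometry kernel

def fillet(edge, radius):
    record(f"fillet({edge!r}, radius={radius})")
    print(f"filleting {edge} with r={radius}")

# "Clicks" in the UI would call the same functions...
extrude("sketch_1", 40.0)
fillet("edge_7", 2.5)

# ...and the session is already a replayable, parametrizable program:
print("\n".join(journal))
```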

At that time, I had no idea of (real) option valuation and value at risk. Although, in retrospect, the decisions were not too bad, I could have done better.

What was good: we never forgot to assess our own developments against the best systems in the market. This is indispensable if you want to out-license (sell) them anyway.

The many options of symbolic computation

When I launched my own business based on symbolic computation...I saw so many opportunities and options that I concentrated on structuring: what is a technology, what is a solution, and how can we tie things together in industry-scale projects? At that time I did not systematize assessments much. I was too sure that finding new ways would convince enough people?!

At the beginning of a programming project you look into the problem and decide whether its decomposition should be data-, object- or function-oriented. In quantitative fields it is predominantly functional. But in symbolic languages symbols are organized as pieces of data, so complexly nested functions can be built whose arguments can be anything (formulae, graphs, geometrical objects, movies…even programs).

This is very important for understanding the types of quantitative systems. It enables programming in a bottom-up style...a kind of computational knowledge-based programming…describing things in computational form. On the other hand, the language constructs need to be implemented by computational engines…and this distinguishes algorithmic from axiomatic approaches.
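
To make "symbols are organized as pieces of data" concrete, here is a toy sketch (not any particular symbolic language): expressions are nested data, and a transformation such as differentiation is just a function over that data:

```python
# Toy sketch of symbols-as-data: expressions are nested tuples, and
# transformations (here: differentiation) are plain functions over that data.

def d(expr, x):
    """Differentiate a tiny expression language with respect to symbol x."""
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):                       # a symbol
        return 1 if expr == x else 0
    op, a, b = expr
    if op == "+":
        return ("+", d(a, x), d(b, x))
    if op == "*":                                   # product rule
        return ("+", ("*", d(a, x), b), ("*", a, d(b, x)))
    raise ValueError(f"unknown operator {op}")

# d/dx (x * x + 3): the expression is just data until an engine evaluates it
expr = ("+", ("*", "x", "x"), 3)
print(d(expr, "x"))
```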

BTW, a clever programmer is able to program a story: functional goats, wolves and lions
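
For the curious, a minimal functional sketch of that puzzle, assuming the commonly quoted rules (a meal removes one animal from each of two species and adds one to the third; the forest is stable once at most one species is left):

```python
# Sketch of the "magic forest" puzzle under the rules assumed above.

def meals(state):
    """All states reachable from (goats, wolves, lions) by a single meal."""
    g, w, l = state
    candidates = [(g - 1, w - 1, l + 1),   # wolf devours goat -> becomes lion
                  (g - 1, w + 1, l - 1),   # lion devours goat -> becomes wolf
                  (g + 1, w - 1, l - 1)]   # lion devours wolf -> becomes goat
    return [s for s in candidates if min(s) >= 0]

def stable(state):
    return not meals(state)                # no devouring possible any more

def largest_stable_herd(start):
    """Breadth-first search for the largest population of a reachable stable forest."""
    seen, frontier, best = {start}, [start], -1
    while frontier:
        nxt = []
        for s in frontier:
            if stable(s):
                best = max(best, sum(s))
            for t in meals(s):
                if t not in seen:
                    seen.add(t)
                    nxt.append(t)
        frontier = nxt
    return best

print(largest_stable_herd((6, 5, 4)))      # an arbitrary starting population
```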

I simply underestimated how many new approaches were packed into one setup. And the eager-seller, stony-buyer principle came into action.

There will be more about structure and style types in the future.

My life as a project evaluator

In the mid-90s I was invited by the European Commission to join an expert pool evaluating and reviewing innovation projects in its framework programs.

In total, I evaluated about 500 projects, often in two-step procedures. The coverage was broad, from new materials to complex automation systems, including fields I had never worked in.

Evaluation teams consisted of an EC officer and three independent experts (from professors to practitioners). A systematized evaluation scheme was vital, and I influenced the building of some…especially when our team stabilized.

In the first phase (introduction) it was just evaluation (go or stop). Twenty teams evaluated, say, 40 projects each (in ten days). An assessment scheme was provided by the EC, but different teams clearly showed different assessment profiles, so I managed to introduce a kind of "isoline" normalization that made the overall evaluation fairer.
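
The actual scheme is not reproduced here; as a rough illustration of the idea (my assumption, not the EC procedure), one could rescale each team's raw scores to a common mean and spread, so that a team's strictness or generosity no longer drives the cross-team ranking:

```python
# Rough illustration (not the EC scheme): normalize each team's scores to a
# common mean and spread so strict and generous teams become comparable.

from statistics import mean, pstdev

def normalize_team(scores, target_mean=70.0, target_spread=10.0):
    """Rescale one team's raw scores to a common profile."""
    m, s = mean(scores), pstdev(scores)
    if s == 0:
        return [target_mean] * len(scores)
    return [target_mean + target_spread * (x - m) / s for x in scores]

team_a = [62, 68, 74, 80]     # a strict team
team_b = [78, 84, 88, 92]     # a generous team
print(normalize_team(team_a))
print(normalize_team(team_b))
# After normalization, the overall ranking compares like with like.
```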

In the second phase (demonstration), we recommended changes during evaluation, and in reviews we helped to fix problems and adapt approaches technically and economically.

This was the first time I thought of innovation types and a kind of benchmark, driven by the need to push enthusiastic innovators out of their comfort zone: "the idea is compelling, but it will not work or sell".

This changed my view on categories and criteria, although I did not yet see The Innovation Mesh.

I evaluated for 10 years, spending about 10% of my working time on it.

Tame my own worst enemy

And of course, I applied what I had learned to my own projects and to those with distinguished partners. There are best-selling technologies, platforms, solutions, development environments…What does best-selling mean to us?

We disrupted and reinvented ourselves repeatedly.

It works with UnRisk

To market an all-new product you do it again and again. This is the technical side of success. It is the fruit of deep analysis and of deciding what works and what doesn't in a market that is, on the one hand, innovative itself and, on the other hand, more regulated and standardized than others. Works, doesn't work. It's industrial work, not lab work.

We walked through matter, structure and style again and again, and we still do, because we provide new releases nearly every half year.

The most difficult thing is to resist copying major players…those with more awards…
Even more difficult is to understand that we don't work to get picked, but to change things where it's required.

In the near future I will start with the time types...