The Big Takeaway from Innovations in a Connected World

In the past, many pioneers, explorers, innovators…took the arrows, and the followers, settlers…took the land.

Apple was a pioneer in personal computers, but Dell and others settled the land. Xerox introduced the GUI…it has to do with the Kübler-Ross change curve, the eager-sellers-and-stony-buyers principle, investor psychology swinging from optimism back to pessimism, loss aversion (Kahneman)…the pioneer finds the attractive place in the jungle, but the settlers build the fence in a relatively safe place, having seen the risks of the jungle.

But, with the emergence of smart connected things…pioneers, explorers, innovators…may take the land and the followers, the rent?

In order to leapfrog when they were behind (DVD for iMacs was not ready), Apple introduced iTunes...

So, innovators not only need to know whether their innovation works, what it's for, what it needs…but how it will sell in an (even small) connected world. What's its big takeaway for individuals and for the network as a whole?

Make the Important Not the Profitable

At UnRisk, we've recently made a marketing decision that needs another reinvention of ourselves. We've announced UnRisk Quant, a brand-new product...I've posted about the rationale here.

Quantitative analysts, quants, were the protagonists of the story of the development of a new financial system. And they have a lot of antagonists. Since the financial crisis they need to make big decisions and answer big questions. Has your maths destroyed the system? What are your ideas to fix it?…This is more than the climax of critical moments. It questions their very "right to exist".

The best bad choice

They're not in a preferable situation. The regulatory waves of centralization and standardization are a clear signal against creative solutions…instead of distinguishing between the toxic and the healthy financial objects, methodologies, technologies…new complexity shall lead to simplicity?

What choices do quants have? Seek niches where the new regulatory regime is not applicable…fight it…surf on the regulatory waves?

Surfing the regulatory waves is the best bad choice. It will involve them more in industrial than lab work (instruments are becoming less complex, some derivatives will be centrally cleared, central counterparty regimes, margin rules and other collateral agreements shall minimize credit risk…). But they may find new ways against a possible margin compression that harms not only Wall Street but also Main Street (the real economy).

Make no mistake, I'm not against regulation...but the antifragile needs diversity, options…and fragility.

Help quants leverage their work 

We decided to help quants by giving them a starting point at a much higher level…let UnRisk do the industrial work and empower quants' creativity with high-level development tools. Uncompromising. Including an unconstrained know-how transfer.

The important (probably not the profitable) decision

To make this decision we walked through The Quant Innovation Mesh and made UnRisk Quant part of the resolution of the "crisis" quants need to face.

It's Idealistic, Off-Time, Literary, MultiFlow, Quant Risk Work, Development…for small quant groups...Setting the stage for successful risk management, leveraging technology, building quant finance skills…it's built from the bottom up, offering valuation and data management from the single instrument to the most complex portfolio…it meets the regulatory requirements…but empowers the engineering and validation of new financial concepts…it's multi-strategy, multi-model, multi-method…it has all practical details implemented…

It's structured, following the conventions of quant systems (preprocessing-processing-postprocessing). It has constructors and solvers that empower quants to resolve the progressive problems.

It works...

Advice Isn't Criticism

A restaurant critic visits eateries and assesses the meal, the wine, the service, the ambiance…she may put this on a scoreboard…to do that she may have a scheme, conventions and rules to determine a score.

A restaurant advisor tells the audience about (her) principal eating and drinking...preferences and (implicitly) recommends restaurants to those who agree with the preferences in principle. She may have a scheme that helps the audience understand the references and helps herself make an internal ranking…If the scheme is well structured and well explained, it may help the special audience make their own selection more easily…and help the restaurants that want to serve this special audience, offering things that taste good and sell for an adequate price.

A critic may be someone with an agenda that's different from mine. But an advisor may be an expert in the field I am most interested in. I'm interested in special eating and drinking genres, meals with 4 to 8 dishes, each rather minimalist, with regional ingredients, carefully mixed with exotics, modern, but not stylish...local wines (if the region grows some) that are medium-long, natural, documentary clear, moderately complex but dense, mineral or floral (not baroque florid)…as in my favorite restaurant world-wide: Le Calandre.

Help me, help you...

This is what I have in mind with The Quant Innovation Mesh. It helps me, the innovator, and the prospective clients.

It's preselecting the type: innovations in quantitative fields.

It recommends structuring into (sequential or nested) preprocessing-processing-postprocessing and highlights the need for constructors and the expectation of progressive problems that are inherent to that type of problem solution.

They culminate in critical situations, even dilemmas, that materialize as questions requiring decisions. If the resolutions are well made, they help with options, quantifying the choices…
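The recommended structuring can be sketched as a minimal pipeline. This is an illustrative toy under my own assumptions, not any real UnRisk API; all names are made up:

```python
# Hypothetical sketch of the preprocessing-processing-postprocessing
# structure: a "constructor" phase builds the objects, and a progressive
# problem materializes as a question requiring a decision.

def preprocess(raw):
    # constructor phase: build the objects the later stages work on
    return {"data": sorted(raw)}

def process(ctx):
    # core computation; a failed check here is a progressive problem
    # that must be escalated rather than silently ignored
    if not ctx["data"]:
        raise ValueError("empty input - escalate to the user")
    ctx["mean"] = sum(ctx["data"]) / len(ctx["data"])
    return ctx

def postprocess(ctx):
    # reporting phase: aggregate and represent the results
    return f"mean of {len(ctx['data'])} samples: {ctx['mean']:.2f}"

def run(raw):
    return postprocess(process(preprocess(raw)))

print(run([3, 1, 2]))  # -> mean of 3 samples: 2.00
```

The three stages can of course be nested or run per task; the point is only that each stage has its own constructors and its own ways to fail.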

Advice is not criticism - you can try it out.

This post has been inspired by Seth Godin's post: Advice or Criticism?

I Think Machines Can Think Once

but I do not know how…

What Do You Think About Machines That Think? is the 2015 Edge question for the brightest minds. (Maybe) in reply to Stephen Hawking's warning: "The development of full AI could spell the end of the human race".

The first contributors and responses can be found here. I walked through the list and read about a third of the contributions…

My view is shaped by 25 years of practical project experience with AI tools - particularly machine learning. Here it is, in a concise form:

I think the real questions behind it are

"…About Machines That Think Like Us?" - us mathematics professors, truck drivers, traders…

"Is building a thinking machine possible?" and if yes "how far from thinking are machines we can build in the near future?".

"Should thinking machines be built at all?"

Machines can think

I have no doubt that thinking machines are possible. But their thinking may differ - depending on the underlying matter. We know from ourselves that the right configuration of chemicals can perform thinking. Although our thinking has the Gödel weakness…we think that we think.


A short history of being unlucky - the future that never happened:

The idea has a long tradition: computerized systems are people and there is a strong relation between algorithms and life.

First…if systems replicate all our knowledge, expertise, they're intelligent. This top-down expert system thinking "died".

Then...Artificial Life, in contrast, named a discipline that examines systems related to life, its processes and solutions. Would genetic programming let intelligent artificial creatures emerge? It did not.

Now…because we have neurons, intelligent machines need to have them too. Our brain has an enormous capacity - to make AIs we only need to combine massive inherent parallelism, massive data management and deep neural nets?

All these phases have brought us a fantastic technology stack - multi-paradigm programming languages, generic optimization techniques, symbolic computation, computer vision…

But, IMO, our objects of desire, universal machines that think like us, are far away.

The smart connected machine?

I'm a good car driver. But I can imagine machines that drive much better. Provided they see, and consequently anticipate, much better and react rationally when a moose breaks out of the woods unexpectedly…constantly interact with other cars to optimize traffic…
Thinking and decision making are not the same, but wouldn't we consider a car that drives much better than we do intelligent?

How much autonomous intelligence may a car have to collaborate with others perfectly? Not much.

The super-intelligent loner?

By education, I'm infected by logic, the object level of mathematics. It needs 10 axioms and fundamental rules to create rich sentences of logic symbols. This is the layer of maths that works. If you want to know whether the results make any sense, you need models (and semantic functions).

We know that computers are great at testing that mathematical theorems work. And because the intellectual process of creating new mathematical knowledge is also characterized by systematic steps, computers can also do pure mathematics.

Wouldn't the autonomous maths computer be seen as prototype for super-intelligence? IMO, yes.
Does it need (much) interaction? IMO, no.

If it does maths that helps it do better maths, it cannot even be falsified…

The ultrafast and feverish trader

Stock quotes are always to the cent, but real prices can go to fractions. No sane person would make decisions based on sums of a few thousandths of a cent each millisecond. Computers do not get bored. They do it in high frequency (algorithmic) trading…what if they could think?

Would a thinking trading machine squeeze every fraction of a cent out of any market? If it did, would we accept it as intelligent? IMO, no.

I don't make any forecasts, but I think the maths professor machine will be an early result to be accepted as a thinking machine.

Summarizing, I'm in the camp of people who believe that machines that think can complement us, doing things better for a better society. But it depends on what they are supposed to be thinking about.

What I fear: that we try to teach people to behave like machines - if we think like machines, it will be easier for machines to think like us.

The Quant Innovation Mesh 2.0

Hopefully improved by simplification

For UnRisk Capital Manager

The Type - Realistic, Real-Time, Documentary, MultiFlow, Financial Risk Management, Market Risk, Open Valuation and Data Management Factory

External Purpose Type - Risk Thriller (Expose Risky Horrors and manage them)  
External Value at Use - Market opportunity creation by risk informed decisions
Internal Purpose Type - Arming David. Empowering small capital management firms
Internal Value at Use - Fundamental quant work and automated risk services

Point of View - Fund management 

Objects of desire - Conserve capital, set the stage for successful risk management, leverage technology, build quant finance skills

Obligatory tasks - Manage risk in a regulation and business framework: 1. Model validation and robust model calibration 2. Portfolio across-market-behavior simulation 3. VaR calculations, tail risk analysis… 4. Portfolio re-engineering and -construction… 5. Contribution to enterprise risk management

Preprocessing constructors - users, instruments, models, methods, market data, valuation regimes
Preprocessing progressive problems - Model, method and calibration risk (technological risk) may add toxicity to instruments and become horrible in interplay
Preprocessing solutions - Change valuation regimes as a result of model-method scenarios

Processing constructors - portfolios, scenarios, simulations…simulation regimes
Processing progressive problems - risk analytics methodology selection…massive valuation and data management requirements...A portfolio shows unexpected riskiness with respect to certain market behavior, but it's difficult to quantify the influence of instruments, risk factors, observed periods, correlations…
Processing solutions - Comprehensive tests checking such influences…helping to re-engineer the portfolio

Postprocessing constructors - a further, more global risk insight and reporting framework, e.g. for cross-business-unit, market segment or territory analytics
Postprocessing progressive problems - Further analytics needs additional data sets and consequently algorithmic treatment...Additional data may be uninformative with respect to the desired objectives and contribute more to noise than to dependencies...
Postprocessing solutions - Data-driven techniques or dynamic visualization to curate these data sets

Conventions Valuation and data management are twins. It's built multi-model and multi-method. It's a high performance system. It comes with a web front-end. It comes with the UnRisk development system.

Core Technologies needed - UnRisk CM is built on the UnRisk Technology Stack, which needs the Wolfram Technology Stack. The UnRisk Financial Language is an extension of the Wolfram Language implemented in the UnRisk Engines (atop the Wolfram Engines)

BTW, UnRisk CM is built as an integration of the following technologies: UnRisk FACTORY, a web-based risk management platform, the Value at Risk Universe and UnRisk-Q, the programming power behind UnRisk. It's used by capital management firms and the asset management groups of insurers.

They all enjoy its automated task processing, extendability, transparency, robustness…intuitive use…That it's ultrafast is seen as a requirement. And they are pleased about the level of interaction and their indispensable role in smart decision making…when not excited.

In the End, Solutions Are Not That Difficult

We've now looked into the constructors kicking off our innovation, and discussed why progressive conflicts are inherent to quantitative systems.

We want wide coverage and automation depth, knowing that this has trade-offs. We know that user interaction is indispensable in the constructor phase as well as in the progressive-problem phase, where tasks work with real data. But to what extent?

It's now time to talk about the resolution of all those conflicts that may even turn into a crisis.

And in the end it's not so difficult.

We want extendability, technologies that help to fix what they broke and build skills from using the innovation.

These principal solutions represent our Ending Payoff:
  1. Provide ready-to-use solutions that are development systems in one
  2. Provide know-how packages
ad 1.) Provide technologies that empower the extension of the algorithmic engines and the programming language in which you program your tasks.

The UnRisk Engines extend the Wolfram Engines into the universe of derivatives and structured financial products, and the UnRisk Financial Language is built atop the Wolfram Language to program cascades of valuation and risk management tasks. UnRisk services tie things together by scheduling the tasks and building communication layers for interactive front ends.

The foundational numerical schemes are built in C++/OpenCL and integrated into the UnRisk Financial Language (the Wolfram Language extension…). Most of UnRisk is built in UnRisk. UnRisk services are built in Java, JavaScript…The UnRisk Financial Language provides all functions from which UnRisk tasks are built, and access to the UnRisk Data Framework. UnRisk front-ends run on desktops, the web and smart devices.

There's one detail that I find essential: all (point&click) interactions in UnRisk solutions are logged as an UnRisk Financial Language program that can be re-fed into the system to detect input errors or test input scenarios quickly.
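The idea of replaying logged interactions can be sketched like this - a toy illustration under my own assumptions, not the actual UnRisk mechanism; all names are made up:

```python
# Hypothetical sketch: every point&click interaction is appended to a
# log as a small replayable program, which can be re-fed into a fresh
# session to reproduce (and audit) the inputs.

class Session:
    def __init__(self):
        self.log = []          # replayable program, one call per entry
        self.state = {}

    def set_input(self, key, value):
        # record the interaction, then apply it
        self.log.append(("set_input", key, value))
        self.state[key] = value

    def replay(self, log):
        # re-feed a recorded log into this session
        for op, key, value in log:
            getattr(self, op)(key, value)

s1 = Session()
s1.set_input("notional", 1_000_000)
s1.set_input("maturity", "5Y")

s2 = Session()
s2.replay(s1.log)
assert s2.state == s1.state   # the replayed session reproduces the inputs
```

Because the log is itself a program, it can also be edited before replay - which is what makes input-scenario testing cheap.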

Quant developers use UnRisk to extend it into their universe of investment and risk management...

ad 2.) Provide not only usage training, but give full explanations of the theories, models, methods…and critical implementations…disclosing all assumptions, oversights…

For packaging know how, we've established the UnRisk Academy.

Is UnRisk so specific? 

No, it's a reference - we've used the same principles for building an offline and online quality management system for the hot rolling of heavy steel plates. Computational engines wrapped by a domain-specific, task-oriented language, and services managing the engines and building communication layers for front-ends, APIs…

What technology you should build your innovation atop depends on the innovation type…"Wolfram" is just another reference…whatever technology stack you choose for transformative developments, it helps if it provides a symbolic, declarative, multi-paradigm programming language, a wide coverage of algorithms, mechanisms for hybrid (multi-language) programming, fast engines that implement the language, data management, document formats, front-end builders…

Constructors - Progressive Problems - Solutions, the building blocks of the quant innovation, determine the effectiveness, efficiency, completeness, consistency...of your innovation...It's vital that they avoid/minimize operational risk…they determine whether it works or doesn't.

The very details - the language design, the interaction patterns, the navigation…the result representation…determine whether it sells.

In the next post I'll present the simplified Quant Innovation Mesh 2.0.

And later I'll think more about how the insight gained from applying the scheme will help optimize the market risk...

Critical Moments

I'd thought I'd immediately move to the ending building block of quant innovations, "Solutions", but then it came to my mind that progressive problems come to life in critical moments. Moments that are decisive in the sense of a possible escalation or abatement of problems.

The small worlds of exact solutions

We want to solve models exactly - find a closed-form solution. But the world of closed-form solutions is usually small.

Continuum models must often be discretized in order to obtain a numerical solution. You need them to solve models of material flows, chemical reactors, conservation of energy, macroeconomics, quant finance…The selection of the numerical scheme is critical for numerical stability.

You may think of decomposing the problem into subproblems where exact solutions are possible and recomposing the results…but this is very ambitious for high-dimensional (multi-factor) problems.

But even proven approaches can create problems in extreme regimes.

Take a Partial Differential Equation (PDE) of the reaction-convection-diffusion type. You'll find Finite Element solvers preferable. But if the PDE is convection dominated, you need some special stabilization techniques to obtain acceptable results from the FE solver.

These decision points are critical and problem-dependent.
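The convection-dominated case can be illustrated with a toy 1D problem - a hedged sketch using simple finite differences rather than finite elements: for -eps*u'' + a*u' = 0 with a boundary layer, a central scheme oscillates once the cell Peclet number a*h/(2*eps) exceeds 1, while simple upwinding (one of the stabilization ideas alluded to above) stays monotone:

```python
import numpy as np

# Toy illustration of a convection-dominated problem:
# -eps*u'' + a*u' = 0 on (0,1), u(0)=0, u(1)=1, discretized on a
# uniform grid. Central differencing of the convection term produces
# sign-oscillating values when a*h/(2*eps) > 1; upwinding does not.

def solve(n, eps, a, upwind):
    h = 1.0 / (n + 1)
    A = np.zeros((n, n)); b = np.zeros(n)
    for i in range(n):
        # diffusion: -eps*(u[i-1] - 2u[i] + u[i+1]) / h^2
        diag, off = 2 * eps / h**2, -eps / h**2
        if upwind:   # convection: a*(u[i] - u[i-1]) / h   (a > 0)
            lo, ce, hi = -a / h, a / h, 0.0
        else:        # central:    a*(u[i+1] - u[i-1]) / (2h)
            lo, ce, hi = -a / (2 * h), 0.0, a / (2 * h)
        A[i, i] = diag + ce
        if i > 0: A[i, i - 1] = off + lo
        if i < n - 1: A[i, i + 1] = off + hi
        if i == n - 1: b[i] = -(off + hi) * 1.0   # boundary value u(1)=1
    return np.linalg.solve(A, b)

u_central = solve(20, eps=1e-3, a=1.0, upwind=False)
u_upwind = solve(20, eps=1e-3, a=1.0, upwind=True)
# the central solution oscillates below zero; the upwind one is monotone
print(u_central.min() < 0, u_upwind.min() >= 0)
```

The price of upwinding is artificial diffusion that smears the boundary layer - which is exactly why such stabilization decisions are critical and problem-dependent.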

Ill-posed problems

A problem is well-posed in the sense of Jacques Hadamard if a mathematical model of a physical phenomenon has the following properties:
  1. A solution exists
  2. The solution is unique
  3. The solution's behavior changes continuously with the initial conditions
Problems that are not well-posed in the sense of Hadamard are ill-posed.

Inverse problems are often ill-posed. They ask for the conversion of final data into information about the system that has produced them. Noise in the data can lead to results that are pure nonsense. (About 1,500 people a week are confused with criminals by the world's airport face recognizers.)

Calibration of models - identifying their parameters from final data - is an inverse problem.
But there is hope. There are stabilization techniques that help to obtain stable parameters. Applying the correct one is really critical.
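A minimal sketch of why stabilization matters, assuming a generic linear inverse problem (not any concrete financial calibration): with a badly conditioned forward operator, a naive solve amplifies the data noise, while Tikhonov regularization keeps the recovered parameters stable:

```python
import numpy as np

# Generic ill-posed inverse problem: recover x from b = A @ x + noise,
# where A is a smoothing (Gaussian-kernel) operator and hence badly
# conditioned. Naive inversion amplifies the noise; Tikhonov
# regularization, min ||Ax - b||^2 + lam*||x||^2, stays stable.

rng = np.random.default_rng(0)
n = 50
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)) / 1.5) ** 2)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)                       # amplifies noise
lam = 1e-3                                            # Tikhonov weight
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
print(err_naive > err_tik)   # the regularized solve is far closer
```

The choice of the regularization parameter lam is itself one of those critical decisions: too small and the noise comes back, too large and the solution is over-smoothed.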

When Monte Carlo is the only choice

For solving high-dimensional (stochastic) models, Monte Carlo (MC) methods may be the only reasonable choice. But (time) constraints may force you to use the much better performing, but less random, quasi Monte Carlo technique, or to apply a variance reduction technique.

How does this approach compare with the possibility of transforming the stochastic model into a PDE…?

These decisions may influence your model evaluation and simulation regime significantly.
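One of the variance reduction techniques alluded to, antithetic variates, can be sketched in a few lines. The example is illustrative only: estimating E[exp(Z)] for a standard normal Z, whose true value is exp(0.5):

```python
import numpy as np

# Antithetic variates: pair each draw z with its mirror -z. Both
# estimators below are unbiased for E[exp(Z)] = exp(0.5) ~ 1.6487,
# but the antithetic pairing cancels odd-order noise and markedly
# reduces the sample variance per pair.

rng = np.random.default_rng(42)
n = 100_000
z = rng.standard_normal(n)

plain = np.exp(z)                          # crude MC samples
anti = 0.5 * (np.exp(z) + np.exp(-z))      # antithetic-paired samples

print(plain.var() > anti.var())            # variance is clearly reduced
```

Note that each antithetic sample costs two function evaluations; the technique pays off whenever the variance drops by more than a factor of two, as it does for monotone payoffs like this one.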

It's hard working in the data salt mines

It's impossible to create a model? Not even a stochastic one? A data-driven method is the only choice? It's critical to check:
  • What truth does your data set represent?
  • What are raw data?
  • What machine learning method fits best for the purpose?
  • Do you have enough data samples? 
  • Does the selected method generalize enough? 
  • ….. 
To make your decisions, you need to know how to perform cross-model validation…and other tests requiring deep knowledge of machine learning…to avoid critical moments.
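A minimal cross-validation sketch (pure NumPy, purely illustrative, no claim about any concrete method above) shows how such a test exposes a method that does not generalize:

```python
import numpy as np

# k-fold cross-validation on noisy linear data: a degree-1 fit matches
# the truth, while an over-flexible degree-9 polynomial looks great
# in-sample but fails badly on held-out folds.

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = 2 * x + 0.2 * rng.standard_normal(40)   # the truth is degree 1

def cv_error(degree, k=5):
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)     # fit on everything else
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return np.mean(errs)                    # average held-out error

# cross-validation exposes the over-flexible model's poor generalization
print(cv_error(1) < cv_error(9))
```

The same mechanics apply to any model family; only the fitting and prediction steps change.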

System or user action? 

There are many more critical points related to the problem, the models and their solvers…and there are additional ones arising from the implementation.

In critical moments a system or user action is required.

The two extremes:

A) The system may select all models and methods automatically, correct results... - it may work perfectly in a special, predefined world, but produce pure nonsense outside it.

B) The system asks users to select and correct them all - it may work well in many cases, but users may not be qualified in others and make decisions that accumulate into risky horror.

The innovation will not work if a reasonable balance between the two is not found. Scenarios across models and methods..."what if" treatment...(subject of the Constructor phase) will help.

Next up…about Solutions

Progressive Problems

Last Wednesday, I posted about the first principle for creating the building blocks of our quant innovations: the Set Up.

After some further thinking, I'll call it "Constructors". In Object Oriented Programming, constructors create objects and prepare them for use. Our Set Up creates more general foundations with higher-level initializations, so I rather borrow the name from constructor theory, a new fundamental theory of physics developed by David Deutsch and Chiara Marletto.

According to constructor theory, the most fundamental components of reality are entities - constructors - that perform particular tasks, accompanied by a set of laws that define which tasks are possible for a constructor to carry out (from an article by Zeeya Merali, Scientific American).
A computer program controlling a factory is a constructor. The execution of the program at a specific factory explains whether the transformation of a set of raw materials into a desired finished product is possible…it's the aggregation of a cascade of programs controlling individual manufacturing systems, storage, transport, assembly, finishing...The involvement of a concrete factory…suggests postprocessing - the compilation of abstract manufacturing programs…into the individual control languages of machines, robots, vehicles…

The need to show that all manufacturing operations are possible touches an intrinsic problem…uninformative, unpredictable, perturbed…data may influence the precision of individual manufacturing tasks. They may be related to the individual machines, tools, material…input or control parameters of individual models...This is the sword of Damocles hanging over tasks at all aggregation levels. It suggests (multi-level) preprocessing.

In the processing phase, the concrete programs - constructors - need to replicate operation models, machine models, tool models…they are abstract and need to be calibrated - identifying parameters from real-world objects - in order to control the precise manufacturing of other real objects.

This is all simplified: in reality the control of discrete manufacturing tasks is quite complicated and complex.

IMO, the example shows that constructor theory is a perfect framework for the replication of laws of the real world in computational form.

Constructors do not return results. Tasks do. And trigger

Progressive Problems

at all levels and related to the degree of interdependence…However the tasks are organized...sequentially, in parallel…hierarchically…low-level problems will not fade but become horrible at higher levels.

How do we know that a quant innovation isn't working?

We should look at it through the lens of a crisis manager.

If a wrong model was selected, it will not approximate the real behavior well enough. If the model was correct, but the method to evaluate it was lousy, it's even worse, because its behavior mimics spurious correctness. If model and method were correct in some partitions, they may be drastically wrong in others. They may be correct in all desired partitions, but the calibration may identify the parameters wrongly (calibration is an ill-posed problem!). Provided the model-method pair is correct and the calibration is correct, the results may still be bad…The model may have been calibrated to uninformative data…and so on…and so on.

Problems occur progressively - they may become horrible in interplay…concurrently or over time.

In quant finance we often need ten million single-instrument valuations to get a few risk figures…
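A back-of-the-envelope illustration with hypothetical numbers (not from the text) shows how the valuation count explodes:

```python
# Hypothetical portfolio sizes - illustrative only - showing why a
# single risk report can require millions of single-instrument
# valuations: every position must be revalued under every scenario.

instruments = 1_000          # positions in the portfolio
scenarios = 10_000           # simulated market scenarios
valuations = instruments * scenarios
print(valuations)            # -> 10000000
```

An error rate of even one in a million valuations would then corrupt several results per run - which is why progressive problems cannot be ignored at scale.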

That all boils down to one name: operational risk. IMO, it's more a danger than a risk, because usually it has no upside. Ten rights can lead to a wrong, but one wrong may destroy the system. The innovation must take measures to avoid any instance of operational danger - uncompromisingly.

In any field, the worst innovation is the one that gives its users false comfort about its adequacy, accuracy…instead it shall disclose its assumptions, oversights...

My experience in quantitative innovation taught me that innovators should be humble in applying complicated and complex systems and offer black boxes only if there are no doubts about their correctness.

Assessing how the innovation treats progressive problems

How do I do that?

I track the innovation's possible progressive problems from the bottom up…exposing the escalating degrees related to its purpose and objects of desire.

No, I do not inspect code; I look at how the tasks and the system work...try to act as a stressor, test extreme cases…

I need to understand in which workspace the innovation works and in which it does not, what measures it takes to avoid the model-method-calibration-uninformative-data traps, how its key objects and their representations are organized...whether it offers enough scenario and what-if simulations…whether it uses generic technologies inducing a proven design…

The escalation may lead to a crisis, but this is just the climax of progressive problems. Therefore I'll merge Problems and Crisis into Progressive Problems.

I'll change my scheme and reduce the principles to
  1. Constructors
  2. Progressive Problems
  3. Solutions
applied to preprocessing - processing - postprocessing.

The Quant Innovation Mesh 2 will be posted in the near future - I'm surprised myself how simple it may become.

Next, I'll go deeper into "Solutions".

5 Insights That Help Optimize Innovation Marketing Risk

Before I go into more details of the middle build of quant innovations...problems or even crises…I compile a few insights that help unmask established concepts as unreliable when markets shift to new regimes.

They'll help optimize market risk. Market risk management is the management of returns on positions arising from changes in market behavior…under risk considerations. This is not completely independent of the approach and realization of the innovation.

Insight 1 - Understand Innovation Risk

Provocatively speaking, a more-complete but more-complicated innovation may carry greater risk than a cruder one. Risk becomes a danger, if the actors using the innovation are not qualified for the job.

Talking about quant innovation, I recommend the following rules
  • Recognize that models are needed
  • Acknowledge their limitations
  • Expect the unexpected
  • Understand use and user
  • Check the infrastructure (source of operational risk)
Models can be formal or intuitive, but in quant innovations computational knowledge that helps transform objects is indispensable. Users want to check possible transformations in scenarios, what-if analyses…

If the models for the core processes are already intrinsically flawed, too complex, not robust…their use models in interplay are consequently wrong. But if the core models are right, the use models can still be wrong because of new complexity, wrong operation or wrong interpretation.

Insight 2 - Understand the UTOPE Risk

Unfortunate cosT Of Pattern rEcognition (from the cover story of Wilmott Magazine, Jul. 12). Apophenia is the technical name for a problem of human nature: seeing meaningful patterns where none exist.

But this is not restricted to human nature…it's systemic.
There's an intrinsic trap in all inverse problems…it's much more difficult to extract insight, knowledge, models…from data than the other way around - data from...models. Solving inverse problems of a mathematical nature mathematically, you need tricks, like regularization...

Apophenia is not uncommon in marketing. It's said that the innovation marketers of tomorrow need to have experience in understanding big data. But, because the nature of innovation is dealing with the "new", big data usually have high dimensions but a comparatively low number of samples, and in such sets large deviations are more attributable to noise than to information.

If you just naively apply statistical methods you will most probably find significant correlations that are spurious.

The disadvantage of big data: the more variables, the more spurious dependencies...the possible misinterpretation grows nonlinearly with the dimension.
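This growth is easy to demonstrate with synthetic noise (a sketch, not real marketing data): with many variables and few samples, pure noise almost surely contains an impressively large pairwise correlation:

```python
import numpy as np

# High dimension, few samples: every true correlation is zero, yet
# searching across all pairs of columns almost surely turns up a
# large sample correlation - a "significant" pattern that is
# entirely spurious.

rng = np.random.default_rng(7)
n_samples, n_vars = 20, 500
X = rng.standard_normal((n_samples, n_vars))

corr = np.corrcoef(X, rowvar=False)   # 500 x 500 correlation matrix
np.fill_diagonal(corr, 0.0)           # ignore trivial self-correlations
best = np.abs(corr).max()

print(best)   # some pair of pure-noise columns looks strongly "linked"
```

With 500 variables there are about 125,000 pairs to search, so even a correlation that would be wildly "significant" for a single pre-registered pair appears by chance alone.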

Big data may become a big joke.

So, ill-posedness of inverse problems may become a danger internally and externally.

Insight 3 - Understand the Tightly-Coupled-Complex-System Risk

A complex system can have unintended consequences, and tightly coupled means there is not enough time to react to them. What is true for the process is true for the system controlling it.

In a tightly coupled process there's no plan for possibilities and options. A tightly coupled complex system, usually the result of such a process, does not increase the number of choices, and the emergence of an unexpected event can become horrible.

So, diversify and decentralize your system, make it agile by increasing its adaptability and optionality. Make intelligent, independent system components that are accurate and robust, but flexible. Let them co-exist and co-evolve.

This may raise the eyebrows of the system centralizers, integrators, top-downers … but it is correct.

There is no operational risk more horrible than a tightly coupled organization serving tightly coupled complex processes with tightly coupled complex systems.

Insight 4 - Play the 80% Low Risk Game

Optimize risk in the regimes where you know the conditions, with a tendency to lower risk in, say, 80 percent of the cases (a safe game with a very low probability of ruining the solution, the process…the investment, the business). You can then play a higher-risk game with the rest.

This also refers to Five Paradigms of Market Risk Management.

In financial business planning, this would be a kind of playing the cash cow game.

Insight 5 - Choose the Right Kind of Luck

There are unexpected events that can help you a lot or harm you a little, and others that can help you a little or harm you a lot. Distinguish between risks with a lower and a higher upside. This kind of asymmetry is usually not easy to see...

If you price your innovation at 100, but your client would have paid 102, you've lost 2; if the client was only willing to pay 98, you've lost 98…

Not surprisingly the innovation type influences its risk profile and consequently marketing success factors. 

Set Up

Sounds boring? If you think of the beginning hook of storytelling, each scene, each act and the whole story should start with an inciting incident.

A Set Up kicks off our system

And it's true, in standard information systems the set up process focuses on the preparation of the static and dynamic data. If your data model is well done, this should not be too difficult.

But in quantitative systems the set up includes models and their possible representation by methods, the regimes for possible evaluations, scenarios and simulation regimes, "what if" treatment…it can become quite exciting and already requires intensive involvement of special users.

In addition, you may need to set up users, their roles and restrictions…say, a system administrator, product manager, product engineer, process manager, process engineer, quality manager, risk manager…they all use the same system but with different coverage, depth…

Users with an "engineer" attribute may be prominently involved in the set up...

It's a beginning hook

Quantitative systems may offer interactive sessions, but also batch processes represented as scheduled tasks. Batch processes create a problem if there is not enough time to recover from false results, but their asynchronous nature can also provide a plan for resilience if there is plenty of time for recovery.

Many innovation types may have conventional set ups, but in quant innovation the set up already makes a promise to its users…solutions to a variety of problems.

If your quant innovation is about the adaptive control of a, say, machine tool, your set up must include a machine model, because the precision of the operations will depend on the, say, machine dynamics…For efficient tool management, you may need a tool life cycle model…And in the set up you will decide how your system is going to use these models in operation…

In UnRisk Capital Manager the set up defines user profiles, deal types, models, methods, portfolios, scenarios, market data, valuation regimes, risk factors, risk methodologies, risk management regimes…conventions about the curation of data are definable…tasks for workflows are scheduled…

The set up of the UnRisk CM is organized hierarchically in the sense that parts of it work for their processes, tasks…like model calibration, model validation, valuation of single instruments, valuation of portfolios, VaR calculation, backtests, stress tests…aggregation and reporting of risk data…

What's the general set up for? The capital manager finds a complete universe of data, methods and tools to get a quantitative understanding of how her portfolios perform relative to the desired return and risk profile.

The set up may be great, but there are (hidden) problems and traps lurking...

Risk of the Jungle

Risk management is a quantitative field with specific techniques. A business developer wants to optimize market risk. What does that mean in concrete terms?

I borrow a metaphor from Aaron Brown's great book Red-Blooded Risk:

Value at Risk of the Jungle - don't let the leopards and mambas in

VaR is a statistical technique to quantify the level of financial risk of a "financial project" over a specific time frame. It quantifies the amount of potential loss, the probability of that amount and the time frame. A 95%, 1 month VaR of 100 means: there's a 5% chance that the project creates a loss of more than 100 during any given month. The confidence level and time frame shall reflect your business characteristics. VaR is usually calculated by virtually trading a "product" across historic or market behavior scenarios.
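A rough sketch of how such a scenario-based VaR number can be computed empirically in Python. The scenario P&L figures are invented, and the quantile convention used here is one simple choice among several in practice:

```python
def historical_var(pnl_scenarios, confidence=0.95):
    """Empirical VaR: the loss threshold exceeded in only about
    (1 - confidence) of the scenarios. One simple quantile
    convention among several used in practice."""
    losses = sorted(-p for p in pnl_scenarios)   # losses as positive numbers
    index = min(round(confidence * len(losses)), len(losses) - 1)
    return losses[index]

# 100 monthly P&L scenarios: 95 small gains, 5 losses of 100 or more.
scenarios = [10] * 95 + [-100, -120, -150, -200, -300]
print(historical_var(scenarios, 0.95))  # 100
```

With these made-up scenarios, 5 of 100 months lose 100 or more, matching the "95%, 1 month VaR of 100" reading in the text.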

VaR is like a fence built around a village in the jungle. The fence is built in a relatively safe place, and remaining dangers inside are cleared out. But people spend, say, 5% of their time outside the fence where dangers are greater and less known.

People grow, breed and produce food, craft things...inside, but sometimes they want to go out, select raw materials, hunt and trade. For the village people, dangers and opportunities come from nature, and they only have the ability to control them inside the fence.

But uncertainty is not risk

You can bet on five of the six possible outcomes of a die throw. The outcome is still uncertain, but there's only low risk.
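The die bet can be checked with exact arithmetic; a small sketch using Python's `fractions`, with an illustrative fair payout:

```python
from fractions import Fraction

# Bet one unit that a fair die shows anything but 6: five of the six
# possible outcomes win. The outcome stays uncertain, the risk is small.
p_win = Fraction(5, 6)
p_ruin = 1 - p_win
print(p_win)    # 5/6 -> you win most throws
print(p_ruin)   # 1/6 -> low risk of losing the stake

# With an (illustrative) fair payout of 1/5 per unit staked, the bet has zero edge:
expected = p_win * Fraction(1, 5) - p_ruin * 1
print(expected)  # 0
```

Uncertainty (you cannot know the next throw) and risk (the probability and size of the loss) are two different things, which is exactly the point of the passage.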

So, there's a chance to even take risk outside the fence. This is for example easier if there are other villages around - they could jointly study the behavior of the animals, arrange exchanges with high utility…organize shared production…erect safe trading places…

So, what I've roughly described here are two of the risk principles:
Duality - the known and the unknown 
Boundary - in 95 of 100 days, life and business of the village people is normal…they can optimize risk quantitatively. But people can also take risk outside the fence…
In the long run the village people will gain from explorative learning, evolutionary approaches and optionality.

They hopefully remain risk takers but not blind.

To innovate is risky - because you make something new...for new markets. You don't have much data to quantify your risk spectra. But you still have measures for risk "optimization"…options, to name one.

Principles of Quant Innovation

What improves the chance to introduce an innovation that becomes a classic of the future?

About building…the story from the simple to the complex

Complex systems may be the result of the repeated application of a few building blocks, rules or programs. Think of words and natural language or Lego bricks empowering the construction of practically "everything".

Let's think about Lego-type building a little more deeply. With the basic bricks you can build a lot of static things; for mechanisms you'll have special elements. You may want to control the mechanisms, say, you want to build robots…control them…program them. The smarter you want your things to be, the more specific and complicated the physical building blocks become.

What's behind it is geometric, kinematic and dynamic modeling, control theory, sensor processing…robot programming…

To make your toy robots autonomous you need to cognitize them and if you want them to interplay, connect them.

This is a toy world, but in the large you may also apply a kind of innovation spiral from simple elements to complicated components to complex systems. Your quant innovation may help develop, produce or run such a (complex) system.

Smart, connected products

Things are built evolutionarily: static --> mechanized --> electrified --> computerized --> cognitized --> connected

Empowered by quantitative technologies whose capabilities developed evolutionarily as well: monitor --> control --> optimize --> empower autonomy --> connect

Think of the systems driving the evolution of "car intelligence".

The evolution of tools

Remember, the history of tools from the first wedge to the computer is characterized by building higher level tools from lower level ones. In smart computing, tools at all levels of this evolution are still available.

Wolfram Engines are built in C, UnRisk Engines are built in C and Wolfram Engines. UnRisk VaR Universe is built in UnRisk and Wolfram Engines...

This enables a lot of choices guiding your innovation. Whether the selections are good or bad depends on principles…that go beyond the making.

This is why I introduced The Quant Innovation Mesh.

The Principles of Quant Innovation

Most of the systems that require quantitative treatment (computation) have a clock, flows, events, transformations, massive information requirements…Most of the problems suggest a function-oriented decomposition and a bottom-up composition of computational tasks, based on modeling, control, simulation, optimization, cognition, interplay…characterized by object transformations and events…on each level.

Mathematical problem solving has three steps: 1. transform a non-mathematical problem description into a mathematical model and transform the model into one of good nature for calculation --> 2. calculate --> 3. interpret the results. This fits the general pattern: preprocess --> process --> postprocess. Each of the steps has its traps and subproblems and may run into a crisis…and the solutions may offer some striking findings.
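The three steps map naturally onto a preprocess --> process --> postprocess pipeline. A minimal sketch with a toy linear model; all names, numbers and the threshold are illustrative:

```python
def preprocess(description):
    """Step 1: turn a non-mathematical problem description into a model
    of good nature for calculation (here: a toy linear model)."""
    a, b = description["slope"], description["intercept"]
    return lambda x: a * x + b

def process(model, inputs):
    """Step 2: calculate."""
    return [model(x) for x in inputs]

def postprocess(results, threshold):
    """Step 3: interpret the results in the problem's own terms."""
    return ["ok" if r <= threshold else "critical" for r in results]

model = preprocess({"slope": 2.0, "intercept": 1.0})
raw = process(model, [0.0, 1.0, 5.0])
print(postprocess(raw, threshold=5.0))  # ['ok', 'ok', 'critical']
```

Each stage can fail in its own way (a badly conditioned model, a calculation blow-up, a misread result), which is where the traps and subproblems mentioned above live.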

Each of them, let's call them tasks, from the micro to the macro layer, creates Beginning hooks, Middle builds and Ending payoffs.

Consequently, innovations that have the potential to become classics of the future should have the following flow:
  1. Set-up (constructors)
  2. Problems 
  3. Crisis 
  4. Solution
Each element, component...of your innovation should have it. On each evolution level, it may be required to ask "what if?" and apply a kind of counterfactual regret minimization (if I hadn't overheated, then…)…(Those questions become indispensable in a complex interplay of agents…)

It applies to your basic functions, processes, tasks and the system as a whole. They all need a constructor, experience problems and maybe even run into a crisis…finally it's all about the solution.
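One way to picture this flow in code: a component with a constructor (set-up), recorded problems, a crisis boundary and a solving regime switch. This is a hypothetical sketch; the class, names and thresholds are not from any real system:

```python
class Component:
    """Sketch of the Set-up -> Problems -> Crisis -> Solution flow
    that every element of a quant innovation should support."""

    def __init__(self, name, crisis_threshold):
        # Set-up: the constructor establishes a valid initial state.
        self.name = name
        self.crisis_threshold = crisis_threshold
        self.problems = []

    def report_problem(self, severity):
        # Problems: anomalies are recorded, not ignored.
        self.problems.append(severity)

    def in_crisis(self):
        # Crisis: accumulated severity crosses a defined boundary.
        return sum(self.problems) >= self.crisis_threshold

    def solve(self):
        # Solution: a regime switch that clears the backlog and
        # reports how many problems were resolved.
        resolved = len(self.problems)
        self.problems.clear()
        return resolved

c = Component("calibration", crisis_threshold=3)
c.report_problem(1)
c.report_problem(2)
print(c.in_crisis())  # True
print(c.solve())      # 2
```

The same skeleton applies at every layer, from a single function to the system as a whole.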

In our Quant Innovation scheme I fill out the most important phases of the whole system: preprocessing, processing, postprocessing (but never forget the workflows, tasks, functions…).

Remember, never organize a complex system of tightly coupled components. Never. Desires, requirements, events…may force immediate regime switches.

Be suspicious about unconditioned integration, centralization, supervision, linearity…one-approach-fits-all…

It's a striking finding and it has been (again) inspired by Shawn Coyne's blog: The Five Commandments of Storytelling in this case.

The Quant Innovation Mesh 1.1

I could not resist: if I have an idea, a theme, a scheme...and write about it, I run through a critical situation, if not a crisis…I make notes, analyze them and try to transform the result into a (hopefully better) release. That's how I work.

This is

The Quant Innovation Mesh 1.1  
For UnRisk Capital Manager
The Type - Time Type, Reality Type, Style Type, Structure Type, Matter Type
Realistic, Real-Time, Documentary, MultiFlow, Financial Risk Management, Market Risk, Valuation and Data Management Factory
Purpose  - What's the innovation for
      External Type
Risk Thriller (Expose Risky Horrors and manage them)    
      External Value at Use
Market opportunity creation by risk informed decisions
      Internal Type 
Empower small capital management firms (for better portfolio engineering and investment management)
      Internal Value at Use 
Fundamental quant work and automated risk services in one…represented as know-how packages 
      Point of View - what is the characteristic domain, scope and generality
Instrument risk, portfolio risk
      Objects of Desire
Conserve capital, set the stage for successful risk management, leverage technology, build quant finance skills
      Obligatory tasks and conventions 
Manage risk in a regulation and business framework: 1. Model validation and robust model calibration 2. Portfolio across market behavior simulation, VaR calculations, tail risk analysis… 3. Portfolio re-engineering and -construction… 4. Contribution to enterprise risk management 
Realization - How the innovation works and what it needs
      The controlling idea - approaches to simulate, control and risk manage complex systems
Build and profile the foundation, value deal types correctly, build smart portfolios, value nested portfolios across nested scenarios, calculate characteristic risk spectra, document them, help to transform them into insight for decisions    
      Hook.Build.Payoff  - How it transforms objects, controls processes and supports decisions

          Preprocessing - validate the foundation  

Set-up users, instruments, models and market data, valuation regimes...
Model, method and calibration risk (technological risk) may add toxicity to instruments (and even deal types)
Operational risk becomes horrible in interplay 
Change valuation regimes as a result of model-method scenarios 

          Processing - manage portfolio risk 

Set-up portfolios, scenarios, simulations…simulation regimes 
Select and apply the right risk analytics methodologies, meet the massive valuation and data management requirements...  
A portfolio shows unexpected riskiness with respect to certain market behavior, but it's difficult to quantify the influence of instruments, risk factors, observed periods, correlations…   
Comprehensive tests checking such influences…and help re-engineering the portfolio  
          Postprocessing - aggregate, visualize, report

Set-up a further, more global risk insight and reporting framework, like for cross business units, market segments or territory analytics.
Further analytics need additional data sets and consequently algorithmic treatment.    
Additional data may be uninformative related to the desired objectives and attribute more to noise than dependencies... 
Use data driven techniques (statistics, machine learning) or dynamic visualization to curate these data sets with respect to the purpose

Model risk quantification, "toxic" instrument detection. Portfolio risk dependencies and quantification. Risk impact on business decisions analytics. Real-time treatment of massive valuations and data. Full evidence and transparency. Explorative procedures.
     Conventions - about data and evaluation management, models and methods, programming paradigms, deployment services, performance...
Valuation and data management are twins. It's built multi-model and multi-method. It's a high performance system. It comes with a web front-end.
      Technologies needed
UnRisk CM is built of the UnRisk Technology Stack that needs the Wolfram Technology Stack. UnRisk Financial Language is an extension of the Wolfram Language implemented in UnRisk Engines (atop Wolfram gridEngines and proprietary algorithmic technologies).
Other quant innovations will have other phases and flows, but if they are of the risky horror type, they will need a kind of bottom-up approach for early error, anomaly, danger…detection and management.

The intention of the scheme is to provide insight into whether the innovation will work in principle. Later on, I'll go through some details…more on risk, the idea of fragility, why it's sometimes wrong to use probability, the danger of correlation, why simpler models are often better, why calibration does not always work…and why programming needs a new fashion, about parallelism and what platform agnostic means. All from the perspective: will it work and sell?

They will not have a special order, I'm afraid…and I'll inevitably create further releases of The Quant Innovation Mesh.

However, if you think I'll be able to support you with your quant innovation making and marketing, I'll be pleased. Contact me here.

Innovation Is About Change

The Innovation Mesh helps innovation marketers to answer some fundamental questions. In principle, whether the innovation works and sells. Will it be even a candidate for creating the classics of the future?

The Quant Innovation Mesh is a refined scheme an evaluator, marketer, investor...can walk a quantitative innovation through.

Does the innovation matter? Who cares?

I define innovation types to understand the positioning and approaches better - the innovation's relation to the external world and its internal approaches, conventions, methods and tools.

Time, Reality and Style Types give us a global view, and we take a deeper look into Structure and Matter Types to understand external and internal values and risks better…the point of view, objects of desire, world view, the controlling idea, the movement from the set-up to the solution, the payoff…in principle what it's for, how it works and what it needs. Guided by conventions, rules and recommendations…

The controlling ideas describe how the innovation meets the desires, objectives, (quantitative) requirements…From the controlling ideas we derive our key messages. If we understand the general what-and-how we can put the big takeaway into a few words.

We worked down The Quant Innovation Mesh and I gave some examples…I filled it out for UnRisk, but I know this is not enough. We will iterate it later. I used the metaphor of a thriller to explain what kind of innovations may be compelling and candidates for the classics of the future in their field…

But we shall never forget

Innovation is about change

Innovators want to change the underlying systems that are causing major problems of our lives and time…at a society, sector, institution, individual level

To manage a change you need to know what the reaction to it is. The first reaction is usually shock and denial, in a next phase a change may even cause a depression, before the change becomes accepted and will be integrated.

To make system changes more effective, you should act as
Analyst - analyze the system you want to change (quantitatively), but not too much
Nowist - start immediately after the idea and apply explorative, experimental and evolutionary prototyping
Communicator - organize feedback cycles with potential users, partners...
Connector - if you want to change systems you need connections and alliances
Risk Manager - to recover quickly from difficulties you want to make the change antifragile (especially to optimize the market risk)
Up to now, I have presented The Quant Innovation Mesh as a diagnosis tool, but it can also help the innovator to plan and create the innovation.

While I wrote these posts some new ideas came to my mind. I'm sure that I can make the mesh even simpler, maybe more focused…But this needs more brooding…

If you want to (re)read The Innovation Mesh posts, here they are.

It Must Have a Payoff

In Risky Horror I've written about the thrill of dealing with (operational) risk of complex systems. Especially the risk analytics part. Bad actors may generate risk by bad engineering, contra...or actions may be implemented in technologies that are not proven.

There are a few counterintuitive systemic aspects. Complex systems may be the result of the repeated application of a few simple building blocks, rules or programs. Think of words and natural language or Lego blocks empowering the construction of practically "everything". The selection of the good over the bad results needs context, constraints, production rules…

What makes a nuclear power plant so dangerous? It's complex and tightly coupled. The high complexity makes unintended events more probable, and the tight coupling reduces the time to react so drastically that there is little chance to manage it.

What keeps risk managers awake at night
The front to back responsibility - they need to understand all risk drivers. 
The analytical framework - issues of methodologies, modeling challenges, data and information, communications… 
Culture, government and relationship - the definition of risk "appetites", communication of risk profiles, a culture of risk-informed decision making
And remember, every innovation is risky itself; the following rules are recommended for quantitative innovations.
Recognize that models are needed
Acknowledge their limitations
Expect the unexpected
Understand actions and actors
Check the resources and infrastructure
Again, what is the use of extra information if it gets lost in the algorithmic jungle, if actors cannot master the complexity, if data are not informative enough…

We assume that our innovation has clear objects of desire - better uses, better products, better making, optimal risk…they are the big takeaways for its buyers.

But there's more:

A Payoff

Assessing a quantitative innovation, I nail down the major movements: the set-up, the problem, the crisis, the solution and the payoff. Does it have a beginning, middle and end? In quantitative innovations the payoff should be quantifiable.

In quantitative risk management systems, the payoff may be expressed as functions.

Simplified: it makes a big difference if you shut down a process, run it at low levels or re-adjust it to acceptable levels. 

Managing risk of complex systems needs 
A comprehensive set-up - users, elements, actions, models, internal and external data, scenarios, simulations, test procedures…
Problems and trap checks - validate models and methods, simulate the interplay and check the simulations in backtests, stress tests… 
Crisis prediction: identify elements, actions...whose interplay may drive the process towards a hazard 
A framework of solutions: keep all actors' actions evident over the system's life, provide critical case analytics and simulation, perform power law analysis…
A guiding payoff  - the recommendation of quantitative actions towards payoff functions
There are a lot of conventions that empower the realization of such a system movement. Organize things orthogonally, make valuation and data management twins, apply hybrid programming, make your innovation inherently parallel, apply advanced statistical methods and asymptotic mathematics, use Monte Carlo techniques…but most important, make the programming power behind your innovation available to your users…symbolic languages, implemented as engines, data frameworks, computational documents…
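One of the quantitative pieces of such a solution framework is tail analytics. A minimal sketch of expected shortfall, the average loss in the worst scenarios, with invented scenario numbers:

```python
def expected_shortfall(pnl_scenarios, confidence=0.95):
    """Average loss in the worst (1 - confidence) fraction of scenarios:
    unlike VaR, it looks *into* the tail rather than at its edge."""
    losses = sorted((-p for p in pnl_scenarios), reverse=True)
    n_tail = max(1, round(len(losses) * (1 - confidence)))
    tail = losses[:n_tail]
    return sum(tail) / len(tail)

# Illustrative scenarios: 95 small gains, 5 heavy losses in the tail.
scenarios = [10] * 95 + [-100, -120, -150, -200, -300]
print(expected_shortfall(scenarios))  # 174.0
```

Averaging over the tail instead of reading off a single quantile is one reason such measures pair naturally with power law and critical case analysis.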

Ok, a payoff is something I get for hard work. Risk management is hard work, and so is making tools in service of risk managers. It's maybe even harder.

So, it's not a bad idea to make it compelling.

Risky Horror

The thriller seems to be the story genre of our times - IMO, Risk Management seems to be the thrilling innovation type of our time. Financial risk management pushed it into our awareness. But it's indispensable for complex systems, in industries that process dangerous materials or need dangerous (automatic) processes…chemical reactors, oil platforms, power plants, heavy industry factories as prototypes…

Some call risk management the "art of security"…but, IMO, it should be quantitative and following a clear workflow.

Why risk analytics is thrilling

Everyone talks about risk, but only a few give serious thought to what it is. It's two-sided - with opportunities and danger - consequently, risk can be optimized.

Any business that transforms imagination, knowledge, products or services into margins, risks. On the business level it's Market Risk, the risk of losses in positions arising from market behavior (external factors). Market and credit risk are the major risk factors of financial industries, but I'll concentrate on operational risk (internal factors) here.

We all want to run our processes at the borders of speed, heat, reactivity…while strictly avoiding hazardous actions.

We need in-depth predictive modeling to forecast failures and consequences…hazards like explosion, fire, gas dispersion, dispersion of melt…uncontrollable reactions…

Hazards may harm individuals….kill a business...harm a society as a whole…

You need to identify the riskiness of your actors and actions, knowing that risk may become horrible in their interplay. You rely on models, but you also need to deal with uncertainty, even the unexpected…you run simulations to determine the possible outcomes…finally you make a decision.
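A sketch of the simulate-then-decide step: a Monte Carlo estimate of the probability that a controlled process crosses a hazard threshold. The dynamics, disturbance sizes and thresholds are entirely made up for illustration:

```python
import random

def hazard_probability(n_runs=2000, hazard_temp=930.0, seed=7):
    """Monte Carlo sketch: a process temperature with random disturbances
    and a simple proportional control; estimate the probability of
    ending a run above a hazard threshold. All numbers are illustrative."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_runs):
        temp = 900.0                        # setpoint
        for _ in range(100):                # 100 time steps per run
            temp += rng.gauss(0.0, 5.0)     # random disturbance
            temp -= 0.05 * (temp - 900.0)   # control pulls back to setpoint
        if temp >= hazard_temp:
            exceed += 1
    return exceed / n_runs

p = hazard_probability()
print(f"estimated hazard probability: {p:.3f}")
```

The decision then hinges on this number: shut the process down, run it at lower levels, or re-adjust it to acceptable levels.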

If the iron melts through the walls of a blast furnace, everybody better runs…risk managers should have uncompromisingly requested a predictive 3D model of the process with respect to the erosion of the refractory lining…and constantly re-calibrated its parameters to the actual process behavior.

Bad actors or actions…investigations…prevention…control

IMO, we are attracted by thrillers because we feel the world becomes increasingly chaotic and connectedness is not only for the good…it helps the villains to organize. A thriller shares elements of the Action, Horror and Crime genres.

And this is what the Matter Type Risk Management does. It expects a "crime" (a misbehavior of actors in the process, technical or human, whose actions lead to dangerous process behavior).

The innovation plays a powerful investigator, detecting the "crime" early and analyzing under which conditions it may become horrible…the investigator uses advanced feature recognition, modeling, simulation, stochastic calculus…techniques…the value of the whole effort is not only the avoidance of the (risky horror of the) "crime" but the corrective control of the process…

Like in many thrillers the investigator (hero) does not only fight villains and bad actions, he works against the clock. This is an important convention of a risk management system (Main Street or Wall Street). It must work in or beat real time.

Risk management has clear but ambitious goals and a clear mission. It can create high values, but its functional and operational complexity is driven by the complexity of the processes, flows…Thrilling.

The Quant Innovation Mesh

Remember, The Innovation Mesh is a diagnosis tool, not a recipe. It helps to identify problems and provide insight for better marketing. The Quant Innovation Mesh scheme shall be a one-page document that answers the core questions of an innovation marketer. It shall help to assess the effectiveness of a quant innovation. Whether it will work and sell.

The scheme shall span the scope and describe orientation, positioning and rough plans. To master the complexity, it's driven by conventions, rules and recommendations related to types and categories. But no, it's not a constructor.

The best architectural plan does not guarantee that the building won't be erected lousily. But if there are flaws, they may be corrected by hard work. If the architecture is lousy, however, top practitioners cannot make it attractive. And no marketer can sell it.

As an example, I fill out The Quant Innovation Mesh for the UnRisk Capital Manager.

The Quant Innovation Mesh 

The Type in one sentence - Time Type, Reality Type, Style Type, Structure Type, Matter Type
Realistic, Real-Time, Documentary, MultiFlow, Financial Market Risk Management, a HPC Valuation and Data Management Factory
      External Value / Risk
Risk-controlled, regulation-conform Investment Management / needs external information and intelligent set up
      Internal Value / Risk
Inherent parallelism, stable and robust mathematics, web deployment / increasingly massive valuations and data need to be managed
Purpose - what the quant innovation is for
Serve fund and risk professionals performing advanced risk management processes
      Point of View
Financial markets, Capital Management, Market risk management
      Objects of Desire
Conserving capital, set the stage for successful investment management, leverage technology, build quant finance skills
      Worldview, Status...
Arming David with affordable systems that are fit for the Goliaths
      Conventions, Rules... 
Risk needs to be managed in a strict regulatory framework. Instrument (especially derivative) and portfolio valuations are subject to laws. The regulatory regime sets cornerstones like counterparty rules, collateral management, initial margin rules...risk limit treatment, core capital requirements...
      Characteristic Workflow
Portfolio across scenario simulation  
Realization - how the quant innovation moves from a beginning to an end
      The controlling idea
It works like a valuation and data management factory, supports risk management cascades throughout the process...exposes risk from a single instrument to a complex portfolio...supports decision making for risk optimization.
      Hook.Build.Payoff  - how the innovation drives movements, solves problems and master critical situations…

Set-up users, instruments, portfolios and scenarios…value portfolios across model, market data, stress...scenarios for a variety of simulative risk tests…special simulations, VaR calculations across risk factors, historic or simulative scenarios... 
Model, method and calibration risk can become horrible in interplay; regime switches, like low interest rates, require model and method switches; VaR calculations are only as good as their backtests; simulations and risk tests depend on the set-up...
Toxic instruments have been identified; they contribute to unexpected severe risk limit breaks; portfolio risk may drive the core capital requirements up.
All actions and actors are evident over the whole lifetime of the system. It provides insight from the analysis of critical cases, expected shortfalls can be calculated and power law analysis of fat tail events can be performed. Portfolios can be re-engineered or re-configured easily…
Tons of resulting risk data can be individually post-processed and integrated into the overall enterprise workflow.
     Conventions - in general, combine computation and data management, use multi-model and method approaches, use programming languages that are perfect for the purpose.
UnRisk Valuation and data management are twins. Models and methods can be cross validated in scenarios. UnRisk applies hybrid programming…
      Techniques and technologies 
UnRisk CM is built of the UnRisk Technology Stack atop the Wolfram Technology Stack and proprietary technologies. The technologies are multi (programming) language, inherently parallel and platform agnostic. Most of UnRisk is programmed in UnRisk.
It's an exciting innovation that creates values for its clients and its makers by the ability to swiftly meet new requirements, performance criteria, deployment and integration needs.

I've filled some textual description into the scheme, but you could also put some keywords or even short quality scores.

However the criteria themselves need more explanation…this will be done in my next posts...

p.s. I've used a quant finance innovation because it's "thrilling". It has features of crime (even horror), (catastrophic) misuse and investigation…guilt and atonement...innocence...impacting individuals, institutions and even entire economies…but it introduces innovations (options and futures…) that could influence the real economy positively…at the instrument level as well as at the technology level…but Main Street and Wall Street do not love each other…

Innovation is About Change…I'll come back to this at a concluding post.

The Core Questions an Innovation Marketer Asks

To remind ourselves, why we do all that analysis, I put the core questions an innovation marketer may ask to identify problems and how to fix them.
  • What's the type?
  • What are the conventions, rules, state of the art for the type?
  • What's the point of view?
  • What are the desires, requirements, needs?
  • What are the controlling ideas, methods and technologies?
  • What are the Constructors, Builds and Payoffs?
  • What are the potentials for Co-creation or Co-marketing?
We all sense that power is shifting in the world. Goliaths are topped by Davids…new products and services range from networked systems to things that are cognitized (as things were electrified in the last centuries).

A new power also gains its force from people's growing desire and capacity to participate in the innovation beyond consuming, sharing, shaping, funding, making, co-owning…

This is how I work when helping develop a business for an innovation. Only when I deeply understand its purposes and principles can I apply marketing and sales approaches like solution-access-value-education marketing mixes, insight selling…

We now know enough to conclude our types for

UnRisk Capital Manager is Realistic, Real-Time, Documentary, MultiFlow, Financial Market Risk Management, (empowered by) a HPC Valuation and Data Factory

UnRisk Quant is Idealistic, Off-Time, Literary, SingleFlow, Financial Market, Credit and Model Risk Analytics, (empowered by) Financial Language implemented by multi-model, multi-method Engines

This makes it clear (to me) that a risk manager in a capital management firm clicks a button, or schedules a task, to quantify the risk of portfolios (funds), putting out the result before a certain time, whilst a quant may program a high-level cross-asset risk management process for, say, an institutional investor: checking out the best model-method combinations for the instruments, constructing portfolios, calculating VaRs, applying the rules for a central counterparty regime...

Remark: both products utilize the same technology stack, which includes the UnRisk Financial Language implemented by UnRisk Pricing, Calibration, Value at Risk, Counter Party Exposure Modeling…Engines, Data Frameworks, UnRisk Services, UnRisk Deployment Systems…

What may be surprising…a similar architecture has been successfully applied to a through-process modeling and quality management system in a completely different field: hot rolling of high-quality steel plates. It's about inline evaluation of the process properties that determine quality (risk). And, at the micro level, the underlying equations are not so different. The dominating point of view is Analytics and Control. That's it.

It's my strong belief that the coverage of The Innovation Mesh is broader, but in the next contributions I'll focus on quantitative applications where programming plays an important role.

It's my objective to present a scheme that helps to systematize an assessment. I call it The Quant Innovation Mesh. The scheme will not ask for scores, but it shall motivate thinking about the external and internal values and risks.

To do this we need to look a little deeper into the innovation's Wants, Hows and Needs. Is there a big Delivery? What are the Controlling Ideas? Does it offer a Set-Up, Build, Payoff movement?

Maybe, I'll sketch a scheme in the next post and go from there?

Innovation Energy

We've now taken a rough view into the Time, Reality and Style Types and dived into the three Structures…
Now I'll challenge the quality of the most critical type: Matter.

What does the innovation want?
How does it get it and what does it need?

An innovation needs energy to deal with problems of different complexity. There are three levels of problems: object building, object interaction and object positioning problems. From an external point of view, an innovation can solve problems but also create them - internal problems need to be solved.

Big wins often come from creating problems. Innovations that make a big change often create a problem before they solve it (think of fashion…).

The struggle to get objects doing new things in new ways is the "heart and soul" of an innovation. Focusing on that struggle may make the differential advantage.

So, the first job an innovator has to do is to specify what the innovation wants, how it is going to meet the requirements and what it needs.

Matter Types

Matter Types are divided into:

Purpose Types - what the innovation is for, what it drives. Innovations may want to improve global external values, extend and enrich (computational) domain knowledge for sectors...or enable better problem solving on various levels in various fields...

In the stone age of computing, we spoke of general- or special-purpose systems, assuming that there's a trade-off between coverage and automation depth.

But this is no longer valid. Cascades of methodologies and technologies enable systems that are broad and deep…automated solutions and development systems in one. This motivated me to take another view into the Matter Types.

The macro point of view - the expressional forms of societies

Education, R&D, Economy, Social Affairs, Culture, Health, Ecology, Security, Defense, Finance, International Affairs…

The meso point of view - domains, fields, sectors and their main functions

Sciences, Engineering, Public, Supply, Information, Communication, Entertainment…doing funding, design, making, logistics, admin, marketing…related to usability, quality, risk, attraction…

The micro point of view - the basics for getting better results

Presentation, analytics, prediction, control, simulation, optimization

Thinking in tasks, innovations may support society tasks, performance tasks, action tasks, presentation tasks…and there are some internal aspects like worldview (the changes the innovation wants to convey at a global level) and status (changes at a target-group level).

Status? At UnRisk, we have decided on Arming Davids - in a market of Goliaths able to justify huge spend on enterprise (risk) management, how are the numerous small capital management houses to meet regulatory pressures whilst remaining cost-effective?

Making big systems for the small follows a kind of Reverse Innovation (put enormous effort into making something for those who cannot pay much, and sell the innovation to the big later). Big systems mean: identical functionality, but with more moderate data volumes and, consequently, resource usage.

To make Reverse Innovation possible, you need the most advanced methodologies and technologies to start from.

Realization Types - how the innovation works and atop what systems it is built.

Characterize the architecture, designs, models, methods, critical implementations of your innovation and what technologies it utilizes.

Problem solving consists of using generic or ad hoc methods from mathematics, engineering, computer science, artificial intelligence…psychology…

An example of a critical decision: taking a mathematical view (abstract and re-concretize) or an engineering view (define and refine)…

In quantitative fields, techniques are found in symbolic and numerical computation, stochastic methods, logic methods, optimization, machine learning, data and algorithms, control theory...cellular automata…or, more generally, the theory of complex systems.

Quantitative problem solving is often characterized by reading (preprocessing) - computing (processing) - writing (postprocessing) cycles. This is typical for mathematical problem solving, where we transform a textual description into a model, make the model well-behaved for computation, calculate, and interpret the results.
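Such a cycle can be sketched in a few lines. The following is a deliberately trivial, made-up example in Python (not UnRisk code): the textual description is a JSON string, the model is a zero-coupon bond price, and the postprocessing step interprets the number.

```python
import json

# Preprocessing (reading): turn a textual description into model inputs.
description = '{"face": 100.0, "rate": 0.03, "years": 5}'
params = json.loads(description)

# Processing (computing): evaluate the model - here, discounting the
# face value with annual compounding.
def zero_coupon_price(face, rate, years):
    """Present value of a zero-coupon bond."""
    return face / (1.0 + rate) ** years

price = zero_coupon_price(params["face"], params["rate"], params["years"])

# Postprocessing (writing): interpret and report the result.
report = f"Present value of {params['face']:.2f} in {params['years']}y: {price:.2f}"
print(report)
```

Real cycles are of course far richer - the "make the model well-behaved" step (transformations, regularization, discretization) is usually where the mathematical craft lies.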

Consequently, it is helpful (even indispensable) to use platforms and tools that provide such problem solving techniques (and workflows). In my experience, platforms are beneficial if they support explorative learning as well as problem solving - they're fit for innovating in a bottom-up fashion.

Some problems are difficult, if not fiendish…Inverse problems are ill-posed. In general, creating data from models is easier than extracting models from data. Think of computer graphics vs. image processing.
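A toy numerical experiment (my own construction, assuming nothing beyond NumPy) makes this asymmetry visible: blurring a signal (the forward, "graphics" direction) is harmless, while un-blurring it (the inverse, "image processing" direction) amplifies even tiny noise unless the inversion is regularized.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 49
x_true = np.sin(np.linspace(0.0, np.pi, n))        # the "model" we want to recover

# Forward problem (easy): blur the signal with a repeated moving average.
T = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0  # local averaging operator
A = np.linalg.matrix_power(T, 8)                   # heavy smoothing = information loss
y = A @ x_true + rng.normal(0.0, 1e-3, n)          # blurred data with tiny noise

# Inverse problem (hard): naive inversion amplifies the noise enormously,
# because A is severely ill-conditioned.
x_naive = np.linalg.solve(A, y)

# Tikhonov regularization stabilizes the inversion at the price of some bias.
lam = 1e-4
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(f"naive error: {err_naive:.3g}, regularized error: {err_reg:.3g}")
```

The naive reconstruction is dominated by amplified noise, while the regularized one stays close to the true signal - the standard picture for ill-posed inverse problems, and one reason calibration (extracting models from market data) is so much harder than pricing.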

There are so many aspects that it needs aggregation and typology. It took me many years to roughly understand this and what the key points are.

In my evaluation work for the EC I was quite often confronted with a principal misunderstanding: this is a great innovation! But what's it for? For this and that! But how will it be exploited? (I find graphene a fantastic innovation. But after a time of enthusiasm...innovation analysts are asking: "What's it for?").

We can't master the full complexity of the innovation assessment (even when focusing on quantitative problems) without trying to map it into a mesh of types.

How they're built, what they're for, how they affect work, how users experience them, how real they are and how they synchronize with real-world...real-life behavior?

Next up…more about conventions and techniques for realizing innovations that are fit for purpose and optimize market risk. Will it be possible to write this all down on "one" page?