Progressive Problems

Last Wednesday, I posted about the first principle for creating the building blocks of our quant innovations: The Set Up.

After some further thinking, I'll call it "Constructors". In object-oriented programming, constructors create objects and prepare them for use. Our Set Up creates more general foundations with higher-level initializations, so I'd rather borrow the name from constructor theory, a new fundamental theory of physics developed by David Deutsch and Chiara Marletto.

According to constructor theory, the most fundamental components of reality are entities - constructors - that perform particular tasks, accompanied by a set of laws that define which tasks are possible for a constructor to carry out (from an article by Zeeya Merali, Scientific American).
A computer program controlling a factory is a constructor. The execution of the program at a specific factory determines whether the transformation of a set of raw materials into a desired finished product is possible…it's the aggregation of a cascade of programs controlling individual manufacturing systems, storage, transport, assembly, finishing...The involvement of a concrete factory…suggests postprocessing - the compilation of abstract manufacturing programs…into the individual control languages of machines, robots, vehicles…
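The cascade of programs above can be sketched as a minimal constructor/task abstraction. This is a toy illustration of the idea, not an implementation of constructor theory; all names are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A task transforms an input state into an output state."""
    name: str
    transform: Callable[[dict], dict]

class Constructor:
    """An entity that can perform its tasks repeatedly without being consumed."""
    def __init__(self, tasks):
        self.tasks = tasks  # a cascade of sub-tasks: machining, assembly, ...

    def perform(self, state: dict) -> dict:
        # The constructor merely orchestrates; each task produces
        # the transformed state.
        for task in self.tasks:
            state = task.transform(state)
        return state

# Toy cascade: raw material -> machined part -> assembled product
machine = Task("machine", lambda s: {**s, "machined": True})
assemble = Task("assemble", lambda s: {**s, "assembled": s.get("machined", False)})

factory = Constructor([machine, assemble])
product = factory.perform({"raw": "steel"})
print(product)  # {'raw': 'steel', 'machined': True, 'assembled': True}
```

Note that in this sketch it is the tasks that produce results; the constructor only drives them.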

The need to show that all manufacturing operations are possible touches an intrinsic problem…uninformative, unpredictable, perturbed…data may influence the precision of individual manufacturing tasks. They may be related to the individual machines, tools, materials…input or control parameters of individual models...This is the sword of Damocles hanging over tasks at all aggregation levels. It suggests (multi-level) preprocessing.

In the processing phase, the concrete programs - the constructors - need to replicate operation models, machine models, tool models…These models are abstract and need to be calibrated - their parameters identified from real-world objects - in order to control the precise manufacturing of other real objects.
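As a toy sketch of what calibration means here - identifying a model parameter from real-world measurements - consider an abstract machine model t = length / speed whose speed parameter must be fitted to observed (length, time) pairs. The model and the numbers are assumed for illustration only:

```python
def calibrate_speed(measurements):
    """Least-squares fit of t = length * (1/speed) to (length, time) pairs.

    Minimizing sum((t - k*length)^2) over k = 1/speed gives
    k = sum(length*t) / sum(length^2).
    """
    num = sum(l * t for l, t in measurements)
    den = sum(l * l for l, _ in measurements)
    return den / num  # speed = 1/k

# Noisy observations of a machine that actually runs at 2.0 units/s
data = [(10, 5.1), (20, 9.9), (40, 20.2)]
speed = calibrate_speed(data)
print(round(speed, 2))
```

The calibrated parameter is then fed back into the abstract model so the constructor can control the real machine.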

This is all simplified: in reality the control of discrete manufacturing tasks is quite complicated and complex.

IMO, the example shows that constructor theory is a perfect framework for replicating the laws of the real world in computational form.

Constructors do not return results. Tasks do. And trigger

Progressive Problems

at all levels, related to the degree of interdependence…However the tasks are organized - sequential, parallel…hierarchical - low-level problems will not fade but become horrible at higher levels.
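A back-of-the-envelope illustration of this escalation (the numbers are assumed, purely illustrative): if each of n sequential tasks introduces a small relative error, the errors compound rather than average out:

```python
def compounded_error(per_task_error: float, n_tasks: int) -> float:
    """Relative error after n sequential tasks, each adding the same
    relative error multiplicatively: (1 + e)^n - 1."""
    return (1 + per_task_error) ** n_tasks - 1

# A harmless 1% error per task becomes ~10% after 10 tasks
# and ~170% after 100 tasks.
for n in (1, 10, 100):
    print(n, round(compounded_error(0.01, n), 3))
```

The low-level problem did not fade; it became horrible at the higher aggregation level.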

How do we know that a quant innovation isn't working?

We should look at it through the lens of a crisis manager.

If a wrong model was selected, it will not approximate the real behavior well enough. If the model was correct but the method to evaluate it was lousy, it's even worse, because its behavior mimics spurious correctness. If model and method were correct in some partitions, they may be drastically wrong in others. They may be correct in all desired partitions, but the calibration may identify the parameters wrongly (calibration is an ill-posed problem!). Provided the model-method pair is correct and the calibration is correct, the results may still be bad…The model may have been calibrated to uninformative data…and so on…and so on.
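The ill-posedness of calibration can be made concrete with a deliberately degenerate toy model (an assumed example, not a real pricing model): two drastically different parameter sets fit the data equally well, so a perfect fit says nothing about the correctness of the identified parameters.

```python
def model(a, b, x):
    # Degenerate model: a and b enter only through their sum,
    # so the data can never identify them separately.
    return a * x + b * x

data = [(1.0, 3.0), (2.0, 6.0)]   # generated with a + b = 3

def fit_error(a, b):
    """Sum of squared residuals of the model against the data."""
    return sum((model(a, b, x) - y) ** 2 for x, y in data)

print(fit_error(1.0, 2.0))      # 0.0 - a perfect fit
print(fit_error(100.0, -97.0))  # 0.0 - equally perfect, absurd parameters
```

Real calibrations are rarely this degenerate, but near-degenerate ones behave the same way: the fit looks fine while the identified parameters drift arbitrarily.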

Problems occur progressively - they may become horrible in interplay…concurrently or over time.

In quant finance, we often need ten million single-instrument valuations to get a few risk figures…
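A hedged Monte Carlo sketch of that aggregation (toy valuation model, assumed scenario and instrument counts): many single-instrument valuations collapse into one risk figure, so any systematic valuation error propagates invisibly into it.

```python
import random

random.seed(42)

def value_instrument(shock: float) -> float:
    # Toy valuation: linear exposure of a 100-unit position to a market shock
    return 100.0 * (1.0 + shock)

n_scenarios, n_instruments = 1000, 100   # 100,000 valuations ...
pnl = []
for _ in range(n_scenarios):
    shock = random.gauss(0.0, 0.02)
    portfolio = sum(value_instrument(shock) for _ in range(n_instruments))
    pnl.append(portfolio - n_instruments * 100.0)

# ... condensed into a single risk figure: the 99% value-at-risk
var_99 = -sorted(pnl)[int(0.01 * n_scenarios)]
print(round(var_99, 1))
```

One wrong valuation model here does not show up as one wrong number; it contaminates the single risk figure that everything else depends on.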

That all boils down to one name: operational risk. IMO, it's more a danger than a risk, because it usually has no upside. Ten rights can lead to a wrong, but one wrong may destroy the system. The innovation must take measures to avoid any instance of operational danger - uncompromisingly.

In any field, the worst innovation is the one that gives its users false comfort about its adequacy, accuracy…instead, it should disclose its assumptions, oversights...

My experience in quantitative innovation has taught me that innovators should be humble in applying complicated and complex systems and offer black boxes only if there is no doubt about their correctness.

Assessing how the innovation treats progressive problems

How do I do that?

I track the innovation's possible progressive problems from the bottom up…exposing the escalating degrees related to its purpose and objects of desire.

No, I do not inspect code; I look at how the tasks and the system work...try to act as a stressor, test extreme cases…

I need to understand in which workspace the innovation works and in which it doesn't, what measures it takes to avoid the model-method-calibration-uninformative-data traps, how its key objects and their representations are organized...whether it offers enough scenario and what-if simulations…whether it uses generic technologies inducing a proven design…

The escalation may lead to a crisis, but this is just the climax of progressive problems. Therefore, I'll merge Problems and Crisis into Progressive Problems.

I'll change my scheme and reduce the principles to
  1. Constructors
  2. Progressive Problems
  3. Solutions
applied to preprocessing - processing - postprocessing.

The Quant Innovation Mesh 2 will be posted in the near future - I'm surprised myself at how simple it may become.

Next, I'll go deeper into "Solutions".