I've just decided to introduce a new topic, "telegrams": short, spontaneous notes giving my opinion on things that touch me.
I read this article in Wired and want to add (after 25 years of first-hand machine learning and second-hand inverse problems experience): inverse problems, like any "recognition" problem, are ill-posed. Extracting structured information from a "flat" representation is very difficult: going from model to picture is easy, going from picture to model is not.
It's not that the algorithms need to be "flawed": the difficulty is built into the problem itself.
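To make the ill-posedness concrete, here is a minimal toy sketch (my own illustration, not from the Wired article, with made-up numbers): the forward map from "model" to "picture" throws information away, so two very different models can produce exactly the same picture, and no algorithm, however clever, can undo that loss.

```python
# Toy illustration of ill-posedness (illustrative only): a forward operator A
# maps a 10-parameter "model" to a 3-value "flat picture". Because A has a
# non-trivial null space, different models yield the same picture.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 10))                 # model -> picture (loses information)

model_a = rng.normal(size=10)

# Build a second model that differs only along a direction the picture cannot see.
null_space = np.linalg.svd(A)[2][3:]         # rows spanning the null space of A
model_b = model_a + 5.0 * null_space[0]

picture_a = A @ model_a                      # model -> picture: easy, unique
picture_b = A @ model_b

print(np.allclose(picture_a, picture_b))     # True: the pictures are identical
print(np.linalg.norm(model_a - model_b))     # ~5.0: the models are very different
```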
This is the reason why I never applied even the cleverest machine learning algorithm to analyze human data. Never. In technical, economic, ... systems a little misinterpretation may have technical, economic, ... consequences. It's a risk we accept: we need to solve inverse problems there, otherwise we could not calibrate and recalibrate our models. But we must be careful with extracting models from data without context.
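And for what I mean by "calibrate and recalibrate" through inverse problems, a rough, purely illustrative sketch (toy forward model, made-up numbers): the naive inverse of an ill-conditioned forward model amplifies measurement noise enormously, while a regularized calibration stays stable, but only because we put prior knowledge about the system, i.e. context, back in.

```python
# Purely illustrative: calibrating a 10-parameter model of a "technical system"
# from 50 noisy measurements. The forward operator is ill-conditioned, so the
# naive inverse explodes, while Tikhonov regularization keeps it stable.
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned forward operator with singular values spanning 9 decades.
U, _, Vt = np.linalg.svd(rng.normal(size=(50, 10)), full_matrices=False)
s = 10.0 ** -np.arange(10)
A = (U * s) @ Vt

true_params = rng.normal(size=10)
data = A @ true_params + 0.01 * rng.normal(size=50)   # noisy measurements

naive = np.linalg.lstsq(A, data, rcond=None)[0]        # unregularized inverse
lam = 1e-4                                             # regularization strength (chosen by hand)
regularized = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ data)

print(np.linalg.norm(naive - true_params))        # huge: noise blown up by tiny singular values
print(np.linalg.norm(regularized - true_params))  # moderate: stable but biased estimate
```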
Algorithms are great in so many fields, but don't analyze human data for "identity" or "behavior recognition" automatically. One day, an algorithm will misinterpret the analyzers themselves...