http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
Interesting that it all comes down to this, in a sense.
The real question is this: how do we learn from this, and put in place safeguards against this kind of thing happening again? Clearly the real problem is that the model was applied without regard for its limitations, so it's not clear that better models are the right answer here.
hm
Date: 25 February 2009 14:17 (UTC)
They didn't know, or didn't ask.
So what I found kind of astounding here was the extent to which the widespread use of this expression actually prevented people from trying to assess risk at all. Far from not knowing how, the situation seems to have come about because, in a chicken-and-egg kind of way, everyone was basing their numbers on market prices which are based on the premise that someone else had run the numbers properly. So it's like playing the Nose Game with everyone's retirement savings, and watching a lot of bankers yell "NOT IT!" all at once.
But I mean, essentially we know how to do this, or anyway, empirically we have only one sensible route: look at historical data on correlations. Which supposedly doesn't exist. This is, after all, how Nate Silver gets such great predictions for his elections: he looks at how all the states varied together as far back as the world keeps records. Anyone who started collecting this kind of information could sell it at a very high price. Of course there is still the possibility that the world will do things you didn't predict because it's so strongly coupled. But not even making any kind of reasonable estimate has got to be a mistake.
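For what it's worth, the "look at historical data" approach is not exotic at all; it's a plain sample correlation over paired observations. A minimal sketch (the default-rate numbers here are made up purely for illustration, not real mortgage data):

```python
import math

# Hypothetical yearly default rates for two mortgage pools
# (illustrative numbers only, not real data).
pool_a = [0.010, 0.012, 0.011, 0.015, 0.030, 0.055]
pool_b = [0.009, 0.011, 0.012, 0.014, 0.028, 0.060]

def pearson(xs, ys):
    """Sample Pearson correlation from paired historical observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

rho = pearson(pool_a, pool_b)
```

The hard part was never the arithmetic; it was that nobody had assembled the underlying series, so the formula's inputs came from market prices instead.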
Re: hm
Date: 25 February 2009 17:24 (UTC)
Here's my take on the reasons this turned out badly.
(1) The correlation measures that they had were implicitly conditioned on externalities (e.g., the state of the housing market) which turned out not to be constants.
(2) Insufficient attention was paid to assigning appropriate values to the "rocks fall, everyone dies" scenarios: probability, magnitude, and scope (e.g. secondary and other cascading effects).
(3) The model does not itself take into account the fragility implied by its universal use.
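Point (1) is easy to see in a toy simulation. Suppose two loans default when a shared market factor plus idiosyncratic noise crosses a threshold; in a stressed regime the shared factor dominates and defaults cluster. A correlation estimated while conditions were calm then badly understates joint defaults once the regime shifts. (All parameters below are illustrative assumptions, not calibrated to anything.)

```python
import random

random.seed(0)

def simulate(n, shared_weight):
    """Toy one-factor default model.

    Each loan's latent value mixes a shared market factor and its own
    noise; a default occurs when the value falls below -1.0.
    Returns (joint default rate, single-loan default rate).
    """
    joint = singles = 0
    for _ in range(n):
        m = random.gauss(0, 1)  # shared (market) factor
        a = shared_weight * m + (1 - shared_weight) * random.gauss(0, 1)
        b = shared_weight * m + (1 - shared_weight) * random.gauss(0, 1)
        da, db = a < -1.0, b < -1.0
        joint += da and db
        singles += da + db
    return joint / n, singles / (2 * n)

# Calm regime: shared factor is weak, defaults nearly independent.
calm_joint, calm_rate = simulate(100_000, shared_weight=0.2)
# Stressed regime: shared factor dominates, defaults cluster.
stress_joint, stress_rate = simulate(100_000, shared_weight=0.9)
```

Individual default rates barely move between the two runs, but the joint rate jumps by roughly an order of magnitude, which is exactly the "conditioned on externalities that stopped being constant" failure.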
I don't really see that the use of the expression _prevented_ anyone from trying to assess risk; it just made people think that they didn't really _need_ to.
(As for Nate Silver, state correlations are just one aspect of that model, of course--but it's true that picking good features for machine learning/statistical models is often the hardest part.)