http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
Interesting that it all comes down to this, in a sense.
The real question is this: how do we learn from this, and put in place safeguards against this kind of thing happening again? Clearly the real problem is that the model was applied without regard for its limitations, so it's not clear that better models are the right answer here.
hm
Date: 25 February 2009 14:17 (UTC)
They didn't know, or didn't ask.
So what I found kind of astounding here was the extent to which the widespread use of this expression actually prevented people from trying to assess risk at all. Far from not knowing how, the situation seems to have come about because, in a chicken-and-egg kind of way, everyone was basing their numbers on market prices which are based on the premise that someone else had run the numbers properly. So it's like playing the Nose Game with everyone's retirement savings, and watching a lot of bankers yell "NOT IT!" all at once.
But I mean, essentially we know how to do this; or anyway, empirically we have only one sensible route: look at historical data on correlations. Which supposedly doesn't exist. This is, after all, how Nate Silver gets such great predictions for his elections: he looks at how all the states varied together as far back as the world keeps records. Anyone who started collecting this kind of information could sell it at a very high price. Of course there is still the possibility that the world will do things you didn't predict, because it's so strongly coupled. But not making even a rough, reasonable estimate has got to be a mistake.
Re: hm
Date: 25 February 2009 17:24 (UTC)
(no subject)
Date: 25 February 2009 17:24 (UTC)
Here's my take on the reasons this turned out badly.
(1) The correlation measures that they had were implicitly conditioned on externalities (e.g., the state of the housing market) which turned out not to be constants.
(2) Insufficient attention was paid to assigning appropriate values to the "rocks fall, everyone dies" scenarios: probability, magnitude, and scope (e.g. secondary and other cascading effects).
(3) The model does not itself take into account the fragility implied by its universal use.
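Point (1) can be made concrete with a toy simulation (all thresholds and parameters here are invented, not anything from the actual models): two borrowers share exposure to a common housing factor, and a correlation or joint-default rate measured while that factor holds steady badly understates what happens when it moves.

```python
# Toy illustration of (1): statistics conditioned on an externality
# (here, "housing") holding steady break when that externality shifts.
# All parameters are invented for illustration.
import random
from statistics import mean

random.seed(0)

def simulate(housing_shock, n=10_000):
    """Paired default indicators for two borrowers exposed to the same
    housing factor plus independent idiosyncratic noise."""
    a, b = [], []
    for _ in range(n):
        h = housing_shock + random.gauss(0, 1)        # shared factor
        a.append(1 if h + random.gauss(0, 1) > 2.5 else 0)
        b.append(1 if h + random.gauss(0, 1) > 2.5 else 0)
    return a, b

calm_a, calm_b = simulate(housing_shock=0.0)  # stable housing market
bust_a, bust_b = simulate(housing_shock=2.0)  # the "constant" moves

# Joint-default frequency in each regime: rare while housing is calm,
# common once the shared factor shifts.
calm_joint = mean(x * y for x, y in zip(calm_a, calm_b))
bust_joint = mean(x * y for x, y in zip(bust_a, bust_b))
```

A model calibrated entirely on the calm regime would see `calm_joint` and conclude joint defaults are nearly impossible.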
I don't really see that the use of the expression _prevented_ anyone from trying to assess risk; it just made people think that they didn't really _need_ to.
(As for Nate Silver, state correlations are just one aspect of that model, of course--but it's true that picking good features for machine learning/statistical models is often the hardest part.)
Re: insert subject header here
Date: 25 February 2009 18:08 (UTC)
This, and your (3) above, is what I was getting at. It "prevented" them in the sense that it made them feel as though independent risk assessment was unnecessary, so they didn't do it. But one of the implicit assumptions of the model is that market prices are a fair indicator of the risk. Unless someone in the economy has actually based the market value of a commodity on historical measurements, the price rests entirely on smoke and mirrors (other people's perceptions of value) and becomes decoupled from reality. It seems related to the "tragedy of the commons" in this way: the responsibility for doing something of crucial value to society is shifted to someone else, but since nobody else has volunteered, nobody actually does it.
In the case of something like art or designer clothing, perceptions may be an accurate measure of the value; less so where e.g. historical authenticity is important. But in the case of a bet on the outcome of some event which has an objective state that can be measured, like whether people in a particular community or income bracket do or don't have enough money to pay their mortgages, well...
There are all kinds of other caveats, obviously, when someone tries to do things more rigorously. And maybe the point is that the sub-tranching and meta-tranching of these derivatives makes it much harder to do a rigorous job, and then when you make bets on what happens within this tightly coupled system you're likely to get the wrong answer. That may also be. But part of what I got from this article is that the quants took this thing as carte blanche to not even try, and moreover as carte blanche to create infinitely more complex things than could ever possibly be priced in a more traditional way -- because the risk assessment in the model was based on everyone else's perceptions of risk, rather than on any reasonable empirical calculation of the risk.
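To see why the correlation input matters so much once you start tranching, here is a toy one-factor simulation (a common simplification in this area, not Li's actual pricing formula, and every parameter is invented): the same pool of loans yields a senior tranche that looks nearly riskless under a low correlation assumption and gets hit regularly under a high one.

```python
# Toy sketch (not Li's actual formula; all parameters invented) of how
# the correlation input drives senior-tranche risk in a loan pool.
import math
import random

random.seed(1)

def senior_hit_rate(rho, trials=5_000, n_loans=100,
                    p_default=0.05, attach=0.10):
    """One-factor model: loan i defaults when
    sqrt(rho)*M + sqrt(1-rho)*Z_i falls below the threshold implied by
    p_default. Returns how often pool losses exceed the senior
    tranche's attachment point."""
    t = -1.6449  # approx 5th percentile of N(0,1), matching p_default
    hits = 0
    for _ in range(trials):
        m = random.gauss(0, 1)  # common factor shared by every loan
        losses = sum(
            1 for _ in range(n_loans)
            if math.sqrt(rho) * m + math.sqrt(1 - rho) * random.gauss(0, 1) < t
        )
        if losses / n_loans > attach:
            hits += 1
    return hits / trials

# Same loans, same individual default probability -- only the assumed
# correlation differs between the two runs.
low, high = senior_hit_rate(rho=0.05), senior_hit_rate(rho=0.50)
```

Each individual loan is identical in both runs; only the correlation assumption changes, and with it the supposedly safe tranche's risk. Which is exactly why feeding a market-implied correlation back into everyone's model is so dangerous.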
And that, if true, seems to me a clear dereliction of duty by the financial sector and grounds for tarring and feathering them all.
in case it's not clear
Date: 25 February 2009 18:13 (UTC)