A function F is said to be linear if, for all inputs x and y and every scalar s, it satisfies
(i) F(x+y) = F(x) + F(y), and
(ii) F(sx) = sF(x).
Heuristically, a linear function satisfies a generalized distributive property.
A function G(x) is called affine if G(x)=F(x)+c, where F is linear and c is a constant. Mathematicians and the general public often apply the term linear to any affine function. History is probably on the side of this broader usage, as the graph of an affine function (of one variable) is a line.
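For the concretely minded, here is a quick numerical sketch (in Python; the helper name and sample points are mine, purely for illustration) that tests properties (i) and (ii) on a few one-variable functions and flags the affine-but-not-linear case.

    # A minimal check (hypothetical helper name) of properties (i) and (ii)
    # on a finite set of sample points.
    def looks_linear(F, samples, tol=1e-9):
        """Return True if F(x+y) == F(x)+F(y) and F(s*x) == s*F(x) on the samples."""
        for x in samples:
            for y in samples:
                if abs(F(x + y) - (F(x) + F(y))) > tol:
                    return False   # additivity (i) fails
            for s in samples:
                if abs(F(s * x) - s * F(x)) > tol:
                    return False   # homogeneity (ii) fails
        return True

    samples = [-2.0, -0.5, 0.0, 1.0, 3.0]
    print(looks_linear(lambda x: 3 * x, samples))      # True: linear
    print(looks_linear(lambda x: 3 * x + 1, samples))  # False: affine, not linear
    print(looks_linear(lambda x: x * x, samples))      # False: not even affine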
Mathematicians love linear functions because they are so easy to analyze. An entire semester-long college course is typically devoted to their study. Moreover, the object of a differential calculus course is to study the approximation of functions by linear functions. Crudely, a function is differentiable if, in sufficiently small regions, it is well approximated by affine functions (the requisite affine function depending on the small region in question).
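To see the crude statement in action, here is a small illustrative sketch (the choice of f(x) = sqrt(x) and the base point a = 1 are mine) comparing a function with its affine approximation on shrinking regions around a point; the worst error collapses much faster than the region does.

    # The affine approximation of f(x) = sqrt(x) at a = 1 is
    # L(x) = f(a) + f'(a) * (x - a); watch the worst error shrink with the region.
    import math

    a = 1.0
    f = math.sqrt
    fprime = 0.5 / math.sqrt(a)                  # derivative of sqrt at a

    def affine(x):
        return f(a) + fprime * (x - a)

    for h in (1.0, 0.1, 0.01, 0.001):
        worst = max(abs(f(a + t) - affine(a + t)) for t in (-h, -h / 2, h / 2, h))
        print(f"region half-width {h}: worst error {worst:.2e}")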
We spend several years in elementary school teaching children that multiplication satisfies properties (i) and (ii). We then spend several years in high school (occasionally spilling over into college) convincing students that most functions are not linear. Our success in this latter effort is clearly limited. For example, we often see data analysis accompanied by a computation of the affine function that best approximates the data, even when there is no theoretical or experimental reason to suspect the underlying phenomenon is described by a linear function. In fact, in most natural systems, linearity seems highly unlikely. Remember, the graph of a one-dimensional affine function is a line marching onward in perpetuity. If you do not think that a stock price or a population of herring is likely to increase (or decrease) at a constant rate forever, then a linear model is clearly ruled out. Nonetheless, our affinity for linearity is extremely strong, as demonstrated by our tendency to see straight rows in scatter-planted corn fields.
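A toy version of the practice I have in mind, with made-up exponential data and numpy's least-squares polyfit standing in for whatever the analyst actually uses: the best affine approximation is trivially computed, but computing it says nothing about whether a line is the right model.

    # Made-up "population" data from an exponential process, then the
    # best-fit affine function via least squares (numpy.polyfit, degree 1).
    import numpy as np

    t = np.arange(10, dtype=float)
    population = 100.0 * np.exp(0.3 * t)          # clearly nonlinear data

    slope, intercept = np.polyfit(t, population, 1)
    print(f"best affine fit: population ~ {slope:.1f} * t + {intercept:.1f}")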
I was drawn to think about our bias toward linearity by a WSJ article, "Balanced Budget vs. the Brain." The focus of the article is the putative irrationality of the average economic actor, who fears risk more than he values gain. The author cites the psychologists Daniel Kahneman and Amos Tversky, who asked students to bet on coin flips:
"If the coin landed on heads, the students had to pay the professors $20. Messrs. Kahneman and Tversky wanted to know how big a potential payoff their students would demand for exposing themselves to this risk. Would they accept a $21 payoff for tails? What about $30?
If the students were rational agents, they would have accepted any payoff larger than the potential $20 cost. But that's not what happened. Instead, the psychologists found that, when people were asked to risk $20 on the toss of a coin, they demanded a possible payoff of nearly $40."
Question: why is it rational to regard a $20 gain as worth the risk of a $20 loss? If you own one house (and do not have a large bank account), is it a rational gamble to risk it in an even bet in exchange for an additional house? Are we irrational if we do not view the prospect of owning two houses to be worth the risk of becoming homeless? It is easy to construct numerous examples of this kind. I think the correct interpretation of the study is the pair of observations:
(1) the value we place on assets is not a linear function of their dollar value, and
(2) Jonah Lehrer, the author of the WSJ piece, has an irrational expectation that linearity is ubiquitous in our highly nonlinear world.
I bet (but not $20) that if the students were risking a 20-cent loss, their acceptable payoff would be closer to 20 cents. This is an expression of my irrational expectation that differentiability can frequently be found in our highly nonsmooth world.
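To make that guess slightly more concrete, here is a toy calculation under an assumed concave (logarithmic) utility of wealth and an entirely hypothetical $100 bankroll; the exact numbers depend on both assumptions, but the qualitative pattern is the point: the break-even payoff for a $20 risk carries a visible premium, while a 20-cent risk is priced almost exactly at face value.

    # With u(w) = log(w), indifference between keeping w and flipping the coin
    # means 0.5*log(w - L) + 0.5*log(w + G) = log(w), hence G = w*L / (w - L).
    import math

    wealth = 100.0                                   # hypothetical bankroll
    for loss in (20.0, 0.20):
        payoff = wealth * loss / (wealth - loss)     # closed-form break-even payoff
        lhs = 0.5 * math.log(wealth - loss) + 0.5 * math.log(wealth + payoff)
        assert abs(lhs - math.log(wealth)) < 1e-9    # sanity check of the algebra
        print(f"risk ${loss:.2f} -> break-even payoff ${payoff:.4f}")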
As the French are wont to say, "Vive la nonlinearity."
Sunday, March 13, 2011