Sunday, March 27, 2005

 

A Trust Fund Parable

You have a new baby. Congratulations! In 18 years, your young genius will surely attend a prestigious college. Being certain of this future expense, and being the prudent financial planner you are, you resolve to put a small amount of cash into a drawer each month for the next 18 years. You expect that, 18 distant years from now, the accumulated funds will reduce what you will have to pay out of your running budget while your child attends college.

Ten years on, you need to replace your aging car, and, while you certainly qualify for a car loan, you actually have enough money in the drawer to buy a new car outright. Congratulations on your diligent saving! Raiding the trust fund for the car is actually prudent, you reason, since you would have to pay interest on the loan, while the money in the trust fund is interest-free. Of course, you realize, the trust fund will have to be re-paid, so you replace any cash you take out of the drawer with a piece of paper on which you have written the amount of your withdrawal.

After 18 years of diaper-changing, temper tantrums, piano lessons, homework, school dances and test prep courses, your young genius is indeed accepted into a prestigious college. You open the drawer and find... a lot of pieces of paper.

Is the trust fund real? Don't be too hard on yourself. You love your child and you're good for the money, so certainly in that sense the trust fund is real. On the other hand, you didn't achieve your goal of reducing the money that must come out of your running budget during the next four years to pay for your child's college education. Whether you prefer to think that you are paying the college directly, or that you are paying back the trust fund which in turn pays the college, won't make an iota of difference in the amount that must now come out of your paycheck each month.
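If it helps to see the parable in numbers, here is a minimal sketch with made-up figures (ignoring interest and tuition inflation):

    # Hypothetical figures: $100/month saved for 18 years, tuition $40,000/year.
    monthly_saving = 100
    months = 18 * 12
    total_tuition = 4 * 40_000

    drawer_if_cash = monthly_saving * months   # $21,600 actually in the drawer
    drawer_if_ious = 0                         # paper promises are not cash

    # Out-of-pocket cost from the running budget in each case:
    print(total_tuition - drawer_if_cash)   # 138400: the cushion helped
    print(total_tuition - drawer_if_ious)   # 160000: every dollar, as if you
                                            # had never saved at all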

Democrats and Republicans love to argue about whether the social security trust fund is real. As my parable illustrates, their arguments are entirely beside the point. Receipts from the payroll tax have, for quite some time now, exceeded social security payments to retirees, and are expected to continue to do so until about 2015. The idea was that, by levying a larger payroll tax than was required to pay retirement benefits before 2015, we would reduce the taxes that would have to be levied after 2015 to pay for the glut of baby boom retirees.

But that excess cash wasn't put into a vault. And indeed, it wouldn't have been fiscally prudent of the government to leave it sitting in a vault while issuing interest-paying bonds to cover large budget deficits. (What would really have been prudent is for the government to lend the excess cash to interest-paying debtor nations while not running large budget deficits. But that is water under the bridge...) So the excess cash was spent, but carefully accounted for. The social security trustees know how much the government owes them, and they are counting on being paid back. If they were not paid back, a significant tax increase would be required to make up for the missing funds. But since the government doesn't have the money in a vault either, paying them back will require a significant tax increase too.

Of course, a taxpayer couldn't care less whether he is taxed by the social security trustees or taxed by the government in order to pay the social security trustees. In either case, the only way we can pay retirees their promised benefits is to raise taxes just as much as if we hadn't saved at all. I'm not saying we aren't good for it -- given current demographic trends, and comparing the voting records of old and young people, I'm pretty sure we are going to be good for it. I'm just saying that we haven't succeeded in our ostensible goal of creating a cushion that would have allowed us to make good on our promises without raising taxes.

Stay tuned for more retirement financing conundrums.

Sunday, March 20, 2005

 

Mind your Sigmas and Mus

Recently, Larry Summers, the president of Harvard University, has found himself in hot water for suggesting that the under-representation of women in the highest echelons of science might be due to innate biological differences between the sexes. In the past, others have been similarly chastised for suggesting that the over-representation of blacks in the highest echelons of many professional sports might arise from innate biological differences among races.

The most amusing aspect of these debates is that almost all the participants are completely off the mark. How different groups perform at a given task on average is entirely irrelevant to predicting which group will contribute the most to that tiny portion of the population who are the very best at the task. Whether men or women are, on average, better at science has no effect on whether the best scientists are men or women. Whether blacks or whites are, on average, better at competitive athletics has no effect on whether the best competitive athletes are black or white. How can this be?

Below is a picture of the bell curve, named for its shape, beloved by statisticians and bemoaned by test-takers everywhere.

[Figure: a bell curve.]

It is a picture that shows how often a particular value will be measured. When the line over a value is high, that value is measured often; when the line over a value is low, that value seldom occurs. The shape of the curve, which fits the measured distributions of many attributes and test scores remarkably well, says that medium values occur often, while higher and lower values occur less frequently. Very high and very low values occur, of course, very infrequently.

Most of the area under a bell curve, representing the great mass of the population, lies near the middle value, called the average or mean. The very highest values, obtained by the people who are the very best at the measured skill, are represented by the small area under the far-right tail of the bell curve.

Before we proceed to determine who dominates that far-right tail, you must know that you need two numbers to characterize a bell curve. First, of course, you need to know the middle value, which statisticians represent by the Greek letter μ (mu). But you also need to know how spread out the values are around the middle; visually, that corresponds to how wide or narrow the bell curve is. Statisticians call the width of a bell curve its standard deviation, and represent it by the Greek letter σ (sigma).
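For the record, these two numbers are all there is: the bell curve is completely described by the standard Gaussian formula, in which μ and σ appear explicitly:

    f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)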

[Figure: bell curves with the same mean but different standard deviations.]

Take, for example, measurements of IQ. The average IQ is μ=100, and the standard deviation σ=15. A score over 115 (one standard deviation above the mean) will be measured for roughly one person in six. If instead we had σ=5, a score over 115 (now three standard deviations above the mean) would be measured for only about one person in 700. As σ decreases, the bell curve becomes narrower and it becomes more difficult to get a score far from the mean. Conversely, as σ increases, the bell curve becomes wider and it becomes easier to get an extreme score.
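Those odds are easy to check; here is a quick sketch using scipy (assuming, of course, that you trust the Gaussian fit, which real IQ data only approximates):

    # Tail probabilities for a score above 115 under two bell curves.
    from scipy.stats import norm

    p_wide = norm.sf(115, loc=100, scale=15)    # one sigma out
    p_narrow = norm.sf(115, loc=100, scale=5)   # three sigmas out

    print(f"sigma=15: {p_wide:.4f}, about 1 in {1/p_wide:.0f}")      # 1 in 6
    print(f"sigma=5:  {p_narrow:.5f}, about 1 in {1/p_narrow:.0f}")  # 1 in 741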

[Figure: pink and blue bell curves; blue has the higher mean, pink the wider spread.]

Now consider two different groups of people, which we will call pink and blue. Suppose we measure the distributions of some characteristic in the two populations, and get the bell curves shown above. I don't know what the characteristic is; perhaps it's height or weight, or perhaps it's interest in science or the score on some math test. You'll notice immediately that the pink population scores, on average, lower than the blue population. It has a lower μ. What you might not notice right away is that the pink population's bell curve is also a little wider than the blue population's. It has a higher σ. That fact isn't important for determining whether a typical pink or blue person is likely to have a higher score. But it is decisive for determining whether the highest scores will belong to pink or blue people. Notice that, despite the fact that the blue average is higher than the pink average, pink dominates the very highest scores, due to its higher standard deviation. Below is a close-up of the far-right tails of the distributions, to make this easier to see.

[Figure: close-up of the far-right tails, where the pink curve lies above the blue one.]

But surely, you might argue, having a higher μ helps. If blue's average were high enough, there would be more blues than pinks in the region where there are now more pinks than blues. And that is "sort of" true, but only "sort of". It turns out that, as long as you go far enough out in the tails of a bell curve, the population with the higher standard deviation will always dominate. By changing the means, you can change the score where the cross-over occurs, but you can't change the fact that the higher-σ group will eventually, at some point in the far-out tails of the distribution, overtake the lower-σ group.
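You can watch the crossover happen numerically. Here is a sketch with made-up parameters for the pink and blue curves (any numbers with a larger pink σ tell the same story):

    # Tail populations for two bell curves: blue has the higher mean,
    # pink the higher standard deviation.
    from scipy.stats import norm

    mu_pink, sigma_pink = 98, 17    # made-up parameters
    mu_blue, sigma_blue = 102, 13

    for cutoff in (110, 130, 150, 170):
        p_pink = norm.sf(cutoff, mu_pink, sigma_pink)
        p_blue = norm.sf(cutoff, mu_blue, sigma_blue)
        print(f"score > {cutoff}: pink/blue ratio = {p_pink / p_blue:.2f}")
    # Prints roughly 0.9, 1.9, 10, and over 100: blue leads near the
    # middle, but pink takes over and pulls away in the far tail.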

I don't know whether the pink and blue lines sketched above accurately represent the distributions of scientific ability in the sexes, or of athletic ability in whites and blacks. Such measurements are fraught with peril for the career of any statistician who might think to undertake them. And whatever one chooses to measure, whether it is the right measure of ability is entirely debatable. But I do find it amusing that, for any measure, the suggestion that one sex or race might have a higher μ is enough to elicit a torrent of vitriolic responses, while the suggestion that one group might have a higher σ usually elicits yawns. For the question at hand, it is really σ that counts!
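For readers who want the one-line proof of the crossover claim: divide the pink curve's formula by the blue one's, with σ_p > σ_b, and look at the exponent:

    \frac{f_p(x)}{f_b(x)}
      = \frac{\sigma_b}{\sigma_p}
        \exp\!\left[ \frac{(x-\mu_b)^2}{2\sigma_b^2}
                   - \frac{(x-\mu_p)^2}{2\sigma_p^2} \right]

The bracketed expression is a quadratic in x whose leading coefficient, 1/(2σ_b²) − 1/(2σ_p²), is positive whenever σ_p > σ_b. So the ratio grows without bound as x grows: the means only shift where the crossover happens, never whether it happens.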

By the way, the mathematical properties of bell curves were first investigated in detail by the German mathematician Carl Friedrich Gauss in the early 19th century. So we should really all have this down by now.

Thursday, March 17, 2005

 

Is trade good or evil?

Boeing builds many planes near Seattle, and my hometown newspapers regularly run editorials calling for the U.S. to sanction European countries for subsidizing Airbus. Magazine articles endlessly re-hash the debate over whether low-cost imports from China are good because they help keep inflation low or bad because they destroy American manufacturing jobs. The U.S. steel and textile industries clamor for protection against dumping.

Trade policy is forever in the news. It was in the 1930s, when the U.S. Congress, reacting to the Great Depression, passed the Smoot-Hawley Tariff Act severely curtailing trade. It was in the 1840s, when Great Britain fought the Opium Wars to protect its right to sell the drug to Chinese addicts.

A depressing number of debates on the merits of trade go like this: A says "trade benefits consumers by lowering prices." B says "trade harms workers by eliminating jobs." A responds "but everyone, including the displaced workers, benefits from lower prices." B responds "you can't buy anything at all if you don't have income." The assertions of both A and B are obviously correct. There is no way to get beyond these sound-bites without quantifying the dollars saved and wages lost, and no newspaper dares to impose such an analysis on its readers.

There actually is such an analysis, which is done in introductory economics courses around the world. Economists are quite fond of it. Done in the early 19th century by David Ricardo, it was one of the first mathematical models of an economic phenomenon. You draw some lines, measure slopes and intercepts, and obtain an actual answer to the question of whether trade is, in the net, good or bad. I'm quite fond of it myself, but I'm not going to describe it here.
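The full diagrammatic treatment really is beyond a blog post, but for the curious, a toy numeric version of its punch line fits in a few lines (the numbers are made up; this is a sketch, not Ricardo's own figures):

    # Hours of labor needed to make one unit of each good.
    hours = {"home": {"cloth": 1, "wine": 2},     # home is better at BOTH goods
             "foreign": {"cloth": 6, "wine": 3}}  # but foreign is *relatively*
    labor = 120                                   # less bad at wine

    # Autarky: each country splits its hours evenly between the two goods.
    autarky = {c: {g: (labor / 2) / h for g, h in hours[c].items()} for c in hours}
    world = {g: sum(autarky[c][g] for c in hours) for g in ("cloth", "wine")}

    # Trade: foreign makes only wine; home makes just enough wine to keep world
    # wine output unchanged and puts every remaining hour into cloth.
    foreign_wine = labor / hours["foreign"]["wine"]      # 40 units
    home_wine = world["wine"] - foreign_wine             # 10 units
    home_cloth = (labor - home_wine * hours["home"]["wine"]) / hours["home"]["cloth"]

    print("wine :", world["wine"], "->", foreign_wine + home_wine)  # 50 -> 50
    print("cloth:", world["cloth"], "->", home_cloth)               # 70 -> 100

Note what happened: home is absolutely better at both goods, yet when each country specializes in the good it is relatively better at, the world gets 30 extra units of cloth at no cost in wine. That is the surprise at the heart of Ricardo's model.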

Primarily because of such mathematical analysis, almost all economists are in agreement on the issue of trade policy. From that darling of the right, Milton Friedman, to that darling of the left, Paul Krugman, they will tell you that unfettered free trade is almost always good.

I'm going to show you an entirely non-mathematical way to understand the answer.

Imagine that, instead of selling us a product at a cost lower than domestic producers, a foreign producer were to give us the product. Surely the recipient of a gift isn't made worse off by accepting it? (Well, perhaps if the gift is opium, or a Trojan horse, but let's stick to airplanes and textiles for the moment.) The situation for domestic producers would certainly look bleak. No one would buy from them. They would go out of business, and their workers would be out of their jobs. Still, even in the short run, the total amount of stuff available for domestic consumption would clearly be the same or greater. All the other stuff that was made before would continue to be made, and the free foreign supply of the product in question would replace the old domestic supply, if not exceed it.

In the longer run, the workers would get different jobs. They would produce other things, perhaps things that previously didn't get produced at all, because they had spent their time producing the old product. Then even more stuff would be available for domestic consumption.

The more realistic scenario is that the foreign producer still wants payment for the good, but less payment than the domestic producer. When a foreign producer undercuts a domestic producer, we haven't quite reached the happy state of getting the good for free, but we are closer, and, therefore, in the net, better off.
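If you like to see such claims in numbers, here is a minimal accounting sketch with made-up figures, assuming (as above) that displaced workers are eventually re-employed:

    # Made-up economy: 100 workers, 10 of whom made the contested product.
    other_stuff = 90       # output of the other 90 workers
    product = 10           # domestic output of the product

    no_trade = other_stuff + product                    # 100 units consumed

    # Gift: foreigners supply the product free; the 10 displaced workers
    # eventually make other things instead.
    gift = (other_stuff + 10) + product                 # 110 units

    # Undercutting: foreigners sell the product for less than it cost us to
    # make; we pay with part of the redeployed workers' output.
    import_price = 6                                    # hypothetical, < 10
    undercut = (other_stuff + 10) + product - import_price   # 104 units

    print(no_trade, undercut, gift)   # 100 < 104 < 110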

You can, if you like, also imagine the even less realistic scenario in which some hyper-mercantilist foreigners give us all the products we currently produce, for free. Then no one in our country has a job, but we certainly aren't worse off!

I want to be very honest about what this argument proves and what it doesn't. It doesn't prove that every displaced worker is just as well-off in his new job as in his old one. Trade can increase inequality, and if you want to oppose trade you are welcome to claim that it does. But my argument does show that, if dollars saved are counted one-for-one against wages lost, in the aggregate and in the net, trade makes us better off. So if you want to oppose trade, you cannot claim that the costs of lost wages will outweigh the gains of lower prices. That is simply and provably wrong. By concentrating on the total amount of stuff available for domestic consumption, we have been able to reach this conclusion without having to separately weigh the effects of each dollar of lost wages against the effects of each dollar of consumer savings.

Being able to produce more things with fewer workers lies at the heart of what we mean by economic progress. Seventy years ago, in the United States, we employed about 20% of workers to feed ourselves; today, we employ only about 2% to do so (U.S. census statistics). We are richer precisely because we don't need as many workers to feed ourselves as we used to. In the same way, our country will be richer if, by trade, we can obtain larger quantities of steel and textiles using fewer workers.

Imagine what the world might look like if, at the behest of farm workers, we had undertaken measures to ensure that agricultural efficiency not increase, so that just as many of us had to work at keeping ourselves fed as did 200 years ago. Imagine what the world might look like if, at the behest of the Luddites, we had decreed that all socks be knit by hand. While these imagined worlds might have a certain romantic charm, it wouldn't take very long living at a 200-year-old standard of living before the vast majority of us opted for progress.

Wednesday, March 16, 2005

 

First Post

This is my first post to my Google blog. I hope my musings will be of interest to family, friends, and perhaps even strangers. I plan to write mostly about social, economic, and political issues that might conceivably be of interest to the wider world.

At the moment, I am a professional computer programmer. Five years ago, I was a professional research physicist. I have a strong background in mathematics and economics. My background in history, literature, and philosophy is less strong, but I do very much enjoy those subjects. Basically, I'm a hard-numbers guy who was blessed to have an excellent liberal-arts education in his youth. Many thanks to my teachers!
