Monday, September 24, 2012

Agent-Based Computational Economics: Growing Economies from the Bottom Up by Leigh Tesfatsion (Artificial Life, 2002)

This survey paper outlines the objectives and characteristics of agent-based computational economics (ACE) across eight research areas:




  1. Learning and the embodied mind
  2. Evolution of behavioral norms
  3. Bottom-up modeling of market processes
  4. Formation of economic networks
  5. Modeling of organizations
  6. Design of computational agents for automated markets
  7. Parallel experiments with real and computational agents
  8. Building ACE computational laboratories




Thursday, September 20, 2012

Statistical fallacies by C. CALLOSUM


http://callosum.blogspot.com/2005/03/statistical-fallacies.html

I've just finished reading How to Think Straight about Psychology by Keith Stanovich. It's a wonderful book, and, to be honest, really about critical, scientific thinking and not so much about psychology. Most of its examples are from the field of medicine, in fact.

The best parts of the book, to my mind, are the ones that discuss how humans deal with probability and statistics. Everyone knows that statistics are dangerous, but the danger doesn't wholly come from deliberate misuse. Some of the danger comes from the way people intuitively interpret statistics - or, rather, misinterpret them. Not to mention the way people dismiss statistics when they should be taking them seriously.

To summarise the relevant chapters, as much for my sake as anything else, the ways people mistreat and misuse statistics are:

(1) "person-who" arguments (Stanovich's terminology)

People treat a statistical finding or law as invalid because they know of an exception to it, despite "knowing" that the law was probabilistic in the first place and that there would be exceptions. A lot of this is due to "vividness" effects: a probabilistic law is not concrete to most people, but a living, breathing counter-example is. What has a greater effect on their thinking? The counter-example, of course, leading them to believe the law is inaccurate.

(2) discounting base rates

This topic is treated in many statistics classes (at least the ones I've been in), but people often seem to forget about it. So the classic example goes: suppose there's a rare disease that occurs in 1 out of 1000 people (ok, so that's not so rare). Further suppose there's a diagnostic test with a zero false-negative rate (if someone has the disease, the test always gets it right) BUT a 5% false-positive rate (if a person doesn't have the disease, there's a 5% chance it'll say that they do).

So you pluck a random person off the street and administer the test on them, and it says yes, they have the disease. What's the chance that they do have the disease?

Well, even physicians get this wrong and say 95%. The true answer, if you do the math, is about 2%. Why is the intuitive answer so off-base? Because they forgot about the huge effect of the low base rate - the unlikelihood that that random person would have had the disease in the first place. This is also why implementing security systems that are "99% accurate" gives you absolutely no boost in security: the probability that a randomly chosen person is a terrorist is so low that almost every positive is a false positive [I'm pretty sure Bruce Schneier discussed this at least once on his blog, but am unable to find the exact URL].
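The 2% figure follows directly from Bayes' theorem. A minimal sketch (not from the book, just the arithmetic of the example above):

```python
# Base-rate example: prevalence 1/1000, zero false negatives,
# 5% false positives. What is P(disease | positive test)?
prevalence = 0.001
sensitivity = 1.0        # zero false-negative rate
false_positive = 0.05

# P(positive) = true positives + false positives
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(round(p_disease_given_positive, 4))  # 0.0196, i.e. about 2%
```

The intuitive "95%" answer is really P(positive | disease), not P(disease | positive); the false positives from the 999 healthy people swamp the single true positive.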

(3) failure to use sample size information

To put it simply, people forget (or don't realise) the effect of the law of large numbers - that "a larger sample size always more accurately estimates a population value".
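A quick simulation (my own illustration, not from the book) makes the effect concrete - estimating the mean of a fair die from small versus large samples:

```python
import random

random.seed(0)

# True mean of a fair six-sided die is 3.5. Compare the typical
# estimation error of small samples (n=10) vs large samples (n=1000).
def sample_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

small_errors = [abs(sample_mean(10) - 3.5) for _ in range(1000)]
large_errors = [abs(sample_mean(1000) - 3.5) for _ in range(1000)]

print(sum(small_errors) / len(small_errors))  # typical error, n=10
print(sum(large_errors) / len(large_errors))  # much smaller, n=1000
```

The large-sample estimates cluster far more tightly around 3.5, which is exactly what people underweight when they trust a striking small sample over a dull large one.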

(4) the gambler's fallacy

Say you're flipping a coin, and 5 heads have come up. Ask someone whether they think the sixth flip will come up heads, and they will say it's unlikely, despite the fact that coin flips are independent. They operate on the basis of a "law of averages" - but in reality, there's no such thing as a law of averages!
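You can check the independence claim by simulation (my sketch, not the book's): look at what follows every run of five heads in a long sequence of fair flips.

```python
import random

random.seed(1)

# Flip a fair coin many times. If a "law of averages" existed, tails
# would be overrepresented after five heads in a row; it isn't.
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

after_streak = [flips[i] for i in range(5, len(flips))
                if all(flips[i - 5:i])]  # flips preceded by 5 heads

print(sum(after_streak) / len(after_streak))  # stays close to 0.5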

(5) thinking coincidences are more "miraculous" than they are

Skeptics often point out that if something is a "one-in-a-million" occurrence then, depending on how you count a single event, at least 300 should happen a day in the U.S. (population approx. 300M). Another classic example is asking people in a class of 30 their birthdays and seeing if any coincide. Students often think of two people in a class sharing a birthday as a low-probability occurrence, but it's actually more probable that at least two share a birthday than that no one does!
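The birthday figure is easy to compute exactly (my calculation, assuming 365 equally likely birthdays and ignoring leap years):

```python
# Probability that at least two people in a class of 30 share a
# birthday: 1 minus the probability that all 30 birthdays differ.
p_all_distinct = 1.0
for k in range(30):
    p_all_distinct *= (365 - k) / 365

print(round(1 - p_all_distinct, 3))  # about 0.706 - more likely than not
```

So a shared birthday in a class of 30 is roughly a 70-30 favourite, not the long shot intuition suggests.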

(6) discounting incidences and only seeing coincidences

This is common to all of us. Coincidences are vivid - you think of old Uncle Al and suddenly he rings up on the phone. Hey, ESP! But what about all the times you thought of him and he didn't ring up? Oh, you forgot about those, did you?

(7) trying to get it right every time - even when it's better to be wrong sometimes

Stanovich describes an interesting experiment here (Fantino & Esfandiari, 2002 [PubMed abstract]; Gal & Baron, 1996 [abstract]). Subjects are sat down and told to predict which of two lights, red or blue, will blink. Often, there'll be some money paid for correct predictions. The sequence of red and blue lights is random, except that red flashes 70% of the time and blue 30%. Analysis of the predictions made afterwards shows that subjects pick up on the 70-30 spread pretty well, and guess red 70% of the time and blue 30% of the time. But, if they'd just guessed red 100% of the time, they'd have done better! Matching the 70-30 spread gives them, on average, only about 58% accuracy.

The thing is, guessing red all the time guarantees you'll be wrong 30% of the time - while alternating still opens up the possibility that you'll be right all the time, by some miracle. Hope springs eternal in the human heart.
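The 58% figure is just the expected accuracy of "probability matching" versus always picking the majority colour - a two-line calculation (mine, spelling out the arithmetic behind the numbers above):

```python
# Red flashes 70% of the time, blue 30%. Expected accuracy of
# probability matching (guess red 70% of the time, independently of
# the light) vs maximizing (always guess red).
p_red = 0.7

matching = p_red * p_red + (1 - p_red) * (1 - p_red)  # 0.49 + 0.09
maximizing = p_red                                    # always guess red

print(round(matching, 2))    # 0.58
print(round(maximizing, 2))  # 0.7
```

Matching only pays off on the coincidences where your guess and the light happen to agree; always guessing red locks in the full 70%.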

Stanovich further explains how this carries over to clinical vs actuarial prediction. Actuarial prediction is based on historical statistical data. Clinical prediction is based on familiarity with individual circumstances. It seems to people that clinical prediction should be better - (1) you have more information to go on (actuarial + individual), and (2) doesn't actually knowing a person and his circumstances tell you more than a bunch of numbers?

Well, it doesn't: in many, many replicated studies, it's been shown that adding clinical prediction to actuarial always *decreases* the accuracy of the prediction. As unlikely as it seems, restricting yourself to judging based on past statistical trends is always better in the long run. You have to accept the error inherent in relying only on general, statistical, historical data in order to decrease error overall.

(8) trying to see patterns where there are none - or the "conspiracy theory" effect

Stanovich uses the stock market as an example. Much of the variability in stock market prices is due simply to random fluctuations, but people try to read patterns into, and explain, every single fluctuation. What about those people who are always correct? Well, take 100 monkeys and have them throw darts at a board. Use the positions of the darts to determine how to place bets. Do this for a year, and about half of them will have beaten the Standard and Poor's 500 Index. Want to hire one?
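A simulation (my own, not from the book) shows how easily pure chance manufactures "experts": let many guessers each predict a handful of market moves by coin flip and see how many look skilled.

```python
import random

random.seed(2)

# 1000 "monkeys" each call the direction of 10 market moves by coin
# flip. Roughly 5.5% will get 8 or more calls right by luck alone.
n_monkeys, n_moves = 1000, 10

def correct_calls():
    # Each call is right with probability 1/2, independently.
    return sum(random.random() < 0.5 for _ in range(n_moves))

lucky = sum(correct_calls() >= 8 for _ in range(n_monkeys))
print(lucky)  # around 55 of the 1000 look like experts with zero skill
```

Pick the winners after the fact and they look brilliant; the mistake is forgetting how many guessers there were to begin with.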

This is made even worse when people think they should be seeing a pattern, seeing structure. Take the Rorschach test, for example: clinicians using it see relationships in how people respond because they believe they are there. If they believe the theory behind the test, they think there'll be a relationship between what people see in the random inkblots and the makeup of their psychology. But there is no evidence for this whatsoever.

(9) the illusion of control

When people think they have control over a situation, they believe personal skill and actions can affect outcomes that are actually well beyond their control. I believe the classic example here (not cited in Stanovich's book - he has more interesting ones, actually) is the sports fan who believes that by performing certain actions, he can affect the outcome of a match.

All that's from this book, and I hope I haven't misreported or misrepresented anything. It [the book] is very pithy, straight to the point, and a joy to read. The explanations he gives are a good deal better than the ones I've given above, so go check it out from the library or buy it - whatever you do, I encourage you to read it. Stanovich also has a bunch of papers online that look interesting.

In my next post, I'll discuss some of my thoughts on people's statistical abilities and their relation to learning, especially learning language.

Thursday, September 13, 2012

General Admission Information: Department of Economics at Purdue


Each year the department of economics enrolls about 10 students, about 8 of whom are funded. At Purdue, your tuition is waived if and only if you receive an assistantship, and the assistantship should be enough to support you.

If you are an international student, one caveat for you: although it is not stated officially, the economics department at Purdue has not enrolled international students who are not currently studying in the United States, and it's unlikely that this will change soon. In rare cases, international students have been admitted to the PhD program in Economics directly after receiving a bachelor's or master's degree in their own country, but none of them received funding.

I only learned this after I came to the program. The lesson: it is a good idea to contact current students in the programs you want to apply to and ask about admission policies that are not stated officially.

The Admission Process
The admission decision is based on an overall evaluation of the applicant, and it is hard to say which component is more important than the others. I know recommendations are quite important, but all the faculty members are well aware that students from China and India often write their own recommendations, which renders those recommendations useless. As for the other application materials, which one is valued most varies among universities and even among departments within the same university. I do know that for most graduate schools the personal statement (PS) is of less importance, although a bad PS can nevertheless hurt your application. Since math is pretty important for economists, it's good if you have strong math skills and background.

A professor once told me that the ideal recommendations are written by economists who are active in research; economists known to the admission committee are perfect. I've heard that one previous econ PhD student at Purdue transferred to Northwestern largely because of a good recommendation letter: her former advisor had graduated from Northwestern and wrote her a strong recommendation.


Do I need to contact professors while applying to the program?

I studied engineering before, so I know there is a big difference between economics programs and other programs, including engineering, math, and the natural sciences. In engineering admissions, if a faculty member would like to admit you as his student and work with you, you are almost guaranteed to get admitted. This is because most areas of engineering are project-oriented: the professors have funding for projects, and they can support you financially from that funding so that you can help them with the projects. Because of this close working relationship, your research area and "interest" are more or less determined from the first day you arrive - they are the research area and interests of your advisor. Another consequence of this project-oriented nature is that it is comparatively easy to publish a paper in engineering: after you get data or results from the project, you can publish them as a paper. Most of the papers you publish, if not all, will be coauthored with your advisor, again due to the close working relationship.

It is another story in economics departments in the United States. PhD students in economics typically spend the first one or two years studying theory. After the theory courses, they try to pass the qualifying (preliminary) exam, and it is only after this exam that students really start to do research. You will also notice that only a small portion of the papers published by economics professors are coauthored with their students, because the professors seldom need any concrete help from them: most of the papers are written alone or coauthored with other professors in similar fields.

Because of this, the students are paid not by the professors but by the department. Also because of this, in contrast to engineering programs, no single professor can decide whether to admit you - it is a group decision by the committee. Hence it is less important to contact professors during the admission process, and it is useless to name the advisor you want to work with before you arrive. The professors are well aware that it is too early for you to claim your field before you come to the program. So it is not necessary to decide which area you want to work in, though it might help to mention your interests in your personal statement to show that you are self-motivated.