Title: Nobel Laureates in Economics and Their Achievements    Author: 如风岁月    Time: 2005-8-3 17:27

[2002] Daniel Kahneman, "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty"; and Vernon L. Smith, "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms".

[2001] George A. Akerlof, A. Michael Spence, and Joseph E. Stiglitz, for their analyses of markets with asymmetric information.

[2000] James J. Heckman and Daniel L. McFadden, for their contributions to microeconometrics. They developed theory and methods that are widely applied in the empirical analysis of individual and household behavior.

[1999] Robert A. Mundell, for his analysis of monetary and fiscal policy under different exchange-rate regimes and of optimum currency areas.

[1998] Amartya Sen, for his contributions to several major problems of welfare economics, including social choice theory, the definition of welfare and poverty indices, and the study of deprivation.

[1997] Robert C. Merton and Myron S. Scholes. Merton weakened the assumptions underlying the Black-Scholes formula and generalized it in many directions; Scholes derived the celebrated Black-Scholes option-pricing formula, which has become the standard way of thinking in financial institutions about new financial instruments.

[1996] James A. Mirrlees and William Vickrey. Mirrlees made major contributions to the theory of information economics, especially the theory of economic incentives under asymmetric information; Vickrey made major contributions to information economics, incentive theory, and game theory.

[1995] Robert Lucas, for developing and applying the hypothesis of rational expectations in macroeconomic research, thereby deepening our understanding of economic policy and offering original insights into business-cycle theory.

[1994] John F. Nash, John C. Harsanyi, and Reinhard Selten, for their pioneering analysis of equilibria in the theory of non-cooperative games, which has had a major impact on game theory and economics.

[1993] Douglass C. North and Robert W. Fogel. North built a theory of institutional change that embraces property rights, a theory of the state, and a theory of ideology; Fogel reinterpreted past processes of economic development by applying new economic-history theory and quantitative tools.

[1992] Gary S. Becker, for extending microeconomic theory to a wide range of human behavior and interaction, including non-market behavior.

[1991] Ronald H. Coase, for his discovery and clarification of the significance of transaction costs and property rights for the institutional structure and functioning of the economy.

[1990] Merton M. Miller, Harry M. Markowitz, and William F. Sharpe, for their pioneering work in the theory of financial economics.

[1989] Trygve Haavelmo, for establishing the foundational principles of modern econometrics.

[1988] Maurice Allais, for his pioneering contributions to the theory of markets and the efficient utilization of resources, and for his new systematic formulation of general equilibrium theory.

[1987] Robert M. Solow, for his contributions to the theory of economic growth. He showed that long-run growth rests mainly on technological progress rather than on the accumulation of capital and labor.

[1986] James M. Buchanan, Jr., for combining the analysis of political decision-making with economic theory, extending economic analysis to the choice of social and political rules.

[1985] Franco Modigliani, the first to formulate the life-cycle hypothesis of saving, which has found wide application in studies of household and corporate saving.

[1984] Richard Stone, the father of national income accounting, for fundamental contributions to the development of systems of national accounts, which greatly improved the basis for empirical economic analysis.

[1983] Gerard Debreu, for generalizing the theory of Pareto optimality and establishing existence theorems for economic and social equilibrium with interrelated commodities.

[1982] George J. Stigler, for his seminal contributions to the study of industrial structure, the functioning of markets, and the causes and effects of public regulation.

[1981] James Tobin, for elaborating and extending Keynesian theory and macroeconomic models of fiscal and monetary policy, and for important contributions to the analysis of financial markets and their relation to expenditure decisions, employment, production, and prices.

[1980] Lawrence R. Klein, for building mathematical models of economic systems grounded in economic theory and estimated empirically from observed data.

[1979] W. Arthur Lewis and Theodore W. Schultz, for pioneering research into economic development, with particular attention to the problems of developing countries.

[1978] Herbert A. Simon, for his research into decision-making processes within economic organizations; his basic theory of decision procedures is recognized as a pioneering account of how firms actually make decisions.

[1977] Bertil Ohlin and James E. Meade, for their pathbreaking contributions to the theory of international trade and international capital movements.

[1976] Milton Friedman, for founding monetarism and formulating the permanent income hypothesis.

[1975] Leonid Vitaliyevich Kantorovich and Tjalling C. Koopmans. Kantorovich created the essentials of linear programming in 1939; Koopmans successfully applied mathematical statistics to econometrics. Both were honored for their contributions to the theory of the optimal allocation of resources.

[1974] Friedrich August von Hayek and Gunnar Myrdal, for their penetrating work on the theory of money and economic fluctuations and their analysis of the interdependence of economic, social, and institutional phenomena.

[1973] Wassily Leontief, for developing the input-output method and applying it to many important economic problems.

[1972] John R. Hicks and Kenneth J. Arrow, for their penetrating work on general economic equilibrium theory and welfare theory.

[1971] Simon Kuznets, for his major contributions to research on population trends and population structure in relation to economic growth and income distribution.

[1970] Paul A. Samuelson, for developing mathematical and dynamic economic theory and raising the level of analysis in economic science; his research spans virtually every field of economics.

[1969] Ragnar Frisch and Jan Tinbergen, for developing dynamic models for the analysis of economic processes. Frisch was a founder of econometrics; Tinbergen a father of econometric model building.

Author: rn222_2001    Time: 2005-8-3 23:18

All of them are towering figures in economics. Heh.

Author: V宝宝    Time: 2005-12-9 01:21

Moderator, work toward getting onto this list yourself someday. Good luck!

Author: V宝宝    Time: 2005-12-9 03:54

In more detail:
Press Release: The Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel 2002

9 October 2002

The Royal Swedish Academy of Sciences has decided that the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 2002, will be shared between

Daniel Kahneman
Princeton University, USA

"for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty"

and

Vernon L. Smith
George Mason University, USA

"for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms".
Psychological and experimental economics

Traditionally, much of economic research has relied on the assumption of a "homo oeconomicus" motivated by self-interest and capable of rational decision-making. Economics has also been widely considered a non-experimental science, relying on observation of real-world economies rather than controlled laboratory experiments. Nowadays, however, a growing body of research is devoted to modifying and testing basic economic assumptions; moreover, economic research relies increasingly on data collected in the lab rather than in the field. This research has its roots in two distinct, but currently converging, areas: the analysis of human judgment and decision-making by cognitive psychologists, and the empirical testing of predictions from economic theory by experimental economists. This year's laureates are the pioneers in these two research areas.

Daniel Kahneman has integrated insights from psychology into economics, thereby laying the foundation for a new field of research. Kahneman's main findings concern decision-making under uncertainty, where he has demonstrated how human decisions may systematically depart from those predicted by standard economic theory. Together with Amos Tversky (who died in 1996), he formulated prospect theory as an alternative that better accounts for observed behavior. Kahneman has also discovered how human judgment may take heuristic shortcuts that systematically depart from basic principles of probability. His work has inspired a new generation of researchers in economics and finance to enrich economic theory using insights from cognitive psychology into intrinsic human motivation.
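At the heart of prospect theory is a value function that is concave over gains, convex over losses, and steeper for losses than for gains (loss aversion). Here is a minimal Python sketch of that function, using the functional form and the median parameter estimates Tversky and Kahneman published in 1992; it is an illustration, not code from the laureates:

```python
import numpy as np

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function.

    Gains are valued as x**alpha; losses as -lam * (-x)**beta.
    lam > 1 captures loss aversion. The parameter values are the
    published 1992 median estimates, used here purely for illustration.
    """
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    return np.where(x >= 0, ax**alpha, -lam * ax**beta)

# A loss of 100 looms larger than an equal gain:
print(prospect_value(100))   # about  57.5
print(prospect_value(-100))  # about -129.5
```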
Vernon Smith has laid the foundation for the field of experimental economics. He has developed an array of experimental methods, setting standards for what constitutes a reliable laboratory experiment in economics. In his own experimental work, he has demonstrated the importance of alternative market institutions, e.g., how the revenue expected by a seller depends on the choice of auction method. Smith has also spearheaded "wind-tunnel tests", where trials of new, alternative market designs – e.g., when deregulating electricity markets – are carried out in the lab before being implemented in practice. His work has been instrumental in establishing experiments as an essential tool in empirical economic analysis.
--------------------------------------------------------------------------------
Daniel Kahneman, born 1934 (68 years) in Tel Aviv, Israel (US and Israeli citizen). PhD from University of California at Berkeley in 1961. Since 1993, Eugene Higgins Professor of Psychology and Professor of Public Affairs at Princeton University, NJ, USA.
www.princeton.edu/~psych/PsychSite/fac_kahneman.html

Vernon L. Smith, born 1927 (75 years) in Wichita, KS, USA (US citizen). PhD from Harvard University in 1955. Since 2001, Professor of Economics and Law at George Mason University, VA, USA.
www.gmu.edu/departments/economics/facultybios/smith.html

The Prize amount: SEK 10 million, will be shared equally among the Laureates.

Contact persons: Katarina Werner, Information assistant, phone +46 8 673 95 29, katarina@kva.se, and Eva Krutmeijer, Head of information, phone +46 8 673 95 95, +46 709 84 66 38, evak@kva.se

http://finance.sina.com.cn    2002-10-09 22:24    Sina Finance
At 15:30 local time on October 9 in Stockholm (21:30 Beijing time), the Royal Swedish Academy of Sciences announced that this year's Nobel Prize in Economics goes to Daniel Kahneman of Princeton University (who holds dual US and Israeli citizenship) and Vernon L. Smith of George Mason University.

Economics has traditionally been regarded as a non-experimental science, with most research resting on various plausible assumptions that carry weight in decision-making. Today, however, a growing number of researchers study economics experimentally, modifying and testing basic economic assumptions, so that economic research increasingly relies on experiments and data collection and thereby becomes more credible. Much of this research is rooted in two distinct but now converging fields: cognitive psychologists' analysis of human judgment and decision-making, and experimental economists' tests of economic theory. This year's laureates are the pioneers of these two fields.

Daniel Kahneman applied insights from psychology to the study of economics, laying the foundation for a new field of research. His main contribution concerns human judgment and decision-making under uncertainty: he showed how human decisions depart from the outcomes predicted by standard economic theory. His findings have inspired a new generation of economists to draw on cognitive psychology in their research, enriching economic theory.

Vernon Smith laid the foundation for experimental economics. He developed a set of experimental research methods and set standards for what constitutes a reliable economic experiment. Smith used experiments to demonstrate the importance of alternative market mechanisms, and he pioneered "wind-tunnel tests" for studying new, alternative market designs. His achievements helped establish experiments as a tool of empirical economic analysis.

The economics prize is not one of the five prize fields named in Nobel's will; it was endowed by the Bank of Sweden in 1968 in memory of Alfred Nobel, and its full name is the "Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel". Its selection criteria are the same as for the other prizes, and the winners are chosen by the Royal Swedish Academy of Sciences. It was first awarded in 1969, jointly to the Norwegian Ragnar Frisch and the Dutchman Jan Tinbergen; American economists such as Samuelson and Friedman have also won it. Last year's prize went to three Americans, George Akerlof, Michael Spence, and Joseph Stiglitz, for their outstanding contributions to modern information economics.

Author: V宝宝    Time: 2005-12-9 03:55
Press Release - The 2001 Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel

10 October 2001

The Royal Swedish Academy of Sciences has decided to award the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 2001, jointly to

George A. Akerlof
University of California at Berkeley, USA,

A. Michael Spence
Stanford University, USA, and

Joseph E. Stiglitz
Columbia University, USA

"for their analyses of markets with asymmetric information".
Markets with asymmetric information

Many markets are characterized by asymmetric information: actors on one side of the market have much better information than those on the other. Borrowers know more than lenders about their repayment prospects, managers and boards know more than shareholders about the firm's profitability, and prospective clients know more than insurance companies about their accident risk. During the 1970s, this year's Laureates laid the foundation for a general theory of markets with asymmetric information. Applications have been abundant, ranging from traditional agricultural markets to modern financial markets. The Laureates' contributions form the core of modern information economics.

George Akerlof demonstrated how a market where sellers have more information than buyers about product quality can contract into an adverse selection of low-quality products. He also pointed out that informational problems are commonplace and important. Akerlof's pioneering contribution thus showed how asymmetric information of borrowers and lenders may explain skyrocketing borrowing rates on local Third World markets; but it also dealt with the difficulties for the elderly to find individual medical insurance and with labour-market discrimination of minorities.

Michael Spence identified an important form of adjustment by individual market participants, where the better informed take costly actions in an attempt to improve on their market outcome by credibly transmitting information to the poorly informed. Spence showed when such signaling will actually work. While his own research emphasized education as a productivity signal in job markets, subsequent research has suggested many other applications, e.g., how firms may use dividends to signal their profitability to agents in the stock market.

Joseph Stiglitz clarified the opposite type of market adjustment, where poorly informed agents extract information from the better informed, such as the screening performed by insurance companies dividing customers into risk classes by offering a menu of contracts where higher deductibles can be exchanged for significantly lower premiums. In a number of contributions about different markets, Stiglitz has shown that asymmetric information can provide the key to understanding many observed market phenomena, including unemployment and credit rationing.
--------------------------------------------------------------------------------
George A. Akerlof, 61 years, born 1940 in New Haven, Connecticut (US citizen). PhD from MIT 1966. Has held professorships at Indian Statistical Institute and London School of Economics. Since 1980 Goldman Professor of Economics at the University of California at Berkeley.
http://emlab.berkeley.edu/users/akerlof/index.html

A. Michael Spence, 58 years, born 1943 in Montclair, New Jersey (US citizen). PhD from Harvard 1972. Has held professorships at Harvard and the Graduate School of Business, Stanford, and has been Dean at both these universities.
http://gobi.stanford.edu/facultybios/bio.asp?ID=156

Joseph E. Stiglitz, 58 years, born 1943 in Gary, Indiana (US citizen). PhD from MIT 1967. Has held professorships at Yale, Princeton, Oxford and Stanford, and has been the Chief Economist of the World Bank. Since this year, Professor of Economics, Business and International Affairs at Columbia University.

The Prize amount: SEK 10 million, will be shared equally among the Laureates.

Press Officer: Eva Krutmeijer, phone +46 8 673 95 95, +46 709 84 66 38, evak@kva.se
Information for the Public: The 2001 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel

For more than two decades, the theory of markets with asymmetric information has been a vital and lively field of economic research. Today, models with imperfect information are indispensable instruments in the researcher's toolbox. Countless applications extend from traditional agricultural markets in developing countries to modern financial markets in developed economies. The foundations for this theory were established in the 1970s by three researchers: George Akerlof, Michael Spence and Joseph Stiglitz. They receive the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 2001, "for their analyses of markets with asymmetric information".

Markets with Asymmetric Information

Why are interest rates often excessively high on local lending markets in Third World countries? Why do people who want to buy a good used car turn to a dealer rather than a private seller? Why does a firm pay dividends even if they are taxed more heavily than capital gains? Why is it advantageous for insurance companies to offer clients a menu of contracts where higher deductibles can be exchanged for lower premiums? Why do rich landowners not bear the entire harvest risk in contracts with poor tenants? These questions exemplify familiar – but seemingly different – phenomena, each of which has posed a challenge to economic theory. This year's Laureates proposed a common explanation and extended the theory when they augmented it with the realistic assumption of asymmetric information: agents on one side of the market have much better information than those on the other side. Borrowers know more than the lender about their repayment prospects; the seller knows more than buyers about the quality of his car; the CEO and the board know more than the shareholders about the profitability of the firm; policyholders know more than the insurance company about their accident risk; and tenants know more than the landowner about their work effort and harvesting conditions.

More specifically, Akerlof showed that informational asymmetries can give rise to adverse selection on markets. Due to imperfect information on the part of lenders or prospective car buyers, borrowers with weak repayment prospects or sellers of low-quality cars crowd out everyone else from the market. Spence demonstrated that under certain conditions, well-informed agents can improve their market outcome by signaling their private information to poorly informed agents. The management of a firm can thus incur the additional tax cost of dividends to signal high profitability. Stiglitz showed that an uninformed agent can sometimes capture the information of a better-informed agent through screening, for example by providing choices from a menu of contracts for a particular transaction. Insurance companies are thus able to divide their clients into risk classes by offering different policies, where lower premiums can be exchanged for a higher deductible.
George Akerlof

Akerlof's 1970 essay, "The Market for Lemons", is the single most important study in the literature on the economics of information. It has the typical features of a truly seminal contribution – it addresses a simple but profound and universal idea, with numerous implications and widespread applications.

Here Akerlof introduces the first formal analysis of markets with the informational problem known as adverse selection. He analyses a market for a good where the seller has more information than the buyer regarding the quality of the product. This is exemplified by the market for used cars; "a lemon" – a colloquialism for a defective old car – is now a well-known metaphor in economists' theoretical vocabulary. Akerlof shows that hypothetically, the information problem can either cause an entire market to collapse or contract it into an adverse selection of low-quality products.
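The unraveling logic can be seen in a stylized numerical sketch (the parameterization is illustrative, not Akerlof's own): quality q is uniform on [0, 1], only cars with q below the going price are offered for sale, and buyers, who observe only the average quality on offer, value a car at 1.5 times its quality.

```python
# Stylized "lemons" unraveling (illustrative parameters, not Akerlof's own).
# Quality q ~ U[0, 1]; at price p only cars with q <= p are offered,
# so the average offered quality is p/2; buyers will pay 1.5 * (p/2) = 0.75p.
p = 1.0
for step in range(8):
    avg_quality = p / 2.0        # mean quality of the cars actually for sale
    p = 1.5 * avg_quality        # buyers' willingness to pay for that average
    print(step, round(p, 4))     # the price ratchets down toward zero: collapse
```

Each round of the loop, sellers of the best remaining cars drop out, which lowers average quality, which lowers the price again; the market unravels completely even though buyers value every car above its seller's valuation.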
Akerlof also pointed to the prevalence and importance of similar information asymmetries, especially in developing economies. One of his illustrative examples of adverse selection is drawn from credit markets in India in the 1960s, where local lenders charged interest rates that were twice as high as the rates in large cities. However, a middleman who borrows money in town and then lends it in the countryside, but does not know the borrowers' creditworthiness, risks attracting borrowers with poor repayment prospects, thereby becoming liable to heavy losses. Other examples in Akerlof's article include difficulties for the elderly to acquire individual health insurance and discrimination of minorities on the labor market.

A key insight in his "lemons paper" is that economic agents may have strong incentives to offset the adverse effects of information problems on market efficiency. Akerlof argues that many market institutions may be regarded as emerging from attempts to resolve problems due to asymmetric information. One such example is guarantees from car dealers; others include brands, chain stores, franchising and different types of contracts.

A timely example might further illustrate the idea that asymmetric information can generate adverse selection. At first, firms in a new sector – such as today's IT sector – might seem identical to an uninformed bystander, while some "insiders" may have better information about the future profitability of such firms. Firms with lower than average profitability will therefore be overvalued and more inclined to finance new projects by issuing their own shares than high-profitability firms which are undervalued by the market. As a result, low-profitability firms tend to grow more rapidly and the stock market will initially be dominated by "lemons". When uninformed investors eventually discover their mistake, share prices fall – the IT bubble bursts.

Apart from his research on asymmetric information, Akerlof has developed economic theory with insights from sociology and social anthropology. His most noteworthy contributions in this genre concern efficiency on labor markets. Akerlof points out that emotions such as reciprocity towards an employer or fairness towards colleagues can prompt wages to be set so high as to induce unemployment. He has also examined how social conventions such as the caste system may have unfavorable effects on economic efficiency. As a result of these studies, Akerlof's research is also well known and influential in other social sciences.
Michael Spence

Spence asked how better informed individuals on a market can credibly transmit, "signal", their information to less informed individuals, so as to avoid some of the problems associated with adverse selection. Signaling requires economic agents to take observable and costly measures to convince other agents of their ability or, more generally, of the value or quality of their products. Spence's contribution was to develop and formalize this idea as well as to demonstrate and analyze its implications.

Spence's pioneering essay from 1973 (based on his PhD thesis) deals with education as a signal of productivity on the labor market. A fundamental insight is that signaling cannot succeed unless the signaling cost differs sufficiently among the "senders", i.e., job applicants. An employer cannot distinguish the more productive applicants from those who are less productive unless the former find it so much less costly to acquire an education that the latter choose a lower level of education. Spence also pointed to the possibility of different "expectations-based" equilibria for education and wages, where, e.g., men and whites receive a higher wage than women and blacks with the same productivity.

Subsequent research contains numerous applications which extend this theory and confirm the importance of signaling on different markets. This covers phenomena such as costly advertising or far-reaching guarantees as signals of productivity, aggressive price cuts as signals of market strength, delaying tactics in wage offers as a signal of bargaining power, financing by debt rather than by issuing new shares as a signal of profitability, and recession-generating monetary policy as a signal of uncompromising commitment to reduce stubbornly high inflation.

An early example in the literature concerns dividends. Why do firms pay dividends to their shareholders, knowing full well that they are subject to higher taxes (through double taxation) than capital gains? Retaining the profits within the firm would appear to be a cheaper way to favor the shareholders through the capital gains of a higher share price. One possible answer is that dividends can act as a signal of favorable prospects. Firms with "insider information" about high profitability pay dividends because the market interprets this as good news and therefore pays a higher price for the share. The higher share price compensates shareholders for the extra tax they pay on the dividends.

In addition to his research on signaling, Spence was a forerunner in applying the results and insights of the 1996 economics laureates, Vickrey and Mirrlees, to the analysis of insurance markets. During the period 1975-1985, he was one of the pioneers in the wave of game-theory-inspired work that clarified many aspects of strategic market behavior within the so-called new theory of industrial organization.
Joseph Stiglitz

One of Stiglitz's classical papers, coauthored with Michael Rothschild, formally demonstrated how information problems can be dealt with on insurance markets where the companies do not have information on the risk situation of individual clients. This work is an obvious complement to Akerlof's and Spence's analyses in that it examines what actions uninformed agents can take on a market with asymmetric information. Rothschild and Stiglitz show that the insurance company (the uninformed party) can give its clients (the informed party) effective incentives to "reveal" information on their risk situation through so-called screening. In an equilibrium with screening, insurance companies distinguish between different risk classes among their policyholders by offering them a choice from a menu of alternative contracts, where lower premiums can be exchanged for higher deductibles.
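A tiny numerical illustration of such a screening menu (all numbers are hypothetical, and square-root utility stands in for risk aversion; this is not the Rothschild-Stiglitz model itself): a full-coverage contract priced for high risks and a high-deductible contract priced for low risks induce the two types to sort themselves.

```python
import math

# Hypothetical screening menu. Wealth 100, possible loss 75.
# Accident probabilities: high-risk 0.5, low-risk 0.1. Utility: sqrt (risk-averse).
def expected_utility(p_loss, premium, deductible):
    u = math.sqrt
    return (1 - p_loss) * u(100 - premium) + p_loss * u(100 - premium - deductible)

menu = {"full coverage": (40.0, 0.0), "high deductible": (0.8, 70.0)}
for risk_name, p in (("high-risk", 0.5), ("low-risk", 0.1)):
    best = max(menu, key=lambda c: expected_utility(p, *menu[c]))
    print(risk_name, "chooses", best)
# high-risk chooses full coverage; low-risk chooses high deductible:
# the menu separates the risk classes, so choices reveal private information.
```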
Stiglitz and his numerous coauthors have time and again substantiated that economic models may be quite misleading if they disregard informational asymmetries. Their common message has been that in the perspective of asymmetric information, many markets take on a completely different guise, as do the conclusions regarding appropriate forms of public-sector regulation. Stiglitz has analyzed the implications of asymmetric information in many different contexts, varying from unemployment to the design of an optimal tax system. Several of his essays have become important stepping stones for further research.

One example is Stiglitz's work with Andrew Weiss on credit markets with asymmetric information. Stiglitz and Weiss show that in order to reduce losses from bad loans, it may be optimal for banks to ration the volume of loans instead of raising the lending rate. Since credit rationing is so common, these insights were important steps towards a more realistic theory of credit markets. They have also had a substantial impact in the domains of corporate finance, monetary theory and macroeconomics.

In collaboration with Sanford Grossman, Stiglitz analyzed efficiency on financial markets. Their key result is known as the Grossman-Stiglitz paradox: if a market were informationally efficient, i.e., if all relevant information were reflected in market prices, then no single agent would have sufficient incentive to acquire the information on which prices are based.

Stiglitz is also one of the founders of modern development economics. He has shown that asymmetric information and economic incentives are not merely academic abstractions, but highly concrete phenomena with far-reaching explanatory value in the analysis of institutions and market conditions in developing economies. One of his first studies of information problems dealt with sharecropping, an ancient, though still common, form of contracting.

A sharecropping contract stipulates that the harvest should be divided between a landowner and his tenant in fixed shares (usually half each). Since the landowner is usually richer than the tenant, it would seem advantageous to both parties to let the landowner bear the entire risk. But such a contract would not give the tenant strong enough incentives to cultivate the land efficiently. Considering the landowner's inferior information about harvest conditions and the tenant's work effort, sharecropping is in fact the optimal solution for both parties.

Joseph Stiglitz's many contributions have transformed the way economists think about the working of markets. Together with the fundamental contributions by George Akerlof and Michael Spence, they make up the core of the modern economics of information.
The Laureates

George Akerlof
Economics Department
University of California
549 Evans Hall #3880
Berkeley, CA 94720-3880
USA
http://emlab.berkeley.edu/users/akerlof/index.html

Michael Spence
Stanford Business School
518 Memorial Way
Stanford University
Stanford, CA 94305-5015
USA
http://gobi.stanford.edu/facultybios/bio.asp?ID=156

Joseph Stiglitz
Economics Department
Columbia University
1022 International Affairs Building
420 West 118th Street
New York, NY 10027
USA

Author: V宝宝    Time: 2005-12-9 03:57
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel

KUNGL. VETENSKAPSAKADEMIEN - THE ROYAL SWEDISH ACADEMY OF SCIENCES

October 11, 2000

The Royal Swedish Academy of Sciences has decided that the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 2000, will be shared between

James J. Heckman
University of Chicago, USA, and

Daniel L. McFadden
University of California, Berkeley, USA.

In the field of microeconometrics, each of the laureates has developed theory and methods that are widely used in the statistical analysis of individual and household behavior, within economics as well as other social sciences.

Citation of the Academy:
"to James Heckman for his development of theory and methods for analyzing selective samples and to Daniel McFadden for his development of theory and methods for analyzing discrete choice."
Microeconometrics - on the boundary between economics and statistics - is a methodology for studying micro data, i.e., economic information about large groups of individuals, households, or firms. Greater availability of micro data and increasingly powerful computers have enabled empirical studies of many new issues. For example, what determines whether an individual decides to work and, if so, how many hours? How do economic incentives affect choices of education, occupation, and place of residence? What are the effects of different educational programs on income and employment? James Heckman and Daniel McFadden have resolved fundamental problems that arise in the statistical analysis of micro data. The methods they have developed have solid foundations in economic theory, but have evolved in close interplay with applied research on important social problems. They are now standard tools, not only among economists but also among other social scientists.

Available micro data often entail selective samples. Data on wages, for instance, cannot be sampled randomly if only individuals with certain characteristics - unobservable to the researcher - choose to work or engage in education. If such selection is not taken into account, statistical estimation of economic relationships yields biased results. Heckman has developed statistical methods of handling selective samples in an appropriate way. He has also proposed tools for solving closely related problems with individual differences unobserved by the researcher; such problems are common, e.g. when evaluating social programs or estimating how the duration of unemployment affects chances of getting a job. Heckman is also a leader of applied research in these areas.

Micro data often reflect discrete choice. For instance, data regarding individuals' occupation or place of residence reflect choices they have made among a limited number of alternatives. Prior to McFadden's contributions, empirical studies of such choices lacked a foundation in economic theory. Evolving from a new theory of discrete choice, the statistical methods developed by McFadden have transformed empirical research. His methods are readily applicable. For example, they prevail in models of transports and are used to evaluate changes in communication systems. Examples of McFadden's extensive applications of his own methods include the design of the San Francisco BART system, as well as investments in phone service and housing for the elderly.
***********************************************************************************

James J. Heckman (US citizen), 56, was born in Chicago, IL in 1944. Since 1995 he has been the Henry Schultz Distinguished Service Professor of Economics at the University of Chicago.

Daniel L. McFadden (US citizen), 63, was born in Raleigh, NC in 1937. Since 1990 he has held the E. Morris Cox Chair in Economics at the University of California, Berkeley.

The Prize amount, SEK 9 million, will be shared equally between the Laureates.
--------------------------------------------------------------------------------
Xinhua, Stockholm, October 11 (reporter Wu Ping): The Royal Swedish Academy of Sciences announced here on the 11th that the American economists James Heckman and Daniel McFadden have won the 2000 Nobel Prize in Economics for their outstanding contributions to microeconometrics.

The Academy said that the two laureates' main contribution to microeconometrics is the theory and methods they developed in the 1970s, which are now widely used in the statistical analysis of individual and household behavior. Heckman developed the theory and methods for analyzing selective samples, and McFadden developed the theory and methods for analyzing discrete choice. They will share the prize of SEK 9 million (about USD 1 million).

The Academy explained that microeconometrics, which lies between economics and statistics, is a methodology for studying microdata. As available microdata have grown and computing power has increased, economists have been able to study many new questions empirically, such as what determines whether people work and for how many hours, how economic incentives affect choices of education, occupation and place of residence, and what effects different educational programs have on income and employment. All of these questions can be analyzed with the theory and methods developed by Heckman and McFadden.

In the Academy's view, Heckman and McFadden have solved fundamental problems that arise in the statistical analysis of microdata. The methods they developed not only rest on a solid foundation in economic theory but have also had great influence on applied research into major social problems. These methods have become "standard tools" for economists and other social scientists.

The Academy further noted that available microdata often come from selective samples; data on wages, for example, cannot be obtained by random sampling. If such selectivity is not taken into account, statistical estimates of economic relationships will be biased. Heckman developed appropriate methods for handling selective samples and proposed tools for solving closely related problems. He is also a leader in applied research in these areas.

The Academy said that microdata also reflect discrete choices; data on people's occupation or place of residence, for instance, reflect how they choose among a limited set of alternatives. Previously, empirical studies of such choices lacked a foundation in economic theory. Building on a new theory of discrete choice, McFadden developed statistical methods for analyzing such choices. These methods have transformed empirical research and found wide practical application in fields such as transportation and communications.

Heckman was born in Chicago in 1944 and is a professor of economics at the University of Chicago. McFadden was born in Raleigh in 1937 and works at the University of California.

The Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel 2000

Author: V宝宝    Time: 2005-12-9 03:58
James Heckman and Daniel McFadden have each developed theory and methods that are widely used in the statistical analysis of individual and household behavior, within economics as well as other social sciences.

Microeconometrics and Microdata

Microeconometrics is an interface between economics and statistics. It encompasses economic theory and statistical methods used to analyze microdata, i.e., economic information about individuals, households and firms. Microdata appear as cross-section data which refer to conditions at the same point in time, or as longitudinal data (panel data) which refer to the same observational units over a succession of years. During the last three decades, the field of microeconometrics has expanded rapidly due to the creation of large databases containing microdata.

Greater availability of microdata and increasingly powerful computers have opened up entirely new possibilities of empirically testing microeconomic theory. Researchers have been able to examine many new issues at the individual level. For example: what factors determine whether an individual decides to work and, if so, how many hours? How do economic incentives affect individual choices regarding education, occupation or place of residence? What are the effects of different labor-market and educational programs on an individual's income and employment?

The use of microdata has also given rise to new statistical problems, owing primarily to the limitations inherent in such (non-experimental) data. Since the researcher can only observe certain variables for particular individuals or households, a sample might not be random and thereby not representative. Even when samples are representative, some characteristics that affect individuals' behavior remain unobservable, which makes it difficult, or impossible, to explain some of the variation among individuals.

This year's laureates have each shown how one can resolve some fundamental statistical problems associated with the analysis of microdata. James Heckman's and Daniel McFadden's methodological contributions share a solid foundation in economic theory. They emerged in close interaction with applied empirical studies, where new databases served as a definitive prerequisite. The microeconometric methods developed by Heckman and McFadden are now part of the standard tool kit, not only of economists, but also of other social scientists.
James J. Heckman

James Heckman has made many significant contributions to microeconometric theory and methodology, with different kinds of selection problems as a common denominator. He developed his methodological contributions in conjunction with applied empirical research, particularly in labor economics. Heckman's analysis of selection problems in microeconometric research has had profound implications for applied research in economics as well as in other social sciences.

Selection Bias and Self-selection

Selection problems are legion in microeconometric studies. They can arise when a sample available to researchers does not randomly represent the underlying population. Selective samples may be the result of rules governing collection of data or the outcome of economic agents' own behavior. The latter situation is known as self-selection. For example, wages and working hours can only be observed in the case of individuals who have chosen to work; the earnings of university graduates can only be observed for those who have completed their university education, etc. The absence of information regarding the wage an individual would earn, had he or she chosen otherwise, creates problems in many empirical studies.

The problem of selection bias may be illustrated by the following figure, where w denotes an individual's wage and x is a factor that affects this wage, such as the individual's level of education. Each point in the figure represents individuals with the same education and wage levels in a large and representative sample of the population. The solid line shows the statistical (and true) relationship that we would estimate if we could indeed observe wages and education for all these individuals. Now assume - in accordance with economic theory - that only those individuals whose market wages exceed some threshold value (the reservation wage) choose to work. If this is the case, individuals with relatively high wages and relatively long education will be overrepresented in the sample we actually observe: the dark points in the figure. This selective sample creates a problem of selection bias in the sense that we will estimate the relation between wage and education given by the dashed line in the figure. We thus find a relationship weaker than the true one, thereby underestimating the effect of education on wages.
[Figure: the true wage-education relation (solid line) estimated on the full sample, and the flatter relation (dashed line) estimated on the selectively observed sample.]
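The figure's point can be reproduced in a few lines of simulation (the data-generating process below is a hypothetical parameterization, for illustration only): fit the wage-education line on the full sample and then on the subsample whose wage exceeds a reservation level.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
educ = rng.uniform(0, 10, n)                      # years of education
wage = 2.0 + 0.5 * educ + rng.normal(0, 2.0, n)   # true relation: slope 0.5
works = wage > 5.0                                # only wages above a reservation wage are observed

slope_full = np.polyfit(educ, wage, 1)[0]
slope_selected = np.polyfit(educ[works], wage[works], 1)[0]
print(round(slope_full, 3))      # close to the true 0.5
print(round(slope_selected, 3))  # noticeably flatter: selection bias
```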
Heckman's Contributions

Heckman's methodological breakthroughs regarding self-selection took place in the mid-1970s. They are closely related to his studies of individuals' decisions about their labor-force participation and hours worked. As we observe variations in hours of work solely among those who have chosen to work, we could - again - encounter samples tainted by self-selection. In an article on the labor supply of married women, published in 1974, Heckman devised an econometric method to handle such self-selection problems. This study is an excellent illustration of how microeconomic theory can be combined with microeconometric methods to clarify an important research topic.

In subsequent work, Heckman proposed yet another method for handling self-selection: the well-known Heckman correction (the two-stage method, Heckman's lambda or the Heckit method). This method has had a vast impact because it is so easy to apply. Suppose that a researcher - as in the example above - wants to estimate a wage relation using individual data, but only has access to wage observations for those who work. The Heckman correction takes place in two stages. First, the researcher formulates a model, based on economic theory, for the probability of working. Statistical estimation of the model yields results that can be used to predict this probability for each individual. In the second stage, the researcher corrects for self-selection by incorporating these predicted individual probabilities as an additional explanatory variable, along with education, age, etc. The wage relation can then be estimated in a statistically appropriate way.
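A minimal sketch of the two stages on simulated data may make the procedure concrete. The variable names and the data-generating process are assumptions for illustration; the probit first stage and the inverse Mills ratio ("Heckman's lambda") follow the description above.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Simulated data with correlated errors, so naive OLS on workers alone is biased.
rng = np.random.default_rng(1)
n = 5000
educ = rng.uniform(0, 10, n)
errs = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], n)
wage = 1.0 + 0.5 * educ + errs[:, 0]       # latent wage equation (true slope 0.5)
works = (0.2 * educ + errs[:, 1]) > 1.5    # participation decision

# Stage 1: probit for the probability of working, then Heckman's lambda.
Z = sm.add_constant(educ)
probit = sm.Probit(works.astype(int), Z).fit(disp=0)
index = Z @ probit.params
lam = norm.pdf(index) / norm.cdf(index)    # inverse Mills ratio

# Stage 2: OLS on the selected sample with lambda as an extra regressor.
X = np.column_stack([np.ones(works.sum()), educ[works], lam[works]])
beta = np.linalg.lstsq(X, wage[works], rcond=None)[0]
print(round(beta[1], 3))  # education coefficient, corrected for self-selection
```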
Heckman's achievements have generated a large number of empirical applications in economics as well as in other social sciences. The original method has subsequently been generalized, by Heckman and by others.

Duration Models

Duration models have a long tradition in the engineering and medical sciences. They are frequently used by social scientists, such as demographers, to study mortality, fertility and migration. Economists apply them, for instance, to examine the effects of the duration of unemployment on the probability of getting a job. A common problem in such studies is that individuals with poor labor-market prospects might be overrepresented among those who remain unemployed. Such selection bias gives rise to problems similar to those encountered in self-selected samples: when the sample of unemployed at a given point in time is affected by unobserved individual characteristics, we may obtain misleading estimates of "duration dependence" in unemployment. In collaboration with Burton Singer, Heckman has developed econometric methods for resolving such problems. Today, this methodology is widely used throughout the social sciences.
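A small simulation illustrates the selection problem this literature worries about (the two-group setup and the hazard rates are hypothetical): each individual's exit rate is constant over time, yet the aggregate hazard falls with elapsed duration because high-hazard individuals leave the sample first, mimicking negative duration dependence.

```python
import numpy as np

# Two groups with constant individual hazards, 0.9 and 0.1 (hypothetical values).
rng = np.random.default_rng(2)
durations = np.concatenate([
    rng.exponential(scale=1 / 0.9, size=50_000),  # high-hazard group
    rng.exponential(scale=1 / 0.1, size=50_000),  # low-hazard group
])

# Empirical hazard around time t: exits in (t, t+dt] among those still at risk.
dt = 0.1
for t in (0.5, 2.0, 5.0):
    at_risk = durations > t
    exits = at_risk & (durations <= t + dt)
    print(t, round(exits.sum() / at_risk.sum() / dt, 3))  # falls as t grows
```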
Evaluation of Active Labor Market Programs

Along with the spread of active labor-market policy - such as labor-market training or employment subsidies - in many countries, there is a growing need to evaluate these programs. The classical approach is to determine how participation in a specific program affects individual earnings or employment, compared to a situation where the individual did not participate. Since the same individual cannot be observed in two situations simultaneously, information about non-participation has to be used, thereby - once again - giving rise to selection problems. Heckman is the world's leading researcher on microeconometric evaluation of labor-market programs. In collaboration with various colleagues, he has extensively analyzed the properties of alternative non-experimental evaluation methods and has explored their relation to experimental methods. Heckman has also offered numerous empirical results of his own. Even though results vary a great deal across programs and participants, they are often quite pessimistic: many programs have had only small positive - and sometimes negative - effects for the participants and do not meet the criterion of social efficiency.
Daniel L. McFadden

Daniel McFadden's most significant contribution is his development of the economic theory and econometric methodology for analysis of discrete choice, i.e., choice among a finite set of decision alternatives. A recurring theme in McFadden's research is his ability to combine economic theory, statistical methods and empirical applications, where his ultimate goal has often been a desire to resolve social problems.

Discrete Choice Analysis

Microdata often reflect discrete choices. In a database, information about individuals' occupation, place of residence, or travel mode reflects the choices they have made among a limited number of alternatives. In economic theory, traditional demand analysis presupposes that individual choice be represented by a continuous variable, thereby rendering it inappropriate for studying discrete choice behavior. Prior to McFadden's prizewinning achievements, empirical studies of such choices lacked a foundation in economic theory.
McFadden's Contributions

McFadden's theory of discrete choice emanates from microeconomic theory, according to which each individual chooses a specific alternative that maximizes his utility. However, as the researcher cannot observe all the factors affecting individual choices, he perceives a random variation across individuals with the same observed characteristics. On the basis of his new theory, McFadden developed microeconometric models that can be used, for example, to predict the share of a population that will choose different alternatives.

McFadden's seminal contribution is his development of so-called conditional logit analysis in 1974. In order to describe this model, suppose that each individual in a population faces a number (say, J) of alternatives. Let X denote the characteristics associated with each alternative and Z the characteristics of the individuals that the researcher can observe in his data. In a study of the choice of travel mode, for instance, where the alternatives may be car, bus or subway, X would then include information about time and costs, while Z might cover data on age, income and education. But differences among individuals and alternatives other than X and Z, although unobservable to the researcher, also determine an individual's utility-maximizing choice. Such characteristics are represented by random "error terms". McFadden assumed that these random errors have a specific statistical distribution (termed an extreme value distribution) in the population. Under these conditions (plus some technical assumptions), he demonstrated that the probability that individual i will choose alternative j can be written as:
$$P_{ij} = \frac{e^{\beta X_j + \gamma_j Z_i}}{\sum_{k=1}^{J} e^{\beta X_k + \gamma_k Z_i}}$$

In this so-called multinomial logit model, $e$ is the base of the natural logarithm, while $\beta$ and $\gamma_j$ are (vectors of) parameters. In his database, the researcher can observe the variables X and Z, as well as the alternative the individual in fact chooses. As a result, he is able to estimate the parameters $\beta$ and $\gamma$ using well-known statistical methods. Even though logit models had been around for some time, McFadden's derivation of the model was entirely new and was immediately recognized as a fundamental breakthrough.
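Written as code, the formula above is a softmax over alternative-specific utilities. A small numeric sketch for one individual (all attribute values and parameters below are hypothetical):

```python
import numpy as np

# Conditional logit choice probabilities for one individual and J = 3
# travel modes (car, bus, subway). All numbers are hypothetical.
X = np.array([2.0, 1.0, 0.5])       # an observed attribute of each alternative, e.g. cost
Z = 1.2                             # an observed characteristic of the individual
beta = -0.8                         # taste weight on the alternative attribute
gamma = np.array([0.0, 0.3, -0.2])  # alternative-specific weights on Z (one normalized to zero)

utility = beta * X + gamma * Z
probs = np.exp(utility) / np.exp(utility).sum()
print(probs.round(3), probs.sum())  # choice probabilities; they sum to one
```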
Such models are highly useful and are routinely applied in studies of urban travel demand. They can thus be used in traffic planning to examine the effects of policy measures as well as other social and/or environmental changes. For example, these models can explain how changes in price, improved accessibility or shifts in the demographic composition of the population affect the shares of travel using alternative means of transportation. The models are also relevant in numerous other areas, such as in studies of the choice of dwelling, place of residence, and education. McFadden has applied his own methods to analyze a number of social issues, such as the demand for residential energy, telephone services and housing for the elderly.
Methodological Elaboration

Conditional logit models have the peculiar property that the relative probabilities of choosing between two alternatives, say, travel by bus or car, are independent of the price and quality of other transportation options. This property - called independence of irrelevant alternatives (IIA) - is unrealistic in certain applications. McFadden not only devised statistical tests to ascertain whether IIA is satisfied, but also introduced more general models, such as the so-called nested logit model. Here, it is assumed that individuals' choices can be ordered in a specific sequence. For instance, when studying decisions regarding place of residence and type of housing, an individual is assumed to begin by choosing the location and then the type of dwelling.
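The IIA property is easy to verify numerically for the logit model sketched above: removing (or adding) a third alternative leaves the odds between the remaining two unchanged. A quick check with hypothetical utilities:

```python
import numpy as np

def logit_probs(utility):
    """Softmax choice probabilities of a conditional logit model."""
    e = np.exp(utility)
    return e / e.sum()

u = np.array([1.0, 0.5, 0.8])        # car, bus, subway (hypothetical utilities)
with_subway = logit_probs(u)
without_subway = logit_probs(u[:2])  # drop the subway alternative

# The car/bus odds are identical in both cases: that is IIA.
print(with_subway[0] / with_subway[1])
print(without_subway[0] / without_subway[1])
```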
Even with these generalizations, the models are sensitive to the specific assumptions about the distribution of unobserved characteristics in the population. Over the last decade, McFadden has elaborated on simulation methods (the method of simulated moments) for the statistical estimation of discrete choice models that allow much more general assumptions. Increasingly powerful computers have enhanced the practical applicability of these numerical methods. As a result, individuals' discrete choices can now be portrayed with greater realism and their decisions predicted more accurately.
Other Contributions

In addition to discrete choice analysis, McFadden has made influential contributions in several other fields. In the 1960s, he devised econometric methods to assess production technologies and examine the factors behind firms' demand for capital and labor. During the 1990s, McFadden contributed to environmental economics, in particular to the literature on contingent-valuation methods for estimating the value of natural resources. A key example is his study of welfare losses due to the environmental damage along the Alaskan coast caused by the oil spill from the tanker Exxon Valdez in 1989. This study provides yet another example of McFadden's masterly skill in integrating economic theory and econometric methodology in empirical studies of important social problems.
********************************************************************************

James J. Heckman was born in Chicago, IL in 1944. After completing his undergraduate education at Colorado College, where he majored in Mathematics, he attended Princeton University for graduate studies in Economics and received his Ph.D. there in 1971. Since then, Heckman has held professorships at Columbia University and Yale University. Since 1995 he has been Henry Schultz Distinguished Service Professor of Economics at the University of Chicago.

James Heckman
Department of Economics
University of Chicago
1126 East 59th Street
Chicago, IL 60637
USA
http://lily.src.uchicago.edu/

Daniel L. McFadden was born in Raleigh, NC in 1937. He attended the University of Minnesota, where he received both his undergraduate degree, with a major in Physics, and, after postgraduate studies in Economics, his Ph.D. in 1962. McFadden has held professorships at the University of Pittsburgh, Yale University and MIT. Since 1990 he has been E. Morris Cox Professor of Economics at the University of California, Berkeley.

Daniel McFadden
Department of Economics
University of California
Berkeley, CA 94720
USA
http://emlab.berkeley.edu/users/mcfadden/index.html
Author: V宝宝    Time: 2005-12-9 03:59

Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel

KUNGL. VETENSKAPSAKADEMIEN - THE ROYAL SWEDISH ACADEMY OF SCIENCES

13 October 1999

The Royal Swedish Academy of Sciences awarded the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 1999, to

Professor Robert A. Mundell, Columbia University, New York, USA

for his analysis of monetary and fiscal policy under different exchange rate regimes and his analysis of optimum currency areas.
Economic policy, exchange rates and capital mobility

Robert Mundell has established the foundation for the theory which dominates practical policy considerations of monetary and fiscal policy in open economies. His work on monetary dynamics and optimum currency areas has inspired generations of researchers. Although dating back several decades, Mundell's contributions remain outstanding and constitute the core of teaching in international macroeconomics.

Mundell's research has had such a far-reaching and lasting impact because it combines formal - but still accessible - analysis, intuitive interpretation and results with immediate policy applications. Above all, Mundell chose his problems with uncommon - almost prophetic - accuracy in terms of predicting the future development of international monetary arrangements and capital markets. Mundell's contributions serve as a superb reminder of the significance of basic research: at a given point in time academic achievements may appear rather esoteric, yet not long afterwards they can take on great practical importance.
*************************************************************************************

How are the effects of monetary and fiscal policy related to the integration of international capital markets? How do these effects depend on whether a country fixes the value of its currency or allows it to float freely? Should a country even have a currency of its own? By posing and answering questions such as these, Robert Mundell has reshaped macroeconomic theory for open economies. His most important contributions were made in the 1960s. During the latter half of that decade, Mundell was among the intellectual leaders in the creative research environment at the University of Chicago. Many of his students from this period have become successful researchers in the same field, building on Mundell's foundational work.
Mundell's scientific contributions are original. Yet they quickly transformed research in international macroeconomics and attracted increasing attention in the practically oriented discussion of stabilization policy and exchange rate systems. A sojourn at the research department of the International Monetary Fund, 1961-1963, apparently stimulated Mundell's choice of research problems; it also gave his research additional leverage among economic policymakers.

The Effects of Stabilization Policy

In several papers published in the early 1960s - reprinted in his book International Economics (1968) - Robert Mundell developed his analysis of monetary and fiscal policy, so-called stabilization policy, in open economies.

The Mundell-Fleming Model

A pioneering article (1963) addresses the short-run effects of monetary and fiscal policy in an open economy. The analysis is simple, but the conclusions are numerous, robust and clear. Mundell introduced foreign trade and capital movements into the so-called IS-LM model of a closed economy, initially developed by the 1972 economics laureate Sir John Hicks. This allowed him to show that the effects of stabilization policy hinge on the degree of international capital mobility. In particular, he demonstrated the far-reaching importance of the exchange rate regime: under a floating exchange rate, monetary policy becomes powerful and fiscal policy powerless, whereas the opposite is true under a fixed exchange rate.
In the interesting special case with high capital mobility, foreign and domestic interest rates coincide (given that the exchange rate is expected to be constant). Under a fixed exchange rate, the central bank must intervene on the currency market in order to satisfy the public's demand for foreign currency at this exchange rate. As a result, the central bank loses control of the money supply, which then passively adjusts to the demand for money (domestic liquidity). Attempts to implement an independent national monetary policy by means of so-called open market operations are futile, because neither the interest rate nor the exchange rate can be affected. However, increased government expenditures, or other fiscal policy measures, can raise national income and the level of domestic activity without being impeded by rising interest rates or a stronger exchange rate.

A floating exchange rate is determined by the market, since the central bank refrains from currency intervention. Fiscal policy now becomes powerless. Under an unchanged monetary policy, increased government expenditures give rise to a greater demand for money and a tendency towards higher interest rates. Capital inflows strengthen the exchange rate to the point where lower net exports eliminate the entire expansive effect of higher government expenditures. Under floating exchange rates, however, monetary policy becomes a powerful tool for influencing economic activity. Expansion of the money supply tends to promote lower interest rates, resulting in capital outflows and a weaker exchange rate, which in turn expand the economy through increased net exports.
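These polar cases can be captured in a few lines of algebra. The sketch below is a stylized linear IS-LM setup with perfect capital mobility, so the domestic interest rate is pinned at the world rate r*; all equations and parameter values are illustrative simplifications, not taken from Mundell's papers:

```python
# Stylized Mundell-Fleming sketch with perfect capital mobility (r = r_star).
# IS: Y = A + G - b * r + x * e   (higher e = weaker currency = more net exports)
# LM: M = k * Y - h * r
A, b, x, k, h, r_star = 10.0, 1.0, 2.0, 1.0, 1.0, 2.0

def fixed_exchange_rate(G, e=1.0):
    """Money supply is endogenous: output follows the IS curve, M accommodates."""
    Y = A + G - b * r_star + x * e
    M = k * Y - h * r_star
    return Y, M

def floating_exchange_rate(G, M=8.0):
    """Money supply is given: output follows the LM curve, e adjusts on the IS curve."""
    Y = (M + h * r_star) / k
    e = (Y - A - G + b * r_star) / x
    return Y, e

print(fixed_exchange_rate(2.0), fixed_exchange_rate(4.0))        # fiscal expansion raises Y
print(floating_exchange_rate(2.0), floating_exchange_rate(4.0))  # Y unchanged; only e adjusts
```

Raising G moves output under the fixed rate but only moves the exchange rate under the float; running the same comparison for a change in M reverses the pattern, which is exactly the asymmetry described in the two paragraphs above.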
Floating exchange rates and high capital mobility accurately describe the present monetary regime in many countries. But in the early 1960s, an analysis of their consequences must have seemed like an academic curiosity. Almost all countries were linked together by fixed exchange rates within the so-called Bretton Woods system. International capital movements were highly curtailed, in particular by extensive capital and exchange controls. During the 1950s, however, Mundell's own country - Canada - had allowed its currency to float against the US dollar and had begun to ease restrictions. His far-sighted analysis became increasingly relevant over the next ten years, as international capital markets opened up and the Bretton Woods system broke down.

Marcus Fleming (who died in 1976) was Deputy Director of the research department of the International Monetary Fund for many years; he was already a member of this department during the period of Mundell's affiliation. At approximately the same time as Mundell, Fleming presented similar research on stabilization policy in open economies. As a result, today's textbooks refer to the Mundell-Fleming model. In terms of depth, range and analytical power, however, Mundell's contribution predominates.

The original Mundell-Fleming model undoubtedly had its limitations. For instance, as in all macroeconomic analysis at the time, it makes highly simplified assumptions about expectations in financial markets and assumes price rigidity in the short run. These shortcomings have been remedied by later researchers, who have shown that gradual price adjustment and rational expectations can be incorporated into the analysis without significantly changing the results.
<br>Monetary Dynamics
<br>Unlike most of his colleagues during this period, Mundell did not stop at short-run analysis. Monetary dynamics is a key theme in several of his significant articles. He emphasized differences in the speed of adjustment on goods and asset markets (later called the principle of effective market classification). These differences were subsequently highlighted by his own students and others to show how the exchange rate can temporarily "overshoot" in the wake of certain disturbances.
<br>
<br>An important problem concerned deficits and surpluses in the balance of payments. In the postwar period, research on these imbalances had been based on static models and emphasized real economic factors and flows in foreign trade. Inspired by David Hume's classic mechanism for international price adjustment, which focused on monetary factors and stock variables, Mundell formulated dynamic models to describe how prolonged imbalances could arise and be eliminated. He demonstrated that an economy will adjust gradually over time as the money holdings of the private sector (and thereby its wealth) change in response to surpluses or deficits. Under fixed exchange rates, for example, when capital movements are sluggish, an expansive monetary policy will reduce interest rates and raise domestic demand. The subsequent balance of payments deficit will generate monetary outflows, which in turn lower demand until the balance of payments returns to equilibrium. This approach, which was adopted by a number of researchers, became known as the monetary approach to the balance of payments. For a long time it was regarded as a kind of long-run benchmark for analyzing stabilization policy in open economies. Insights from this analysis have frequently been applied in practical economic policymaking - particularly by IMF economists.
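<br>The adjustment mechanism can be caricatured in a few lines. The following toy simulation, with assumed numbers throughout, shows how an excess money stock under a fixed exchange rate drains away through balance-of-payments deficits until monetary equilibrium is restored.
```python
# Toy simulation of the monetary approach to the balance of payments under a
# fixed exchange rate (illustrative assumptions, not the laureate's own model).
# Money demand is fixed at m_star; any excess money supply spills over into a
# balance-of-payments deficit, and the resulting reserve outflow shrinks the
# money stock until equilibrium is restored.

m_star = 100.0      # long-run money demand (assumed constant)
adjust = 0.3        # assumed speed of adjustment per period
M = 120.0           # money stock just after an expansionary open-market purchase

for t in range(12):
    bop = adjust * (m_star - M)   # BoP surplus (+) or deficit (-) this period
    M += bop                      # reserve flows change the money stock
    print(f"t={t:2d}  money stock={M:7.2f}  balance of payments={bop:7.2f}")
# The deficit shrinks geometrically and the money stock returns to m_star:
# the expansion is self-reversing, which is the long-run neutrality result
# described in the text.
```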
<br>
<br>Prior to another of Mundell's contributions, the theory of stabilization policy had not only been static; it had also assumed that all economic policy in a country is coordinated and assembled in a single hand. By contrast, Mundell used a simple dynamic model to examine how each of the two instruments, monetary and fiscal policy, should be directed towards either of two objectives, external and internal balance, in order to bring the economy closer to these objectives over time. This implies that each of two different authorities - the government and the central bank - is given responsibility for its own stabilization policy instrument. Mundell's conclusion was straightforward: to prevent the economy from becoming unstable, the pairing has to accord with the relative efficiency of the instruments. In his model, monetary policy is linked to external balance and fiscal policy to internal balance. Mundell's primary concern was not decentralization itself. But by spelling out the conditions for decentralization, he anticipated the idea, which gained general acceptance long afterwards, that the central bank should be given independent responsibility for price stability.
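<br>A small linear sketch can show why the pairing matters. In the assumed model below, each instrument is adjusted to close one target gap; the assignment that respects each instrument's comparative advantage is stable, while the reversed one is not. The matrix values are illustrative inventions, not Mundell's own.
```python
# Sketch of Mundell's assignment principle in a toy linear policy model
# (illustrative numbers; not the model in the press release).
import numpy as np

# Effect of the instruments (monetary, fiscal) on the targets (external,
# internal): rows = targets, columns = instruments. Monetary policy is
# assumed to work relatively more on external balance, fiscal on internal.
A = np.array([[2.0, 1.0],    # external balance
              [1.0, 2.0]])   # internal balance

def is_stable(assignment):
    """assignment[i] = index of the target that instrument i reacts to.
    Each instrument adjusts to close 'its' target gap:
    du/dt = -S A u, stable iff all eigenvalues have negative real part."""
    S = np.zeros((2, 2))
    for instrument, target in enumerate(assignment):
        S[instrument, target] = 1.0
    eigenvalues = np.linalg.eigvals(-S @ A)
    return bool(np.all(eigenvalues.real < 0))

print(is_stable([0, 1]))  # monetary->external, fiscal->internal: True (stable)
print(is_stable([1, 0]))  # the reversed pairing: False (unstable)
```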
<br>
<br>Mundell's contributions on dynamics proved to be a watershed for research in international macroeconomics. They introduced a meaningful dynamic approach, based on a clear-cut distinction between stock and flow variables, as well as an analysis of their interaction during the adjustment of an economy to a stable long-run situation. Mundell's work also initiated the necessary rapprochement between Keynesian short-run analysis and classical long-run analysis. Subsequent researchers have extended Mundell's findings: the models have been enlarged to incorporate forward-looking decisions of households and firms, additional types of financial assets, and richer dynamic adjustment of prices and the current account. Despite these modifications, most of Mundell's results stand up.
<br>
<br>The short-run and long-run analyses carried out by Mundell arrive at the same fundamental conclusion regarding the conditions for monetary policy. With (i) free capital mobility, monetary policy can be oriented towards either (ii) an external objective - such as the exchange rate - or (iii) an internal (domestic) objective - such as the price level - but not both at the same time. This incompatible trinity has become self-evident for academic economists; today, this insight is also shared by the majority of participants in the practical debate.
<br>
<br>
<br>Optimum Currency Areas
<br>
<br>As already indicated, fixed exchange rates predominated in the early 1960s. A few researchers did in fact discuss the advantages and disadvantages of a floating exchange rate. But a national currency was considered a must. The question Mundell posed in his article on "optimum currency areas" (1961) therefore seemed radical: when is it advantageous for a number of regions to relinquish their monetary sovereignty in favor of a common currency?
<br>
<br>Mundell's article briefly mentions the advantages of a common currency, such as lower transaction costs in trade and less uncertainty about relative prices. The disadvantages are described in greater detail. The major drawback is the difficulty of maintaining employment when changes in demand or other "asymmetric shocks" require a reduction in real wages in a particular region. Mundell emphasized the importance of high labor mobility in order to offset such disturbances. He characterized an optimum currency area as a set of regions among which the propensity to migrate is high enough to ensure full employment when one of the regions faces an asymmetric shock. Other researchers extended the theory and identified additional criteria, such as capital mobility, regional specialization and a common tax and transfer system. The way Mundell originally formulated the problem has nevertheless continued to influence generations of economists.
<br>
<br>Mundell's considerations of several decades ago seem highly relevant today. Due to increasingly high capital mobility in the world economy, regimes with a temporarily fixed, but adjustable, exchange rate have become more fragile; such regimes are also being called into question. Many observers view a currency union or a floating exchange rate - the two cases Mundell's article dealt with - as the most relevant alternatives. Needless to say, Mundell's analysis has also attracted attention in connection with the common European currency. Researchers who have examined the economic advantages and disadvantages of EMU have adopted the idea of an optimum currency area as an obvious starting point. Indeed, one of the key issues in this context is labor mobility in response to asymmetric shocks.
<br>
<br>
<br>Other Contributions
<br>
<br>Mundell has made other contributions to macroeconomic theory. He has shown, for example, that higher inflation can induce investors to lower their cash balances in favor of increased real capital formation. As a result, even expected inflation might have a real economic effect - which has come to be known as the Mundell-Tobin effect. Mundell has also made lasting contributions to international trade theory. He has clarified how the international mobility of labor and capital tends to equalize commodity prices among countries, even if foreign trade is limited by trade barriers. This may be regarded as the mirror image of the well-known Heckscher-Ohlin-Samuelson result that free trade of goods tends to bring about equalization of the rewards to labor and capital among countries, even if international capital movements and migration are limited. These results provide a clear prediction: trade barriers stimulate international mobility of labor and capital, whereas barriers to migration and capital movements stimulate commodity trade.
<br>
<br>
<br>
<br>
<br>*************************************************************************************
<br>
<br>Further Reading
<br>
<br>
<br>Additional background information
<br>
<br>
<br>Mundell, R.A. (1961), "A Theory of Optimum Currency Areas", American Economic Review 51: 657-665.
<br>
<br>
<br>Mundell, R.A. (1963), "Capital Mobility and Stabilization Policy under Fixed and Flexible Exchange Rates", Canadian Journal of Economics and Political Science 29: 475-485.
<br>
<br>
<br>Mundell, R.A. (1968), International Economics (New York: Macmillan).
<br>
<br>
<br>
<br>************************************************************************************
<br>
<br>
<br>Robert A. Mundell was born in Canada in 1932. After completing his undergraduate education at the University of British Columbia, he began his postgraduate studies at the University of Washington and continued them at M.I.T. and the London School of Economics. Mundell received his Ph.D. from M.I.T. in 1956 with a thesis on international capital movements. After holding several professorships, he has been affiliated with Columbia University in New York since 1974.
<br>
<br>Professor Robert A. Mundell
<br>Economics Department
<br>Columbia University
<br>1022 International Affairs Building
<br>420 West 118th Street
<br>New York, NY 10027
<br>USA
<br>
<br>
<br>The amount of the Prize Award is SEK 7,900,000.
<br>Author: V宝宝 Time: 2005-12-9 03:59
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN, THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>14 October 1998
<br>
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the 1998 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, to
<br>
<br>
<br>
<br>Professor Amartya Sen, Trinity College, Cambridge, U.K. (citizen of India)
<br>
<br>for his contributions to welfare economics.
<br>
<br>
<br>Social Choice, Welfare Distributions, and Poverty
<br>
<br>Amartya Sen has made several key contributions to the research on fundamental problems in welfare economics. His contributions range from the axiomatic theory of social choice, through definitions of welfare and poverty indexes, to empirical studies of famine. They are tied closely together by a general interest in distributional issues and a particular interest in the most impoverished members of society. Sen has clarified the conditions which permit aggregation of individual values into collective decisions, and the conditions which permit rules for collective decision making that are consistent with a sphere of rights for the individual. By analyzing the available information about different individuals' welfare when collective decisions are made, he has improved the theoretical foundation for comparing different distributions of society's welfare and defined new, and more satisfactory, indexes of poverty. In empirical studies, Sen's applications of his theoretical approach have enhanced our understanding of the economic mechanisms underlying famines.
<br>
<br>************************************************************************************
<br>
<br>Can the values which individual members of society attach to different alternatives be aggregated into values for society as a whole, in a way that is both fair and theoretically sound? Is the majority principle a workable decision rule? How should income inequality be measured? When and how can we compare the distribution of welfare in different societies? How should we best determine whether poverty is on the decline? What are the factors that trigger famines? By answering questions such as these, Amartya Sen has made a number of noteworthy contributions to central fields of economic science and opened up new fields of study for subsequent generations of researchers. By combining tools from economics and philosophy, he has restored an ethical dimension to the discussion of vital economic problems.
<br>
<br>Individual Values and Collective Decisions
<br>
<br>When there is general agreement, the choices made by society are uncontroversial. When opinions differ, the problem is to find methods for bringing together different opinions in decisions which concern everyone. The theory of social choice is preoccupied precisely with this link between individual values and collective choice. Fundamental questions are whether - and, if so, in what way - preferences for society as a whole can be consistently derived from the preferences of its members. The answers are crucial for the feasibility of ranking, or otherwise evaluating, different social states and thereby constructing meaningful measures of social welfare.
<br>
<br>Majority rule
<br>Majority voting is perhaps the most common rule for making collective decisions. A long time ago, this rule was found to have serious deficiencies, in addition to the fact that it may allow a majority to suppress a minority. In some situations it may pay off to vote strategically (i.e., not for the preferred alternative), or to manipulate the order in which different alternatives are voted upon. Voting between pairs of alternatives sometimes fails to produce a clear result in a group. A majority may thus prefer alternative a to alternative b, whereas a (second) majority prefers b to c; meanwhile, a (third) majority prefers c to a. In the presence of this kind of "intransitivity", the decision rule cannot select an alternative that cannot be beaten by some other alternative in a majority vote. In collaboration with Prasanta Pattanaik, Amartya Sen has specified the general conditions that eliminate intransitivities of majority rule.
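<br>The cycle is easy to reproduce. Below is a three-voter example (own construction) in which pairwise majority voting is intransitive.
```python
# A minimal Condorcet-cycle example: three voters, three alternatives.
# Each ranking lists alternatives from most to least preferred.
from itertools import combinations

rankings = [["a", "b", "c"],   # voter 1
            ["b", "c", "a"],   # voter 2
            ["c", "a", "b"]]   # voter 3

def majority_prefers(x, y):
    """True if a majority ranks x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in combinations("abc", 2):
    winner = x if majority_prefers(x, y) else y
    print(f"majority between {x} and {y}: {winner}")
# a beats b, b beats c, yet c beats a - no alternative is unbeaten,
# which is exactly the intransitivity described in the text.
```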
<br>
<br>In the early 1950s, such problems associated with rules for collective choice motivated economics laureate Kenneth Arrow (1972) to examine possible rules for aggregating individual preferences (values, votes), where majority rule was only one of many alternatives. His surprising but fundamental result was that no aggregation (decision) rule exists that fulfills five conditions (axioms), each of which appears very reasonable on its own.
<br>
<br>This so-called impossibility theorem long seemed an insurmountable obstacle to progress in the normative branch of economics. How could individual preferences be aggregated and different social states evaluated in a theoretically satisfactory way? Sen's contributions from the mid-1960s onwards were instrumental in alleviating this pessimism. His work not only enriched the principles of social choice theory; it also opened up new and important fields of study. Sen's monograph Collective Choice and Social Welfare from 1970 was particularly influential and inspired many researchers to renew their interest in basic welfare issues. Its style, interspersing formally and philosophically oriented chapters, gave the economic analysis of normative problems a new dimension. In the book, as well as in many separate articles, Sen treated problems such as majority rule, individual rights, and the availability of information about individual welfare.
<br>
<br>Individual rights
<br>A self-evident prerequisite for a collective decision-making rule is that it should be "non-dictatorial"; that is, it should not reflect the values of any single individual. A minimal requirement for protecting individual rights is that the rule should respect the individual preferences of at least some people in at least some dimension, for instance regarding their personal sphere. Sen pointed to a fundamental dilemma by showing that no collective decision rule can fulfill such a minimal requirement on individual rights and the other axioms in Arrow's impossibility theorem. This finding initiated an extensive scientific discussion about the extent to which a collective decision rule can be made consistent with a sphere of individual rights.
<br>
<br>Information about the welfare of individuals
<br>Traditionally, the theory of social choice had only assumed that every individual can rank different alternatives, without assuming anything about interpersonal comparability. This assumption certainly avoided the difficult question of whether the utility individuals attach to different alternatives can really be compared. Unfortunately, it also precluded saying anything worthwhile about inequality. Sen initiated an entirely new field in the theory of social choice, by showing how different assumptions regarding interpersonal comparability affect the possibility of finding a consistent, non-dictatorial rule for collective decisions. He also demonstrated the implicit assumptions made when applying principles proposed by moral philosophy to evaluate different alternatives for society. The utilitarian principle, for instance, appeals to the sum of all individuals' utility when evaluating a specific social state; this assumes that differences in the utility of alternative social states can be compared across individuals. The principle formulated by the American philosopher John Rawls - that the social state should be evaluated only with reference to the individual who is worst off - assumes that the utility level of each individual can be compared to the utility of every other individual. Later developments in social choice rely, to a large extent, on Sen's analysis of the information about, and interpersonal comparability of, individual utilities.
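<br>A tiny example (own construction) shows how the two principles can rank the same pair of social states differently, and why each presupposes a different kind of interpersonal comparability.
```python
# Tiny illustration (own example): the utilitarian sum and the Rawlsian
# maximin can rank the same two social states differently, and each rule
# presupposes a different kind of interpersonal comparability of utility.

state_x = [10, 10, 10]   # utility levels of three individuals in state x
state_y = [4, 15, 20]    # utility levels in state y

utilitarian = lambda u: sum(u)   # requires comparable utility *differences*
rawlsian    = lambda u: min(u)   # requires comparable utility *levels*

print(utilitarian(state_x), utilitarian(state_y))  # 30 vs 39: y preferred
print(rawlsian(state_x), rawlsian(state_y))        # 10 vs 4:  x preferred
```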
<br>
<br>Indexes of Welfare and Poverty
<br>
<br>In order to compare distributions of welfare in different countries, or to study changes in the distribution within a given country, some kind of index is required that measures differences in welfare or income. The construction of such indexes is an important application of the theory of social choice, in the sense that inequality indexes are closely linked to welfare functions representing the values of society. Serge Kolm, Anthony Atkinson and - somewhat later - Amartya Sen were the first to derive substantial results in this area. Around 1970, they clarified the relation between the so-called Lorenz curve (which describes the income distribution), the so-called Gini coefficient (which measures the degree of income inequality), and society's ordering of different income distributions. Sen later made valuable contributions by defining poverty indexes and other welfare indicators.
<br>
<br>Poverty indexes
<br>A common measure of poverty in a society is the share of the population, H, with incomes below a certain predetermined poverty line. But the theoretical foundation for this kind of measure was unclear. It also ignored the degree of poverty among the poor; even a significant boost in the income of the poorest groups in society does not affect H as long as their incomes do not cross the poverty line. To remedy these deficiencies, Sen postulated five reasonable axioms from which he derived a poverty index: P = H · [I + (1 - I) · G]. Here, G is the Gini coefficient, and I is a measure (between 0 and 1) of the income gap, both computed only for the individuals below the poverty line. Relying on his earlier analysis of information about the welfare of single individuals, Sen clarified when the index can and should be applied; comparisons can, for example, be made even when data are problematic, which is often the case in the poor countries where poverty indexes have their most important application. Sen's poverty index has subsequently been applied extensively by others, and researchers proposing alternative indexes have retained three of his axioms.
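<br>For concreteness, here is a small implementation sketch of the index, using the usual income-gap reading of I; the income data are invented.
```python
# Sketch of Sen's poverty index P = H * (I + (1 - I) * G), computed from a
# list of incomes (own illustrative implementation and data).

def gini(incomes):
    """Gini coefficient via the mean absolute difference."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    mad = sum(abs(a - b) for a in xs for b in xs) / (n * n)
    return mad / (2 * mean)

def sen_index(incomes, z):
    """z is the poverty line; H, I and G are computed among the poor only."""
    poor = [x for x in incomes if x < z]
    if not poor:
        return 0.0
    H = len(poor) / len(incomes)                     # head-count ratio
    I = sum((z - x) / z for x in poor) / len(poor)   # average income gap
    G = gini(poor)                                   # inequality among the poor
    return H * (I + (1 - I) * G)

incomes = [3, 5, 6, 12, 20, 35, 50, 80]
print(sen_index(incomes, z=10))
# Unlike H alone, the index rises when the poorest get poorer, even if
# nobody crosses the poverty line.
```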
<br>
<br>Welfare indicators
<br>A problem when comparing the welfare of different societies is that many commonly used indicators, such as income per capita, only take average conditions into account. Sen has developed alternatives, which also encompass the income distribution. A specific alternative - which, like the poverty index, he derived from a number of axioms - is to use the measure y · (1 - G), where y is income per capita and G is the Gini coefficient.
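<br>A short follow-up, reusing the gini() helper from the poverty-index sketch above (again with invented data), shows how the indicator penalizes inequality at a given mean income.
```python
# The Gini-adjusted welfare indicator y * (1 - G), reusing the gini()
# helper defined above (own illustrative data).
incomes_a = [20, 20, 20, 20]     # equal distribution
incomes_b = [5, 10, 25, 40]      # same mean, unequal distribution

for incomes in (incomes_a, incomes_b):
    y = sum(incomes) / len(incomes)
    print(y * (1 - gini(incomes)))
# Both societies have y = 20, but the unequal one scores lower (12.5 vs 20).
```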
<br>
<br>Sen has emphasized that what creates welfare is not goods as such, but the activity for which they are acquired. According to this view, income is significant because of the opportunities it creates. But the actual opportunities - or capabilities, as Sen calls them - also depend on a number of other factors, such as health; these factors should also be considered when measuring welfare. Alternative welfare indicators, such as the UN's Human Development Index, are constructed precisely in this spirit.
<br>
<br>Amartya Sen has pointed out that all well-founded ethical principles presuppose equality among individuals in some respect. But as the ability to exploit equal opportunity varies across individuals, the distribution problem can never be fully solved; equality in some dimension necessarily implies inequality in others. In which dimension we advocate equality and in which dimensions we have to accept inequality obviously depends on how we evaluate the different dimensions of welfare. In analogy with his approach to welfare measurement, Sen maintains that capabilities of individuals constitute the principal dimension in which we should strive for equality. At the same time, he observes a problem with this ethical principle, namely that individuals make decisions which determine their capabilities at a later stage.
<br>
<br>Welfare of the Poorest
<br>
<br>In his very first articles Sen analyzed the choice of production technology in developing countries. Indeed, almost all of Sen's works deal with development economics, as they are often devoted to the welfare of the poorest people in society. He has also studied actual famines, in a way quite in line with his theoretical approach to welfare measurement.
<br>
<br>Analysis of famine
<br>Sen's best-known work in this area is his book from 1981, Poverty and Famines: An Essay on Entitlement and Deprivation. Here, he challenges the common view that a shortage of food is the most important (sometimes the only) explanation for famine. On the basis of a careful study of a number of such catastrophes in India, Bangladesh, and Saharan countries from the 1940s onwards, he found other explanatory factors. He argues that several observed phenomena cannot in fact be explained by a shortage of food alone, e.g. that famines have occurred even when the supply of food was not significantly lower than during previous years (without famines), or that famine-stricken areas have sometimes exported food.
<br>
<br>Sen shows that a profound understanding of famine requires a thorough analysis of how various social and economic factors influence different groups in society and determine their actual opportunities. For example, part of his explanation for the Bangladesh famine of 1974 is that flooding throughout the country that year significantly raised food prices, while work opportunities for agricultural workers declined drastically as one of the crops could not be harvested. Due to these factors, the real incomes of agricultural workers declined so much that this group was disproportionately stricken by starvation.
<br>
<br>Later works by Sen (summarized in a book from 1989 with Jean Drèze) discuss - in a similar spirit - how to prevent famine, or how to limit the effects of famine once it has occurred. Even though a few critics have questioned the validity of some empirical results in Poverty and Famines, the book is undoubtedly a key contribution to development economics. With its emphasis on distributional issues and poverty, the book accords well with the common theme in Amartya Sen's research.
<br>
<br>
<br>
<br>************************************************************************************
<br>
<br>
<br>Further Reading
<br>
<br>Additional background material
<br>Sen, A. K., 1970, Collective Choice and Social Welfare, San Francisco: Holden-Day; also London: Oliver and Boyd (reprinted Amsterdam: North-Holland).
<br>Sen, A. K., 1973, On Economic Inequality, Oxford: Clarendon Press.
<br>Sen, A. K., 1981, Poverty and Famines: An Essay on Entitlement and Deprivation, Oxford: Clarendon Press.
<br>
<br>************************************************************************************
<br>Amartya Sen was born in Bengal in 1933 (citizen of India). He received his doctorate from the University of Cambridge, U.K. in 1959 and has been professor in India, the U.K. and the U.S. In 1998 he left his professorships in economics and philosophy at Harvard University to become Master of Trinity College, Cambridge U.K.
<br>
<br>Professor Amartya Sen
<br>Trinity College
<br>Cambridge, CB2 1TQ, U.K.
<br>
<br>Author: V宝宝 Time: 2005-12-9 04:00
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN, THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>14 October 1997
<br>
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 1997, to
<br>
<br>Professor Robert C. Merton, Harvard University, Cambridge, USA and
<br>Professor Myron S. Scholes, Stanford University, Stanford, USA
<br>
<br>for a new method to determine the value of derivatives.
<br>
<br>Robert C. Merton and Myron S. Scholes have, in collaboration with the late Fischer Black, developed a pioneering formula for the valuation of stock options. Their methodology has paved the way for economic valuations in many areas. It has also generated new types of financial instruments and facilitated more efficient risk management in society.
<br>
<br>************************************************************************************
<br>
<br>In a modern market economy it is essential that firms and households are able to select an appropriate level of risk in their transactions. This takes place on financial markets which redistribute risks towards those agents who are willing and able to assume them. Markets for options and other so-called derivatives are important in the sense that agents who anticipate future revenues or payments can ensure a profit above a certain level or insure themselves against a loss above a certain level. (Due to their design, options allow for hedging against one-sided risk - options give the right, but not the obligation, to buy or sell a certain security in the future at a prespecified price.) A prerequisite for efficient management of risk, however, is that such instruments are correctly valued, or priced. A new method to determine the value of derivatives stands out among the foremost contributions to economic sciences over the last 25 years.
<br>
<br>This year's laureates, Robert Merton and Myron Scholes, developed this method in close collaboration with Fischer Black, who died in his mid-fifties in 1995. These three scholars worked on the same problem: option valuation. In 1973, Black and Scholes published what has come to be known as the Black-Scholes formula. Thousands of traders and investors now use this formula every day to value stock options in markets throughout the world. Robert Merton devised another method to derive the formula that turned out to have very wide applicability; he also generalized the formula in many directions.
<br>
<br>Black, Merton and Scholes thus laid the foundation for the rapid growth of markets for derivatives in the last ten years. Their method has more general applicability, however, and has created new areas of research - inside as well as outside of financial economics. A similar method may be used to value insurance contracts and guarantees, or the flexibility of physical investment projects.
<br>
<br>
<br>The problem
<br>
<br>Attempts to value derivatives have a long history. As far back as 1900, the French mathematician Louis Bachelier reported one of the earliest attempts in his doctoral dissertation, although the formula he derived was flawed in several ways. Subsequent researchers handled the movements of stock prices and interest rates more successfully. But all of these attempts suffered from the same fundamental shortcoming: risk premia were not dealt with in a correct way.
<br>
<br>The value of an option to buy or sell a share depends on the uncertain development of the share price to the date of maturity. It is therefore natural to suppose - as did earlier researchers - that valuation of an option requires taking a stance on which risk premium to use, in the same way as one has to determine which risk premium to use when calculating present values in the evaluation of a future physical investment project with uncertain returns. Assigning a risk premium is difficult, however, in that the correct risk premium depends on the investor's attitude towards risk. Whereas the attitude towards risk can be strictly defined in theory, it is hard or impossible to observe in reality.
<br>
<br>The method
<br>
<br>Black, Merton and Scholes made a vital contribution by showing that it is in fact not necessary to use any risk premium when valuing an option. This does not mean that the risk premium disappears; instead it is already included in the stock price.
<br>
<br>The idea behind their valuation method can be illustrated as follows:
<br>Consider a so-called European call option that gives the right to buy one share in a certain firm at a strike price of $50, three months from now. The value of this option obviously depends not only on the strike price, but also on today's stock price: the higher the stock price today, the greater the probability that it will exceed $50 in three months, in which case it pays to exercise the option. As a simple example, let us assume that if the stock price goes up by $2 today, the option goes up by $1. Assume also that an investor owns a number of shares in the firm in question and wants to lower the risk of changes in the stock price. He can actually eliminate that risk completely by selling (writing) two options for every share that he owns. Since the portfolio thus created is risk-free, the capital he has invested must pay exactly the same return as the risk-free market interest rate on a three-month treasury bill. If this were not the case, arbitrage trading would begin to eliminate the possibility of making a risk-free profit. As the time to maturity approaches, however, and the stock price changes, the relation between the option price and the share price also changes. Therefore, to maintain a risk-free option-stock portfolio, the investor has to make gradual changes in its composition.
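<br>For reference, here is a compact sketch of the resulting valuation formula and of the hedge ratio in the example above; the standard textbook form of the Black-Scholes formula is used, and the input values are illustrative assumptions.
```python
# Sketch of the Black-Scholes call formula (standard textbook form; the
# parameter values below are illustrative assumptions).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """European call value: S = spot, K = strike, T = years to maturity,
    r = risk-free rate, sigma = volatility. Note that no risk premium and
    no expected return on the stock appear anywhere in the formula."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_delta(S, K, T, r, sigma):
    """Sensitivity of the call price to the stock price. A delta of 0.5
    corresponds to the '$2 up in the stock, $1 up in the option' example:
    writing two calls then hedges one share."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

print(bs_call(S=50.0, K=50.0, T=0.25, r=0.05, sigma=0.2))
print(bs_delta(S=50.0, K=50.0, T=0.25, r=0.05, sigma=0.2))  # a bit above 0.5
```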
<br>
<br>One can use this argument, along with some technical assumptions, to write down a partial differential equation. The solution to this equation is precisely the Black-Scholes formula.
Author: V宝宝 Time: 2005-12-9 04:01
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN, THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>8 October 1996
<br>
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 1996, to
<br>
<br>
<br>Professor James A. Mirrlees, University of Cambridge, U.K. and
<br>Professor William Vickrey, Columbia University, New York, USA,
<br>(deceased October 10, 1996)
<br>
<br>for their fundamental contributions to the economic theory of incentives under asymmetric information.
<br>
<br>
<br>Information and Incentives
<br>
<br>One of the most important and liveliest areas of economic research in recent years addresses situations where decision-makers have different information. Such informational asymmetries occur in a great many contexts. For example, a bank does not have complete information about borrowers' future income; the owners of a firm may not have the same detailed information about costs and competitive conditions as the managing director; an insurance company cannot fully observe policyholders' responsibility for insured property and external events which affect the risk of damage; an auctioneer does not have complete information about the willingness to pay of potential buyers; the government has to devise an income tax system without much knowledge about the productivity of individual citizens; etc.
<br>
<br>Incomplete and asymmetrically distributed information has fundamental consequences, particularly in the sense that an informational advantage can often be exploited strategically. Research on the economics of information has therefore focused on the question of how contracts and institutions can be designed to handle different incentive and control problems. This has generated a better understanding of insurance markets, credit markets, auctions, the internal organization of firms, wage forms, tax systems, social insurance, competitive conditions, political institutions, etc.
<br>
<br>This year's laureates have laid the foundation for examining these seemingly quite disparate areas through their analytical work on issues where informational asymmetries are a key component. An essential part of William Vickrey's research has concerned the properties of different types of auctions, and how they can best be designed so as to generate economic efficiency. His endeavors have provided the basis for a lively field of research which, more recently, has also been extended to practical applications such as auctions of treasury bonds and band spectrum licenses. In the late 1940s, Vickrey also formulated a model indicating how income taxation can be designed to attain a balance between efficiency and equity. A quarter of a century later, interest in this model was renewed when James Mirrlees found a more thorough solution to the problems associated with optimal income taxes. Mirrlees soon realized that his method could also be applied to many other similar problems. It has become a principal constituent of the modern analysis of complex information and incentive problems. Mirrlees's approach has become particularly valuable in situations where it is impossible to observe another agent's actions, so-called moral hazard.
<br>
<br>Income Taxation
<br>
<br>Philosophers, economists and political scientists have studied the principles of income taxation for a long time. Different principles of justice have governed the structure of taxation. In a classical essay published in 1897, Oxford professor Francis Y. Edgeworth adopted a utilitarian welfare perspective; he concluded that all differences in income should be neutralized, which requires strongly progressive tax rates. Vickrey's analysis, in the mid-1940s, emphasized that a progressive tax schedule would affect individuals' incentives to exert themselves. He therefore reformulated the problem with respect to both incentive problems - that each individual takes the tax schedule into account when choosing his work effort - and asymmetric information - that, in practice, the productivity of individuals is not known to the government. He formulated a solution to the problem in principle, but did not succeed in mastering its mathematical complications.
<br>
<br>It was not until 25 years later that the problem was reconsidered by James Mirrlees, who solved it in a way which has established a paradigm for analyzing a broad spectrum of economic issues where asymmetric information is a prime component. Mirrlees identified a critical condition (known as single crossing) which drastically simplifies the problem and enables a solution. His analysis also proved to contain the germ of a general principle: the revelation principle. According to this principle, the solution to incentive problems under incomplete information belongs to the relatively limited class of so-called allocation mechanisms which induce all individuals to reveal their private information truthfully, in a way which does not conflict with their self-interest. By applying this principle, it becomes much easier to design optimal contracts and other solutions to incentive problems. It has therefore had a large bearing on the treatment of many issues of economic theory.
<br>
<br>Moral Hazard
<br>
<br>A well-known problem in connection with insurance has long been that damage to insured objects depends not only on external factors such as weather and attempted theft, but also on the care taken by the policyholder, which is costly for an insurance company to monitor. Corresponding problems also arise regarding different kinds of social insurance, such as health and disability insurance. Generous insurance coverage can increase risk-taking and affect the way individuals care for themselves and their property. Many other two-party relations involve an outcome that is observable to both parties, where the outcome depends on one party's (the agent's) actions, which cannot be observed by the other party (the principal), as well as on a random variable. In the relation between the owner and the management of a firm, for instance, the action would be the executive's work effort, the outcome would be the firm's profit and the random variable could be the firm's market or production conditions. The owners of both the insurance company and the firm want to choose terms of compensation, a "contract", which gives the agent incentives to act in accordance with the principal's interests, for example, by maximizing the owner's expected profits.
<br>
<br>The technical difficulties encountered in analyzing these so-called moral hazard problems are similar to the income tax problems emphasized by Vickrey and solved by Mirrlees. In the mid-1970s, by means of an apparently simple reformulation of the problem, Mirrlees paved the way for an increasingly powerful analysis. He noted that an agent's actions indirectly imply a choice of the probabilities that different outcomes will occur. The conditions for the optimal terms of compensation thus provide "probability information" about the agent's choice and the extent to which insurance protection has to be restricted in order to provide the agent with suitable incentives. In designing an incentive scheme, the principal has to take into account the costs of giving the agent incentives to act in accordance with the principal's interests. The higher the agent's sensitivity to punishment and the larger the amount of information about the agent's choice contained in the outcome, the lower these costs. This is stipulated in a contract; the agent bears part of the cost of undesirable outcomes or receives part of the profits from favorable outcomes. The policyholder takes care of the insured object almost as if it were uninsured, and the executive manages the firm almost as if it were his own.
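<br>The structure can be made concrete with a two-outcome, two-effort toy (own construction, invented numbers): a flat wage gives the agent full insurance but no incentives, while outcome-contingent pay restores incentives at the price of exposing the agent to risk.
```python
# Minimal discrete moral-hazard sketch (own example): the principal pays a
# wage contingent on the observable outcome to make high (unobservable)
# effort worthwhile for the agent. All numbers are invented.

p_good = {"high": 0.8, "low": 0.4}    # Pr(good outcome | effort)
effort_cost = {"high": 2.0, "low": 0.0}
outside_option = 1.0                  # agent's reservation utility

def agent_utility(wages, effort):
    p = p_good[effort]
    return p * wages["good"] + (1 - p) * wages["bad"] - effort_cost[effort]

def contract_works(wages):
    """Incentive compatibility plus participation for high effort."""
    return (agent_utility(wages, "high") >= agent_utility(wages, "low")
            and agent_utility(wages, "high") >= outside_option)

# A flat wage insures the agent fully but destroys incentives ...
flat = {"good": 3.0, "bad": 3.0}
# ... whereas tying pay to the outcome restores them, at the cost of
# exposing the agent to risk (the trade-off described in the text).
tied = {"good": 6.0, "bad": 0.0}
for wages in (flat, tied):
    print(wages, contract_works(wages))
```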
<br>
<br>Auctions
<br>
<br>Asymmetric information is also an essential component of auctions, where potential buyers have limited knowledge about the value of the asset or rights up for sale. Vickrey analyzed the properties of different kinds of auctions in two papers in 1961 and 1962. He attached particular importance to the second-price auction or, as it is now often called, the Vickrey auction. In such an auction, an object is auctioned off in sealed bidding, where the highest bidder gets to buy the item, but only pays the next highest price offered. This is an example of a mechanism which elicits an individual's true willingness to pay. By bidding above his own willingness to pay, an individual runs the risk that someone else will bid likewise, and he is forced to buy the object at a loss. And vice versa, if an individual bids below his own willingness to pay, he runs the risk of someone else buying the item at a lower price than the amount he himself is willing to pay. Therefore, in this kind of auction, it is in the individual's best interest to state a truthful bid. The auction is also socially efficient. The object goes to the person with the highest willingness to pay, and the person in question pays the social opportunity cost which is the second highest bid. Other researchers have later developed analogous principles, for example in order to elicit the true willingness to pay for public projects. Thus, Vickrey's analysis has not only been momentous for the theory of auctions; it has also conveyed fundamental insights into the design of resource allocation mechanisms aimed at providing socially desirable incentives.
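<br>The dominance of truthful bidding is easy to check numerically. The following simulation (own construction, with assumed valuations) compares a truthful bid with shading it up or down.
```python
# Simulation sketch of a second-price (Vickrey) auction: truthful bidding is
# a (weakly) dominant strategy. Illustrative values; own example.
import random

def payoff(value, my_bid, other_bids):
    """Payoff of one bidder in a sealed-bid second-price auction."""
    top_other = max(other_bids)
    if my_bid > top_other:                 # I win and pay the second price
        return value - top_other
    return 0.0                             # I lose and pay nothing

random.seed(1)
value = 70.0
trials = [[random.uniform(0, 100) for _ in range(3)] for _ in range(10000)]

for my_bid in (50.0, 70.0, 90.0):          # underbid, truthful, overbid
    avg = sum(payoff(value, my_bid, others) for others in trials) / len(trials)
    print(f"bid {my_bid:5.1f}: average payoff {avg:6.2f}")
# Truthful bidding (70) does at least as well as shading in either direction:
# overbidding sometimes wins the object at a price above its value, and
# underbidding sometimes forgoes a profitable win.
```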
<br>
<br>Other Contributions
<br>
<br>In addition, both James Mirrlees and William Vickrey have made noteworthy contributions to other areas of economics. In collaboration with the U.S. economist Peter Diamond, Mirrlees analyzed the structure of consumption taxes in a world where tax wedges give rise to social inefficiency. They arrived at an unambiguous and highly universal result by showing that under relatively general conditions, it is worthwhile to maintain full production efficiency. In concrete terms, this means that small open economies should not impose tariffs on foreign trade and that taxes on factors of production such as labor and capital should not be levied on the production side, but at the consumption stage. The latter result has had important consequences for project appraisal and economic policy in developing countries. In work with the British economist Ian Little and based on his research with Diamond, Mirrlees himself has set up criteria for evaluating development projects.
<br>
<br>Efficient pricing of public services permeates Vickrey's scientific production. He has not only made significant theoretical contributions, but - unlike most excellent theorists - he has also followed up on his proposals all the way to their practical application. An example is Vickrey's famous study of the New York subway fare system in the 1950s. His proposal was an early attempt at efficient pricing of public services, under the restriction that the authorities should receive full cost coverage. His study represents more than an improvement on the basic pricing principle (so-called Ramsey pricing); it is also fascinating in its wealth of detail.
<br>
<br>
<br>
<br>
<br>--------------------------------------------------------------------------------
<br>
<br>James A. Mirrlees was born in 1936 in Minnigaff, Scotland. He received his M.S. in Mathematics in Edinburgh in 1957, and his Ph.D. from the University of Cambridge in 1963. He was Edgeworth Professor of Economics at Oxford University between 1969 and 1995, and currently holds a professorship in Economics at the University of Cambridge.
<br>
<br>Professor James A. Mirrlees
<br>Department of Economics and Politics
<br>University of Cambridge
<br>Sidgwick Avenue
<br>Cambridge CB3 9DD
<br>U.K.
<br>
<br>William Vickrey was born in 1914 in Victoria, British Columbia, Canada. He received his B.S. from Yale University in 1935. He then began postgraduate studies at Columbia University, New York, where he received his Master's degree in 1937 and his Ph.D. in 1947. He has been affiliated with the faculty of Columbia University since 1946, and also served as a tax advisor between 1937 and 1947. He was Professor Emeritus at Columbia University.
<br>
<br>
<br>
<br>
<br>--------------------------------------------------------------------------------
<br>
<br>Additional background material on the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 1996.
<br>Author: V宝宝 Time: 2005-12-9 04:02
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN, THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>10 October 1995
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 1995, to
<br>
<br>Professor Robert E. Lucas, Jr., University of Chicago, USA,
<br>
<br>for having developed and applied the hypothesis of rational expectations, and thereby having transformed macroeconomic analysis and deepened our understanding of economic policy.
<br>
<br>Rational Expectations Have Transformed Macroeconomic Analysis and Our Understanding of Economic Policy
<br>Robert Lucas is the economist who has had the greatest influence on macroeconomic research since 1970. His work has brought about a rapid and revolutionary development: Application of the rational expectations hypothesis, emergence of an equilibrium theory of business cycles, insights into the difficulties of using economic policy to control the economy, and possibilities of reliably evaluating economic policy with statistical methods. In addition to his work in macroeconomics, Lucas's contributions have had a very significant impact on research in several other fields.
<br>
<br>Rational Expectations
<br>
<br>Expectations about the future are highly important to economic decisions made by households, firms and organizations. One among many examples is wage formation, where expectations about the inflation rate and the demand for labor in the future strongly affect the contracted wage level which, in turn, affects future inflation. Similarly, many other economic variables are to a large extent governed by expectations about future conditions.
<br>
<br>Despite the major importance of expectations, economic analysis paid them only perfunctory attention for a long time. Twenty years ago, it was not unusual to assume arbitrarily specified or even static expectations, for example that the expected future price level was regarded as the same as today's price level. Or else adaptive expectations were assumed, such that the expected future price level was mechanically adjusted to the deviation between today's price level and the price level expected earlier.
<br>
<br>Instead, rational expectations are genuinely forward-looking. The rational expectations hypothesis means that agents exploit available information without making the systematic mistakes implied by earlier theories. Expectations are formed by constantly updating and reinterpreting this information. Sometimes the consequences of rational expectations formation are dramatic, as in the case of economic policy. The first precise formulation of the rational expectations hypothesis was introduced by John Muth in 1961. But it did not gain much prominence until the 1970s, when Lucas extended it to models of the aggregate economy. In a series of path-breaking articles, Lucas demonstrated the far-reaching consequences of rational expectations formation, particularly concerning the effects of economic policy and the evaluation of these effects using econometric methods, that is, statistical methods specifically adapted for examining economic relationships. Lucas also applied the hypothesis to several fields other than macroeconomics.
<br>
<br>
<br>The Phillips Curve Example
<br>
<br>The change in our understanding of the so-called Phillips curve is an excellent example of Lucas's contributions. The Phillips curve displays a positive relation between inflation and employment. In the late 1960s, there was considerable empirical support for the Phillips curve; it was regarded as one of the more stable relations in economics. It was interpreted as an option for government authorities to increase employment by pursuing an expansionary policy which raises inflation. Milton Friedman and Edmund Phelps criticized this interpretation and claimed that the expectations of the general public would adjust to higher inflation and preclude a lasting increase in employment: Only the short-run Phillips curve is sloping, whereas the long-run curve is vertical. This criticism was not quite convincing, however, because Friedman and Phelps assumed adaptive expectations. Such expectations do in fact imply a permanent rise in employment if inflation is allowed to increase over time. In a study published in 1972, Lucas used the rational expectations hypothesis to provide the first theoretically satisfactory explanation for why the Phillips curve could be sloping in the short run but vertical in the long run. In other words, regardless of how it is pursued, stabilization policy cannot systematically affect long-run employment. Lucas formulated an ingenious theoretical model which generates time series such that inflation and employment indeed seem to be positively correlated. A statistician who studies these time series might easily conclude that employment could be increased by implementing an expansionary economic policy. Nevertheless, Lucas demonstrated that any endeavor, based on such policy, to exploit the Phillips curve and permanently increase employment would be futile and only give rise to higher inflation. This is because agents in the model adjust their expectations and hence price and wage formation to the new, expected policy. Experience during the 1970s and 1980s has shown that higher inflation does not appear to bring about a permanent increase in employment. This insight into the long-run effects of stabilization policy has become a commonly accepted view; it is now the foundation for monetary policy in a number of countries in their efforts to achieve and maintain a low and stable inflation rate.
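<br>The contrast between adaptive and rational expectations can be caricatured in a few lines of code. This toy is not Lucas's 1972 model - it is a simple expectations-augmented Phillips curve with invented numbers - but it reproduces the qualitative point: an announced rise in inflation buys a temporary boom under adaptive expectations and nothing at all under rational expectations.
```python
# Toy expectations-augmented Phillips curve (own illustration):
#   employment_t = n_star + a * (pi_t - expected_pi_t)

n_star, a = 100.0, 2.0          # assumed natural level and slope
pi = [2.0] * 5 + [6.0] * 15     # announced policy: inflation raised to 6%

def employment(expectation_rule):
    exp_pi, path = 2.0, []
    for p in pi:
        if expectation_rule == "adaptive":
            exp_pi += 0.3 * (p - exp_pi)    # slow mechanical updating
        else:                               # rational: the announced rule
            exp_pi = p                      # is understood immediately
        path.append(n_star + a * (p - exp_pi))
    return path

print(employment("adaptive"))   # temporary boom that erodes over time
print(employment("rational"))   # stays at n_star: only inflation changes
```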
<br>
<br>The short-run sloping and long-run vertical Phillips curve illustrates the pitfalls of uncritically relying on statistically estimated so-called macroeconometric models to draw conclusions about the effects of changes in economic policy. In a 1976 study, introducing what is now known as the "Lucas critique", Lucas demonstrated that relations which had so far been regarded as "structural" in econometric analysis were in fact influenced by past policy. Two decades ago, virtually all macroeconometric models contained relations which, on closer examination, could be shown to depend on the fiscal and monetary policy carried out during the estimation period. Obviously, then, the same relations cannot be used in simulations designed to predict the effect of another fiscal or monetary policy. Yet this was exactly how the models were often used.
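<br>A one-equation illustration of the critique (own example, not from the press release) makes the dependence on the policy rule explicit.
```latex
% Suppose money follows a policy rule and output responds to rationally
% expected money (own illustrative example):
\[
\begin{aligned}
  m_t &= \rho\, m_{t-1} + \varepsilon_t && \text{(policy rule)}\\
  y_t &= \beta\, \mathrm{E}\!\left[m_t \mid m_{t-1}\right] + u_t
       = \beta \rho\, m_{t-1} + u_t && \text{(behavior)}
\end{aligned}
\]
% A regression of y_t on m_{t-1} estimates the product \beta\rho, not the
% "structural" \beta alone. If the central bank adopts a new rule (a new
% \rho), the estimated coefficient changes, so the old reduced form cannot
% be used to simulate the effects of the new policy.
```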
<br>
<br>The Lucas critique has had a profound influence on economic-policy recommendations. Shifts in economic policy often produce a completely different outcome if the agents adapt their expectations to the new policy stance. Nowadays, when evaluating the consequences of shifts in economic-policy regimes - for example, a new exchange rate system, a new monetary policy, a tax reform or new rules for unemployment benefits - it is more or less self-evident to consider changes in the behavior of economic agents due to revised expectations.
<br>
<br>How could researchers avoid the mistakes the Lucas critique warned of? Lucas's own research provided the answer by calling for a new research program. The objective of the program was to formulate macroeconometric models such that their relations are not sensitive to policy changes; otherwise, the models cannot contribute to a reliable assessment of economic-policy alternatives. It is easy to formulate this principle: the models should be "equilibrium models" with rational expectations. This means that all important variables should be determined within the model, on the basis of interaction among rational agents who have rational expectations and operate in a well-specified economic environment. In addition, the models should be formulated so that they only incorporate policy-independent parameters (those coefficients which describe the relations of the models). This, in turn, requires sound microeconomic foundations, i.e., the individual agents' decision problems have to be completely accounted for in the model. The parameters are then estimated using econometric methods developed for this purpose. Interesting attempts to derive and estimate such models have subsequently been made in several different areas, such as the empirical analysis of investment, consumption and employment, as well as of asset pricing on financial markets. The program can be difficult to implement in practice, however, and not all attempts have been successful.
<br>
<br>A Large Following
<br>
<br>Lucas formulated powerful and operational methods for drawing conclusions from models with rational expectations. These methods provided the means for rapid development of macroeconomic analysis and eventually became part of the standard toolbox. Without them, the outcome of the rational expectations hypothesis would have been limited to general insights into the importance of expectations instead of clear-cut statements in specific situations. Rational expectations have now been accepted as the natural basis for further studies of expectation formation with respect to limited rationality, limited computational capacity and gradual learning.
<br>
<br>Lucas has established new areas of research. After his pioneering work on the Phillips curve, the so-called equilibrium theory of business cycles has become an extensive and dynamic field, where the effects of real and monetary disturbances on the business cycle have been carefully examined. The equilibrium theory of business cycles initially relied on the assumption of completely flexible prices and immediate adjustment to equilibrium on goods and labor markets with perfect competition. However, Lucas's methodological approach is not incompatible with sticky prices and various market failures such as imperfect competition and imperfect information. Nevertheless, these frictions and imperfections should not be introduced in an arbitrary way, but should be explained as a result of rational agents' decisions and interaction in a well-specified choice situation. Interpreted in this way, Lucas's methodological approach has been accepted by nearly all macroeconomists. Indeed, the greatest advances in modeling frictions and market imperfections seem to have been made precisely when this methodological approach has been followed.
<br>
<br>Lucas's pioneering work has created an entirely new field of econometrics, known as rational expectations econometrics. There, the rational expectations hypothesis is used to identify the most efficient statistical methods for estimating economic relations where expectations are the key components. A number of researchers have subsequently made important contributions to this new field.
<br>
<br>
<br>Other Contributions
<br>
<br>In addition to his work in macroeconomics, Lucas has made outstanding contributions to investment theory, financial economics, monetary theory, dynamic public economics, international finance and, most recently, the theory of economic growth. In each of these fields, Lucas's studies have had a significant impact; they have launched new ideas and generated an extensive new literature.
<br>Author: V宝宝 Time: 2005-12-9 04:02
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN, THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>11 October 1994
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 1994, jointly to
<br>
<br>Professor John C. Harsanyi, University of California, Berkeley, CA, USA,
<br>Dr. John F. Nash, Princeton University, Princeton, NJ, USA,
<br>Professor Dr. Reinhard Selten, Rheinische Friedrich-Wilhelms-Universität, Bonn, Germany,
<br>
<br>for their pioneering analysis of equilibria in the theory of non-cooperative games.
<br>
<br>Games as the Foundation for Understanding Complex Economic Issues
<br>Game theory emanates from studies of games such as chess or poker. Everyone knows that in these games, players have to think ahead - devise a strategy based on expected countermoves from the other player(s). Such strategic interaction also characterizes many economic situations, and game theory has therefore proved to be very useful in economic analysis.
<br>
<br>The foundations for using game theory in economics were introduced in a monumental study by John von Neumann and Oskar Morgenstern entitled Theory of Games and Economic Behavior (1944). Today, 50 years later, game theory has become a dominant tool for analyzing economic issues. In particular, non-cooperative game theory, i.e., the branch of game theory which excludes binding agreements, has had great impact on economic research. The principal aspect of this theory is the concept of equilibrium, which is used to make predictions about the outcome of strategic interaction. John F. Nash, Reinhard Selten and John C. Harsanyi are three researchers who have made eminent contributions to this type of equilibrium analysis.
<br>
<br>John F. Nash introduced the distinction between cooperative games, in which binding agreements can be made, and non-cooperative games, where binding agreements are not feasible. Nash developed an equilibrium concept for non-cooperative games that later came to be called Nash equilibrium.
<br>
<br>Reinhard Selten was the first to refine the Nash equilibrium concept for analyzing dynamic strategic interaction. He has also applied these refined concepts to analyses of competition with only a few sellers.
<br>
<br>John C. Harsanyi showed how games of incomplete information can be analyzed, thereby providing a theoretical foundation for a lively field of research - the economics of information - which focuses on strategic situations where different agents do not know each other's objectives.
<br>
<br>Strategic Interaction
<br>
<br>Game theory is a mathematical method for analyzing strategic interaction. Many classical analyses in economics presuppose such a large number of agents that each of them can disregard the others' reactions to their own decision. In many cases, this assumption is a good description of reality, but in other cases it is misleading. When a few firms dominate a market, when countries have to make an agreement on trade policy or environmental policy, when parties on the labor market negotiate about wages, and when a government deregulates a market, privatizes companies or pursues economic policy, each agent in question has to consider other agents' reactions and expectations regarding their own decisions, i.e., strategic interaction.
<br>
<br>As far back as the early nineteenth century, beginning with Augustin Cournot in 1838, economists have developed methods for studying strategic interaction. But these methods focused on specific situations and, for a long time, no overall method existed. The game-theoretic approach now offers a general toolbox for analyzing strategic interaction.
<br>
<br>Game Theory
<br>
<br>Whereas mathematical probability theory ensued from the study of pure gambling without strategic interaction, games such as chess, cards, etc. became the basis of game theory. The latter are characterized by strategic interaction in the sense that the players are individuals who think rationally. In the early 1900s, mathematicians such as Zermelo, Borel and von Neumann had already begun to study mathematical formulations of games. It was not until the economist Oskar Morgenstern met the mathematician John von Neumann in 1939 that a plan originated to develop game theory so that it could be used in economic analysis.
<br>
<br>The most important ideas set forth by von Neumann and Morgenstern in the present context may be found in their analysis of two-person zero-sum games. In a zero-sum game, the gains of one player are equal to the losses of the other player. As early as 1928, von Neumann introduced the minimax solution for a two-person zero-sum game. According to the minimax solution, each player tries to maximize his gain in the outcome which is most disadvantageous to him (where the worst outcome is determined by his opponent's choice of strategy). By means of such a strategy, each player can guarantee himself a minimum gain. Of course, it is not certain that the players' choices of strategy will be consistent with each other. von Neumann was able to show, however, that there is always a minimax solution, i.e., a consistent solution, if so-called mixed strategies are introduced. A mixed strategy is a probability distribution of a player's available strategies, whereby a player is assumed to choose a certain "pure" strategy with some probability.
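<br>
<br>To make the minimax idea concrete, the sketch below computes the row player's maximin mixed strategy for matching pennies by a coarse grid search; the game and the grid resolution are illustrative choices, not part of von Neumann's original treatment:
```python
# Sketch of von Neumann's minimax idea for a two-person zero-sum game
# (matching pennies), via grid search over the row player's mixed strategies.
import numpy as np

# Payoff matrix for the row player; the column player receives the negative.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # matching pennies

best_value, best_p = -np.inf, None
for p in np.linspace(0.0, 1.0, 1001):   # P(row plays strategy 0)
    mix = np.array([p, 1.0 - p])
    # Worst case over the column player's pure strategies: against a fixed
    # mixed strategy, some pure strategy is always a best response.
    worst = min(mix @ A)
    if worst > best_value:
        best_value, best_p = worst, p

print(f"maximin value ~ {best_value:.3f} at P(heads) ~ {best_p:.2f}")
# Expected output: value ~ 0 at P(heads) ~ 0.5, the mixed minimax solution.
```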
<br>
<br>John F. Nash
<br>
<br>John Nash arrived at Princeton University in 1948 as a young doctoral student in mathematics. The results of his studies are reported in his doctoral dissertation entitled Noncooperative Games (1950). The thesis gave rise to Equilibrium Points in n-person Games (Proceedings of the National Academy of Sciences of the USA 1950), and to an article entitled Non-cooperative Games (Annals of Mathematics 1951).
<br>
<br>In his dissertation, Nash introduced the distinction between cooperative and non-cooperative games. His most important contribution to the theory of non-cooperative games was to formulate a universal solution concept with an arbitrary number of players and arbitrary preferences, i.e., not solely for two-person zero-sum games. This solution concept later came to be called Nash equilibrium. In a Nash equilibrium, all of the players' expectations are fulfilled and their chosen strategies are optimal. Nash proposed two interpretations of the equilibrium concept: one based on rationality and the other on statistical populations. According to the rationalistic interpretation, the players are perceived as rational and they have complete information about the structure of the game, including all of the players' preferences regarding possible outcomes, where this information is common knowledge. Since all players have complete information about each other's strategic alternatives and preferences, they can also compute each other's optimal choice of strategy for each set of expectations. If all of the players expect the same Nash equilibrium, then there are no incentives for anyone to change his strategy. Nash's second interpretation - in terms of statistical populations - is useful in so-called evolutionary games. This type of game has also been developed in biology in order to understand how the principles of natural selection operate in strategic interaction within and among species. Moreover, Nash showed that for every game with a finite number of players, there exists an equilibrium in mixed strategies.
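<br>
<br>A minimal sketch of the Nash equilibrium condition itself - each strategy a best reply to the others - using a hypothetical prisoner's dilemma payoff matrix and brute-force enumeration of pure-strategy profiles:
```python
# Check the Nash condition by enumeration in a 2x2 bimatrix game.
import numpy as np

# Payoffs: rows = player 1's strategy, cols = player 2's strategy.
# Strategies: 0 = cooperate, 1 = defect (hypothetical numbers).
U1 = np.array([[3, 0],
               [5, 1]])
U2 = U1.T  # symmetric game

def is_nash(i, j):
    # Neither player can gain by a unilateral deviation.
    return U1[i, j] >= U1[:, j].max() and U2[i, j] >= U2[i, :].max()

for i in range(2):
    for j in range(2):
        if is_nash(i, j):
            print(f"pure Nash equilibrium: ({i}, {j})")  # prints (1, 1)
```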
<br>
<br>Many interesting economic issues, such as the analysis of oligopoly, originate in non-cooperative games. In general, firms cannot enter into binding contracts regarding restrictive trade practices because such agreements are contrary to trade legislation. Correspondingly, the interaction among a government, special interest groups and the general public concerning, for instance, the design of tax policy is regarded as a non-cooperative game. Nash equilibrium has become a standard tool in almost all areas of economic theory. The most obvious is perhaps the study of competition between firms in the theory of industrial organization. But the concept has also been used in macroeconomic theory for economic policy, environmental and resource economics, foreign trade theory, the economics of information, etc. in order to improve our understanding of complex strategic interactions. Non-cooperative game theory has also generated new research areas. For example, in combination with the theory of repeated games, non-cooperative equilibrium concepts have been used successfully to explain the development of institutions and social norms. Despite its usefulness, there are problems associated with the concept of Nash equilibrium. If a game has several Nash equilibria, the equilibrium criterion cannot be used immediately to predict the outcome of the game. This has brought about the development of so-called refinements of the Nash equilibrium concept. Another problem is that when interpreted in terms of rationality, the equilibrium concept presupposes that each player has complete information about the other players' situation. It was precisely these two problems that Selten and Harsanyi undertook to solve in their contributions.
<br>
<br>Reinhard Selten
<br>
<br>The problem of numerous non-cooperative equilibria has generated a research program aimed at eliminating "uninteresting" Nash equilibria. The principal idea has been to use stronger conditions not only to reduce the number of possible equilibria, but also to avoid equilibria which are unreasonable in economic terms. By introducing the concept of subgame perfection, Selten provided the foundation for a systematic endeavor in Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit (Zeitschrift für die gesamte Staatswissenschaft 121, 301-24 and 667-89, 1965).
<br>
<br>An example might help to explain this concept. Imagine a monopoly market where a potential competitor is deterred by threats of a price war. This may well be a Nash equilibrium - if the competitor takes the threat seriously, then it is optimal to stay out of the market - and the threat is of no cost to the monopolist because it is not carried out. But the threat is not credible if the monopolist faces high costs in a price war. A potential competitor who realizes this will establish himself on the market and the monopolist, confronted with a fait accompli, will not start a price war. This is also a Nash equilibrium. In addition, however, it fulfills Selten's requirement of subgame perfection, which thus implies systematic formalization of the requirement that only credible threats should be taken into account.
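<br>
<br>The entry-deterrence story can be checked mechanically by backward induction; the sketch below uses hypothetical payoff numbers to show why only the accommodating reply survives as the credible, subgame perfect outcome:
```python
# Backward induction on Selten's entry-deterrence example (payoffs hypothetical).
# Stage 1: the entrant chooses "enter" or "stay out".
# Stage 2: if entry occurs, the incumbent chooses "price war" or "accommodate".
payoffs = {
    ("stay out", None):       {"entrant": 0,  "incumbent": 10},
    ("enter", "price war"):   {"entrant": -2, "incumbent": -1},
    ("enter", "accommodate"): {"entrant": 3,  "incumbent": 4},
}

# After entry, the incumbent picks its best reply ...
best_reply = max(["price war", "accommodate"],
                 key=lambda a: payoffs[("enter", a)]["incumbent"])
# ... and the entrant, anticipating that reply, decides whether to enter.
enter_value = payoffs[("enter", best_reply)]["entrant"]
entrant_choice = ("enter" if enter_value > payoffs[("stay out", None)]["entrant"]
                  else "stay out")

print(best_reply)      # "accommodate": the price-war threat is not credible
print(entrant_choice)  # "enter": the subgame perfect outcome
```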
<br>
<br>Selten's subgame perfection has direct significance in discussions of credibility in economic policy, the analysis of oligopoly, the economics of information, etc. It is the most fundamental refinement of Nash equilibrium. Nevertheless, there are situations where not even the requirement of subgame perfection is sufficient. This prompted Selten to introduce a further refinement, usually called the "trembling-hand" equilibrium, in Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games (International Journal of Game Theory 4, 25-55, 1975). The analysis assumes that each player presupposes a small probability that a mistake will occur, that someone's hand will tremble. A Nash equilibrium in a game is "trembling-hand perfect" if it is robust with respect to small probabilities of such mistakes. This and closely related concepts, such as sequential equilibrium (Kreps and Wilson, 1982), have turned out to be very fruitful in several areas, including the theory of industrial organization and macroeconomic theory for economic policy.
<br>
<br>John C. Harsanyi
<br>
<br>In games with complete information, all of the players know the other players' preferences, whereas they wholly or partially lack this knowledge in games with incomplete information. Since the rationalistic interpretation of Nash equilibrium is based on the assumption that the players know each other's preferences, no methods had been available for analyzing games with incomplete information, despite the fact that such games best reflect many strategic interactions in the real world.
<br>
<br>This situation changed radically in 1967-68 when John Harsanyi published three articles entitled Games with Incomplete Information Played by Bayesian Players, (Management Science 14, 159-82, 320-34 and 486-502). Harsanyi's approach to games with incomplete information may be viewed as the foundation for nearly all economic analysis involving information, regardless of whether it is asymmetric, completely private or public.
<br>
<br>Harsanyi postulated that every player is one of several "types", where each type corresponds to a set of possible preferences for the player and a (subjective) probability distribution over the other players' types. Every player in a game with incomplete information chooses a strategy for each of his types. Under a consistency requirement on the players' probability distributions, Harsanyi showed that for every game with incomplete information, there is an equivalent game with complete information. In the jargon of game theory, he thereby transformed games with incomplete information into games with imperfect information. Such games can be handled with standard methods.
<br>
<br>An example of a situation with incomplete information is when private firms and financial markets do not exactly know the preferences of the central bank regarding the tradeoff between inflation and unemployment. The central bank's policy for future interest rates is therefore unknown. The interactions between the formation of expectations and the policy of the central bank can be analyzed using the technique introduced by Harsanyi. In the most simple case, the central bank can be of two types, with associated probabilities: Either it is oriented towards fighting inflation and thus prepared to pursue a restrictive policy with high rates, or it will try to combat unemployment by means of lower rates. Another example where similar methods can be applied is regulation of a monopoly firm. What regulatory or contractual solution will produce a desirable outcome when the regulator does not have perfect knowledge about the firm's costs?
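<br>
<br>A toy sketch of the central-bank example in Harsanyi's spirit: the private sector assigns probabilities to the two types and best-responds to the expected policy rate. The probabilities, rates and the investment rule are all hypothetical:
```python
# The incomplete-information problem becomes a game with known
# probabilities over types (Harsanyi transformation); numbers hypothetical.
types = {
    "inflation fighter":   {"prob": 0.6, "interest_rate": 0.05},
    "employment oriented": {"prob": 0.4, "interest_rate": 0.02},
}

# Expected policy rate under the private sector's prior beliefs.
expected_rate = sum(t["prob"] * t["interest_rate"] for t in types.values())
print(f"expected policy rate: {expected_rate:.3f}")  # 0.038

# A firm's decision conditions on the expected rate rather than on the
# (unknown) realized type of the central bank.
invest = expected_rate < 0.04
print("invest" if invest else "wait")
```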
<br>
<br>Other Contributions of the Laureates
<br>
<br>In addition to his contributions to non-cooperative game theory, John Nash has developed a basic solution for cooperative games, usually referred to as Nash's bargaining solution, which has been applied extensively in different branches of economic theory. He also initiated a project that subsequently came to be called the Nash program, a research program designed to base cooperative game theory on results from non-cooperative game theory. In addition to his prizewinning achievements, Reinhard Selten has contributed powerful new insights regarding evolutionary games and experimental game theory. John Harsanyi has also made significant contributions to the foundations of welfare economics and to the area on the boundary between economics and moral philosophy. Harsanyi and Selten have worked closely together for more than 20 years, sometimes in direct collaboration.
<br>
<br>
<br>Through their contributions to equilibrium analysis in non-cooperative game theory, the three laureates constitute a natural combination: Nash provided the foundations for the analysis, while Selten developed it with respect to dynamics, and Harsanyi with respect to incomplete information.
<br>Author: V宝宝 Time: 2005-12-9 04:03
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN - THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>12 October 1993
<br>
<br>THIS YEAR'S PRIZEWINNERS ARE LEADING FIGURES WITHIN THE FIELD OF "NEW ECONOMIC HISTORY"
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel for 1993 jointly to
<br>
<br>Professor Robert W. Fogel, University of Chicago, USA,
<br>and
<br>Professor Douglass C. North, Washington University, St. Louis, USA,
<br>
<br>"for having renewed research in economic history by applying economic theory and quantiative methods in order to explain economic and institutional change."
<br>
<br>
<br>Modern economic historians have contributed to the development of economic sciences in at least two ways: by combining theory with quantitative methods, and by constructing and reconstructing databases or creating new ones. This has made it possible to question and to reassess earlier results, which has not only increased our knowledge of the past, but has also contributed to the elimination of irrelevant theories. It has shown that traditional theories must be supplemented or modified to enable us to understand economic growth and change. Economic historians often consider far-reaching problems, the examination of which demands an integration of economics, sociology, statistics and history. Robert Fogel and Douglass North are the economic historians who have come furthest in such a scientific integration. They were pioneers in the branch of economic history that has been called the "new economic history", or cliometrics, i.e. research that combines economic theory, quantitative methods, hypothesis testing, counterfactual alternatives and traditional techniques of economic history, to explain economic growth and decline. Their work has deepened our knowledge and understanding within fundamental areas of research, as to how, why and when economic change occurs.
<br>
<br>Robert Fogel's foremost work concerns the role of the railways in the economic development of the United States, the importance of slavery as an institution and its economic role in the USA, and studies in historical demography.
<br>
<br>Douglass North has studied the long term development of Europe and the United States, and has in recent work analysed the role institutions play in economic growth.
<br>
<br>Robert W. Fogel's scientific breakthrough was his book (1964) on the role of the railways in the American economy. Joseph Schumpeter and Walt W. Rostow had earlier, with general agreement, asserted that modern economic growth was due to certain important discoveries having played a vital role in development. Fogel tested this hypothesis with extraordinary exactitude, and rejected it. The sum of many specific technical changes, rather than a few great innovations, determined the economic development. We find it intuitively plausible that the great transport systems play a decisive role in development. Fogel constructed a hypothetical alternative, a so-called counterfactual historiography; that is, he compared the actual course of events with the hypothetical to allow a judgement of the importance of the railways. He found that they were not absolutely necessary in explaining economic development and that their effect on the growth of GNP was less than three per cent. Few books on the subject of economic history have made such an impression as Fogel's. His use of counterfactual arguments and cost-benefit analysis made him an innovator of economic historical methodology.
<br>
<br>Fogel's painstaking criticism of his sources, and his use of the most varied kinds of historical material, made it difficult for his critics to argue against him on purely empirical grounds. As Fogel has stressed, it is the lack of relevant data rather than the lack of relevant theory that is often the greater problem for research workers. Fogel's use of counterfactual analysis of the course of events and his masterful treatment of quantitative techniques in combination with economic theory have had a substantial influence on the understanding of economic change.
<br>
<br>Fogel's second work of importance (1974), which aroused great attention and bitter controversies, treated slavery as an institution and its role in the economic development of the United States. Fogel showed that the established opinion that slavery was an ineffective, unprofitable and pre-capitalist organisation was incorrect. The institution did not fall to pieces due to its economic weakness but collapsed because of political decisions. He showed that the system, in spite of its inhumanity, had been economically efficient.
<br>
<br>His exceedingly careful testing of all possible sources and his pioneering methodological approach have allowed Fogel to both increase our knowledge of an institution's operation and disintegration and to renew our methods of research. Both his book on the railways and that on slavery have forced researchers to reconsider earlier generally accepted results, and few books in economic history have been scrutinised in such detail by critical colleagues.
<br>
<br>Fogel's third area of research has been economic demography, and in particular the changing rate of mortality over long periods of time and its relation to changes in the standard of living during recent centuries. This project is less controversial than the other two, and is both interdisciplinary and international, with fellow workers from many countries. His conclusion is that less than half of the decrease in mortality can be explained by better standards of nourishment, before the breakthroughs of modern medicine. This leaves the greater part of the decline unexplained. According to Fogel, a systematic analysis demands an integrated study of mortality rates, morbidity rates, food intake and individual body weights and statures. A combination of biomedical and economic techniques is required to achieve this, something that he has at present set about accomplishing. It is already apparent that his analyses will affect research in economic history at many levels.
<br>
<br>Douglass North presented in 1961 an explanatory model for American economic growth before 1860, that came to affect the direction of research not only in the USA. Starting from an export base model he had previously formulated himself, North analyses how one sector (the cotton plantations) stimulated development in other branches, and led to specialisation and interregional trade.
<br>
<br>In 1968 North presented an article on productivity in ocean shipping, which has become one of the most quoted research works in economic history. In this article he shows that organisational changes played a greater role than technical changes. North has increasingly pointed out that economic, political and social factors must be taken into account if we are to understand the development of those institutions that have played a role in economic growth, and how these institutions have been affected by ideological and noneconomic factors. North maintains that if political economics is a theory of choice under certain specific assumptions and restrictions, then the purpose of economic history is to theorise about the development of these. North has pointed out that there is a risk that economic analyses may become ahistoric if the time factor and the conflicts in society are not taken into account. A systematic reintroduction of institutional explanations in the historic analysis is an attempt to correct this deficiency.
<br>
<br>In a number of books (1971, 1973 and 1981), North demonstrated the role played by institutions, including property rights. He is one of the pioneers in "the new institutional economics". Putting it simply, North maintains that new institutions arise when groups in society see a possibility of availing themselves of profits that are impossible to realise under prevailing institutional conditions. If external factors make an increase in income possible, but institutional factors prevent this from happening, then the chances are good that new institutional arrangements will develop. North tested his hypotheses on development in the USA during the nineteenth century, and showed how agricultural policy, banking, transport, etc. could be explained by the institutional arrangements. In a following book, he considered the economic development of Western Europe from the middle ages to the eighteenth century, and showed that economic incentives, based upon individual property rights, were a prerequisite for economic growth. Changes in relative prices and fluctuations in population growth led to institutional changes. The speedier industrialisation in England and the Netherlands depended upon the fact that certain conservative institutions, such as the guilds, were weak. Private property rights were also guaranteed in these countries, as opposed to the case of Spain where the lack of institutional innovation led to a century-long stagnation. Innovations, technical changes and other factors that are generally regarded as explanations, are not considered to be sufficient by North. They are themselves a part of the growth process and cannot explain it. Effective economic organisations are the key to economic change. "Institutions are sets of rules, compliance procedures, and moral and ethical behavioural norms designed to constrain the behaviour of individuals in the interest of maximizing the wealth or utility of principals".
<br>
<br>In his latest book (1990), North poses the fundamental question of why some countries are rich and others poor. "Institutions provide the basic structure by which human beings throughout history have created order and attempted to reduce uncertainty in exchange. Together with the technology employed, they determine transaction and transformation costs and hence the profitability and feasibility of engaging in economic activity." Greater institutional changes occur slowly, since institutions are the result of historical change, which has moulded individual behaviour. The greater the institutional uncertainty, the greater become the transaction costs. The lack of opportunity of entering binding contracts and other institutional arrangements is a cause of economic stagnation, both in today's developing countries and the former socialistic states. North has tried to explain the difficulties that meet these countries by focusing his analysis on the political and legal framework for economic growth. In his book he poses fundamental questions concerning the connection between economic change, technical development, and institutional conditions. He shows both the difficulties that neo-classical theory has had in explaining growth, and the strength of using this theory in combination with the approaches he has proposed. North has forced economists to rethink, to be conscious of when economic "laws" are sufficient as an explanation of a given problem, and of when other factors must be taken into account.
<br>
<br>North has, like Fogel, inspired a large number of research workers. His persistent stressing of the importance of stringent theory, together with his emphasis on the role of institutions, has influenced not only economic historians, but also economists and political scientists. Fogel is an empiricist, who never leaves any sources unexplored. North can be compared to those prize winners who have previously received the prize for purely theoretical works. North is an inspirer, a producer of ideas, who identifies new problems and shows how economists can solve the old ones more effectively.
<br>
<br>
<br>Fogel and North have thus in different ways renewed research in economic history, by making it more stringent and more theoretically conscious.
<br>Author: V宝宝 Time: 2005-12-9 04:03
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN - THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>13 October 1992
<br>
<br>THIS YEAR'S LAUREATE HAS EXTENDED THE SPHERE OF ECONOMIC ANALYSIS TO NEW AREAS OF HUMAN BEHAVIOR AND RELATIONS.
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, 1992, to
<br>
<br>Professor Gary S. Becker, University of Chicago, USA,
<br>
<br>for having extended the domain of microeconomic analysis to a wide range of human behavior and interaction, including nonmarket behavior.
<br>
<br>
<br>Gary Becker's research contribution consists primarily of having extended the domain of economic theory to aspects of human behavior which had previously been dealt with - if at all - by other social science disciplines such as sociology, demography and criminology. In so doing, he has stimulated economists to tackle new problems.
<br>
<br>Gary Becker's research program is founded on the idea that the behavior of an individual adheres to the same fundamental principles in a number of different areas. The same explanatory model should thus, according to Becker, be applicable in analyzing highly diverse aspects of human behavior. The explanatory model which Becker has chosen to work with is based on what he calls an economic approach, which he has applied to one area after another. This approach is characterized by the fact that individual agents - regardless of whether they are households, firms or other organizations - are assumed to behave rationally, i.e., purposefully, and that their behavior can be described as if they maximized a specific objective function, such as utility or wealth. Gary Becker has applied the principle of rational, optimizing behavior to areas where researchers formerly assumed that behavior is habitual and often downright irrational. Becker has borrowed an aphorism from Bernard Shaw to describe his methodological philosophy: "Economy is the art of making the most of life".
<br>
<br>Becker's applications of his basic model to different types of human behavior can be accounted for by distinguishing among four research areas: (i) investments in human capital; (ii) behavior of the family (or household), including distribution of work and allocation of time in the family; (iii) crime and punishment; and (iv) discrimination on the markets for labor and goods.
<br>
<br>Human Capital
<br>
<br>Gary Becker's most noteworthy contribution is perhaps to be found in the area of human capital, i.e., human competence, and the consequences of investments in human competence. The theory of human capital is considerably older than Becker's work in this field. His foremost achievement is to have formulated and formalized the microeconomic foundations of the theory. In doing so, he has developed the human-capital approach into a general theory for determining the distribution of labor income. The predictions of the theory with respect to the wage structure have been formulated in so-called human-capital- earnings functions, which specify the relation between earnings and human capital. These contributions were first presented in some articles in the early 1960s and were developed further, both theoretically and empirically, in his book, Human Capital, written in 1964.
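<br>
<br>The human-capital-earnings functions referred to above are commonly written in the Mincer form; the display below is the standard textbook specification from this literature, with illustrative notation rather than Becker's own:
```latex
% A standard Mincer-type human-capital-earnings function (illustrative
% notation; the common specification in the literature referred to above):
\ln w_i = \beta_0 + \beta_1 s_i + \beta_2 x_i + \beta_3 x_i^2 + \varepsilon_i
% w_i: earnings of individual i; s_i: years of schooling; x_i: years of
% labor-market experience; \beta_1 is read as the average return to an
% additional year of schooling.
```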
<br>
<br>The theory of human capital has created a uniform and generally applicable analytical framework for studying not only the return on education and on-the-job training, but also wage differentials and wage profiles over time. Other important applications, pursued by various economists, include a breakdown into components of the factors underlying economic growth, migration, as well as investments and earnings in the health sector. The human-capital approach also helps explain trade patterns across countries; in fact, differences in the supply of human capital among countries have been shown to have more explanatory power than differences in the supply of real capital.
<br>
<br>Practical applications of the theory of human capital have been facilitated dramatically by the increased availability of microdata, for example, panel data, on wages and different characteristics of labor. This development has also been stimulated by Becker's theoretical and empirical studies. It is hardly an overstatement to say that the human-capital approach is one of the most empirically applied theories in economics today.
<br>
<br>Household and Family
<br>
<br>Gary Becker has carried out an even more radical extension of the applicability of economic theory in his analysis of relations among individuals outside of the market system. The most notable example is his analysis of the functions of the family. These studies are summarized in his book, A Treatise on the Family, written in 1981.
<br>
<br>A basic idea in Becker's analysis is that a household can be regarded as a "small factory" which produces what he calls basic goods, such as meals, a residence, entertainment, etc., using time and input of ordinary market goods, "semi-manufactures", which the household purchases on the market. In this type of analysis, prices of basic goods have two components. The first is comprised of the direct costs of purchasing intermediate goods on the market. The second is the time expenditure for production and consumption of the good in question. For a specific good, this time expenditure is equivalent to wages multiplied by the time spent per unit of the good produced in the household. This implies that an increase in the wage of one member of the household gives rise not only to changed incentives for work on the market, but also to a shift from more to less time-intensive production and consumption of goods produced by the household, i.e., basic goods. Instead of an analysis in terms of the traditional dichotomy between work and leisure, Becker's model provides a general theory for the household's allocation of time, as exemplified in the essay, A Theory of the Allocation of Time, from 1965. This approach has turned out to be a highly useful foundation for examining many different issues associated with household behavior.
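<br>
<br>Becker's "full price" of a basic good can be summarized in one equation; the notation below is an illustrative rendering of the 1965 time-allocation model, not a quotation from it:
```latex
% Full price of basic good Z_i in the household-production model
% (illustrative notation): market inputs plus time valued at the wage,
\pi_i = p_i b_i + w t_i
% b_i: market goods and t_i: time used per unit of Z_i; p_i: price of the
% market inputs; w: the wage rate. A rise in w raises the relative full
% price of time-intensive basic goods, shifting production and
% consumption toward less time-intensive ones.
```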
<br>
<br>Becker has gone even further. He has formulated a general theory for behavior of the family - including not only the distribution of work and the allocation of time in the family, but also decisions regarding marriage, divorce and children. As real wages increase, along with the possibilities of substituting capital for labor in housework, labor is released in the household, so that it becomes more and more uneconomical to let one member of the household specialize wholly in household production (for instance, child care). As a result, some of the family's previous social and economic functions are shifted to other institutions such as firms, schools and other public agencies. Becker has argued that these processes explain not only the increase in married women's job participation outside the home, but also the rising tendency toward divorce; see his article, Human Capital and the Rise and Fall of Families (coauthored by N. Tomes), 1986.
<br>
<br>Alongside Becker's analysis of the distribution of labor and allocation of time in the household, his most influential contribution in the context of the household and the family is probably his studies on fertility, which were initiated in an essay entitled, An Economic Analysis of Fertility, 1960. Parents are assumed to have preferences regarding both the number and educational level of their children, where the educational level is affected by the amount of time and other resources that parents spend on their children. Investments in children's human capital may then be derived as a function of income and prices. As wages rise, parents increase their investments in human capital, combined with a decrease in the number of children. Becker uses this theory to explain, for example, the historical decline in fertility in industrialized countries, as well as the variations in fertility among different countries and between urban and rural areas. In particular, the highly extensive family policy in Sweden, to which Becker often refers, suggests the merits of an economic approach to the analysis of these issues.
<br>
<br>Crime and Punishment
<br>
<br>The third area where Gary Becker has applied the theory of rational behavior and human capital is "crime and punishment". A criminal, with the exception of a limited number of psychopaths, is assumed to react to different stimuli in a predictable ("rational") way, both with respect to returns and costs, such as in the form of expected punishment. Instead of regarding criminal activity as irrational behavior associated with the specific psychological and social status of an offender, criminality is analyzed as rational behavior under uncertainty. These ideas are set forth, for example, in Becker's essay, Crime and Punishment: An Economic Approach, 1968, and in Essays in the Economics of Crime and Punishment, 1974.
<br>
<br>Empirical studies related to this approach indicate that the type of crime committed by a certain group of individuals may to a large extent be explained by an individual's human capital (and hence, education). These empirical studies have also shown that the probability of getting caught has a more deterrent effect on criminality than the term of the punishment.
<br>
<br>Economic Discrimination
<br>
<br>Another example of Becker's unconventional application of the theory of rational, optimizing behavior is his analysis of discrimination on the basis of race, sex, etc. This was Becker's first significant research contribution, published in his book entitled, The Economics of Discrimination, 1957. Discrimination is defined as a situation where an economic agent is prepared to incur a cost in order to refrain from an economic transaction, or from entering into an economic contract, with someone who is characterized by traits other than his/her own with respect to race or sex. Becker demonstrates that such behavior, in purely analytical terms, acts as a "tax wedge" between social and private economic rates of return. The explanation is that the discriminating agent behaves as if the price of the good or service purchased from the discriminated agent were higher than the price actually paid, and the selling price to the discriminated agent is lower than the price actually obtained. Discrimination thus tends to be economically detrimental not only to those who are discriminated against, but also to those who practice discrimination.
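<br>
<br>Becker's analysis is often summarized with a discrimination coefficient; the display below is a standard textbook rendering of that idea, with illustrative notation:
```latex
% Discrimination coefficient d (illustrative notation): an employer with
% a "taste for discrimination" acts as if employing a worker from the
% discriminated group cost
w(1 + d), \qquad d > 0 ,
% rather than the actual wage w. The term w d is the "tax wedge" between
% private and social returns described in the text.
```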
<br>
<br>Becker's Influence
<br>
<br>Gary Becker's analysis has often been controversial and hence, at the outset, met with scepticism and even distrust. Despite this, he was not discouraged, but persevered in developing his research, gradually gaining increasing acceptance among economists for his ideas and methods.
<br>
<br>
<br>A not insignificant influence may also be discerned in other social sciences. Various aspects of demography constitute one example, particularly in regard to fertility, parents' efforts to ensure their children's education and development, as well as inheritance. Additional examples are research on discrimination in the labor market, and crime and punishment. But Becker has also had an indirect impact on scientific approaches in social sciences other than economics; more frequently than in the past, sociologists and political scientists work with models based on theories of "rational choice".
<br>Author: V宝宝 Time: 2005-12-9 04:04
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN - THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>15 October 1991
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel, 1991, to
<br>
<br>Professor Ronald Coase, University of Chicago, USA,
<br>
<br>for his discovery and clarification of the significance of transaction costs and property rights for the institutional structure and functioning of the economy.
<br>
<br>Breakthrough in Understanding the Institutional Structure of the Economy
<br>
<br>Until recently, basic economic analysis concentrated on studying the functioning of the economy in the framework of an institutional structure which was taken as given. Efforts to explain the institutional structure were usually considered unnecessary or futile. For instance, the existence of organizations of the type we call firms seemed almost self-evident. Observed variations in contract forms in the economic sphere were also regarded as a given fact, and the laws and rules of the legal system were perceived as an externally imposed setting for economic activity.
<br>
<br>By means of a radical extension of economic micro theory, Ronald Coase succeeded in specifying principles for explaining the institutional structure of the economy, thereby also making new contributions to our understanding of the way the economy functions. His achievements have provided legal science, economic history and organization theory with powerful impulses and are therefore also highly significant in an interdisciplinary context. Coase's contributions are the result of methodical research work, where each segment was gradually added to the next over a period of many years. It took a long time for his approach to gain a foothold. When the breakthrough finally occurred during the 1970s and 1980s, it was all the more emphatic. Today Coase's theories are among the most dynamic forces behind research in economic science and jurisprudence.
<br>
<br>Coase showed that traditional basic microeconomic theory was incomplete because it only included production and transport costs, whereas it neglected the costs of entering into and executing contracts and managing organizations. Such costs are commonly known as transaction costs and they account for a considerable share of the total use of resources in the economy. Thus, traditional theory had not embodied all of the restrictions which bind the allocations of economic agents. When transaction costs are taken into account, it turns out that the existence of firms, different corporate forms, variations in contract arrangements, the structure of the financial system and even fundamental features of the legal system can be given relatively simple explanations. By incorporating different types of transaction costs, Coase paved the way for a systematic analysis of institutions in the economic system and their significance.
<br>
<br>Coase also demonstrated that the power and precision of analysis may be enhanced if it is carried out in terms of rights to use goods and factors of production instead of the goods and factors themselves. These rights, which came to be called "property rights" in economic analysis, may be comprised of full ownership, different kinds of usership rights or specific and limited decision and disposal rights, defined by clauses in contracts or by internal rules in organizations. The definition of property rights and their distribution among individuals by law, contract clauses and other rules determine economic decisions and their outcome. Coase showed that every given distribution of property rights among individuals tends to be reallocated through contracts if it is to the mutual advantage of the parties and not prevented by transaction costs, and that institutional arrangements other than contracts emerge if they imply lower transaction costs. Modifications of legal rules by courts and legislators are also encompassed by these arrangements. Property rights thus constitute a basic component in analyses of the institutional structure of the economy. In perhaps somewhat pretentious terminology, Coase may be said to have identified a new set of "elementary particles" in the economic system. Other researchers, to some extent under the influence of Coase, have also made pioneering contributions to the study of property rights.
<br>
<br>Coase's Contributions: First Stage
<br>
<br>In his first major study entitled, The Nature of the Firm, Coase posed two questions which had seldom been the objects of strict economic analysis and, prior to Coase, lacked robust and valid solutions, i.e., why are there organizations of the type represented by firms and why is each firm of a certain size? A key result in traditional theory was to show the ability of the price system (or the market mechanism) to coordinate the use of resources. The applicability of this theory was diminished by the fact that a large proportion of total use of resources was deliberately withheld from the price mechanism in order to be coordinated administratively within firms.
<br>
<br>This is the point at which Coase introduced transaction costs and illustrated their crucial importance. Alongside production costs, there are costs for preparing, entering into and monitoring the execution of all kinds of contracts, as well as costs for implementing allocative measures within firms in a corresponding way. If these circumstances are taken into account, it may be concluded that a firm originates when allocative measures are carried out at lower total production, contract and administrative costs within the firm than by means of purchases and sales on the market. Similarly, a firm expands to the point where an additional allocative measure costs more internally than it would through a contract on markets. If transaction costs were zero, no firms would arise. All allocation would take place through simple contracts between individuals.
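<br>
<br>Coase's argument about the size of the firm can be stated as a marginal condition; the notation below is an illustrative paraphrase, not Coase's own formalism:
```latex
% An illustrative marginal condition for the size of the firm: organize
% a further transaction inside the firm while doing so internally is
% cheaper than contracting for it on the market, and stop expanding where
C_{\text{internal}}(q^{*}) = C_{\text{market}}(q^{*}) .
% With zero transaction costs, market contracting would never be the
% dearer option and no firms would arise - as the text concludes.
```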
<br>
<br>An important element in the model is that there are two types of contracts: those which stipulate the parties' total obligations (or, the reverse, rights) and those which are deliberately made incomplete by not specifying all obligations, but intentionally allow a free margin for unilateral decisions by one of the parties. Such "open" agreements may be exemplified by employment contracts, which usually leave room for direction and giving orders. According to Coase's theory, the firm is characterized by the latitude for decision created by a particular cluster of such open contracts. The firm in fact consists of this array of contracts and is related to the rest of the world by other fully specified contracts regarding purchases of inputs, sales of products, and loans under prescribed terms.
<br>
<br>Coase's formulation has proved to be exceedingly practicable and has given rise to intensive examination of the contract relations which characterize firms. It is now clear that every type of firm is comprised of a distinctive contract structure and thereby a specific distribution of rights and obligations (property rights). Coase's work on the firm has become the basis for rapidly expanding research on principal-agent relations. It has also influenced vital aspects of financial economics, such as the lively research devoted to explaining the pattern of financial intermediaries.
<br>
<br>Coase's Contributions: Second Stage
<br>
<br>In retrospect, it is easy to realize that these examinations of firms' basic characteristics would provide a basis for more general conclusions regarding the institutional structure of the economic system. Coase himself laid the groundwork in a subsequent stage.
<br>
<br>In another major study entitled, The Problem of Social Cost, Coase introduced the set-up in terms of rights or property rights. He postulated that if a property right is well defined, if it can be transferred, and if the transaction costs in an agreement which transfers the right from one holder to another are zero, then the use of resources does not depend on whether the right was initially allotted to one party or the other (except for the difference which can arise if the distribution of wealth between the two parties is affected). If the initial holding entailed an unfavorable total result, the better result would be brought about spontaneously through a voluntary contract, as it can be executed at no cost and both parties gain from it. In other words, all legislation which deals with granting rights to individuals would be meaningless in terms of the use of resources; parties would "agree themselves around" every given distribution of rights if it is to their mutual advantage. Thus, a large amount of legislation would serve no material purpose if transaction costs are zero. This thesis is a direct parallel to the conclusion in The Nature of the Firm that firms under the same conditions are superfluous. All allocations could be effectuated through simple, uncomplicated agreements without administrative features, i.e., through frictionless markets.
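<br>
<br>A toy numerical rendering of this thesis: with zero transaction costs the final use of resources is the same whichever party initially holds the right, and only the distribution of wealth differs. The rancher-farmer setting and all numbers are hypothetical:
```python
# Illustration of the allocation-invariance claim under zero transaction
# costs, in the classic rancher/farmer setting (numbers hypothetical).
rancher_profit_if_herd_roams = 100   # value to the rancher of roaming cattle
farmer_loss_if_herd_roams = 150     # crop damage suffered by the farmer

def outcome(right_holder):
    # Costless bargaining: the party that values the right more ends up
    # determining the use of resources, via a side payment if necessary.
    if farmer_loss_if_herd_roams > rancher_profit_if_herd_roams:
        # Fencing the herd is efficient; the farmer compensates the rancher
        # only if the rancher initially holds the right to let cattle roam.
        payment = rancher_profit_if_herd_roams if right_holder == "rancher" else 0
        return ("herd fenced", payment)
    else:
        payment = farmer_loss_if_herd_roams if right_holder == "farmer" else 0
        return ("herd roams", payment)

print(outcome("rancher"))  # ('herd fenced', 100): farmer pays the rancher
print(outcome("farmer"))   # ('herd fenced', 0): same allocation, different wealth
```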
<br>
<br>This led Coase to conclude that it is the fact that transaction costs are never zero which indeed explains the institutional structure of the economy, including variations in contract forms and many kinds of legislation. Or, more exactly, the institutional structure of the economy may be explained by the relative costs of different institutional arrangements, combined with parties' efforts to keep total costs at a minimum. Alongside price formation, the formation of the institutional structure is regarded as an integral step in the process of resource distribution. Hence, economic institutions do not require a "separate" theory. It is sufficient to render existing theory complete and formulate it in terms of the primary components, i.e., property rights.
<br>
<br>These conclusions concerning the radical effects of ever-prevalent transaction costs are thus the main result of Coase's analysis. Somewhat paradoxically, circumstances have ordained that it is the preceding conclusion about the consequences of overlooking transaction costs which has come to be called the "Coase Theorem". Of course, the situation without transaction costs is only a hypothetical norm of comparison. However, it can facilitate the analysis of real-world conditions. It may also inspire studies of contracting which can actually be observed, in areas where earlier theory prematurely took it for granted that transaction costs are so high that contracts are inconceivable. Further examinations by Coase himself or students and others inspired by him have shown that in some such cases, transaction costs are not so high as to preclude a contract. Such contracts are found to have strong peculiarities, created by the parties in order to alleviate the drawbacks of high transaction costs. These observations are wholly in line with Coase's main conclusion. In cases where transaction costs absolutely prevent a contract, there is - as inferred by the theorem - a tendency for other institutional arrangements to arise, for example a firm or amended legislation. The circle is closed; this is exactly the message conveyed by The Nature of the Firm.
<br>
<br>
<br>As regards legislation, in The Problem of Social Cost, Coase developed a hypothesis concerning the behavior of courts in rather frequent cases where two (or more) parties dispute rights and where agreements are impossible or extremely difficult because of high transaction costs. Coase found that courts probably try to distribute the rights among the parties so as to realize the solution which would have been the outcome of an agreement, if such an agreement had been possible. The underlying idea is that this is a natural and rational way for a court to reason if it is more intent on setting a precedent to generate expedient incentives for the future than solving a particular dispute. This means that common pleas courts serve as an extension of the market mechanism to areas where it cannot function due to transaction costs. This hypothesis has become immensely important because, along with the general formulation in terms of rights or property rights, it has become the impetus for developing the new discipline of "law and economics" and, in prolongation, for renewal of many aspects of legal science.
<br>Author: V宝宝 Time: 2005-12-9 04:05
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel
<br>
<br>KUNGL. VETENSKAPSAKADEMIEN - THE ROYAL SWEDISH ACADEMY OF SCIENCES
<br>
<br>16 October 1990
<br>
<br>THIS YEAR'S LAUREATES ARE PIONEERS IN THE THEORY OF FINANCIAL ECONOMICS AND CORPORATE FINANCE
<br>
<br>The Royal Swedish Academy of Sciences has decided to award the 1990 Alfred Nobel Memorial Prize in Economic Sciences with one third each, to
<br>
<br>Professor Harry Markowitz, City University of New York, USA,
<br>Professor Merton Miller, University of Chicago, USA,
<br>Professor William Sharpe, Stanford University, USA,
<br>
<br>for their pioneering work in the theory of financial economics.
<br>
<br>Harry Markowitz is awarded the Prize for having developed the theory of portfolio choice;
<br>William Sharpe, for his contributions to the theory of price formation for financial assets, the so-called, Capital Asset Pricing Model (CAPM); and
<br>Merton Miller, for his fundamental contributions to the theory of corporate finance.
<br>
<br>Summary
<br>Financial markets serve a key purpose in a modern market economy by allocating productive resources among various areas of production. It is to a large extent through financial markets that saving in different sectors of the economy is transferred to firms for investments in buildings and machines. Financial markets also reflect firms' expected prospects and risks, which implies that risks can be spread and that savers and investors can acquire valuable information for their investment decisions.
<br>
<br>The first pioneering contribution in the field of financial economics was made in the 1950s by Harry Markowitz who developed a theory for households' and firms' allocation of financial assets under uncertainty, the so-called theory of portfolio choice. This theory analyzes how wealth can be optimally invested in assets which differ in regard to their expected return and risk, and thereby also how risks can be reduced.
<br>
<br>A second significant contribution to the theory of financial economics occurred during the 1960s when a number of researchers, among whom William Sharpe was the leading figure, used Markowitz's portfolio theory as a basis for developing a theory of price formation for financial assets, the so-called Capital Asset Pricing Model, or CAPM.
<br>
<br>A third pioneering contribution to financial economics concerns the theory of corporate finance and the evaluation of firms on markets. The most important achievements in this field were made by Merton Miller, initially in collaboration with Franco Modigliani (who received the Alfred Nobel Memorial Prize in Economic Sciences in 1985 mainly for other contributions). This theory explains the relation (or lack of one) between firms' capital asset structure and dividend policy on one hand and their market value on the other.
<br>
<br>Harry M. Markowitz
<br>The contribution for which Harry Markowitz now receives his award was first published in an essay entitled "Portfolio Selection" (1952), and later, more extensively, in his book, Portfolio Selection: Efficient Diversification of Investments (1959). The so-called theory of portfolio selection that was developed in this early work was originally a normative theory for investment managers, i.e., a theory for optimal investment of wealth in assets which differ in regard to their expected return and risk. On a general level, of course, investment managers and academic economists have long been aware of the necessity of taking returns as well as risk into account: "all the eggs should not be placed in the same basket". Markowitz's primary contribution consisted of developing a rigorously formulated, operational theory for portfolio selection under uncertainty - a theory which evolved into a foundation for further research in financial economics.
<br>
<br>Markowitz showed that under certain given conditions, an investor's portfolio choice can be reduced to balancing two dimensions, i.e., the expected return on the portfolio and its variance. Due to the possibility of reducing risk through diversification, the risk of the portfolio, measured as its variance, will depend not only on the individual variances of the return on different assets, but also on the pairwise covariances of all assets.
<br>
<br>Hence, the essential aspect pertaining to the risk of an asset is not the risk of each asset in isolation, but the contribution of each asset to the risk of the aggregate portfolio. However, the "law of large numbers" is not wholly applicable to the diversification of risks in portfolio choice because the returns on different assets are correlated in practice. Thus, in general, risk cannot be totally eliminated, regardless of how many types of securities are represented in a portfolio.
<br>
<br>In this way, the complicated and multidimensional problem of portfolio choice with respect to a large number of different assets, each with varying properties, is reduced to a conceptually simple two-dimensional problem - known as mean-variance analysis. In an essay in 1956, Markowitz also showed how the problem of actually calculating the optimal portfolio could be solved. (In technical terms, this means that the analysis is formulated as a quadratic programming problem; the building blocks are a quadratic utility function, expected returns on the different assets, the variance and covariance of the assets and the investor's budget restrictions.) The model has won wide acclaim due to its algebraic simplicity and suitability for empirical applications.
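<br>
<br>A minimal numerical sketch of the mean-variance arithmetic described above: portfolio variance built from the full covariance matrix, plus the closed-form global minimum-variance weights. The return and covariance figures are hypothetical:
```python
# Mean-variance arithmetic and the global minimum-variance portfolio,
# w = S^{-1} 1 / (1' S^{-1} 1); all input numbers are hypothetical.
import numpy as np

mu = np.array([0.08, 0.12, 0.10])          # expected returns
S = np.array([[0.040, 0.006, 0.010],       # covariance matrix
              [0.006, 0.090, 0.012],
              [0.010, 0.012, 0.060]])

ones = np.ones(len(mu))
w = np.linalg.solve(S, ones)               # S^{-1} 1
w /= w @ ones                              # normalize to sum to one

port_mean = w @ mu
port_var = w @ S @ w                       # depends on the covariances,
                                           # not just individual variances
print(np.round(w, 3), round(port_mean, 4), round(port_var, 4))
```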

Generally speaking, Markowitz's work on portfolio theory may be regarded as having established financial micro analysis as a respectable research area in economic analysis.

William F. Sharpe

With the formulation of the so-called Capital Asset Pricing Model, or CAPM, which used Markowitz's model as a "positive" (explanatory) theory, the step was taken from micro analysis to market analysis of price formation for financial assets. In the mid-1960s, several researchers - independently of one another - contributed to this development. William Sharpe's pioneering achievement in this field was contained in his essay entitled "Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk" (1964).

The basis of the CAPM is that an individual investor can choose exposure to risk through a combination of lending-borrowing and a suitably composed (optimal) portfolio of risky securities. According to the CAPM, the composition of this optimal risk portfolio depends on the investor's assessment of the future prospects of different securities, and not on the investor's own attitude towards risk. The latter is reflected solely in the choice of a combination of a risk portfolio and risk-free investment (for instance, treasury bills) or borrowing. An investor who does not have any special information, i.e., better information than other investors, has no reason to hold a portfolio of shares different from that of other investors, i.e., the so-called market portfolio of shares.

What is known as the "beta value" of a specific share indicates its marginal contribution to the risk of the entire market portfolio of risky securities. Shares with a beta coefficient greater than 1 have an above-average effect on the risk of the aggregate portfolio, whereas shares with a beta coefficient of less than 1 have a below-average effect. According to the CAPM, in an efficient capital market, the risk premium, and thus also the expected return on an asset, will vary in direct proportion to the beta value. These relations are generated by equilibrium price formation on efficient capital markets.

An important result is that the expected return on an asset is determined by the beta coefficient of the asset, which also measures the covariance between the return on the asset and the return on the market portfolio. The CAPM shows that risks can be shifted to the capital market, where risks can be bought, sold and evaluated. In this way, the prices of risky assets are adjusted so that portfolio decisions become consistent.
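
A small simulation (hypothetical, synthetic return data) illustrating how a share's beta is estimated from the covariance just described, and how the CAPM then prices the share:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (hypothetical) monthly returns: a market index and one share
# whose true beta is 1.3 plus idiosyncratic noise.
r_market = rng.normal(0.01, 0.05, size=1000)
r_share = 1.3 * r_market + rng.normal(0.0, 0.02, size=1000)

# beta = Cov(r_i, r_m) / Var(r_m): the share's marginal contribution
# to the risk of the market portfolio.
beta = np.cov(r_share, r_market)[0, 1] / np.var(r_market, ddof=1)

# CAPM: E[r_i] = r_f + beta * (E[r_m] - r_f); the risk premium is
# proportional to beta. The rates below are hypothetical.
r_f, market_premium = 0.03, 0.06
expected_return = r_f + beta * market_premium

print(f"estimated beta = {beta:.3f}, CAPM expected return = {expected_return:.3%}")
```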

The CAPM is considered the backbone of modern price theory for financial markets. It is also widely used in empirical analysis, so that the abundance of financial statistical data can be utilized systematically and efficiently. Moreover, the model is applied extensively in practical research and has thus become an important basis for decision-making in different areas. This is related to the fact that such studies require information about firms' costs of capital, where the risk premium is an essential component. Risk premiums which are specific to an industry can thus be determined using information on the beta value of the industry in question.

Important examples of areas where the CAPM and its beta coefficients are used routinely include calculations of costs of capital associated with investment and takeover decisions (in order to arrive at a discount factor); estimates of costs of capital as a basis for pricing in regulated public utilities; and judicial inquiries related to court decisions regarding compensation to expropriated firms whose shares are not listed on the stock market. The CAPM is also applied in comparative analyses of the success of different investors.

Along with Markowitz's portfolio model, the CAPM has also become the framework in textbooks on financial economics throughout the world.

Merton Miller

While the model of portfolio choice and the CAPM focus on financial investors, Merton Miller - initially in collaboration with Franco Modigliani - established a theory for the relation, via the capital market, between the capital asset structure and dividend policy of production firms on one hand and firms' market value and costs of capital on the other.

The theory is based on the assumption that stockholders themselves have access to the same capital market as firms. This implies that within the limits of their asset portfolios, investors themselves can find their own balance between returns and risk. As a result, firms do not have to adjust their decisions to different stockholders' risk preferences. Corporate managers can best safeguard the interests of stockholders simply by maximizing the firm's net wealth. In other words, it is not in the investors' interest that firms reduce risks through diversification, as the stockholders can accomplish this themselves through their own portfolio choice.

The basic model was formulated in Miller's and Modigliani's essay entitled "The Cost of Capital, Corporation Finance and the Theory of Investment" (1958); it was followed by two other important essays in 1963 and 1966. Using this basic model, Miller and Modigliani derived two so-called invariance theorems, now known as the MM theorems.

The first invariance theorem states that (i) the choice between equity financing and borrowing does not affect a firm's market value and average costs of capital, and (ii) the expected return on a firm's shares (and hence the cost of equity capital) increases linearly with the ratio between the firm's liabilities and equity, i.e., the well-known leverage effect. The second invariance theorem states that under the same assumptions, a firm's dividend policy does not affect its market value.
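
A back-of-the-envelope illustration of part (ii), with hypothetical rates: the expected return on equity rises linearly with the debt-equity ratio, while the firm's average cost of capital stays constant - which is precisely statement (i):

```python
# Hypothetical firm: expected return on assets 10%, cost of debt 5%.
r_assets, r_debt = 0.10, 0.05

# Leverage effect: r_E = r_A + (D/E) * (r_A - r_D), linear in D/E,
# while the weighted average cost of capital stays at r_A throughout.
for d_over_e in (0.0, 0.5, 1.0, 2.0):
    r_equity = r_assets + d_over_e * (r_assets - r_debt)
    w_debt = d_over_e / (1.0 + d_over_e)        # debt weight D / (D + E)
    wacc = (1.0 - w_debt) * r_equity + w_debt * r_debt
    print(f"D/E = {d_over_e:.1f}: r_E = {r_equity:.3f}, average cost of capital = {wacc:.3f}")
```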

In retrospect, the intuition underlying the MM theorems appears simple. The effects of every change in a firm's financial asset structure on the stockholders' portfolios can be "counteracted" by changes in the stockholders' own portfolios. Investors are quite simply not prepared to "pay extra" for an "indirect" loan from a firm which increases its borrowing when the investor himself can borrow on equal terms on the market.

The intuition behind MM's second invariance theorem, i.e., that dividend policy does not affect the market value of the firm in equilibrium, is also apparent in retrospect. An additional dollar in dividends lowers the net wealth of the firm by one dollar which, in efficient stock markets, implies that the stockholders' shares are worth one dollar less. This relation is not quite as simple as it seems. As in the case of the first invariance theorem, the mechanism which generates this conclusion is that investors in the capital market can "counteract" changes in firms' financial structure.
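
A one-screen arithmetic sketch of this mechanism, with hypothetical numbers:

```python
# Hypothetical firm: 1,000 shares and net wealth of $100,000.
shares, firm_value = 1_000, 100_000.0
price_before = firm_value / shares            # $100 per share

dividend = 1.0                                # an extra dollar per share
firm_value_after = firm_value - dividend * shares
price_after = firm_value_after / shares       # $99: the share price drops

# In an efficient market the stockholder's total wealth per share
# (share value plus cash dividend received) is unchanged.
print(price_before, "->", price_after + dividend)   # 100.0 -> 100.0
```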

Both of the invariance theorems were originally derived under highly simplified assumptions. Subsequent research has therefore to a large extent dealt with the consequences of various deviations from the conditions on which the MM theorems were based. This research has been in progress since the mid-1960s, with Merton Miller as its leading figure.

Miller thus showed how the design of different tax structures affects the relation between firms' capital asset structure and market value, after taking into account the indirect market effects of taxes through equilibrium price formation on financial markets. Similarly, Miller analyzed the importance of bankruptcy costs for the relation between a firm's financial asset structure and dividend policy on one hand and its stock-market value on the other.

The main message of the MM theorems may be expressed as follows: if there is an optimal capital asset structure and dividend policy for firms, i.e., if the asset structure and dividend policy affect a firm's market value, then this reflects the consequences of taxes or other explicitly identified market imperfections. The MM theorems have therefore become the natural basis, or norm of comparison, for theoretical and empirical analysis in corporate finance. Merton Miller is the researcher who has dominated this analysis during the last two decades. He has thus made a unique contribution to the modern theory of corporate finance.

Author: V宝宝  Time: 2005-12-9 04:05
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel

KUNGL. VETENSKAPSAKADEMIEN / THE ROYAL SWEDISH ACADEMY OF SCIENCES

11 October 1989

THIS YEAR'S LAUREATE IN ECONOMICS SHOWED HOW ECONOMIC THEORIES CAN BE TESTED

The Royal Swedish Academy of Sciences has decided to award the 1989 Alfred Nobel Memorial Prize in Economic Sciences to

Professor Trygve Haavelmo, Oslo, Norway,

for his clarification of the probability theory foundations of econometrics and his analyses of simultaneous economic structures.

Summary

This year's prize in economic sciences is awarded to Trygve Haavelmo for his fundamental contributions to econometrics.

During the 1930s, noteworthy attempts were made to test economic theories empirically. The results of these attempts called attention to two fundamental problems associated with the possibility of testing economic theories. First, economic relations often refer to large aggregates of individuals or firms. Theories regarding such relations can never be expected to conform fully with available data, even in the absence of measurement errors. The difficult question then is to determine what should be considered sufficiently good, or better, conformity. Second, economists can seldom or never carry out controlled experiments in the same way as natural scientists. Available observations of market outcomes, etc., are results of a multitude of different behavior and relations which have mutually interacting effects. This gives rise to interdependence problems, i.e., difficulties in using observed data to identify, estimate, and test the underlying relations in an unambiguous way.

In his dissertation from 1941 and a number of subsequent studies, Trygve Haavelmo was able to show convincingly that both fundamental problems could be solved if economic theories were formulated in probabilistic terms. Methods used in mathematical statistics could then be applied to draw stringent conclusions about underlying relations from the random sample of empirical observations. Haavelmo demonstrated how these methods can be utilized to estimate and test economic theories and use them in forecasting. He also showed that misleading interpretations of individual relations due to interdependence cannot be avoided unless all relations in a theoretical model are estimated simultaneously.

Haavelmo's doctoral thesis had a swift and pathbreaking influence on the development of econometrics. His probability theory research program attracted a number of outstanding economists - among them, subsequent Nobel laureates such as Koopmans and Klein. This gave rise to extraordinarily rapid methodological development, primarily during the 1940s. The foundation of modern econometric methods had thus been established.

The Probability Theory Revolution in Econometrics

Econometric research has been carried out since the beginning of this century. Initially, U.S. economists such as Moore and Schultz worked on econometric determination of supply and demand on individual markets. During the 1930s, Tinbergen and Haavelmo's own teacher, Ragnar Frisch, made the first attempts to apply corresponding methods to test various macrodynamic relations. These estimates touched on several problems which Haavelmo later analyzed in his dissertation.

Prior to Haavelmo's thesis, researchers lacked a common conceptual system for formulating, analyzing and solving econometric problems. At the time, econometric methods were seldom based on probability theory, and therefore statistical inference could not be used to draw conclusions from data. To the extent that calculations contained any random variations at all, they usually referred to measurement errors in the variables. Simple statistical methods - mainly regression analysis - were used in most instances, without any clear probability theory assumptions whatsoever. During this period, most prominent economists - including Keynes - rejected more extensive use of probability theory in empirical research on the grounds that, e.g., economic processes were irreversible. Many of the leading econometricians of the day - such as Frisch - were also skeptical about the possibility of applying statistical inference methods to economic data.

In his dissertation, Haavelmo refuted these various objections and showed that a probability theory formulation is not only a prerequisite for testing economic theories, but also extremely reasonable. Economists analyze the results of millions of decisions made by individuals and firms. According to Haavelmo, it is unreasonable to believe that economists could ever "fully" explain or predict such individual decisions on the basis of necessarily simplified assumptions. Decisions are in fact affected by individual characteristics and numerous temporary conditions which change over time. Therefore, economists' explanations of decisions always have to encompass a stochastic term which summarizes these different kinds of "disturbances". As economic theories in general do not refer to individual decisions but are concerned with relations which comprise long sequences of decisions and a multitude of decision-makers, there are frequent opportunities to make relatively simple assumptions about the probability distribution of these aggregate relations.

Haavelmo also demonstrated that by formulating theories in probability theory terms, statistical inference methods could be applied to estimate and test economic theories and use them in forecasting. Most of the problems he dealt with and analyzed are associated with interdependence in economic relations.

Interdependence Problems

In economic life, every individual decision may be regarded as affecting all other decisions through a chain of market relations. This economic interdependence creates problems in empirical research because an observed market outcome is the result of a large number of simultaneous or previous decisions and behavioral relations. Thus, an underlying relation can never be observed, as it were, in isolation, but only as conditioned by a number of other simultaneous relations and circumstances in the economy. As Haavelmo showed, interdependence gives rise to difficulties in specifying, identifying and estimating economic relations.

The difficulty of specifying economic explanatory models lies in choosing among numerous models or systems of relations which might explain the observed market outcome. When the relations in the model are interdependent, a set of model equations can be used to derive a multitude of other equation systems which produce the same observable result. Haavelmo emphasized the importance of trying to choose a set of relations which are each as "autonomous" as possible, i.e., which are not affected by changes in other parts of the system. For example, in order to determine the effect on private consumption of a decrease in household incomes due to changes in fiscal policy, the estimate of the propensity to consume used in the computation should obviously not be conditioned by previous fiscal policy. The choice of autonomous relations in explanatory models is primarily a matter of adequate knowledge and intuition regarding the basic mechanisms of the economy. However, Haavelmo also discussed the need for statistical invariance tests, and not too long ago researchers succeeded in developing a method whereby the autonomy of different relations can be tested statistically.

The fact that many different types and forms of explanatory models can explain observed data also gives rise to an identification problem. For example, if a theory is intended to explain the observed relations between price and sales on a market, the relations have to be sufficiently specified so as to be identifiable as demand and supply relations, respectively, with some given form of probability distribution. Haavelmo was the first to provide an explicit mathematical formulation and solution of the identification problem. Further development of identification criteria has been based on his formulation.

Interdependence also creates what Haavelmo called simultaneity problems in estimating models with several different structural relations. Since the combined relations limit the possible variations in the input variables, isolated estimates of individual relations can be highly misleading. Using a probability theory framework, Haavelmo provided a generally valid formulation and method of measuring this bias in isolated estimates of individual relations in an interdependent system. He also showed that the problem could be avoided by using a method of simultaneous estimation of interdependent models. Haavelmo's analysis of simultaneity problems has had a decisive influence on later work with econometric models.
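
The bias Haavelmo identified can be reproduced in a few lines. The simulation below uses a stylized textbook supply-demand system (hypothetical coefficients, not Haavelmo's own example): an isolated least-squares estimate of the demand slope is badly misleading, while an estimate exploiting an exogenous supply shifter recovers the true relation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Stylized interdependent system (all coefficients hypothetical):
#   demand: q = 10 - 1.0*p + u
#   supply: q =  2 + 0.5*p + 1.0*z + v,  z an observed exogenous cost shifter
u = rng.normal(0, 1, n)   # demand disturbance
v = rng.normal(0, 1, n)   # supply disturbance
z = rng.normal(0, 1, n)   # exogenous supply shifter

# Observed price and quantity solve both relations simultaneously.
p = (10 - 2 - z + u - v) / (1.0 + 0.5)
q = 10 - 1.0 * p + u

# Isolated least-squares estimate of the demand slope is biased, because
# the equilibrium makes p correlated with the demand disturbance u.
ols_slope = np.cov(q, p)[0, 1] / np.var(p, ddof=1)

# Exploiting the shifter z, which moves supply but not demand, identifies
# the demand relation: slope = Cov(q, z) / Cov(p, z).
iv_slope = np.cov(q, z)[0, 1] / np.cov(p, z)[0, 1]

print(f"true demand slope -1.0 | isolated OLS {ols_slope:.3f} | identified {iv_slope:.3f}")
```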

From Econometrics to Economic Theory

Once the foundation of probabilistic econometrics had been established, Haavelmo's next important research effort involved attempts to transform various components of economic theory so that the new econometric methods would be applicable. According to Haavelmo, the prerequisites for achieving this purpose were not only additional assumptions about probability distributions, but also, in many instances, a more dynamic theoretical formulation. There are two areas in particular - investment theory and economic development theory - where Haavelmo's approach has resulted in influential and far-reaching contributions. In addition to these main lines of research, Haavelmo's achievements include valuable contributions in numerous areas - from analysis of macroeconomic fluctuations and fiscal policy to price theory and the history of economic thought.

Major Publications

Haavelmo's most influential study is his doctoral dissertation entitled The Probability Approach in Econometrics, presented at Harvard University in April 1941, although not published until 1944 as a supplement to Econometrica. The argument in his thesis was later extended and exemplified in numerous publications, among which may be mentioned two articles in Econometrica: "The Statistical Implications of a System of Simultaneous Equations" (1943) and "Statistical Analysis of the Demand for Food" (1947, co-authored by M.A. Girshick).

Haavelmo's early research on economic development theory is summarized in his book entitled A Study in the Theory of Economic Evolution (1954). A corresponding résumé of Haavelmo's contribution to investment theory is given in A Study in the Theory of Investment (1960).

Author: V宝宝  Time: 2005-12-9 04:06
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel

KUNGL. VETENSKAPSAKADEMIEN / THE ROYAL SWEDISH ACADEMY OF SCIENCES

18 October 1988

The Royal Swedish Academy of Sciences has decided to award the 1988 Alfred Nobel Memorial Prize in Economic Sciences to

Professor Maurice Allais, École Nationale Supérieure des Mines de Paris, France,

for his pioneering contributions to the theory of markets and efficient utilization of resources.

One of the principal tasks of basic research in economics is to formulate a rigorous model of equilibrium in markets and examine the efficiency of this equilibrium. The problem dates back to Adam Smith and his theory of the "invisible hand" which coordinates - to all appearances - a chaotic structure composed of a multitude of independent, individual decisions based on self-interest. Paradoxically, this chaos gives rise to coordinated equilibria based on market prices: firms' production decisions come to correspond to consumers' planned consumption.

Adam Smith formulated his theory in the verbal and somewhat expository manner that was common in the social sciences during the latter part of the 18th century. About a hundred years later, other scholars tried to reformulate Smith's basic problems in mathematical terms. As a result, modern price theory as it emerged in the late 19th century differed radically from previous conceptions of "just" prices or prices based exclusively on production costs for labor.

The missing link in the development of a more rigorous theory was provided in the 1870s by the French economist Léon Walras. He formulated his model of the economic system as a large system of equations which described individuals' demand for goods and services and their supply of labor and other productive inputs, along with firms' supply of goods and their demand for various factors of production. A set of prices which gave rise to equilibrium between supply and demand could, in fact, be regarded as a solution to this extremely large and complex system of equations. Later on, Walras's model was developed further by, among others, the Italian economist and sociologist Vilfredo Pareto. The Swedish economist Gustav Cassel formulated a somewhat simplified version which had a significant impact internationally.
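
In modern textbook notation (not Walras's own), the idea can be stated compactly:

```latex
% Walrasian equilibrium: n goods, price vector p, aggregate demand D_i(p)
% and supply S_i(p) for each good i. An equilibrium price vector p* solves
% the excess-demand system
z_i(p^*) = D_i(p^*) - S_i(p^*) = 0, \qquad i = 1, \dots, n.
% By Walras's law, p \cdot z(p) = 0 holds identically, so at most n - 1
% of the equations are independent and only relative prices are determined.
```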

The foremost contribution of Maurice Allais was made in the 1940s when he continued to develop Walras's and Pareto's work by providing increasingly rigorous mathematical formulations of market equilibrium and the efficiency properties of markets. On the basis of mathematical models of households' and firms' planning and choice, he introduced a very general formulation of the conditions for market equilibrium. Allais's two pioneering works are À la Recherche d'une Discipline Économique, published during the war in 1943, and Économie et Intérêt (1947). A second edition of the first book appeared in 1952 as Traité d'Économie Pure. Each of these studies was extensive; the first comprised about 900 pages and the second approximately 800.

Traité d'Économie Pure contains a general and rigorous formulation of the two basic propositions of welfare theory. An economic situation with equilibrium prices is socially efficient in the sense that no one can become better off without someone else becoming worse off. In addition, under certain reasonable conditions, each such socially efficient situation can be achieved through redistribution of initial resources and a system of equilibrium prices. These propositions are important not only as results of basic research, but also as guidelines for planning in, e.g., the public sector by means of prices (instead of direct regulation). Allais also formulated a generalization which covers the case where various kinds of returns to scale may give rise to natural monopolies. Through his analysis of market equilibrium and social efficiency, Allais laid the foundations for the school of postwar French economists who not only analyzed the conditions for efficient use of resources in large public monopolies (such as Électricité de France or SNCF, the state-owned railroad), but also in many instances applied the theory to business management.

Allais's two monumental works also contain many results which represent very early contributions in areas that were not explored until much later on. He used new mathematical methods to analyze the stability of equilibria, i.e., the conditions under which an economy - after a disturbance - will return to equilibrium through price formation. In his 1948 study, Allais anticipated important results in research which led to the modern theory of economic growth in the late 1950s and early 1960s.

Allais's distinguished contribution may, to some extent, be regarded as a parallel to two important works published around the same time in the Anglo-Saxon research community: Value and Capital (1939) by Sir John Hicks, and Foundations of Economic Analysis (1947) by Paul A. Samuelson. Hicks was awarded the Nobel memorial prize in economic sciences in 1972 and Samuelson in 1970. The similarity lies primarily in the objective of providing a comprehensive and rigorous interpretation of economic theory. The main difference is perhaps that Allais's formulation is more general and includes an analysis of, e.g., households' and firms' long-run (or intertemporal) planning. The work of Allais served as a basis for the analysis of market equilibrium and social efficiency using more advanced mathematical methods carried out by his pupil Gérard Debreu (laureate in 1983), concurrently with, and sometimes in collaboration with, Kenneth Arrow (laureate in 1972).

Allais's outstanding achievements may be characterized as basic research in economics. Through his links to an older French tradition in economic research, Maurice Allais is the most prominent figure in modern economic research in France as regards basic theory and its application to public-sector planning. Even though his fundamental research has been relatively little known beyond the French-speaking sphere, Allais has had a far-reaching indirect impact through younger French economists who have been strongly influenced by his work.

Maurice Allais has also made distinguished, pioneering and often highly original contributions in other areas of economic research. At an early stage, he carried out theoretical and empirical studies on the significance and determinants of the volume of money; he was thus an initiator of monetary macrodynamic analyses. Outside of a rather small circle of economists, he is perhaps best known for his studies of risk theory and the so-called Allais paradox. He has shown that the theory of maximization of expected utility, which has been accepted for more than forty years, does not apply to many empirically realistic decisions under risk and uncertainty.
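
The standard textbook version of the paradox runs as follows (payoffs in millions; the specific utility function in the sketch is an arbitrary illustration):

```python
# Experiment 1: A = 1 for sure;          B = 10% of 5, 89% of 1, 1% of 0.
# Experiment 2: C = 11% of 1, 89% of 0;  D = 10% of 5, 90% of 0.
A = [(1.00, 1)]
B = [(0.10, 5), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

def expected_utility(lottery, u):
    return sum(prob * u(payoff) for prob, payoff in lottery)

u = lambda x: x ** 0.5   # any increasing utility function will do

# EU(A) - EU(B) and EU(C) - EU(D) are algebraically identical: both equal
# 0.11*u(1) - 0.10*u(5) - 0.01*u(0). Expected utility therefore forces the
# two choices to agree in direction - yet most people choose A and D.
print("prefers A over B:", expected_utility(A, u) > expected_utility(B, u))
print("prefers C over D:", expected_utility(C, u) > expected_utility(D, u))
```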

During the past two decades, Allais has tried to generalize market theory by emphasizing its dynamic aspects. The impetus for consumers' and producers' economic behavior consists of efforts to use any surpluses that may arise in an economy through previously unexploited exchange opportunities. Equilibrium is reached when these surpluses have been exhausted. Allais summarized many of his early and more recent research contributions in La Théorie Générale des Surplus (1981).

The sum of Allais's productive achievements in economic theory is considerable. Moreover, he has carried out various applied studies in, e.g., operations research and has participated extensively in debates in the French press. Alongside his accomplishments as an economist, Allais has published studies in history and physics, particularly geophysics.

Author: V宝宝  Time: 2005-12-9 04:06
Presentation Speech - The Sveriges Riksbank (Bank of Sweden) Prize in Economic Sciences in Memory of Alfred Nobel

KUNGL. VETENSKAPSAKADEMIEN / THE ROYAL SWEDISH ACADEMY OF SCIENCES

21 October 1987

The Royal Swedish Academy of Sciences has decided to award the 1987 Alfred Nobel Memorial Prize in Economic Sciences to

Professor Robert M. Solow, Massachusetts Institute of Technology, Cambridge, USA,

for his contributions to the theory of economic growth.

The study of the factors which permit production growth and increased welfare has been a central feature of economic research for many years. Robert M. Solow's prize recognizes his exceptional contributions in this area.

It is eminently reasonable to imagine that increased per capita production in a country may be the result of more machines and more factories (a greater stock of real capital). But increased production may also be due to improved machines and more efficient production methods (which may be termed technical development). In addition, better education and training, and improved methods of organizing production, may also give rise to increased productivity. The discovery of fresh natural resources, or improvements in a country's position on the world market, may also lead to higher standards of living. Solow has created a theoretical framework which can be used in discussing the factors which lie behind economic growth in both quantitative and theoretical terms. This framework can also be exploited to measure empirically the contributions made by various production factors to economic growth.

Solow's Growth Model

Solow's growth model was presented in an article entitled "A Contribution to the Theory of Economic Growth" (1956). The article contains a mathematical model (in the form of a differential equation) describing how an increased capital stock generates greater per capita production. Solow's starting point is that society saves a given, constant proportion of its incomes. The population, and the supply of labor, grow at a constant rate, and capital intensity (capital per employee) can be regulated; capital intensity is determined by the prices of the production factors. Due to diminishing returns, however, additional capital injections (increasing capital intensity) will make ever smaller contributions to production. This means that, in the long term, the economy will approach a condition of identical growth rates for capital, labor and total production (on condition that there is no technological progress) - a situation in which per capita production and real wages no longer increase. An increase in the proportion of incomes which is saved cannot, therefore, lead to a permanent increase in the rate of growth. An economy with a higher savings ratio does experience higher per capita production, and thus higher real income; but, in the absence of technological progress, the rate of growth will be the same irrespective of the savings ratio, and will be purely dependent on the increase in the supply of labor.
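
A minimal simulation of the model's central mechanism, with hypothetical parameter values and a Cobb-Douglas technology assumed for concreteness:

```python
# Cobb-Douglas technology y = k**alpha in per-worker terms; savings rate s,
# population growth n, depreciation delta. All parameter values hypothetical.
alpha, s, n, delta = 0.3, 0.2, 0.01, 0.05

k = 1.0                                        # initial capital per worker
for t in range(300):
    k += s * k ** alpha - (n + delta) * k      # capital accumulation equation

# Diminishing returns drive k to a steady state where per-capita output
# stops growing in the absence of technological progress:
k_star = (s / (n + delta)) ** (1 / (1 - alpha))
print(f"simulated k = {k:.4f}, analytical steady state k* = {k_star:.4f}")
```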

As a result, technological development will be the motor of economic growth in the long run. In Solow's model, if continuous technological progress can be assumed, growth in real incomes will be exclusively determined by technological progress.

The preceding discussion assumed that a given proportion of income is saved and that savings correspond to an equivalent amount of planned investment. Solow shows, however, that if corporations have perfect foresight and if the labor and capital markets function satisfactorily, corporations will wish to invest to the extent that their total investment plans correspond to the given value of savings. This means that Solow ignores the conditions that may underlie, for example, a Keynesian analysis of unemployment. But while Keynesians focus on short-term instability, Solow is interested in an analysis of long-term development.

Solow's theoretical model had an enormous impact on economic analysis. From simply being a tool for the analysis of the growth process, the model has been generalized in several different directions. It has been extended by the introduction of other types of production factors, and it has been reformulated to include stochastic features. The design of dynamic links in certain "numerical" models employed in general equilibrium analysis has also been based on Solow's model. But, above all, Solow's growth model constitutes a framework within which modern macroeconomic theory can be structured.

Empirical Growth Analysis

The empirical estimation of the contributions of various production factors to GNP is linked with the work of several other economists. Solow's contributions in two articles - "Technical Change and the Aggregate Production Function", published in 1957, and "Investment and Technical Progress", from 1960 - laid the foundations for what was later to develop into "growth accounting".

In his first article, Solow based his analysis on time series for total production, the total inputs of labor and capital, and the cost shares of these factors in total production. By calculating the difference between the relative development of production and the development of the supply of labor and capital, weighted by factor shares, Solow obtained a measure of the continuous change in production technology over time. On the basis of this estimated series, Solow could assess the production function (i.e., the mathematical relationship between production, on the one hand, and the inputs of production factors, on the other).
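
The accounting step itself is one line of arithmetic. A sketch with hypothetical growth rates and factor shares:

```python
# Solow residual: the part of output growth not explained by factor growth,
# weighted by the factors' income shares, is attributed to technical change.
# All figures below are hypothetical.
g_output, g_capital, g_labor = 0.040, 0.050, 0.010
capital_share = 0.3
labor_share = 1.0 - capital_share

g_technology = g_output - capital_share * g_capital - labor_share * g_labor
print(f"growth attributed to technical change: {g_technology:.1%} per year")
# 0.040 - 0.3*0.050 - 0.7*0.010 = 0.018, i.e. nearly half of observed growth
```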

The change in production technology - the part of the change in production which could not be attributed to changed inputs of labor and capital - was interpreted as the result of changes in production techniques, that is to say, technical progress.

Solow's analysis showed that technical improvements were neutral over time (the distribution of GNP between earnings and capital yield was not affected by technical change). He also demonstrated that only a small proportion of annual growth could be explained by increased inputs of labor and capital.

Solow's study had a dramatic impact, and similar analyses were undertaken in a great many other countries. Access to better statistical data, in the form of time series for capital and labor, has since permitted more reliable results to be achieved.

The first attempts at measuring the contributions of production factors to total production were based on given series for the supply of labor and the stock of capital. Both these aggregates are somewhat controversial, however, and Robert Solow participated actively in lengthy discussions about the measurement of the aggregate capital stock (the "capital controversy" of the 1960s and 1970s). In an article published in 1960, "Investment and Technical Progress", Solow presented a new method of studying the role played by capital formation in economic growth. His basic assumption was that technical progress is "built into" machines and other capital goods, and that this must be taken into account when making empirical measurements of the role played by capital. This idea gave birth to the "vintage approach" (a similar idea was discussed by Leif Johansen in Norway at about the same time). The vintage approach assumes that new investments embody the most modern technology and that the capital formed as a result does not change in qualitative terms over its remaining life. Thus, the investment decision ties up future technology to some extent, since technological knowledge is rooted in the physical capital object. Solow's formulation of a mathematical model based on these ideas enabled him to develop a theory which permitted empirical calculations to be made. In principle, the model established a new way of aggregating capital from different periods. Solow's empirical results naturally gave the formation of capital a markedly higher status in explaining the increase in production per employee.
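
A stylized sketch of the vintage idea (hypothetical parameters, not Solow's actual model): each vintage of investment carries the technology of its birth year for its remaining life, so an aggregate "effective" capital stock weights surviving vintages by their embodied technology:

```python
# Each year's investment embodies that year's technology and keeps it for
# the machine's remaining life; aggregation weights surviving vintages by
# their embodied technology level. All parameters are hypothetical.
tech_growth, depreciation, investment = 0.02, 0.10, 10.0

effective_capital = 0.0
for age in range(50):                               # age 0 = newest vintage
    tech_level = (1 + tech_growth) ** (-age)        # older vintages embody older technology
    surviving = investment * (1 - depreciation) ** age
    effective_capital += tech_level * surviving

raw_capital = sum(investment * (1 - depreciation) ** age for age in range(50))
print(f"unweighted capital {raw_capital:.1f} vs technology-weighted {effective_capital:.1f}")
```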

The most important aspect of Solow's article was not so much the empirical outcome as the method of analysing "vintage capital". Nowadays, the vintage capital concept has many other applications and is no longer solely employed in analyses of the factors underlying economic growth. For example, many numerical general equilibrium models utilize Solow's approach in studying the sensitivity of economies to certain types of disruptive effects. The vintage approach has proved invaluable, both from the theoretical point of view and in applications such as the analysis of the development of industrial structures.

Other Works

Professor Robert Solow has worked actively within many vital areas of economic theory. For example, he has published important contributions in the area of natural resource economics. Conventional economic growth theories assume that the only factors which can affect economic growth are labor, capital and technology. In recent years, the role of natural resources has also attracted considerable attention. Is it possible to imagine continued economic growth when we know that natural resources are finite? Solow studied this question from a theoretical perspective in an article published in 1974 and found that the key to the problem lay in the assumptions made about the elasticity of substitution between capital and natural resource inputs. Solow has also studied the environmental consequences of growth in other works.

Over the last decade, Professor Solow has largely devoted his research efforts to macroeconomic questions involving unemployment and economic policy, and he has been a member of the US President's Council of Economic Advisers.

Author: 咸鱼の依旧  Time: 2006-2-2 22:41
*sweats*~~~~~ it's still in English