
Glossary



A

a fortiori

Latin for "even stronger". Can be used to compare two theorems or proofs. Could be interpreted to mean "in the same way."

Source: econterms

A trait

is a relatively permanent disposition of an individual. Traits are inferred from behaviour and are considered to be continuous dimensions on which individual differences can be arranged quantitatively (e.g. extraversion, introversion). Traits are to be distinguished from states.

Source: SFB 504

A-D equilibrium

abbreviation for Arrow-Debreu equilibrium.

Source: econterms

AAEA

American Agricultural Economics Association. See their web site at http://www.aaea.org.

Source: econterms

Ability to pay principle

The principle that those with a greater ability to pay a tax should bear a larger share of the tax burden.

Source: EconPort

abnormal returns

Used in the context of stock returns; means the return to a portfolio in excess of the return to a market portfolio. Contrast excess returns, which means something different. Note that abnormal returns can be negative.
Example: Suppose average market return to a stock was 10% for some calendar year, meaning stocks overall were 10% higher at the end of the year than at the beginning, and suppose that stock S had risen 12% in that period. Then stock S's abnormal return was 2%.

Source: econterms

absolute risk aversion

An attribute of a utility function. See Arrow-Pratt measure.

Source: econterms

absorptive capacity

A limit to the rate or quantity of scientific or technological information that a firm can absorb. If such limits exist, they provide one explanation for why firms develop internal R&D capacities. R&D departments not only conduct development along lines the firm is already familiar with; their staff also have formal training and external professional connections that make them better able than others in the firm to evaluate and incorporate externally generated technical knowledge. In other words, a partial explanation for R&D investments by firms is to work around the absorptive capacity constraint.

This term comes from Cohen and Levinthal (1990).

Source: econterms

abstracting from

a phrase that generally means "leaving out". A model abstracts from some elements of the real world in its demonstration of some specific force.

Source: econterms

Abstractness

The information contained in a prototype is an abstraction across several instances of the concept.

Source: SFB 504

accelerator principle

That it is the growth of output that induces continuing net investment. That is, net investment is a function of the change in output not its level.

Source: econterms

acceptance region

Occurs in the context of hypothesis testing. Let T be a test statistic. Possible values of T can be divided into two regions, the acceptance region and the rejection region. If the value of T comes out to be in the acceptance region, the null hypothesis being tested is not rejected. If T falls in the rejection region, the null hypothesis is rejected.

The terms 'acceptance region' and 'rejection region' may also refer to the subsets of the sample space that would produce statistics T in the acceptance region or rejection region as defined above.

Source: econterms

Accessibility and availability

Accessibility: Construct accessibility is the readiness with which a stored construct is utilized in information processing; that is, construct accessibility is concerned with stored constructs, their utilization in information processing, and the likelihood of such utilization.

Source: SFB 504

Accounting costs

The explicit costs of production; these include monetary payments to cover the costs of fixed or variable inputs (i.e., fixed costs and variable costs). Some examples include utilities, rent, wages of labor, property taxes, the cost of raw materials, etc. Unlike economic costs, they do not include implicit (opportunity) costs.

Source: EconPort
See also: economic costs, implicit costs, opportunity cost.

Accounting Profit

The profit made from the total revenue received from the sale of the goods less the (explicit) costs of producing these goods. It is calculated as Total Revenue - Explicit Costs.

Source: EconPort

ACIR

Advisory Commission on Intergovernmental Relations, in the U.S.

Source: econterms

active measures

In the context of combating unemployment: policies designed to improve the access of the unemployed to the labor market and jobs, job-related skills, and the functioning of the labor market. Contrast passive measures.

Source: econterms

adapted

The stochastic process {X_t} is adapted to the sequence of information sets {Y_t} if X_t is measurable with respect to Y_t for each t; that is, if the information in Y_t suffices to determine the value of X_t. (A martingale difference sequence with respect to {Y_t}, for example, is adapted to {Y_t}.)

Source: econterms

AEA

American Economic Association

Source: econterms

AER

An abbreviation for the American Economic Review.

Source: econterms

affiliated

From Milgrom and Weber (Econometrica, 1982, page 1096): Bidders' valuations of a good being auctioned are affiliated if, roughly: "a high value of one bidder's estimate makes high values of the others' estimates more likely."

There may well be good reasons not to use the word correlated in place of affiliated. This editor is advised that there is some mathematical difference.

Source: econterms

affine

adjective, describing a function with a constant slope. Distinguished from linear which sometimes is meant to imply that the function has no constant term; that it is zero when the independent variables are zero. An affine function may have a nonzero value when the independent variables are zero.
Examples: y = 2x is linear in x, whereas y = 2x + 7 is an affine function of x.
And y = 2x + z^2 is affine in x but not in z.

Source: econterms

affine pricing

A pricing schedule where there is a fixed cost or benefit to the consumer for buying more than zero, and a constant per-unit cost per unit beyond that. Formally, the mapping from quantity purchased to total price is an affine function of quantity.
Using, mostly, Tirole's notation, let q be the quantity in units purchased, T(q) be the total price paid, p be a constant price per unit, and k be the fixed cost, an example of an affine price schedule is T(q)=k+pq.
For alternative ways of pricing see linear pricing schedule and nonlinear pricing.

Source: econterms

AFQT

Armed Forces Qualification Test -- a test given to new recruits in the U.S. armed forces. Results from this test are used in regressions of labor market outcomes on possible causes of those outcomes, to control for other causes.

Source: econterms

AGI

An abbreviation for Adjusted Gross Income, a line item which appears on the U.S. taxpayer's tax return and is sometimes used as a measure of income which is consistent across taxpayers. AGI does not include any accounting for deductions from income that reduce the tax due, e.g. for family size.

Source: econterms

agricultural economics

"Agricultural Economics is an applied social science that deals with how producers, consumers, and societies use scarce resources in the production, processing, marketing, and consumption of food and fiber products." (from Penson, Capps, and Rosson (1996), as cited by Hallam 1998).

Source: econterms

AIC

abbreviation for Akaike's Information Criterion

Source: econterms

AJS

An abbreviation for the American Journal of Sociology.

Source: econterms

Akaike's Information Criterion

A criterion for selecting among nested econometric models. The AIC is a number associated with each model:
AIC = ln(s_m^2) + 2m/T
where m is the number of parameters in the model, and s_m^2 is (in an AR(m) example) the estimated residual variance: s_m^2 = (sum of squared residuals for model m)/T; that is, the average squared residual for model m.
The criterion may be minimized over choices of m to form a tradeoff between the fit of the model (which lowers the sum of squared residuals) and the model's complexity, which is measured by m. Thus an AR(m) model can be compared with an AR(m+1) by this criterion for a given batch of data.
An equivalent formulation is this one: AIC = T ln(RSS) + 2K, where K is the number of regressors, T the number of observations, and RSS the residual sum of squares; minimize over K to pick K.
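To make the criterion concrete, here is a minimal Python sketch (not part of the original entry) that fits AR(m) models by OLS on simulated data and compares them by the first formulation above; all names and numbers are illustrative.

import numpy as np

def ar_residuals(y, m):
    """OLS residuals from regressing y_t on its first m lags (no constant)."""
    T = len(y)
    X = np.column_stack([y[m - j:T - j] for j in range(1, m + 1)])
    beta, *_ = np.linalg.lstsq(X, y[m:], rcond=None)
    return y[m:] - X @ beta

def aic(resid, m):
    """AIC = ln(s_m^2) + 2m/T, with s_m^2 the average squared residual."""
    T = len(resid)
    return np.log(np.sum(resid ** 2) / T) + 2 * m / T

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):          # simulate an AR(1) with coefficient 0.6
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

for m in (1, 2, 3):
    print(m, aic(ar_residuals(y, m), m))   # the smallest AIC should tend to pick m = 1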

Source: econterms

alienation

A Marxist term. Alienation is the subjugation of people by the artificial creations of people 'which have assumed the guise of independent things.' Because products are thought of as commodities with money prices, the social process of trade and exchange becomes driven by forces operating independently of human will like natural laws.

Source: econterms

Allais paradox

The Allais paradox is the most prominent example of behavioral inconsistencies with the von Neumann-Morgenstern axiomatic model of choice under uncertainty. The Allais paradox shows that a significant majority of real decision makers orders uncertain prospects in a way that is inconsistent with the postulate that choices are independent of irrelevant alternatives. Basically, it is this postulate that allows preferences over uncertain prospects to be represented as a linear functional of the utilities of the basic outcomes, viz. as the expectation of these utilities.

Consider the following choice situation (A) among two lotteries:

- lottery L1 promises a sure win of $30,
- lottery L2 is an 80% chance to win $45 (and zero in 20% of the cases).

Typically, L1 is strictly preferred to L2 (such observed behavior is called a revealed preference).

Now, consider another choice situation (B):

- lottery K1 promises a 25% chance of winning $30,
- lottery K2 is a 20% chance to win $45.

Here, the typical choice is K2 over K1 although situation B differs from situation A only in that in each lottery, three quarters of the original probability of winning a positive amount are cancelled.

Assume the typical subject decides among lotteries in the following way. To each of the basic outcomes, a number is assigned that indicates its attractiveness; say u(0)=0, u(45)=1, and u(30)=v, with 0 < v < 1; a lottery is then evaluated by the expectation of these numbers. The revealed preference of L1 over L2 in situation A shows that v > 0.8; while the revealed preference of K2 over K1 in situation B shows that (1/4)v < 1/5, or v < 0.8.
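A quick numeric check of this algebra (an illustration, not part of the original entry):

# Expected "utilities" with u(0)=0, u(45)=1, u(30)=v; try any v > 0.8, e.g. 0.9
v = 0.9
EU_L1, EU_L2 = v, 0.80 * 1           # situation A: L1 preferred iff v > 0.8
EU_K1, EU_K2 = 0.25 * v, 0.20 * 1    # situation B: K2 preferred iff v < 0.8
print(EU_L1 > EU_L2, EU_K2 > EU_K1)  # True, False -- no single v fits both typical choices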

In cognitive psychology, this inconsistency is explained as a certainty effect. In situation A, L2 differs from L1 by a winning probability that is 20% lower, just as lottery K2 differs from K1 in situation B (where 4/5 x 25 = 20). Empirically, it seems that cancelling a fixed proportion of winning probability has a higher cognitive impact in a lottery where winning was extremely likely than in a lottery where winning was "a rather unlikely event, anyway."

By accounting for a misperception of probabilities through a non-linear weighting function (of the utilities of the elementary outcomes), expected utility can be reconciled with the Allais paradox (see prospect theory). The Allais paradox, devised in the 1950s, was the first piece in a series of systematic evidence challenging the traditional concept of von Neumann-Morgenstern expected utility, leading to the development of generalized models of ("boundedly rational") choice behavior under uncertainty.

Source: SFB 504

Allocation

The accepted purview of economics is the allocation of scarce resources. Allocation comprises production and exchange, reflecting a fundamental division between processes that transform commodities (i.e., production) and those that transfer control (i.e., exchange). In general, optimal allocation ensures that scarce resources are driven to their best use.

For both production and consumption, exchange is essential to the efficient use of resources. It allows decentralization and specialization in production; as to consumption, agents with diverse endowments or preferences (tastes) need exchange to obtain maximal benefits, given their resources. If the preferences of two agents differ (formally, if agents have different rates of substitution among the commodities concerned), then there exists a trade benefitting both. Such trades of private goods take place on markets.

The advantages of barter extend widely, e.g. to trade among nations and among legislators ("vote trading"), but it suffices here to emphasize markets with enforceable contracts for trading private property unaffected by externalities. In such markets, voluntary exchange involves trading bundles of commodities or obligations to the mutual advantage of all parties to the transaction.

Source: SFB 504

Allpay auction

Simultaneous bidding game in which the bidder who has submitted the highest bid is awarded the object, and all bidders pay their own bids. A subvariant is the second-price all-pay auction, also called the war of attrition, where each bidder pays his own bid but the winner only pays the second-highest bid. For example, campaign spending and political lobbying processes are second-price all-pay auctions; likewise, timing decisions on the private provision of public goods have the structure of second-price all-pay auctions.

Source: SFB 504

almost surely

With probability one. In particular, the statement that a sequence {W_n} converges to W almost surely as n goes to infinity means that Pr{W_n -> W} = 1.

Source: econterms

alternative hypothesis

"The hypothesis that the restriction or set of restrictions to be tested does NOT hold." Often denoted H1. Synonym for 'maintained hypothesis.'

Source: econterms

Americanist

A political scientist specializing in the study of American politics.

Source: econterms

AMEX

American Stock Exchange, which is in New York City

Source: econterms

Amos

A statistical data analysis program, discussed at http://www.smallwaters.com/amos.

Source: econterms

analytic

Often means 'algebraic', as opposed to 'numeric'. E.g., in the context of taking a derivative, which could sometimes be calculated numerically on a computer, but is usually done analytically by finding an algebraic expression for the derivative.

Source: econterms

Anchoring and adjustment

People who have to make judgements under uncertainty use this heuristic by starting with a certain reference point (anchor) and then adjusting it insufficiently to reach a final conclusion. Example: If you have to judge another person's productivity, the anchor for your final (adjusted) judgement may be your own level of productivity. Depending on your own level of productivity you might therefore underestimate or overestimate the productivity of this person.

Source: SFB 504

annihilator operator

Denoted []+ with a lag operator polynomial in the brackets. Has the effect of removing the terms with an L to a negative power; that is, future values in the expression. Their expected value is assumed to be zero by whoever applies the operator.

Source: econterms

Annuity formula

If annuity payments over time are (0,P,P,...,P) for n periods, and the constant interest rate is r>0, then the net present value to the recipient of the annuity can be calculated this way: NPV(A) = (1 - (1+r)^(-n)) P / r
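A quick Python check of the formula against direct discounting (an illustration; the numbers are hypothetical):

r, P, n = 0.05, 100.0, 10
formula = (1 - (1 + r) ** -n) * P / r
direct = sum(P / (1 + r) ** t for t in range(1, n + 1))
print(round(formula, 6), round(direct, 6))   # both ~772.173493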

Source: econterms

ANOVA

Stands for analysis-of-variance, a statistical model meant to analyze data. Generally the variables in an ANOVA analysis are categorical, not continuous. The term main effect is used in the ANOVA context. The main effect of x seems to mean the result of an F test to see if the different categories of x have any detectable effect on the dependent variable on average. ANOVA is used often in sociology, but rarely in economics as far as this editor can tell. The terms ANCOVA and ANOCOVA mean analysis-of-covariance. When I understand ANCOVA and main effect better, I'll make separate entries for them.

From Kennedy, 3rd edition, pp 226-227: 'Analysis of variance is a statistical technique designed to determine whether or not a particular classification of the data is meaningful. The total variation of the dependent variable (the sum of squared differences between each observation and the overall mean) can be expressed as the sum of the variation between classes (the sum of the squared differences between the mean of each class and the overall mean, each times the number of observations in that class) and the variation within each class (the sum of the squared differences between each observation and its class mean). This decomposition is used to structure an F test to test the hypothesis that the between-class variation is large relative to the within-class variation, which implies that the classification is meaningful, i.e., that there is a significant variation in the dependent variable between classes. If dummy variables are used to capture these classifications and a regression is run, the dummy variable coefficients turn out to be the class means, the between-class variation is the regression's 'explained' variation, the within-class variation is the regression's 'unexplained' variation, and the analysis of variance F test is equivalent to testing whether or not the dummy variable coefficients are significantly different from one another. The main advantage of the dummy variable regression is that it provides estimates of the magnitudes of class variation influences on the dependent variables (as well as testing whether or not the classification is meaningful).

'Analysis of covariance is an extension of analysis of variance to handle cases in which there are some uncontrolled variables that could not be standardized between classes. These cases can be analyzed by using dummy variables to capture the classifications and regressing the dependent variable on these dummies and the uncontrollable variables. The analysis of covariance F tests are equivalent to testing whether the coefficients of the dummies are significantly different from one another. These tests can be interpreted in terms of changes in the residual sums of squares caused by adding the dummy variables. Johnston (1972, pp 192-207) has a good discussion. In light of the above, it can be concluded that anyone comfortable with regression analysis and dummy variables can eschew analysis of variance and covariance techniques.' [Except that one needs to understand the academic work out there, not just write one's own. -ed.]
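To make the decomposition concrete, here is a minimal Python sketch (not from Kennedy or the original entry; the data are hypothetical) computing the between-class and within-class variation and the resulting F statistic.

import numpy as np

groups = [np.array([5.1, 4.8, 5.5, 5.0]),     # class 1
          np.array([6.2, 6.0, 6.6]),          # class 2
          np.array([4.1, 4.4, 4.0, 4.3])]     # class 3
allobs = np.concatenate(groups)
grand = allobs.mean()

between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
within = sum(((g - g.mean()) ** 2).sum() for g in groups)

k, n = len(groups), len(allobs)
F = (between / (k - 1)) / (within / (n - k))
print(F)   # a large F suggests the classification is meaningful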

Source: econterms

APT

Arbitrage Pricing Theory; from Stephen Ross, 1976-78. Quoting Sargent, "Ross posited a particular statistical process for asset returns, then derived the restrictions on the process that are implied by the hypothesis that there exist no arbitrage possibilities."

The APT includes multiple risk factors, unlike the CAPM.

Source: econterms

AR

Stands for "autoregressive." Describes a stochastic process (denote here, et) that can be described by a weighted sum of its previous values and a white noise error. An AR(1) process is a first-order one, meaning that only the immediately previous value has a direct effect on the current value:
et = ret-1 + ut
where r is a constant that has absolute value less than one, and ut is drawn from a distribution with mean zero and finite variance, often a normal distribution.
An AR(2) would have the form:
et = r1et-1 + r2et-2 + ut
and so on. In theory a process might be represented by an AR(infinity).
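A minimal Python sketch (an illustration with hypothetical coefficients) simulating the AR(2) above:

import numpy as np

rng = np.random.default_rng(1)
r1, r2, T = 0.5, 0.3, 200
e = np.zeros(T)
for t in range(2, T):
    e[t] = r1 * e[t - 1] + r2 * e[t - 2] + rng.standard_normal()
print(e.mean(), e.var())   # mean near zero; r1 + r2 < 1 keeps the process stable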

Source: econterms

AR(1)

A first-order autoregressive process. See AR for details.

Source: econterms

Arbitrage

Arbitrage is, roughly, the simultaneous purchase and sale of the same or equivalent securities so as to profit from a discrepancy in their prices. Arbitrage plays a critical role in the analysis of securities markets, bringing prices to fundamental values and keeping markets efficient.

Source: SFB 504

ARCH

Stands for Autoregressive Conditional Heteroskedasticity. It's a technique used in finance to model asset price volatility over time. It is observed in much time series data on asset prices that there are periods when variance is high and periods where variance is low. The ARCH econometric model for this (introduced by Engle (1982)) is that the variance of the series itself is an AR (autoregressive) time series, often a linear one.
Formally, per Bollerslev et al 1992 and Engle (1982): an ARCH model is a discrete time stochastic process {e_t} of the form: e_t = z_t s_t
where the z_t's are iid over time, E(z_t)=0, var(z_t)=1, and s_t is positive and time-varying. Usually s_t is further modeled to be an autoregressive process.

According to Andersen and Bollerslev 1995/6/7, "ARCH models are usually estimated by maximum likelihood techniques." They almost always give a leptokurtic distribution of asset returns even if one assumes that each period's returns are normal, because the variance is not the same each period. Even ARCH models, however, do not usually generate enough kurtosis in equity returns to match U.S. stock data.
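A minimal Python sketch (an illustration) of the simplest case, an ARCH(1) in which s_t^2 = a0 + a1 e_{t-1}^2; this variance recursion is the textbook special case, not something stated in the entry.

import numpy as np

rng = np.random.default_rng(2)
a0, a1, T = 0.2, 0.5, 100_000
e = np.zeros(T)
for t in range(1, T):
    s_t = np.sqrt(a0 + a1 * e[t - 1] ** 2)   # conditional standard deviation
    e[t] = rng.standard_normal() * s_t       # each period's shock is conditionally normal
kurtosis = np.mean(e ** 4) / np.mean(e ** 2) ** 2
print(kurtosis)   # well above 3: leptokurtic, as the entry notes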

Source: econterms

ARIMA

Describes a stochastic process or a model of one. Stands for "autoregressive integrated moving-average". An ARIMA process is made up of sums of autoregressive and moving-average components, and may not be stationary.

Source: econterms

ARMA

Describes a stochastic process or a model of one. Stands for "autoregressive moving-average". An ARMA process is a stationary one made up of sums of autoregressive and moving-average components.

Source: econterms

Arrovian uncertainty

Measurable risk, that is, measurable variation in possible outcomes, on the basis of knowledge or believed assumptions in advance. Contrast Knightian uncertainty.

Source: econterms

Arrow-Debreu equilibrium

Means, in practice, competitive equilibrium of the kind shown in Debreu's Theory of Value.
The Arrow-Debreu reference may be to a particular paper: "Existence of an Equilibrium for a Competitive Economy", Econometrica. Vol 22 July 1954, pp 265-290. I haven't checked that out.

Source: econterms

Arrow-Pratt measure

An attribute of a utility function.

Denote a utility function by u(c). The Arrow-Pratt measure of absolute risk aversion is defined by:
R_A = -u''(c)/u'(c)
This is a measure of the curvature of the utility function. This measure is invariant to affine transformations of the utility function, which is a useful attribute because such transformations do not affect the preferences expressed by u().

If RA() is decreasing in c, then u() displays decreasing absolute risk aversion. If RA() is increasing in c, then u() displays increasing absolute risk aversion. If RA() is constant with respect to changes in c, then u() displays constant absolute risk aversion.
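As a numeric illustration (not part of the original entry): exponential (CARA) utility u(c) = -exp(-a c) has R_A(c) = a at every c, which a finite-difference check confirms.

import numpy as np

a, h = 2.0, 1e-4
u = lambda c: -np.exp(-a * c)
for c in (0.5, 1.0, 2.0):
    u1 = (u(c + h) - u(c - h)) / (2 * h)             # approximates u'(c)
    u2 = (u(c + h) - 2 * u(c) + u(c - h)) / h ** 2   # approximates u''(c)
    print(c, -u2 / u1)                               # approximately 2.0 at every c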

Source: econterms

ASQ

An abbreviation for the journal Administrative Science Quarterly which tends to be closer to sociology than to economics.

Source: econterms

ASR

An abbreviation for the journal American Sociological Review.

Source: econterms

asset pricing models

A way of mapping from abstract states of the world into the prices of financial assets like stocks and bonds. The prices are always conceived of as endogenous; that is, the states of the world cause them, not the other way around, in an asset pricing model.
Several general types are discussed in the research literature. The CAPM is one, distinguished from three that Fama (1991) identifies: (a) the Sharpe-Lintner-Black class of models, (b) the multifactor models like the APT of Ross (1976), and (c) the consumption based models such as Lucas (1978).
An asset pricing model might or might not include the possibility of fads or bubbles.

Source: econterms

asset-pricing function

maps the state of the economy at time t into the price of a capital asset at time t.

Source: econterms

asymptotic

An adjective meaning 'of a probability distribution as some variable or parameter of it (usually, the size of the sample from another distribution) goes to infinity.'
In particular, see asymptotic distribution.

Source: econterms

asymptotic normality

A limiting distribution of an estimator is usually normal. (details!)

This is usually proven with a mean value expansion of the score at the estimated parameter value? (details)

Source: econterms

asymptotic variance

Definition of the asymptotic variance of an estimator may vary from author to author or situation to situation. One standard definition is given in Greene, p 109, equation (4-39), and is described there as "sufficient for nearly all applications." It is:

asy var(t_hat) = (1/n) * lim_{n->infinity} E[ (t_hat - lim_{n->infinity} E[t_hat])^2 ]

Source: econterms

asymptotically equivalent

Estimators are asymptotically equivalent if they have the same asymptotic distribution.

Source: econterms

asymptotically unbiased

"There are at least three possible definitions of asymptotic unbiasedness:
1. The mean of the limiting distribution of n.5(t_hat - t) is zero.
2. limn->infinity E[t_hat] = t.
3. plim t_hat = t."
Usually an estimator will have all three of these or none of them. Cases exist however in which left hand sides of those three are different. "There is no general agreement among authors as to the precise meaning of asymptotic unbiasedness, perhaps because the term is misleading at the outset; asymptotic refers to an approximation, while unbiasedness is an exact result. Nonetheless the majority view seems to be that (2) is the proper definition of asymptotic unbiasedness. Note, though, that this definition relies upon quantities that are generally unknown and that may not exist." -- Greene, p 107

Source: econterms

Attitude

An attitude is "a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor" (Eagly & Chaiken, 1993, p. 1). This tendency can be expressed by different types of evaluative responses. Social psychologists commonly differentiate between affective, cognitive and behavioral responses. Affective responses towards an attitude object manifest themselves in verbal expressions of feelings and physiological changes in the organism (e.g. increase of arousal). Cognitive responses refer to expressions of beliefs (e.g. expectancy-value judgments) and nonverbal reactions such as response latencies. Behavioral responses manifest themselves in behavioral intentions and actions.

Attitude theory and research deals with the structure, function, formation and change of attitudes, and is also concerned with the relationship between attitudes and behavior. The model of reasoned action (Fishbein & Ajzen, 1975), for example, provides a comprehensive approach to all of these aspects. In this model, the internal structure of an attitude is described in terms of beliefs (expectations) that relate the attitude object (a behavioral alternative) to evaluated attributes. The function of attitudes is to guide the formation of behavioral intentions. Attitude formation and change is viewed as a process of deliberative evaluation and belief updating. Attitudes are thought to impact behavior indirectly via behavioral intentions.

More recent approaches, however, assume that a deliberative calculation of expectancy and values is not a necessary condition for either intention formation or attitude formation and change. There is ample evidence, for example, that liking of an attitude object can be enhanced simply by increasing its presentation frequency (Zajonc, 1980). Furthermore, attitudes, if they are frequently activated from memory, tend to become activated automatically in the presence of the attitude object and then directly impact behavioral decisions (Fazio, 1990).

Source: SFB 504

attractor

a kind of steady state in a dynamical system. There are three types of attractor: stable steady states, cyclical attractors, and chaotic attractors.

Source: econterms

Auction

Competitive exchange process in which each trader from one market side submits a bid, the most favorable one of which is selected by the complementary market side for the transaction. Most commonly, the bidders form the potential buyers of a commodity, and the bid-taker is a monopolistic seller. Also common is the converse constellation, where a monopsonistic buyer elicits price offers from competing sellers (procurement auction). In a narrow sense, auctions form a variety of familiar and less familiar selling and buying mechanisms for goods, ranging from objects of art and collectibles to natural resources like minerals and agricultural products, to treasury bonds, construction and supply contracts, oil drilling rights and broadcasting licenses. Take-over battles for firms or conglomerates are also explicit auctions.

In a broader sense, auctions provide an explicit description of price formation processes that arise from strategic interaction in markets. More general auction mechanisms with competition on both market sides, so-called double auctions, form exchange institutions that map competing price bids (buying demands) and price asks (selling offers) into an allocation of the goods among the traders. Given the vector of bids and asks (and some matching rule), the terms of trade for various quantities of one or several goods are endogenously determined. A prototypical example of double auctions is given by institutionalized markets for financial assets and financial derivatives.

Auctions are models of 'thin markets', making precise the sense in which markets 'find prices' that can 'reveal' an underlying economic value. This is shown distinctively by the fact that they are the unique exchange mechanism adopted whenever competitive market prices do not exist but the object sold is of particular uniqueness and size, such as in privatizations of government enterprises, in the sale of complex procurement contracts, or in the sale of rare works of art; or when the resource in question does not have a price other than the terms of trade which are revealed through strategic interaction of traders, as with financial assets like stocks, options, and corporate or government bonds.

In a sense, strategic equilibria of competitive bidding games have many efficiency properties that generalize those of competitive market equilibrium. As it is the highest (buying) bids and the lowest (selling) asks that are selected for the transaction, the resulting allocation of commodities and quantities is efficient ex post. Under appropriate conditions, equilibrium outcomes from double auctions are even efficient in an interim sense (see efficiency). Under reasonable informational assumptions, the equilibrium bids of common value auctions (see below) converge to the competitive equilibrium price as the number of bidders grows large (see also competitive market equilibrium).

Auctions are modelled as bidding games of incomplete information. The bidders' (players') strategies are bid functions converting their private information about the objects on sale, and previous bids observed, into a money amount that is bid. Such bidding games provide unified descriptions of many competitive processes from diverse contexts. Together with the most common auction formats, we mention some examples below.

Source: SFB 504

augmented Dickey-Fuller test

A test for a unit root in a time series sample. An augmented Dickey-Fuller test is a version of the Dickey-Fuller test for a larger and more complicated set of time series models.

(Ed.: what follows is only my best understanding.) The augmented Dickey-Fuller (ADF) statistic, used in the test, is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. In one example, with three lags, a value of -3.17 constituted rejection at the p-value of .10.
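For readers who want to run the test: a sketch using the adfuller function from the Python statsmodels package (an external library; this usage is only a minimal illustration, and its options are documented there).

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
rw = np.cumsum(rng.standard_normal(500))   # a random walk, which has a unit root
stat, pvalue, *rest = adfuller(rw)
print(stat, pvalue)   # statistic near zero and large p-value: cannot reject the unit root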

Source: econterms

Austrian economics

A school of thought which "takes as its central concern the problem of human coordination, through which order emerges not from a dictator, but from the decisions and judgments of numerous individuals in a world of highly dispersed and sometimes only tacit knowledge." -- Cass R. Sunstein, "The Road from Serfdom", The New Republic, Oct 20, 1997, p 42.

Well-known authors along this line include Carl Menger, Ludwig von Mises, and Friedrich von Hayek. See Deborah L. Walker's essay for a clear account.

Source: econterms

autarky

The state of an individual who does not trade with anyone.

Source: econterms

autocorrelation

the jth autocorrelation of a covariance-stationary process is defined as its jth autocovariance divided by its variance.

In a sample, the kth autocorrelation is the OLS estimate that results from regressing the data on its kth lag.

Below is Gauss code to calculate autocorrelations from a sample.

  /* This functions calculates autocorrelation estimates for lag k */  
proc autocor(series, k);   
   local rowz,y,x,rho;    
   rowz = rows(series);    
   y = series[k+1:rowz];    
   x = series[1:rowz-k];    
   rho = inv(x'x)*x'y;            /* compute autocorrelation by OLS */    
   retp(rho);  
endp;  

Source: econterms

autocovariance

The jth autocovariance of a stochastic process y_t is the covariance between its time-t value and the value at time t-j. It is denoted gamma below, and E[] means expectation, or mean:
gamma_{jt} = E[(y_t - Ey)(y_{t-j} - Ey)]
In that equation the process is assumed to be covariance stationary. If there is a trend, then the second Ey should be the expected value of y at time t-j.

Source: econterms

autocovariance matrix

Defined for a vector random process, denoted y_t here. The ij'th element of the autocovariance matrix is cov(y_{i,t}, y_{j,t-k}).

Source: econterms

Automaticity

Information processing that occurs without conscious control.
Mental processes fall on a continuum from more automatic to more controllable. At the most automatic end is preconscious automaticity, followed by post-conscious automaticity and goal-directed automaticity. Next are spontaneous processes, which are activated without consciousness but processed only with effort. Ruminative processes are slightly more controlled, they are conscious but not deliberately directed by goals. At the most controlled end, intentional thoughts are characterized by people having choices, especially if they make the hard (more effortful) choice, and paying attention to that choice to enact it.
Automatic processing can develop in response to stimuli and environments that people habitually encounter, as a way to save cognitive effort. Automatic responses, especially preconscious automaticity, can be defined with several criteria (Bargh, 1984). First, automatic processes are unintentional; they do not require a goal to be activated. Second, they are involuntary, always occurring in the presence of the relevant cue. Third, they are effortless, using no cognitive capacity. Fourth, they are autonomous, running to completion without any conscious monitoring. Finally, they are outside awareness, meaning they are activated and operated without consciousness.


Source: SFB 504

autoregressive process

See AR.

Source: econterms

Availability

traditionally refers to whether or not a construct is stored in memory (Bruner, 1957). Recently, there has been an increasing tendency to use the terms accessibility and availability interchangeably. Some overlap in the application of these terms was introduced in Tversky & Kahneman's (1973) description of the availability heuristic, where availability referred to the ease of retrieving construct instances. In the availability heuristic, however, availability also referred to the ease of constructing instances of novel classes and events, which is distinct from the traditional meaning of accessibility (see Higgins & King, 1981).

Source: SFB 504

Availability heuristic

This heuristic is used to evaluate the frequency or likelihood of an event on the basis of how quickly instances or associations come to mind. When examples or associations are easily brought to mind, this fact leads to an overestimation of the frequency or likelihood of this event. Example: People overestimate the divorce rate if they can quickly find examples of divorced friends.

Source: SFB 504

avar

abbreviation or symbol for the operation of taking the asymptotic variance of an expression, thus: avar().

Source: econterms

Average Cost

The per-unit cost of producing a product, measured as the total cost of production divided by the number of units produced. It is frequently represented on a graph as a U-shaped curve (average costs initially decrease before eventually increasing).

Source: EconPort

Average Fixed Cost

The per-unit share of total fixed costs; it is calculated as the total fixed cost of production divided by the number of units produced. Because fixed costs do not depend upon the quantity produced, average fixed costs decline as more is produced.

Source: EconPort
See also: Fixed Costs.

Average Revenue

Total Revenue divided by the number of units that are produced.

Source: EconPort

Average Total Cost

The sum of variable and fixed costs divided by the number of units produced. It is calculated as Total Costs divided by the number of units produced.

Source: EconPort

Average Variable Cost

Total variable costs divided by the number of units produced.

Source: EconPort

B

b

b(n,q) is notation for a binomial distribution with parameters n and q, where n is the number of draws and q is the probability that each is a one; the value of X~b(n,q) is a count of the number of ones drawn.

Source: econterms

b Sequential equilibrium

A kind of refinement of Perfect Bayesian Equilibrium that puts sharper requirements on the beliefs which cannot be formed by Bayes' rule but which are held after moves off the equilibrium path. These beliefs have to be formed in a 'continuous' way from the information available in the extensive form of the game. Further refinements of Perfect Bayesian equilibrium restrict the players' beliefs about moves off the equilibrium path to the set of only those types for which the observed off-equilibrium move could have been worthwhile at all.

Source: SFB 504

B1

B1 denotes the Borel sigma-algebra of the real line. It will contain every open interval by definition, which implies that it contains every closed interval and every countable union of open, half-open, and closed intervals. What won't it contain? In practice, only obscure sets. Here's an example: Define the equivalence relation ~ on the real line such that x~y (read: x is in the same equivalence class as y) if x-y is a rational number. Now consider a set that contains exactly one number from [0,1] out of each equivalence class. How many members of that set are there? Well, it's not a countable number. This set is not in B1.

Source: econterms

balance of payments

A country's balance of payments is the quantity of its own currency flowing out of the country (for purchases, for example, but also for gifts and intrafirm transfers) minus the amount flowing in.

[Ed: this next part is partly speculation; feel free to correct it.] For some purposes this term refers to a stock value and for others a flow value. It is well defined over a period in the sense that it has changed from time A to time B.

Source: econterms

balanced growth

A macro model exhibits balanced growth if consumption, investment, and capital grow at a constant rate while hours of work per time period stay constant.

Source: econterms

Banach space

Any complete normed vector space is a Banach space.

Source: econterms

bandwidth

In kernel estimation, a scalar argument to the kernel function that determines what range of the nearby data points will be heavily weighted in making an estimate. The choice of bandwidth represents a tradeoff between bias (which is intrinsic to a kernel estimator, and which increases with bandwidth), and variance of the estimates from the data (which decreases with bandwidth).
Cross-validation is one way to choose the bandwidth as a function of the data.
Has a variety of similar definitions in spectral analysis. Generally, a bandwidth is some way of defining the range of frequencies that will be included by the estimation process. In some estimations it is an argument to the estimation process.
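To illustrate the tradeoff (this sketch and its data are hypothetical, and it uses a Nadaraya-Watson kernel regression rather than any estimator named in the entry):

import numpy as np

def kernel_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # weights decay with distance from x0
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 3, 200))
y = np.sin(x) + 0.3 * rng.standard_normal(200)
for h in (0.05, 0.3, 2.0):
    print(h, kernel_estimate(1.5, x, y, h))   # small h: noisy; large h: biased toward the overall mean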

Source: econterms

bank note

In periods of free banking, such as most states in the U.S. from 1839-1863, banks could issue their own money, called bank notes. A bank note was a risky, perpetual debt claim on a bank which paid no interest, and could be redeemed on demand at the original bank, usually in gold. There was a risk that the bank would not be able or willing to redeem it.

Source: econterms

Barriers to Entry

The obstacles producers face in entering a market. These can be manifested as:
- requiring a specialized type of license (such as having to be a certified electrician to perform electrical work);
- a government-imposed monopoly (such as specifying a single provider of power and electricity);
- the costs of a business license;
- patents;
- having to compete with other well-entrenched firms already in the market (having to compete with economies of scale);
- other things that may prevent a new firm from entering a market.

Source: EconPort

barter economy

An economy that does not have a medium of exchange, or money, and where trade occurs instead by exchanging useful goods for useful goods.

Source: econterms

base point pricing

The practice of firms setting prices as if their transportation costs to all locations were the same, even if all the vendors are distant from one another and have substantially different costs of transportation to each location. One might interpret this as a form of monitored collusion between the vendor firms.

Source: econterms

Baserate fallacy

In making probabilistic inferences, perceivers ought to take account of general, broadly based information about population characteristics, and more specifically the prior probability of an event occurring. The tendency to underuse, sometimes even ignore, such information is called the base rate fallacy. Some authors (Kahneman & Tversky, 1973) explain this phenomenon with respect to the representativeness heuristic. Gigerenzer and Hoffrage (1995) argue that the base-rate fallacy is due to the presentation of the information in probability format and that natural sampling reduces the base-rate fallacy.

Source: SFB 504

basin of attraction

the region of states, in a dynamical system, around a particular stable steady state, that lead to trajectories going to the stable steady state. (E.g. the region inside the event horizon around a black hole.)

Source: econterms

basis point

One-hundredth of a percentage point. Used in the context of interest rates.

Source: econterms

basket

A known set of fixed quantities of known goods, needed for defining a price index.

Source: econterms

Bayes theorem

This theorem deals with the impact of new information on the revision of probability estimates, and provides a normative model to assess how well people use empirical information to update the probability that a hypothesis is true.

P(H|O) = P(H) x P(O|H) / [ P(H) x P(O|H) + P(nonH) x P(O|nonH) ]

Bayes's theorem tells us that the probability that a hypothesis is true given that we have made some observation (called the "posterior probability") P(H|O) is a function of:

P(H) = The probability you would have assigned to the hypothesis before you made the observation, called the "prior probability" of the hypothesis.
P(O|H) = The probability the observation would occur if the hypothesis were true.
P(nonH) = The prior probability that the hypothesis is not true, 1-P(H).
P(O|nonH) = The probability the observation would have occurred even if the hypothesis were not true.

For example, when the baserates of women having breast cancer and having no breast cancer are known to be 1% and 99%, respectively, the hit rate is given as P(positive mammography | breast cancer) = 80%, and the false-alarm rate as P(positive mammography | no breast cancer) = 9.6% (the figure used by Gigerenzer & Hoffrage), applying the Bayes theorem leads to a normative prediction as low as P(breast cancer | positive mammography) = 7.8%. That means that the probability that a woman who has a positive mammography actually has breast cancer is less than 8%. Studies show (e.g. Gigerenzer & Hoffrage, 1995) that subjective estimates clearly exceed the normative prediction and are often very close to the hit rate (80% in the example).
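A quick check of the example's arithmetic (an illustration; the 9.6% false-alarm rate is the figure assumed above):

pH, pO_H, pO_nonH = 0.01, 0.80, 0.096
posterior = pH * pO_H / (pH * pO_H + (1 - pH) * pO_nonH)
print(round(posterior, 3))   # about 0.078, i.e. the 7.8% quoted above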

Source: SFB 504

Bayesian analysis

"In Bayesian analysis all quantities, including the parameters, are random variables. Thus, a model is said to be identified in probability if the posterior distribution for [the parameter to be estimated] is proper."

Source: econterms

Bayes-Nash equilibrium

In normal form games of incomplete information, the players have no possibility to update their prior beliefs about their opponents' payoff-relevant characteristics, called their types. All that a player knows, apart from the game itself (and the priors), is his own type, and the fact that the other players do not know his type either. As their best responses, however, depend on the players' actual types, a player must see himself through his opponents' eyes and plan an optimal reaction against the possible strategies of his opponents for each potential type of his own. Thus, a strategy in a Bayesian game of incomplete information must map each possible type of each player into a plan of actions. Then, since the other players' types are unknown, each player forms a best response against the expected strategy of each opponent, where he averages over the (well-specified) reactions of all possible types of an opponent, using his prior probability measure on the type space. Such a profile of type-dependent strategies which are unilaterally unimprovable in expectations over the competing types' strategies forms a Bayes-Nash equilibrium. Basically, a Bayes-Nash equilibrium is thus a Nash equilibrium 'at the interim stage' where each player selects a best response against the average best responses of the competing players.

Source: SFB 504

Behavioral economics

In neoclassical economic theory, it is assumed that decision makers, given their knowledge of utilities, alternatives, and outcomes, can compute which alternative will yield the greatest subjective (expected) utility. The term bounded rationality is used to designate models of rational choice that take into account the cognitive limitations of both knowledge and cognitive capacity. Bounded rationality is a central theme in behavioral economics. It is concerned with the ways in which the actual decision-making process influences the decisions that are eventually reached. To this end, behavioral economics departs from one or more of the neoclassical assumptions underlying the theory of rational behavior. The two most important questions that can be posed are:


Are the assumptions of utility or profit maximization good approximations of real behavior?
Do individuals maximize subjective expected utility?

Simon (1987b) provides an overview of the literature on these issues.

Research in behavioral economics has adopted specific methodological approaches that complement traditional statistical and econometric tests of economic models. For example, experiments are commonly used in behavioral economics, and survey data are also becoming more important in the process of learning about individuals' actual decision-making processes.

Source: SFB 504

Behavioral finance

Despite strong evidence that securities markets are highly efficient, there have been scores of studies that have documented long-term historical phenomena in securities markets that contradict the efficient market hypothesis and cannot be captured plausibly in models based on perfect investor rationality. Such phenomena are often referred to as stock market anomalies.

Source: SFB 504

Belief

In incomplete information games, in order to predict the optimal behavior of his opponent, a player has to form expectations and assessments of his opponent's type. In a simultaneous game of incomplete information, each player's belief about any other player's type is exogenously given, or it is inferred by Bayes' rule from an initial draw by nature that determines the various types of the players. In sequential games of incomplete information, the players' beliefs about their opponents' types must be updated according to Bayes' rule during the play of the game whenever this is possible by having observed another player's move.

Source: SFB 504

Bellman equation

Any value or flow value equation. For a discrete problem it can generally be of the form:
v(k) = max over k' of { u(k,k') + b*v(k') }
where:
u() is the one-period return function (e.g., a utility function) and
v() is the value function and
k is the current state and
k' is the state to be chosen and
b is a scalar real parameter, the discount factor, generally slightly less than one.
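A minimal value-iteration sketch (an illustration, not part of the original entry) for a discrete Bellman equation, using a standard growth-model example with u(k,k') = log(k^0.3 - k'); the grid and parameters are hypothetical.

import numpy as np

k_grid = np.linspace(0.1, 1.0, 50)
b, alpha = 0.95, 0.3
v = np.zeros(len(k_grid))
for _ in range(1000):                        # iterate v <- max_{k'} { u(k,k') + b v(k') }
    vnew = np.empty_like(v)
    for i, k in enumerate(k_grid):
        c = k ** alpha - k_grid              # consumption implied by each choice of k'
        u = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)
        vnew[i] = (u + b * v).max()
    v = vnew
print(v[:3])   # the iteration converges because b < 1 makes it a contraction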

Source: econterms

Benefit Principle

The idea that the tax burden should be proportional to an individual's use of government-supplied goods and services.

Source: EconPort

Bertrand competition

A bidding war in which the bidders end up at a zero-profit price. See Bertrand game.

Source: econterms

Bertrand duopoly

The two firms producing in a market modeled as a Bertrand game.

Source: econterms

Bertrand game

Model of a bidding war between firms each of which can offer to sell a certain good (say, widgets), but no other firms can. Each firm may choose a price to sell widgets at, and must then supply as many as are demanded. Consumers are assumed to buy the cheaper one, or to purchase half from each if the prices are the same. Best for the firms (both collectively and individually) is to cooperate, charge monopoly price, and split the profits. Each firm could seize the whole market by lowering price slightly, however, and the noncooperative Nash equilibrium outcome of a Bertrand game is that both charge a zero-profit price.

Source: econterms

Between subjects design

In a between subjects design the values of the dependent variable for one subject or group of subjects (e.g., the experimental group) are compared with the values for another subject or another group of subjects (e.g., the control group).

Source: SFB 504

Beveridge curve

The graph of the inverse relation of unemployment to job vacancies.

Source: econterms

BHHH

A numerical optimization method from Berndt, Hall, Hall, and Hausman (1974). Used in Gauss, for example. The following discussion of BHHH was posted to the newsgroup sci.econ.research by Paul L. Schumann, Ph.D., Professor of Management at Minnesota State University, Mankato (formerly Mankato State University). It is included here without any explicit permission whatsoever.

  BHHH usually refers to the procedure explained in Berndt, E., Hall, B.,  
Hall, R., & Hausman, J. (1974), 'Estimation and Inference in Nonlinear  
Structural Models,' Annals of Economic and Social Measurement, 3/4: 653-665.    

BHHH provides a method of estimating the asymptotic covariance matrix of a  
Maximum Likelihood Estimator. In particular, the covariance matrix for a MLE  
depends on the second derivatives of the log-likelihood function. However,  
the second derivatives tend to be complicated nonlinear functions. BHHH  
estimates the asymptotic covariance matrix using first derivatives instead  
of analytic second derivatives. Thus, BHHH is usually easier to compute than  
other methods.    

In addition to the original BHHH article referenced above, BHHH is also  
discussed in Greene, W.H., Econometric Analysis, 3rd Edition, Prentice-Hall,  
1997. Greene's econometric software program, LIMDEP, uses BHHH for some of  
the estimation routines.    

Someone (perhaps BHHH themselves?) wrote a Fortran subroutine in the 1970's  
to do BHHH. I do not have a copy of this subroutine at the present time. You  
may want to check out Green's econometric software, LIMDEP, to see if it  
will do what you require, rather than writing your own program to use an  
existing BHHH subroutine. The Web address for LIMDEP is:    
  http://www.limdep.com/index.htm    

Cheers,  
Paul.    
--
   Paul L. Schumann, Ph.D., Professor of Management 
   Minnesota State University, Mankato (formerly Mankato State University) 
   Mankato, MN  56002    
   mailto:paul.schumann@mankato.msus.edu    
   http://krypton.mankato.msus.edu/~schumann/www/welcome.html  

Source: econterms

BHPS

British Household Panel Survey. A British government database going back to 1990. Web page: http://www.iser.essex.ac.uk/bhps/index.php

Source: econterms

bias

the difference between the parameter and the expected value of the estimator of the parameter.

Source: econterms

bidding function

In an auction analysis, a bidding function (often denoted b()) is a function whose value is the bid that a particular player should make. Often it is a function of the player's value, v, of the good being auctioned. Thus the common notation b(v).

Source: econterms

bill of exchange

From the late Middle Ages. A contract entitling an exporter to receive immediate payment in the local currency for goods that would be shipped elsewhere. Time would elapse between payment in one currency and repayment in another, so the interest rate would also be brought into the transaction.

Source: econterms

billon

A mixture of silver and copper, from which small coins were made in medieval Europe. Larger coins were made of silver or gold.

Source: econterms

bimetallism

A commodity money regime in which there is concurrent circulation of coins made from each of two metals and a fixed exchange rate between them. Historically the metals have almost always been gold and silver. Bimetallism was tried many times with varying success but since about 1873 the practice has been generally abandoned.

Source: econterms

BJE

Bell Journal of Economics, the previous name of the RAND Journal of Economics or RJE.

Source: econterms

Black Market

An illegal market. This may include markets for illegal goods and services (for example, illegal drugs or prostitution), or markets for otherwise legal goods that are sold illegally (for example, making cash payments for goods and services to avoid record-keeping and therefore to avoid paying taxes).

Because of the illegal nature of these markets, they are difficult to study, and it is difficult to obtain a precise measure of the size and extent of black-market activity in an economy.

Source: EconPort

Black-Scholes equation

An equation for option securities prices on the basis of an assumed stochastic process for stock prices.

The Black-Scholes algorithm can produce an estimate of the value of a call on a stock, using as input:
-- an estimate of the risk-free interest rate now and in the near future
-- current price of the stock
-- exercise price of the option (strike price)
-- expiration date of the option
-- an estimate of the volatility of the stock's price
From the Black-Scholes equation one can derive the price of an option.
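A minimal Python sketch of the standard Black-Scholes call-price formula using the inputs listed above (the textbook formula, not code from the original entry):

from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K, maturity T (years)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))   # about 10.45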

Source: econterms

BLS

Abbreviation for the U.S. government's Bureau of Labor Statistics, in the Labor Department.

Source: econterms

Bonferroni criterion

Suppose a certain treatment of a patient has no effect. If one runs a test of statistical significance on enough randomly selected subsets of the patient base, one would find some subsets in which statistically significant differences were apparently distinguished by the treatment. The Bonferroni criterion is a redefinition of the statistical significance criterion for the testing of many subgroups: e.g. if there are five subgroups and one of them shows an effect of the treatment at the .01 significance level, the overall finding is significant at the .05 level. This is discussed in more detail (and probably more correctly) in Bland and Altman (1995) in the statistics notes of the British Medical Journal.
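The adjustment itself is one line of arithmetic (an illustration matching the five-subgroup example above):

k, alpha_overall = 5, 0.05
per_test_threshold = alpha_overall / k   # each subgroup test must meet 0.05/5 = 0.01
print(per_test_threshold)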

Source: econterms

bootstrapping

The activity of applying estimators to each of many subsamples of a data sample, in the hope that the distribution of the estimator applied to these subsamples is similar to the distribution of the estimator when applied to the distribution that generated the sample.

It is a method that gives a sense of the sampling variability of an estimator. "After the set of coefficients b0 is computed, M randomly drawn samples of T observations are drawn from the original data set with replacement. T may be less than or equal to n, the sample size. With each such sample the ... estimator is recomputed." -- Greene, p 658-9.
The properties of this distribution of estimates of b0 can then be characterized, e.g. its variance. If the estimates are highly variable, the investigator knows not to think of the estimate of b0 as precise.

Bootstrapping could also be used to estimate by simulation, or empirically, the variance of an estimation procedure for which no algebraic expression for the variance exists.
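A minimal Python sketch of that idea (with hypothetical data): estimating the sampling variance of a sample median, a statistic with no simple algebraic variance formula.

import numpy as np

rng = np.random.default_rng(5)
sample = rng.exponential(scale=2.0, size=100)   # the original data
M = 1000                                        # number of bootstrap resamples
medians = np.array([np.median(rng.choice(sample, size=100, replace=True))
                    for _ in range(M)])
print(medians.var())   # spread of the estimator across resamples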

Source: econterms

Borel set

Any element of a Borel sigma-algebra.

Source: econterms

Borel sigma-algebra

The Borel sigma-algebra of a set S is the smallest sigma-algebra of S that contains all of the open balls in S. Any element of a Borel sigma-algebra is a Borel set.

Example: The set B1 is the Borel sigma-algebra of the real line, and thus contains every open interval.

Example: Consider a filled circle in the unit square. It can be constructed from a countable number of non-overlapping open rectangles (since a series of such rectangles can be defined that would cover every point in the circle but no point outside of it). Therefore it is in the smallest sigma-algebra containing the open subsets of the unit square.

Source: econterms

bounded rationality

Models of bounded rationality are defined in a recent book by Ariel Rubinstein as those in which some aspect of the process of choice is explicitly modeled.

Source: econterms

Bounded rationality

Rational behavior, in economics, means that individuals maximize some target function (e.g., their utility function) under the constraints they face, in pursuit of their self-interest. This is reflected in the theory of (subjective) expected utility (Savage, 1954).

The term bounded rationality is used to designate rational choice that takes into account the cognitive limitations of both knowledge and cognitive capacity. Bounded rationality is a central theme in behavioral economics. It is concerned with the ways in which the actual decision-making process influences decisions. Theories of bounded rationality relax one or more assumptions of standard expected utility theory.

Source: SFB 504

Box-Cox transformation

The Box-Cox transformation, below, can be applied to a regressor, a combination of regressors, and/or to the dependent variable in a regression. The objective of doing so is usually to make the residuals of the regression more homoskedastic and closer to a normal distribution:
y(l) = (y^l - 1) / l   for l not equal to zero
y(l) = log(y)          for l = 0
Box and Cox (1964) developed the transformation.

Estimation of any Box-Cox parameters is by maximum likelihood.

Box and Cox (1964) offered an example in which the data had the form of survival times but the underlying biological structure was of hazard rates, and the transformation identified this.
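
For illustration, a minimal Python sketch using SciPy's implementation, which chooses the transformation parameter by maximum likelihood as described above (this assumes scipy is installed; the data are hypothetical):

import numpy as np
from scipy import stats

y = np.random.default_rng(0).lognormal(size=200)   # skewed, strictly positive data
y_transformed, lam_mle = stats.boxcox(y)           # lambda estimated by maximum likelihood
print(lam_mle)                                     # near 0 here, so a log transform is appropriate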

Source: econterms

Box-Jenkins

A "methodology for identifying, estimating, and forecasting" ARMA models. (Enders, 1996, p 23). The reference in the name is to Box and Jenkins, 1976.

Source: econterms

Box-Pierce statistic

Defined on a time series sample, for each natural number k, as the sample size T times the sum of the squares of the first k sample autocorrelations. Denoting the s-th sample autocorrelation by r_s:
BP(k) = T * Σ_{s=1}^{k} r_s^2
Used to test whether a series is serially correlated; large values of BP(k) reject the hypothesis that the first k autocorrelations are jointly zero.
Below is Gauss code with a procedure that calculates the Box-Pierce statistic for a set of residuals.

  
/* A series of residuals eps_hat[] is generated from a regression, e.g.: */

eps_hat = y - X*betaols;

/* Then the Box-Pierce statistic for each k can be calculated this way: */

print "Box-Pierce statistic for k=1 is" BP(eps_hat,1);
print "Box-Pierce statistic for k=2 is" BP(eps_hat,2);
print "Box-Pierce statistic for k=3 is" BP(eps_hat,3);

/* BP = T * (sum of squared sample autocorrelations for lags 1..k) */
proc BP(series, k);
   local beep, rho;
   beep = 0;
   do until k < 1;
      rho = autocor(series, k);
      beep = beep + rho * rho;
      k = k - 1;
   endo;
   beep = beep * rows(series);
   retp(beep);
endp;

/* This function calculates the autocorrelation estimate for lag k by OLS */
proc autocor(series, k);
   local rowz, y, x, rho;
   rowz = rows(series);
   y = series[k+1:rowz];
   x = series[1:rowz-k];
   rho = inv(x'x)*x'y;
   retp(rho);
endp;

Source: econterms

BPEA

An abbreviation for the Brookings Papers on Economic Activity.

Source: econterms

Perfect Bayesian Nash equilibrium

Parallel to the extension of Nash equilibrium to subgame perfect equilibrium in games of complete information, the concept of Bayesian Nash equilibrium loses much of its bite in extensive form games and is accordingly refined to 'Perfect Bayesian' equilibrium. In a sequential game, it is often the threats about certain reactions 'off the equilibrium path' that force the players' actions to be best responses to one another 'on the equilibrium path'. In sequential games with incomplete information, where the players hold beliefs about their opponents' types and optimize given their beliefs, a player effectively 'threatens by the beliefs' he holds about his opponents' types after moves that deviate from the equilibrium path. Different beliefs about other players' types after deviations typically yield different reactions, some of which force the players back onto the (candidate) equilibrium path, some of which lead them even farther away. In the first case, the plans of action are confirmed by the beliefs about them, and the crucial self-confirming property of equilibrium beliefs and equilibrium strategies is met.

The concept of Perfect Bayesian equilibrium makes precise this self-confirming 'interaction' between beliefs about types selecting certain actions and the 'actual' strategies. First, it requires that players form a complete system of beliefs about the opponents' types at each decision node that can be reached. Next, this system of beliefs is updated according to Bayes' rule whenever possible (in particular, 'along the equilibrium path'). Finally, given each player's system of beliefs, the strategies form best responses to one another in the sense of ordinary Bayesian Nash equilibrium. A Perfect Bayesian equilibrium is thus a profile of complete strategies and a profile of complete beliefs such that (i) given the beliefs, the strategies are unilaterally unimprovable at each potential decision node that might be reached, and (ii) the beliefs are consistent with the actual evolution of play as prescribed by the equilibrium strategies.

Source: SFB 504

Brent method

An algorithm for choosing the step lengths when numerically calculating maximum likelihood estimates. More generally, Brent's method is a derivative-free routine for one-dimensional root-finding or minimization that combines slow-but-sure bisection (or golden-section) steps with faster interpolation steps.

Source: econterms

Bretton Woods system

The international monetary framework of fixed exchange rates after World War II. Drawn up by the U.S. and Britain in 1944. Keynes was one of the architects.
The system ended on August 15, 1971, when President Richard Nixon ended trading of gold at the fixed price of $35/ounce. At that point for the first time in history, formal links between the major world currencies and real commodities were severed.

Source: econterms

Breusch-Pagan statistic

A diagnostic test of a regression. It is a statistic for testing whether the dependent variable y is heteroskedastic as a function of regressors X. If it is, that suggests use of GLS or SUR estimation in place of OLS. The test statistic is always nonnegative. Large values of the test statistic reject the hypothesis that y is homoskedastic in X. The meaning of 'large' varies with the number of variables in X.

Quoting almost directly from the Stata manual: The Breusch and Pagan (1980) chi-squared statistic -- a Lagrange multiplier statistic -- is given by

lambda = T * Σ_{m=1}^{M} Σ_{n=1}^{m-1} r_{mn}^2

where r_{mn} is the estimated correlation between the residuals of equations m and n, and T is the number of observations. It has a chi-squared distribution with M(M-1)/2 degrees of freedom. (This 1980 statistic tests the independence of residuals across the M equations of a system such as SUR; it is distinct from the Breusch-Pagan (1979) test for heteroskedasticity described above.)
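
For illustration, a minimal numpy sketch of this statistic (E is a hypothetical T x M matrix of residuals from the M equations):

import numpy as np

def bp_lm_statistic(E):
    # LM statistic for independence of residuals across M equations
    T, M = E.shape
    R = np.corrcoef(E, rowvar=False)        # M x M residual correlation matrix
    lam = T * np.sum(np.tril(R, k=-1)**2)   # sum of squared below-diagonal correlations
    return lam                              # chi-squared with M(M-1)/2 d.o.f. under independence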

Source: econterms

bubbles

A substantial movement in market price away from a price determined by fundamental value. In practice, "bubble" always refers to a situation where the market price is higher than the conjectured fundamentally supported price. The idea of a fundamental value requires some model or outside knowledge of what the security (or other good) is worth.

Bubbles are often described as speculative, and it is conjectured that bubbles could be risky ventures for speculators who earn a fair rate of return on them. [ed: I believe these are "rational" bubbles.]
There exist statistical models of bubbles. For example, stochastic collapsing bubbles are attributed to Blanchard and Watson (1982) -- in this form, "the bubble continues with a certain conditional probability and collapses otherwise."

Source: econterms

budget

A budget is a description of a financial plan. It is a list of estimates of revenues to and expenditures by an agent for a stated period of time. Normally a budget describes a period in the future, not the past.

Source: econterms

Budget Constraint

A budget constraint is the maximum amount an individual can consume, given current income and prices. For example, suppose one's income is $100 and there are two goods in the economy: the price of Good A is $2 and the price of Good B is $4. Points on the budget constraint include:
50 units of Good A and 0 units of Good B
48 units of Good A and 1 unit of Good B
46 units of Good A and 2 units of Good B
...
2 units of Good A and 24 units of Good B
0 units of Good A and 25 units of Good B

The budget constraint can be represented graphically, in a table, or in words.

Source: EconPort

budget line

A consumer's budget line characterizes on a graph the maximum amounts of goods that the consumer can afford. In a two good case, we can think of quantities of good X on the horizontal axis and quantities of good Y on the vertical axis. The term is often used when there are many goods, and without reference to any actual graph.

Source: econterms

budget set

The set of bundles of goods an agent can afford. This set is a function of the prices of goods and the agent's endowment.

Assuming the agent cannot have a negative quantity of any good, the budget set can be characterized this way. Let e be a vector representing the quantities of the agent's endowment of each possible good, and p be a vector of prices for those goods. Let B(p,e) be the budget set. Let x be an element of R_+^L, the space of nonnegative real vectors of dimension L, where L is the number of possible goods. Then:
B(p,e) = {x: px <= pe}

Source: econterms

bureaucracy

A form of organization in which officeholders have defined positions and (usually) titles. Formal rules specify the duties of the officeholders. Personalistic distinctions are usually discouraged by the rules.

Source: econterms

Burr distribution

Has density function (pdf):
f(x) = c*k*x^(c-1) * (1 + x^c)^(-(k+1)) for constants c>0, k>0, and for x>0.
Has distribution function (cdf): F(x) = 1 - (1 + x^c)^(-k).
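
For illustration, a minimal numpy sketch that draws from the Burr distribution by inverting the cdf above (hypothetical parameter values):

import numpy as np

def burr_sample(c, k, size, seed=0):
    # invert F(x) = 1 - (1 + x^c)^(-k): x = ((1-u)^(-1/k) - 1)^(1/c)
    u = np.random.default_rng(seed).uniform(size=size)
    return ((1.0 - u)**(-1.0 / k) - 1.0)**(1.0 / c)

x = burr_sample(c=2.0, k=3.0, size=100_000)
print((x <= 1.0).mean())   # compare with F(1) = 1 - 2^(-3) = 0.875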

Source: econterms

business cycle frequency

Three to five years. Called the business cycle frequency by Burns and Mitchell (1946), and this became standard language.

Source: econterms

BVAR

Bayesian VAR (Vector Autoregression)

Source: econterms

C

CAGR

Compound Annual Growth Rate (sometimes expanded as Cumulative Average Growth Rate)

Source: econterms

calculus of voting

A model of political voting behavior in which a citizen chooses to vote if the costs of doing so are outweighed by the strength of the citizen's preference for one candidate weighted by the anticipated probability that the citizen's vote will be decisive in the election.

Source: econterms

calibration

NOT SURE WHICH OF THESE (IF EITHER) IS RIGHT:
1. The estimation of some parameters of a model, under the assumption that the model is correct, as a middle step in the study of other parameters. Use of this word suggests that the investigator wishes to give those other parameters of the model a 'fair chance' to describe the data, not to get stuck in a side discussion about whether the calibrated parameters are ideally modeled or estimated.

2. Taking parameters that have been estimated for a similar model into one's own model, solving one's own model numerically, and simulating. Attributed to Edward Prescott.

Source: econterms

call option

A call option conveys the right, but not the obligation, to buy a specified quantity of an underlying security at a specified exercise (strike) price, on or before the option's expiration date.

Source: econterms

capital

Something owned which provides ongoing services. In the national accounts, or to firms, capital is made up of durable investment goods, normally summed in units of money. Broadly: land plus physical structures plus equipment. The idea is used in models and in the national accounts.

See also human capital and social capital.

Source: econterms

capital consumption

In national accounts, this is the amount by which gross investment exceeds net investment. It is the same as replacement investment.
-- Oulton (2002, p. 13)

Source: econterms

capital deepening

Increase in capital intensity, normally in a macro context where it is measured by something analogous to the capital stock available per labor hour spent. In a micro context, it could mean the amount of capital available for a worker to use, but this use is rare.

Capital deepening is a macroeconomic concept, of a faster-growing magnitude of capital in production than in labor. Industrialization involved capital deepening - that is, more and more expensive equipment with a lesser corresponding rise in wage expenses.

Capital deepening of a certain input (e.g. a certain kind of capital input, a recent key example being computer equipment) can be measured in the following way. Estimate the services provided by this input, per unit of labor input, in year T and in year T+1. The growth rate of that ratio is one common measure of the rate of capital deepening. -- Oulton, p. 31

Source: econterms

capital intensity

Amount of capital per unit of labor input.

Source: econterms

capital ratio

A measure of a bank's capital strength used by U.S. regulatory agencies.

Source: econterms

capital structure

The capital structure of a firm is broadly made up of its amounts of equity and debt.

Source: econterms

capital-augmenting

One of the ways in which an effectiveness variable could be included in a production function in a Solow model. If effectiveness A is multiplied by capital K but not by labor L, then we say the effectiveness variable is capital-augmenting.
For example, in the model of output Y where Y = (AK)^a L^(1-a) the effectiveness variable A is capital-augmenting, but in the model Y = A K^a L^(1-a) it is not.
Another example would be a capital utilization variable, as measured say by electricity usage. (E.g., as in Eichenbaum.)
A further example: in the context of a railroad, automatic railroad signaling, track-switching, and car-coupling devices are capital-augmenting. From Moses Abramovitz and Paul A. David, 1996. 'Convergence and Deferred Catch-up: productivity leadership and the waning of American exceptionalism.' In Mosaic of Economic Growth, edited by Ralph Landau, Timothy Taylor, and Gavin Wright.

Source: econterms

capitation

The system of payment for each customer served, rather than by service performed. Both are used in various ways in U.S. medical care.

Source: econterms

CAPM

Capital Asset Pricing Model

Source: econterms

CAR

stands for Cumulative Average Return.

A portfolio's abnormal return (AR) at each time t is AR_t = (1/N) * Σ_{i=1}^{N} ar_{it}, where ar_{it} is the abnormal return at time t of security i.

Over a window from t=1 to T, the CAR is the sum of all the ARs.
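
For illustration, a minimal numpy sketch (ar is a hypothetical T x N matrix of abnormal returns):

import numpy as np

ar = np.array([[ 0.01, -0.02],
               [ 0.03,  0.00],
               [-0.01,  0.02]])   # ar[t, i]: abnormal return of security i at time t
AR = ar.mean(axis=1)              # AR_t: cross-sectional average at each time
CAR = AR.cumsum()                 # cumulative average return over the window
print(CAR)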

Source: econterms

CARA utility

A class of utility functions. Also called exponential utility. Has the form, for some positive constant a:
u(c) = -(1/a) * e^(-ac)
"Under this specification the elasticity of marginal utility is equal to -ac, and the instantaneous elasticity of substitution is equal to 1/ac."
The coefficient of absolute risk aversion is a; thus the abbreviation CARA for Constant Absolute Risk Aversion. "Constant absolute risk aversion is usually thought of as a less plausible description of risk aversion than constant relative risk aversion" (that's the CRRA, which see), but it can be more analytically convenient.
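
As a check, a small sympy sketch (assuming sympy is available) confirming that the coefficient of absolute risk aversion is the constant a, and that the elasticity of marginal utility is -ac:

import sympy as sp

a, c = sp.symbols('a c', positive=True)
u = -(1 / a) * sp.exp(-a * c)
u1, u2 = sp.diff(u, c), sp.diff(u, c, 2)
print(sp.simplify(-u2 / u1))     # a: absolute risk aversion, constant in c
print(sp.simplify(c * u2 / u1))  # -a*c: elasticity of marginal utility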

Source: econterms

CARs

cumulative average adjusted returns

Source: econterms

cash-in-advance constraint

A modeling idea. In a basic Arrow-Debreu general equilibrium there is no need for money because exchanges are automatic, through a Walrasian auctioneer. To study monetary phenomena, a class of models was made in which money was required to make purchases of other goods. In such a model the budget constraint is written so that the agent must have enough cash on hand to make any consumption purchase. Using this mechanism money can have a positive price in equilibrium and monetary effects can be seen in such models. Contrast money-in-the-utility function for an alternative modeling approach.

Source: econterms

catch-up

'Catch-up' refers to the long-run process by which productivity laggards close the proportional gaps that separate them from the productivity leader .... 'Convergence,' in our usage, refers to a reduction of a measure of dispersion in the relative productivity levels of the array of countries under examination. Like Barro and Sala-i-Martin (1992)'s 'sigma-convergence', a narrowing of the dispersion of country productivity levels over time.

Source: econterms

Category split effect

Research on frequency estimation has shown that several factors can influence the subjective frequency of events. One of these factors is the category width. Splitting an event category into smaller subcategories can increase the subjective frequency of events: A total set of events may have less impact, or appear less frequent, subjectively, than the sum of its (exclusive) subsets. For example, imagine you are asked to judge the number of Japanese cars in your own country, or, in another condition, to judge the frequency of Honda, Nissan, Toyota, Mazda, Daihatsu and Mitsubishi cars. The sum of the judged component frequencies from the split-category condition will be higher, under many circumstances, than the compound frequency of the entire category.

Source: SFB 504

Cauchy distribution

Has thicker tails than a normal distribution.
Density function (pdf): f(x) = 1/[pi*(1+x^2)]. Distribution function (cdf): F(x) = 0.5 + (tan^(-1) x)/pi.

Source: econterms

Cauchy sequence

A sequence satisfies the Cauchy criterion iff for each positive real epsilon there exists a natural number N such that the distance between any two elements of the sequence past the Nth element is less than epsilon. 'Distance' must be defined in context by the user of the term.

One sometimes hears the construction: 'The sequence is Cauchy' if the sequence satisfies the definition.

Source: econterms

CCAPM

Stands for Consumption-based Capital Asset Pricing Model.
A theory of asset prices. Formulated in Lucas, 1978, and Breeden, 1979.

Source: econterms

CDE

Stands for Corporate Data Exchange, an organization which has data on the shareholdings of large U.S. companies.

Source: econterms

cdf

cumulative distribution function. This function describes a statistical distribution. It has the value, at each possible outcome, of the probability of receiving that outcome or a lower one. A cdf is usually denoted in capital letters. Consider for example some F(x): for x a real number, F(x) is the probability of receiving a draw less than or equal to x. A particular form of F(x) will describe the normal distribution, or any other unidimensional distribution.

Source: econterms

CDFC

Stands for Concavity of distribution function condition.

Source: econterms

censored dependent variable

A dependent variable in a model is censored if observations of it cannot be seen when it takes on values in some range. That is, the independent variables are observed for such observations but the dependent variable is not.

A natural example: in data on consumers and the prices they paid for cars, a consumer whose willingness-to-pay for a car is negative will appear in the data with consumer information but no car price, no matter how low car prices go. Price observations are then censored at zero.

Contrast truncated dependent variables.

Source: econterms

central bank

A government bank; a bank for banks.

Source: econterms

Centrality of typicality

Items with greater family resemblance to a category are judged to be more typical of the category.

Source: SFB 504

Certainty effect

The reduction of the probability of an outcome by a constant factor has more impact when the outcome was initially certain than when it was merely probable (e.g. Allais paradox).

Source: SFB 504

certainty equivalence principle

Imagine that a stochastic objective function is a function only of output and output-squared. Then the solution to the optimization problem of choosing output will have the special characteristic that only the conditional means of the future forcing variables appear in the first order conditions. (By conditional means is meant the set of means for each state of the world.) Then the solution has the "certainty equivalence" property. "That is, the problem can be separated into two stages: first, get minimum mean squared error forecasts of the exogenous [variables], which are the conditional expectations...; second, at time t, solve the nonstochastic optimization problem," using the mean in place of the random variable. "This separation of forecasting from optimization.... is computationally very convenient and explains why quadratic objective functions are assumed in much applied work. For general [functions] the certainty equivalence principle does not hold, so that the forecasting and [optimization] problems do not 'separate.'"

Source: econterms

certainty equivalent

The amount of payoff (e.g. money or utility) that an agent would have to receive to be indifferent between that payoff and a given gamble is called that gamble's 'certainty equivalent'. For a risk averse agent (as most are assumed to be) the certainty equivalent is less than the expected value of the gamble because the agent prefers to reduce uncertainty.
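
For illustration, a minimal Python sketch with a hypothetical gamble and utility function u(c) = sqrt(c):

import numpy as np

# a 50/50 gamble between 50 and 150, so the expected value is 100
u = np.sqrt
u_inv = lambda v: v**2
expected_utility = 0.5 * u(50) + 0.5 * u(150)
ce = u_inv(expected_utility)
print(ce)   # about 93.3: below 100, as for any risk-averse agent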

Source: econterms

CES production function

CES stands for constant elasticity of substitution. This is a function describing production, usually at a macroeconomic level, with two inputs which are usually capital and labor. As defined by Arrow, Chenery, Minhas, and Solow, 1961 (p. 230), it is written this way:

V = (b*K^(-r) + a*L^(-r))^(-1/r)

where V = value-added, (though y for output is more common),
K is a measure of capital input,
L is a measure of labor input,
and the Greek letters are constants. Normally a>0 and b>0 and r>-1. For more details see the source article.

In this function the elasticity of substitution between capital and labor is constant for any value of K and L. It is 1/(1+r).
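
As a check, a small sympy sketch (assuming sympy is available; K, L, a, b, r as above, and k = K/L) confirming that the elasticity of substitution is 1/(1+r):

import sympy as sp

K, L, a, b, r, k = sp.symbols('K L a b r k', positive=True)
V = (b * K**(-r) + a * L**(-r))**(-1 / r)
MRTS = sp.diff(V, L) / sp.diff(V, K)          # marginal rate of technical substitution
MRTS_k = sp.simplify(MRTS.subs(K, k * L))     # depends only on the ratio k = K/L
sigma = sp.simplify((1 / k) / (sp.diff(MRTS_k, k) / MRTS_k))
print(sigma)                                  # 1/(r + 1)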

Source: econterms

CES technology

Example, adapted from Caselli and Ventura:
For capital k, labor input n, and constant b < 0 (so that the elasticity of substitution is less than one):
f(k,n) = (k^b + n^b)^(1/b)
Here the elasticity of substitution between capital and labor is less than one, i.e. 1/(1-b)<1.

Source: econterms

CES utility

Stands for Constant Elasticity of Substitution, a kind of utility function. A synonym for CRRA or isoelastic utility function. Often written this way, presuming a constant g not equal to one:
u(c) = c^(1-g) / (1-g)
This limits to u(c) = ln(c) as g goes to one.
The elasticity of substitution between consumption at any two points in time is constant, equal to 1/g. "The elasticity of marginal utility is equal to" -g. g can also be said to be the coefficient of relative risk aversion, defined as -u''(c)c/u'(c), which is why this function is also called the CRRA (constant relative risk aversion) utility function.

Source: econterms

ceteris paribus

means "assuming all else is held constant". The author is attempting to distinguish an effect of one kind of change from any others.

Source: econterms

Ceteris Paribus

A Latin term meaning "all else held constant" or "all else remains the same." In economics, in order to study the effect of a change in one variable, we often employ the ceteris paribus assumption: to isolate the effect of the changing variable we hold everything else constant. For example, in order to study the effect of an increase in income on the equilibrium price and quantity of a good we have to assume that everything else is held constant (tastes and preferences, the price of other goods, etc.).

Source: EconPort

CEX

Abbreviation for the U.S. government's Consumer Expenditure Survey

Source: econterms

CFTC

The U.S. government's Commodity Futures Trading Commission.

Source: econterms

CGE

An occasional abbreviation for 'computable general equilibrium' models.

Source: econterms

chained

Describes an index number that is frequently reweighted. An example is an inflation index made up of prices weighted by the frequency with which they are paid; frequent recomputation of the weights makes it a chained index.

Source: econterms

chaotic

A description of a dynamic system that is very sensitive to initial conditions and may evolve in wildly different ways from slightly different initial conditions.

Source: econterms

characteristic equation

The polynomial whose roots are the eigenvalues of a matrix; for a square matrix A, it is det(A - lambda*I) = 0, regarded as a polynomial equation in lambda.

Source: econterms

characteristic function

Denoted here PSI(t) or PSI_X(t). Is defined for any random variable X with a pdf f(x). PSI(t) is defined to be E[e^(itX)], which is the integral from minus infinity to infinity of e^(itx)*f(x) dx. Its logarithm is a cumulant generating function (cgf). "Every distribution has a unique characteristic function; and to each characteristic function there corresponds a unique distribution of probability." -- Hogg and Craig, p 64

Source: econterms

characteristic root

Synonym for eigenvalue.

Source: econterms

chartalism

or "state theory of money" -- 19th century monetary theory, based more on the idea that legal restrictions or customs can or should maintain the value of money, not intrinsic content of valuable metal.

Source: econterms

chi-square distribution

A continuous distribution, with natural number parameter r. It is the distribution of sums of squares of r standard normal variables. Mean is r, variance is 2r, the pdf and cdf are difficult to express compactly, and the moment-generating function (mgf) is (1-2t)^(-r/2).

From an older definition in this same database: If n random values z_1, z_2, ..., z_n are drawn from a standard normal distribution, squared, and summed, the resulting statistic is said to have a chi-squared distribution with n degrees of freedom: z_1^2 + z_2^2 + ... + z_n^2 ~ X^2(n). This is a one-parameter family of distributions, and the parameter, n, is conventionally labeled the degrees of freedom of the distribution. -- quoted and paraphrased from Johnston. See also noncentral chi-squared distribution.

Source: econterms

Chicago School

Refers to a perspective on economics associated with the University of Chicago circa 1970. Variously interpreted to imply:
1) A preference for models in which information is perfect, and an associated search for empirical evidence that choices, not institutional limitations, are what result in outcomes for people. (E.g., that committing crime is a career choice; that smoking represents an informed tradeoff between health risk and immediate gratification.)
2) That antitrust law is rarely necessary, because potential competition will limit monopolist abuses.

Source: econterms

choke price

The lowest price at which the quantity demanded is zero.

Source: econterms

Cholesky decomposition

Given a symmetric positive definite square matrix X, the Cholesky decomposition of X is the factorization X = U'U, where U is the square root matrix of X, and satisfies:
(1) U'U = X
(2) U is upper triangular (that is, it has all zeros below the diagonal).
Once U has been computed, one can calculate the inverse of X more easily, because X^(-1) = U^(-1)(U')^(-1), and the inverses of U and U' are easier to compute.
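
For illustration, a minimal numpy sketch (note that numpy returns the lower-triangular factor, whose transpose is the U above):

import numpy as np

X = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # symmetric positive definite
Lo = np.linalg.cholesky(X)          # lower-triangular Lo with Lo @ Lo.T == X
U = Lo.T                            # upper-triangular factor, so X = U'U as in the definition
print(np.allclose(U.T @ U, X))      # True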

Source: econterms

Cholesky factorization

Same as Cholesky decomposition.

Source: econterms

Chow test

A particular test for structural change; an econometric test to determine whether the coefficients in a regression model are the same in separate subsamples. In reference to a paper of G.C. Chow (1960), "the standard F test for the equality of two sets of coefficients in linear regression models" is called a Chow test. See derivation and explanation in Davidson and MacKinnon, p. 375-376. More info in Greene, 2nd edition, p 211-2.

Homoskedasticity of errors is assumed although this can be dubious since we are open to the possibility that the parameter vector (b) has changed.
RSSR = the sum of squared residuals from a linear regression in which b1 and b2 are assumed to be the same
SSR1 = the sum of squared residuals from a linear regression of sample 1
SSR2 = the sum of squared residuals from a linear regression of sample 2
b has dimension k, and there are n observations in total
Then the F statistic is:
((RSSR - SSR1 - SSR2)/k) / ((SSR1 + SSR2)/(n - 2k)).
That test statistic is the Chow test.
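
For illustration, a minimal numpy sketch of this computation (y1, X1 and y2, X2 are hypothetical subsample arrays, each X containing the regressor columns for its subsample):

import numpy as np

def chow_f(y1, X1, y2, X2):
    # F statistic for equality of coefficients across two subsamples
    def ssr(y, X):
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ b
        return e @ e
    y, X = np.concatenate([y1, y2]), np.vstack([X1, X2])
    RSSR, SSR1, SSR2 = ssr(y, X), ssr(y1, X1), ssr(y2, X2)
    k, n = X.shape[1], len(y)
    return ((RSSR - SSR1 - SSR2) / k) / ((SSR1 + SSR2) / (n - 2 * k))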

Source: econterms

circulating capital

flows of value within a production organization. Includes stocks of raw material, work in process, finished goods inventories, and cash on hand needed to pay workers and suppliers before products are sold.

Source: econterms

CJE

An abbreviation for the Canadian Journal of Economics.

Source: econterms

CLAD

Stands for the "Censored Least Absolute Deviations" estimator. If errors are symmetric (with median of zero), this estimator is unbiased and consistent though not efficient. The errors need not be homoskedastic or normally distributed to have those attributes.

CLAD may have been defined for the first time in Powell, 1984.

Source: econterms

classical

According to Lucas (1998), a classical theory would have no explicit reference to preferences. Contrast neoclassical.

Source: econterms

Classical vs Bayesian methods

As in statistics, classical (or frequentist) methods concentrate on testing hypotheses that are derived from theory, using the data available. Bayesian econometrics (and statistics), on the other hand, stresses the role of the data itself in both the development and testing of economic theories. While most empirical applications of econometrics use classical methods, Bayesian econometrics has gained importance for applied work in recent years, a fact that is partly due to the increased computer power available for computationally intensive applied work. The classical vs. Bayesian controversy extends to other social sciences as well; for an appreciation of its relevance in psychology, see e.g. Gigerenzer (1987).

Source: SFB 504

Clayton Act

A 1914 U.S. law on the subject of antitrust and price discrimination.
Section two prohibits price discrimination.
Section three prohibits sales based on an exclusive dealing contract requirement that may have the effect of lessening competition.
Section seven prohibits mergers where "the effect of such acquisition may be substantially to lessen competition, or tend to create a monopoly" in any line of commerce.

Source: econterms

clears

A verb. A market clears if the vector of prices for goods is such that the excess demand at those prices is zero. That is, the quantity demanded of every good at those prices is met.

Source: econterms

cliometrics

the study of economic history; the 'metrics' at the end was added to emphasize (possibly humorously) the frequent use of regression estimation.

'The cliometric contribution was the application of a systematic body of theory -- neoclassical theory -- to history and the application of sophisticated, quantitative techniques to the specification and testing of historical models.' -- North (1990/1993) p 131.

Source: econterms

clustered data

Data whose observations are not iid but rather come in clusters that are correlated together -- e.g. a data set of individuals some of whom are siblings of others, and are therefore similar demographically.

Source: econterms

Coase theorem

Informally: that in the presence of complete competitive markets and the absence of transactions costs, an efficient set of inputs to production and outputs from production will be chosen by agents regardless of how property rights over the inputs were assigned to the agents. A detailed discussion is in the Encyclopedia of Law and Economics, online.

Source: econterms

Cobb-Douglas production function

A standard production function which is applied to describe how much output two inputs into a production process make. It is used commonly in both macro and micro examples.

For capital K, labor input L, and constants a, b, and c, the Cobb-Douglas production function is
f(K,L) = b * K^a * L^c

If a+c=1 this production function has constant returns to scale. (Equivalently, in mathematical language, it would then be linearly homogeneous.) This is a standard case, and one often writes (1-a) in place of c.

Log-linearization simplifies the function: taking logs of both sides of a Cobb-Douglas function gives a better separation of the components.

In the Cobb-Douglas function the elasticity of substitution between capital and labor is 1 for all values of capital and labor.
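
For illustration, a minimal Python sketch checking constant returns to scale numerically (hypothetical parameter values with a + c = 1):

def f(K, L, b=1.0, a=0.3, c=0.7):
    # Cobb-Douglas: f(K, L) = b * K^a * L^c
    return b * K**a * L**c

print(f(2 * 4.0, 2 * 9.0) / f(4.0, 9.0))   # 2.0: doubling both inputs doubles output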

Source: econterms

cobweb model

A theoretical model of an adjustment process that on a price/quantity or supply/demand graph spirals toward equilibrium.

Example, from Ehrenberg and Smith: Suppose the equilibrium labor market wage for engineers is stable over a ten-year period, but at the beginning of that period the wage is above equilibrium for some reason. Operating on the assumption, let's say, that engineering wages will remain that high, too many students then go into engineering. The wage falls suddenly from oversupply when that population graduates. Too few students then choose engineering. Then there is a shortage following their graduation. Adjustment to equilibrium could be slow.

"Critical to cobweb models is the assumption that workers form myopic expectations about the future behavior of wages." "Also critical to cobweb models is that the demand curve be flatter than the supply curve; if it is not, the cobweb 'explodes' when demand shifts and an equilibrium wage is never reached."

Source: econterms

Cochrane-Orcutt estimation

An algorithm for estimating a time series linear regression in the presence of autocorrelated errors. The implicit citation is to Cochrane and Orcutt (1949).

The procedure is nicely explained in the SHAZAM manual section online at the SHAZAM web site. Their procedure includes an improvement to include the first observation attributed to the Prais-Winsten transformation. A summary of their excellent description is below. This version of the algorithm can handle only first-order autocorrelation but the Cochrane-Orcutt method could handle more.

Suppose we wish to regress y[t] on X[t] in the presence of autocorrelated errors. Run an OLS regression of y on X and construct a series of residuals e[t]. Regress e[t] on e[t-1] to estimate the autocorrelation coefficient, denoted p here. Then construct series y* and X* by:
y*_1 = sqrt(1 - p^2) * y_1,
X*_1 = sqrt(1 - p^2) * X_1,

and

y*_t = y_t - p*y_{t-1},
X*_t = X_t - p*X_{t-1}

One estimates b in y=bX+u by applying this procedure iteratively -- renaming y* to y and X* to X at each step, until estimates of p have converged satisfactorily.

Using the final estimate of p, one can construct an estimate of the covariance matrix of the errors, and apply GLS to get an efficient estimate of b.

Transformed residuals, the covariance matrix of the estimate of b, R2, and so forth can be calculated; see source.
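
For illustration, a minimal Python/numpy sketch of the iteration described above, including the Prais-Winsten treatment of the first observation (a simplified sketch, not the SHAZAM implementation):

import numpy as np

def cochrane_orcutt(y, X, tol=1e-8, max_iter=100):
    # iterative Cochrane-Orcutt estimation with AR(1) errors;
    # X should already include a constant column
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    p = 0.0
    for _ in range(max_iter):
        e = y - X @ beta
        p_new = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])   # regress e[t] on e[t-1]
        # Prais-Winsten transformation, keeping the first observation
        y_star = np.concatenate([[np.sqrt(1 - p_new**2) * y[0]], y[1:] - p_new * y[:-1]])
        X_star = np.vstack([np.sqrt(1 - p_new**2) * X[0], X[1:] - p_new * X[:-1]])
        beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
        if abs(p_new - p) < tol:
            break
        p = p_new
    return beta, p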

Source: econterms

coefficient of determination

Same as R-squared.

Source: econterms

coefficient of variation

An attribute of a distribution: its standard deviation divided by its mean.

Example: In a series of wage distributions over time, the standard deviation may rise over time with inflation, but the coefficient of variation may not, and thus inequality in the more fundamental sense may not have risen.

Source: econterms

Cognition

Cognition is a common label for processes and structures which have to do with perception, recognition, recall, imagination, concepts, thought, but also supposition, expectation, planning and problem solving. One should distinguish between cognition as a process and cognition as the result of this process (see Dorsch Psychologisches Wörterbuch, 1994).

Source: SFB 504

Cognitive dissonance theory

The cognitive dissonance theory (Festinger, 1957) is a general theoretical framework which explains how people change their opinions or hypotheses about themselves and their environment. An important application of cognitive dissonance theory is research on attitude change.

The basic assumption of cognitive dissonance theory is that people are motivated to reduce inconsistent cognitions. Cognition refers to any kind of knowledge or opinion about oneself or the world.

Two cognitions can be either relevant or irrelevant to one another. If they are relevant, then they must be either consonant or dissonant (dissonant meaning that one does not follow from the other). Dissonant cognitions produce an aversive state which the individual will try to reduce by changing one or both of the cognitions. If, for example, a heavy smoker is exposed to statistics showing that smoking leads to lung cancer, he or she can change the cognition about how much he smokes ("I'm really only a light smoker.") or perceive the statistical data as hysterical environmentalist propaganda and discount it.

Cognitive dissonance can be reduced by adding new cognitions, if (a) the new cognitions add weight to one side and thus decrease the proportion of cognitive elements that are dissonant or (b) the new cognitions change the importance of the cognitive elements that are in a dissonant relation with one another. The other way to reduce cognitive dissonance is to change existing cognitions. Changing existing cognitions reduces dissonance if (a) the new content makes them less contradictory to others or (b) their importance is reduced.

If new cognitions cannot be added or the existing ones changed, behaviors that have cognitive consequences favoring consonance will be recruited. Seeking new information is an example of such behavior.

Source: SFB 504

cohort

A sub-population going through some specified stage in a process. The term is often applied to describe a population of persons going through some life stage, like a first year in a new school.

Source: econterms

cointegration

"An (n x 1) vector time series yt is said to be cointegrated if each of the series taken individually is ... nonstationary with a unit root, while some linear combination of the series a'y is stationary ... for some nonzero (n x 1) vector a."
Hamilton uses the phrasing that y_t is cointegrated with a', and offers a couple of examples. One was that although consumption and income time series have unit roots, consumption tends to be a roughly constant proportion of income over the long term, so (ln income) minus (ln consumption) looks stationary.
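
For illustration, a minimal numpy simulation of a cointegrated pair (a hypothetical data-generating process):

import numpy as np

rng = np.random.default_rng(0)
T = 500
x = np.cumsum(rng.normal(size=T))     # a random walk: nonstationary, unit root
y = 2.0 * x + rng.normal(size=T)      # shares the same stochastic trend, so also nonstationary
# y and x are each nonstationary, but the combination y - 2x is stationary,
# so (1, -2) is a cointegrating vector for (y, x).
print(np.std(y - 2.0 * x))            # stays bounded as T grows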

Source: econterms

Collar

A collar consists of holding an underlying asset and simultaneously buying a put option (long put) and selling a call option (short call) of this underlying asset. Because of the long put, the collar hedges against losses of the underlying asset. But the short call limits the possibility of participation on the gains of the underlying asset.

Source: SFB 504

commercial paper

commoditized short-term corporate debt.

Source: econterms

Common value auction

Instead of having statistically independent information, the bidders typically obtain private signals about an unknown common value of the resource for sale, signals which are correlated with the underlying (unknown) common value, and correlated with one another. For example, prior to auctions of oil drilling licenses, the bidding companies obtain extensive seismic information on the likely quantity of oil hidden in the earth (or sea). In order to prepare profitable bids, the bidders then have to estimate the likely information obtained by rivalling bidders. In particular, the equilibrium bids must incorporate the fact that given a bidder wins the auction, all rivalling bids will have been lower, and thus the (unknown) common value on average will be assessed to be lower than it would have been estimated without having won the auction. In this sense, winning the auction is 'bad news' that must be anticipated and incorporated into the bids, in order to avoid falling prey to a so-called winner's curse.

Source: SFB 504

compact

A set in R^n is compact if it is closed and bounded (the Heine-Borel theorem).

The concept comes up most often in economics in the context of a theory in which a function must be maximized. Continuous functions that are well defined on a compact domain have a maximum and minimum; this is the Weierstrass theorem. Noncontinuous functions, or functions on a noncompact domain, may not.

Source: econterms

comparative advantage

Illustrating the concept of comparative advantage requires at least two goods and at least two places where each good could be produced with scarce resources in each place. The example drawn here is from Ehrenberg and Smith (1997), page 136. Suppose the two goods are food and clothing, and that 'the price of food within the United States is 0.50 units of clothing and the price of clothing is 2 units of food. [Suppose also that] the price of food in China is 1.67 units of clothing and the price of clothing is 0.60 units of food.' Then we can say that 'the United States has a comparative advantage in producing food and China has a comparative advantage in producing clothing.' It follows that in a trading relationship the U.S. should allocate at least some of its scarce resources to producing food and China should allocate at least some of its scarce resources to producing clothing, because this is the most efficient allocation of the scarce resources and allows the prices of food and clothing to be as low as possible.

Famous economist David Ricardo illustrated this in the 1800s using wool in Britain and wine from Portugal as examples. The comparative advantage concept seems to be one of the really challenging, novel, and useful abstractions in economics.

Source: econterms

compensating variation

The price a consumer would need to be paid, or the price the consumer would need to pay, to be just as well off after (a) a change in prices of products the consumer might buy, and (b) time to adapt to that change. It is assumed the consumer does not benefit or lose from producing the product.

Source: econterms

Competitive market equilibrium

Competitive, or Walrasian, market equilibrium is the traditional concept of economic equilibrium, appropriate for the analysis of commodity markets with flexible prices and many traders, and serving as the benchmark of efficiency in economic analysis. It relies crucially on the assumption of a competitive environment where buyers and sellers take the terms of trade (prices) as a given parameter of the exchange environment. Basically, each trader decides upon a quantity that is so small compared to the total quantity traded in the market that their individual transactions have no influence on the prices.

A Walrasian or competitive equilibrium consists of a vector of prices and an allocation such that given the prices, each trader by maximizing his objective function (profit, preferences) subject to his technological possibilities and resource constraints plans to trade into his part of the proposed allocation, and such that the prices make all net trades compatible with one another ('clear the market') by equating aggregate supply and demand for the commodities which are traded.

Although this rather narrow concept of economic equilibrium is inappropriate in many situations, such as oligopolistic market structures, public goods and externalities, collusion, or markets with price rigidities, it highlights the close connection between unregulated free price formation in competitive markets and allocative efficiency. For a broad variety of preferences, technologies, and ownership structures, competitive equilibria maximize social welfare in the sense of maximizing the sum of aggregate consumer and producer surplus (see economic rents). Not only do Walrasian markets provide an exchange institution that leads to efficient outcomes, but any efficient allocation can be reached as a competitive equilibrium by an appropriate redistribution of the traders' initial resources.

In addition, Walrasian markets minimize the informational requirements to complete a transaction: each trader only has to know the characteristics of the object traded, the price, and his own objective function (preferences, technology). However, complete information on prices and on the characteristics of the commodities is necessary to retain the efficiency features of free price formation in competitive markets. If there is asymmetric information on the quality of the commodities, prices only insufficiently signal the relative opportunity costs of economic decisions, and, as a result, allocative decisions will no longer lead to efficient market outcomes. Even worse, the repercussions of adverse quality updating can make markets break down completely, with no voluntary trade taking place at all. (Potential market breakdowns in the presence of commodities of varying quality and asymmetric information have become famous as the lemons problem.)

If markets are 'thin', traders have market power, and the competitive paradigm no longer applies. Instead, prices are explained by matching strategically formed price 'bids' (buying demands) and price 'asks' (selling offers). Accordingly, more general models of competitive markets are described as auctions. However, as the number of bidders grows large, the strategic equilibrium bids from common value auctions approach the competitive price. A similar result holds for competitive markets with perfect information where the traders are free to form coalitions which maximize the joint gains from trade. Then, the coalitionally stable outcomes form a large set, which includes in particular the (efficient) competitive allocation. Again, as the number of traders becomes large, the set of outcomes which is stable under collusive behavior shrinks, and it approaches the (unique) competitive outcome again. Thus, in the limit, both the coalitional and the strategic approach to describing competitive markets collapse into the simple competitive (Walrasian) paradigm. This fact underlines both the benchmark role of perfectly competitive market equilibrium for the allocation of goods and the restrictive nature of the Walrasian concept of competitive markets.

Source: SFB 504

Complementary Goods

See Complements.

Source: EconPort

Complements

Goods that are typically consumed together. Examples include guns and bullets, peanut butter and jelly, washers and dryers. If two goods are complements, then an increase in the price of one good will lead to a decrease in the demand for the other related good (the complement). Similarly, a decrease in the price of one good will lead to an increase in the demand for the complement.

If two goods are complements, the cross-price elasticity of demand is negative.

Source: EconPort
See also: cross-price elasticity of demand.

complete

(economics theory definition) A model's markets are complete if agents can buy insurance contracts to protect them against any future time and state of the world.

(statistics definition) In a context where a distribution is known except for parameter q, a minimal sufficient statistic is complete if there is only one unbiased estimator of q using that statistic.

Source: econterms

complete market

One in which the complete set of possible gambles on future states-of-the-world can be constructed with existing assets.
This is a theoretical ideal against which reality can be found more or less wanting. It is a common assumption in finance or macro models, where the set of states-of-the-world is formally defined.

Source: econterms

Compustat

a data set used in finance

Source: econterms

concavity of distribution function condition

A property of a distribution function-utility function pair. (At least, it MAY require specification of the utility function; this editor can't tell well.) It is assumed to hold in some principal-agent models so as to make certain conclusions possible.

Source: econterms

concentration ratio

A way of measuring the concentration of market share held by particular suppliers in a market. "It is the percentage of total market sales accounted for by a given number of leading firms." Thus a four-firm concentration ratio is the total market share of the four firms with the largest market shares. (Sometimes this particular statistic is called the CR4.)

Source: econterms

condition number

A measure of how close a matrix is to being singular. Relevant in estimation: if the matrix of regressors is nearly singular, the data are nearly collinear, and (a) it will be hard to compute an accurate or precise inverse, and (b) a linear regression will have large standard errors.

The condition number is computed from the characteristic roots or eigenvalues of the matrix. If the largest characteristic root is denoted L and the smallest characteristic root is S (both being presumed to be positive here, that is, the matrix being diagnosed is presumed to be positive definite), then the condition number is:

gamma = (L/S)^0.5

Values larger than 20, according to Greene (1993), indicate that the matrix is 'nearly singular'. Greene cites Belsley et al (1980) for this term and the number 20.
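
For illustration, a minimal numpy sketch (a hypothetical, nearly collinear cross-product matrix):

import numpy as np

X = np.array([[1.00, 0.99],
              [0.99, 1.00]])           # nearly singular: columns almost collinear
eigs = np.linalg.eigvalsh(X)           # eigenvalues of the symmetric matrix
gamma = np.sqrt(eigs.max() / eigs.min())
print(gamma)                           # about 14.1; values above 20 signal near-singularity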

Source: econterms

conditional

has a special use in finance when used without other modifiers; often means 'conditional on time and previous asset returns'. In that context, one might read 'returns are conditionally normally distributed.'

Source: econterms

conditional factor demands

a collection of functions that give the optimal demands for each of several inputs as a function of the output expected, and the prices of inputs. Often the prices are taken as given, and incorporated into the functions, and so they are only functions of the output.

Usual forms:

x1(w1, w2, y) is a conditional factor demand for input 1, given input prices w1 and w2, and output quantity y

Source: econterms

conditional variance

Shorthand often used in finance to mean, roughly, "variance at time t given that many events up through time t-1 are known."

For example, it has been useful in studying aggregate stock prices, which go through periods of high volatility and periods of low volatility, to model them econometrically as having the variance at time t as coming from an AR process. This is the ARCH idea. In such a statistical model, the conditional variance is generally different from the unconditional variance. That is, the unconditional variance is the variance of the whole process, whereas the 'conditional variance' can be better estimated since in this phrasing it is assumed that we can estimate the immediately previous values of variance.

Source: econterms

conformable

A matrix may not have the right dimension or shape to fit into some particular operation with another matrix. Take matrix addition -- the matrices are supposed to have the same dimensions to be summed. If they don't, we can say that they are not conformable for addition. The most common application of the term comes in the context of multiplication. Multiplying an M x N matrix A by an R x S matrix B directly can only be done if N=R. Otherwise the matrices are not conformable for this purpose. If instead M=R, then the intended operation may be to take the transpose of A and multiply it by B. This operation would properly be denoted A'B, where the prime denotes the transpose of A.

Source: econterms

conglomerate

A firm operating in several industries.

Source: econterms

consistent

An estimator for a parameter is consistent iff the estimator converges in probability to the true value of the parameter; that is, the plim of the estimator, as the sample size goes to infinity, is the parameter itself. Another phrasing: an estimator is consistent if it has asymptotic power of one.

"Consistency", without a modifier, is synonymous with weak consistency.

From Davidson and MacKinnon, p. 79: If for any possible value of the parameter q in a region of a parameter space the power of a test goes to one as sample size n goes to infinity, that test is said to be consistent against alternatives in that region of the parameter space. That is, if as the sample size increases we can in the limit reject every false hypothesis about the parameter, the test is consistent.

How does one prove that an estimator is consistent? Here are two ways.
(1) Prove directly that if the model is correct, the estimator has power one in the limit to reject any alternative but the true parameter.
(2) Sufficient conditions for proving that an estimator is consistent are (i) that the estimator is asymptotically unbiased and (ii) that its variance collapses to zero as the sample size goes to infinity. This method of proof is usually easier than (1) and is commonly used.

Source: econterms

constant returns to scale

An attribute of a production function. A production function exhibits constant returns to scale if changing all inputs by a positive proportional factor has the effect of increasing outputs by that factor. This may be true only over some range, in which case one might say that the production function has constant returns over that range.

Source: econterms

Constant Returns To Scale

If a firm exhibits constant returns to scale, when it increases the use of inputs then output increases by the same proportion. For example, if the firm doubles the use of all inputs, then output will also double. With constant returns to scale, long-run average costs are constant.

Source: EconPort

Constant-sum games

Games in which for every combination of strategies the sum of the players' payoffs is the same. For example, auction games for risk-neutral bidders and a risk-neutral seller are constant-sum games, where a fixed social surplus from exchange is to be divided between the bidders and the bid-taker. More generally, all exchange situations which allow neither production nor destruction of resources are constant-sum games.

Source: SFB 504

Construct validity

is a type of validity that refers to the degree to which a test captures the underlying construct purportedly measured by the test.

Source: SFB 504

Consumer demand

In the theory of consumer demand, demand functions are derived for commodities by considering a model of rational choice based on utility maximization together with a description of underlying economic constraints. In the theory of consumer demand, these constraints include income (which is treated as given here while it might be endogenous in a more general model of household decisions), and commodity prices, which are also fixed from the perspective of an individual household.

Source: SFB 504

Consumer Expenditure Survey

Conducted by the U.S. government. See its Web site.

Source: econterms

Consumption

Household behavior can most easily be characterized by the consumption function, in macroeconomics as well as in microeconomics. The consumption function explains how much a household consumes as a function of income (and, in some cases, other explanatory variables). Note that consumption is not only expenditures for goods but also consumption of services, like living in one's own house or using durables.

Keynes (1936) postulates in his General Theory the consumption function as the relationship of consumption to disposable income. In the early 50s the two dominant models of consumption were developed: the permanent income hypothesis and the life-cycle hypothesis. While these models were once viewed as competing, they can now be seen as complementary, with differences in emphasis which serve to illuminate different significant problems. Both models emphasize the distinction between (1) consumer expenditures measured by the national income accounts and (2) consumption which is explained by the optimal allocation of present and future resources over time.

The dependence of consumption on current income is described in the Keynesian consumption function, while the dependence of consumption on lifetime income is described in the life cycle hypothesis and the permanent income hypothesis. The interest rate influences consumption via saving because of the intertemporal substitution from one period to a future period: Income that is not used for consumption purposes can be saved and consumed one period later, earning an interest payment and hence allowing for more consumption in the future. This increase in the absolute amount available for consumption, as reflected in the interest rate, has then to be compared with the individual's rate of time preference (the latter expressing her patience with respect to later consumption, or, more generally, to delayed utility derived from consumption). In the optimum, the interest rate and the rate of time preference have to be equal. This is one of the fundamentals of intertemporal choice (as a special form of rational behavior).

Source: SFB 504

consumption beta

"A security's consumption beta is the slope in the regression of its return on per capita consumption."

Source: econterms

consumption set

The set of affordable consumption bundles. One way to define a consumption set is by a set of prices, one for each possible good, and a budget. Or a consumption set could be defined in a model by some other set of restrictions on the set of possible consumption bundles.
E.g. if consumer i can consume nonnegative quantities of all L goods, it is standard to take i's consumption set to be R_+^L, the set of nonnegative real vectors of dimension L; any bundle x_i that i might consume is a member of this set. Normally, if the agent is endowed with a set of goods, the endowment is in the consumption set.

Source: econterms

contingent valuation

The use of questionnaires about valuation to estimate the willingness of respondents to pay for public projects or programs.

Often the question is framed, "Would you accept a tax of x to pay for the program?" Any such survey must be carefully done, and even so there is dispute about the value of the basic method, as discussed in the Journal of Economic Perspectives issue containing Portney (1994).

Source: econterms

contract curve

Same as Pareto set, with the implication that it is drawn in an Edgeworth box.

Source: econterms

contraction mapping

Given a metric space S with distance measure d(), and a mapping T:S->S of S into itself, T is a contraction mapping if, for some constant b in the interval (0,1), d(Tx,Ty) is less than or equal to b*d(x,y) for all x and y in S.

One often abbreviates the phrase 'contraction mapping' by saying simply that T is a contraction.

The function resulting from applying a contraction can slope the opposite way from the original function, as long as it is less steeply sloped.

A standard way to prove that an operator T is a contraction is to prove that it satisfies Blackwell's conditions.

Source: econterms
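
Editor's sketch (not from the source above), illustrating in Python why contractions matter: iterating a contraction from any starting point converges to its unique fixed point (the Banach fixed-point theorem). The mapping T below is hypothetical, chosen only because |T(x) - T(y)| = 0.5*|x - y|.

def T(x):
    # a contraction on the real line with modulus b = 0.5; fixed point is 2
    return 0.5 * x + 1.0

x = 10.0                  # arbitrary starting point
for _ in range(50):
    x = T(x)
print(x)                  # approximately 2.0, since T(2) = 2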

contractionary fiscal policy

A government policy of reducing spending and raising taxes.
In the language of some first courses in macroeconomics, it shifts the IS curve (investment/saving curve) to the left.

Source: econterms

contractionary monetary policy

A government policy of raising interest rates charged by the central bank.
In the language of some first courses in macroeconomics, it shifts the LM curve (liquidity/money curve) to the left.

Source: econterms

control for

As used in the following way: "The effect of X on Y disappears when we control for Z", the phrase means to regress Y on both X and Z together, and to interpret the direct effect of X as the only effect. Here the effect of Z on Y has been "controlled for". It is implied that X is not causing changes in Z.

Source: econterms

Control group

In an experimental design that contrasts two or more groups, the control group of subjects is not given the treatment whose effect is under investigation.

Source: SFB 504

control variable

A variable in a model controlled by an agent in order to optimize something.

Source: econterms

convergence

Multiple meanings: (1) a mathematical property of a sequence or series that approaches a value;
(2) In macro: "'Catch-up' refers to the long-run process by which productivity laggards close the proportional gaps that separate them from the productivity leader .... 'Convergence,' in our usage, refers to a reduction of a measure of dispersion in the relative productivity levels of the array of countries under examination." Like Barro and Sala-i-Martin (1992)'s 'sigma-convergence', this is a narrowing of the dispersion of country productivity levels over time.

Source: econterms

convergence in quadratic mean

A kind of convergence of random variables. If x_t converges in quadratic mean, it converges in probability, but it does not necessarily converge almost surely.

The following is a best guess, not known to be correct.
Let e_t be a stochastic process and F_t be an information set at time t uncorrelated with e_t:

E[e_t | F_{t-m}] converges in quadratic mean to zero as m goes to infinity if and only if
E[ (E[e_t | F_{t-m}])^2 ] converges to zero as m goes to infinity.

Source: econterms

convolution

The convolution of two functions U(x) and V(x) is the function:
(U*V)(x) = integral from 0 to x of U(t)V(x-t) dt

Source: econterms

Cook's distance

A metric for deciding whether a single data point strongly affects a set of regression estimates. After a regression is run, one can consider, for each data point, how far it is from the means of the independent variables and of the dependent variable. If it is far from the means of the independent variables it may be very influential, and one can check whether the regression results are similar without it.

For observation i, Cook's distance is d_i = (e_i^2 / (p*s^2)) * h_ii / (1 - h_ii)^2, where e_i is the residual, h_ii the leverage (the ith diagonal element of the hat matrix), p the number of estimated coefficients, and s^2 the estimated error variance.

Source: econterms
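
Editor's sketch (not from the source above): a minimal Python computation of Cook's distance from the formula in the entry, on simulated data; all variable names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # constant plus one regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

p = X.shape[1]
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta                          # residuals
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix; h_ii is the leverage
h = np.diag(H)
s2 = e @ e / (n - p)                      # estimated error variance
cooks_d = (e**2 / (p * s2)) * h / (1 - h)**2
print(cooks_d.argmax(), cooks_d.max())    # the most influential observation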

cooperative game

A game structure in which the players have the option of planning as a group in advance of choosing their actions. Contrast noncooperative game.

Source: econterms

Coordination games

A normal-form game in which the players have the same number of strategies, which can be indexed so that it is always a strict Nash equilibrium for both players to play strategies having the same index.

Source: SFB 504

core

Defined in terms of an original allocation of goods among agents with specified utility functions. The core is the set of possible reallocations such that no subset of agents could break off from the others and all do better just by trading among themselves.
Equivalently: The intersection of individually rational allocations with the Pareto efficient allocations. Individually rational, here, means the allocations such that no agent is worse off than with his endowment in the original allocation.

Source: econterms

corner solution

A choice made by an agent that lies at a constraint, not at the tangency of two classical curves on a graph: one characterizing what the agent can obtain, the other characterizing the imaginable choices that would attain the highest reachable value of the agent's objective.

A classic example is the intersection between a consumer's budget line (characterizing the maximum amounts of good X and good Y that the consumer can afford) and the highest feasible indifference curve. If the agent's best available choice is at a constraint -- e.g. among affordable bundles of good X and good Y the agent prefers a quantity of zero of good X -- that choice is often not at a tangency of the indifference curve and the budget line, but at a "corner".

Contrast interior solution.

Source: econterms

correlation

Two random variables are positively correlated if high values of one are likely to be associated with high values of the other. They are negatively correlated if high values of one are likely to be associated with low values of the other.

Formally, a correlation coefficient is defined between the two random variables (x and y, here). Let s_x and s_y denote the standard deviations of x and y, and let s_xy denote the covariance of x and y. The correlation coefficient between x and y, sometimes denoted r_xy, is defined by:

r_xy = s_xy / (s_x * s_y)

Correlation coefficients are between -1 and 1, inclusive, by definition. They are greater than zero for positive correlations and less than zero for negative correlations.

Source: econterms
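
Editor's sketch (not from the source above): the definition checked numerically in Python against numpy's built-in corrcoef.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)       # positively correlated with x

s_xy = np.cov(x, y)[0, 1]                 # sample covariance of x and y
r_xy = s_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(r_xy, np.corrcoef(x, y)[0, 1])      # the two values agree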

cost curve

A graph of total costs of production as a function of total quantity produced.

Source: econterms

cost function

is a function of input prices and output quantity. Its value is the cost of making that output given those input prices. A common form: c(w1, w2, y) is the cost of making output quantity y using inputs that cost w1 and w2 per unit.

Source: econterms

cost-benefit analysis

An approach to public decisionmaking. Quotes below from Sugden and Williams, 1978, p. 236, with some reordering: 'Cost-benefit analysis is a 'scientific' technique, or a way of organizing thought, which is used to compare alternative social states or courses of action.' 'Cost-benefit analysis shows how choices should be made so as to pursue some given objective as efficiently as possible.' 'It has two essential characteristics, consistency and explicitness. Consistency is the principle that decisions between alternatives should be consistent with objectives....Cost-benefit analysis is explicit in that it seeks to show that particular decisions are the logical implications of particular, stated, objectives.'

'The analyst's skill is his ability to use this technique. He is hired to use this skill on behalf of his client, the decision-maker..... [The analyst] has the right to refuse offers of employment that would require him to use his skills in ways that he believes to be wrong. But to accept the role of analyst is to agree to work with the client's objectives.'

p. 241: Two functions of cost-benefit analysis: It 'assists the decision-maker to pursue objectives that are, by virtue of the community's assent to the decision-making process, social objectives. And by making explicit what these objectives are, it makes the decision-maker more accountable to the community.' 'This view of cost-benefit analysis, unlike the narrower value-free interpretation of the decision-making approach, provides a justification for cost-benefit analysis that is independent of the preferences of the analyst's immediate client. An important consequence of this is that the role of the analyst is not completely subservient to that of the decision-maker. Because the analyst has some responsibility of principles over and above those held by the decision-maker, he may have to ask questions that the decision-maker would prefer not to answer, and which expose to debate conflicts of judgement and of interest that might otherwise comfortably have been concealed.'

Source: econterms

cost-of-living index

A cost-of-living price index measures the changing cost of a constant standard of living. The index is a scalar measure for each time period. Usually it is a positive number which rises over time to indicate that there was inflation. Two incomes can be compared across time by seeing whether the incomes changed as much as the index did.

Source: econterms

costate

A costate variable is, in practice, a Lagrangian multiplier, or Hamiltonian multiplier.

Source: econterms

countable additivity property

the third of the properties of a measure.

Source: econterms

coupon strip

A bond can be split into two parts that can be thought of as components: (1) a principal component, the right to receive the principal at the end date, and (2) the right to receive the coupon payments. The components are called strips. The right to receive coupon payments is the coupon strip.

Source: econterms

Cournot duopoly

A pair of firms who split a market, modeled as in the Cournot game.

Source: econterms

Cournot game

A game between two firms. Both produce a certain good, say, widgets. No other firms do. The price they receive is a decreasing function of the total quantity of widgets that the firms produce. That function is known to both firms. Each chooses a quantity to produce without knowing how much the other will produce.

Source: econterms

Cournot model

A generalization of the Cournot game to describe industry structure. Each of N firms chooses a quantity of output. Price is a commonly known decreasing function of total output. All firms know N and take the output of the others as given. Each firm has a cost function c_i(q_i). Usually the cost functions are treated as common knowledge. Often the cost functions are assumed to be the same for all firms.

The prediction of the model is that the firms will choose Nash equilibrium output levels.

Formally, from notes given by Michael Whinston to the Economics D50-1 class at Northwestern U. on Sept 23, 1997:
Denote by x_i a quantity that firm i considers,
X the total quantity (the sum of the x_i's),
x_i* and X* the Nash equilibrium levels of those quantities,
X_{-i} the total quantity chosen by all firms other than firm i,
and p(X) the function mapping total quantity to price in the market.

Each firm i solves:
max over x_i of: p(x_i + X_{-i}) * x_i - c_i(x_i)

The first order conditions are, for i from 1 to N:

p'(x_i* + X_{-i}*) * x_i* + p(X*) - c_i'(x_i*) = 0

Assuming x_i* is greater than 0 for all i, the Nash equilibrium output levels are characterized by the N equations:

p'(X*) * x_i* + p(X*) = c_i'(x_i*) for each i.

Source: econterms
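
Editor's sketch (not from the source above): a worked special case, assuming linear inverse demand p(X) = a - X and a common constant marginal cost c. The first order conditions above then give x_i* = (a - c)/(N + 1) for every firm; the Python below reaches the same point by damped best-response iteration.

a, c, N = 100.0, 10.0, 3

def best_response(X_others):
    # maximizes x * (a - x - X_others) - c * x over x
    return max(0.0, (a - c - X_others) / 2.0)

x = 0.0                                   # initial guess for each firm's output
for _ in range(200):
    # damped update to aid convergence; all N - 1 rivals play the same x
    x = 0.5 * x + 0.5 * best_response((N - 1) * x)

print(x, (a - c) / (N + 1))               # both approximately 22.5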

covariance stationary

A stochastic process is covariance stationary if neither its mean nor its autocovariances depend on the index t.

Source: econterms

Covered short call (written call)

A covered short call consists of holding an underlying asset and simultaneously selling a call option (short call) on that underlying asset. Although the name 'call hedge' exists in the literature, the covered short call is not an actual hedging strategy: losses on the underlying asset are offset only up to the amount of the option price received, and larger losses are merely reduced by that amount. On the other hand, it is not possible to participate in gains of the underlying asset, because in that case the option will be exercised, i.e. the seller has to deliver the underlying asset.

Source: SFB 504

Cowles Commission

An American research group, centered at the University of Chicago in the 1940s and 1950s, whose work in econometrics focused attention on the problem of simultaneous equations. In some tellings of the history this had an impact on the field -- other problems, such as errors-in-variables (measurement errors in the independent variables), were set aside or given lower priority elsewhere too because of the prestige and influence of the Cowles Commission.

Source: econterms

CPI

The Consumer Price Index, which is a measure of the cost of goods purchased by the average U.S. household. It is calculated by the U.S. government's Bureau of Labor Statistics.


As a pure measure of inflation, the CPI has some flaws:
1) new product bias (new products are not counted for a while after they appear)
2) discount store bias (consumers who care won't pay full price)
3) substitution bias (variations in price can cause consumers to respond by substituting on the spot, but the basic measure holds their consumption of various goods constant)
4) quality bias (product improvements are under-counted)
5) formula bias (overweighting of sale items in sample rotation)

Source: econterms

CPS

The Current Population Survey (of the U.S.) is compiled by the U.S. Bureau of the Census, which is part of the Department of Commerce. The CPS is the source of official government statistics on employment and unemployment in the U.S. Each month 56,500-59,500 households are interviewed about their average weekly earnings and average hours worked. The households are selected by area to represent the states and the nation. "Each household is interviewed once a month for four consecutive months in one year and again for the corresponding time period a year later" to make month-to-month and year-to-year comparisons possible. The March CPS is special; for one thing, respondents are asked about insurance then.

Source: econterms

Cramer-Rao lower bound

Whenever the Fisher information I(b) is a well-defined matrix or number, the variance of an unbiased estimator B for b is at least as large as [I(b)]^(-1).

Source: econterms
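
Editor's illustration (not from the source above): for an i.i.d. sample of size n from a normal distribution with unknown mean m and known variance s^2, the Fisher information for m is I(m) = n/s^2, so no unbiased estimator of m can have variance below s^2/n. The sample mean has exactly this variance, and is therefore efficient.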

criterion function

Synonym for loss function. Used in reference to econometrics.

Source: econterms

critical region

synonym for rejection region

Source: econterms

Cronbach's alpha

A test of a model's or survey's internal consistency, sometimes called a 'scale reliability coefficient'. The remainder of this definition is partial and unconfirmed.

Cronbach's alpha assesses the reliability of a rating that summarizes a group of test or survey answers measuring some underlying factor (e.g., some attribute of the test-taker). A score is computed from each test item, and the overall rating, called a 'scale', is defined by the sum of these scores over all the test items. Reliability alpha is then defined to be the square of the correlation between the measured scale and the underlying factor the scale was supposed to measure. (This implies that one has another measure of that underlying factor in test cases, or that it is imputed from the test results.) (In Stata's examples it remains unclear what the scale is and how it is measured; apparently alpha can be generated without having a measure of the underlying factor.)

Source: econterms
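
Editor's note, supplementing the admittedly unconfirmed definition above: the standard computational formula for Cronbach's alpha over k items, with item-score variances s_i^2 and total-score variance s_X^2, is

alpha = (k/(k-1)) * (1 - (sum over i of s_i^2) / s_X^2)

so alpha rises when the items covary strongly, i.e. when the total-score variance is large relative to the sum of the individual item variances.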

Cross-Price Elasticity of Demand

The cross-price elasticity of demand measures the sensitivity of the demand for one good to a change in the price of another good. It is calculated as:
(Percentage Change in Demand for Good X)/(Percentage Change in Price of Good Y)

If two goods are independent (that is, the price of one good has no effect on the demand for the other good), the cross-price elasticity of demand is zero.

If two goods are complements (they are typically consumed together), an increase in the price of one good will decrease the demand for the other good (and the reverse); therefore for complements the cross-price elasticity of demand is negative.

On the other hand, if two goods are substitutes, an increase in the price of one good will increase the demand for the other good (and the reverse); therefore for substitutes, the cross-price elasticity of demand is positive.

Source: EconPort
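
Editor's worked example (not from the source above): if a 10% increase in the price of hot dogs reduces the quantity of hot dog buns demanded by 5%, the cross-price elasticity is (-5%)/(10%) = -0.5, marking the two goods as complements; if instead the demand for a substitute such as hamburgers rose by 5%, the elasticity would be (+5%)/(10%) = +0.5.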

cross-section data

Parallel data on many units, such as individuals, households, firms, or governments. Contrast panel data or time series data.

Source: econterms

cross-validation

A way of choosing the window width for a kernel estimation. The method is to select, from a set of possible window widths, one that minimizes the sum of errors made in predicting each data point by using kernel regression on the others.

Formally, let J be the number of data points, j an index running from one to J, Y_j the dependent variable and X_j the independent variables for data point j, and {h_i} for i=1 to n the set of candidate window widths. The h_i's might be a set of equally spaced values on a grid. The algorithm for choosing one of the h_i's is:

For each candidate window width h_i
{
..For each j from 1 to J
..{
....Drop the data point (X_j, Y_j) from the sample temporarily
....Run a kernel regression to estimate Y_j using the remaining X's and Y's
....Keep track of the square of the error made in that prediction
..}
..Sum the squares of the errors over every j to get a score for candidate window width h_i
..Record that in a list as the score for h_i
}
Select as the outcome h of this algorithm the h_i with the lowest score

The grid approach is necessary because the problem is not concave; otherwise one might try a simpler maximization, e.g. with the first order conditions.
Note however that a complete execution of the cross-validation method can be very slow, because it requires as many kernel regressions as there are data points. E.g., in this author's experience, the cross-validation computation for one window width on 500 data points on a Pentium-90 in Gauss took about five seconds, 1000 data points took circa seventeen seconds, but 15000 data points took an hour. (Then it takes another hour to check another window width; so even the very simplest choice, between two window widths, takes two hours.)

Source: econterms
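
Editor's sketch (not from the source above): the algorithm above written out in Python for a Nadaraya-Watson kernel regression with a Gaussian kernel, on simulated data; the grid of candidate widths is illustrative.

import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, size=200))
Y = np.sin(X) + rng.normal(scale=0.3, size=200)

def kernel_predict(x0, Xs, Ys, h):
    # kernel-weighted average of the Ys near x0 (Gaussian kernel, width h)
    w = np.exp(-0.5 * ((Xs - x0) / h) ** 2)
    return (w @ Ys) / w.sum()

def cv_score(h):
    # sum of squared leave-one-out prediction errors for window width h
    total = 0.0
    for j in range(len(X)):
        keep = np.arange(len(X)) != j          # drop point j temporarily
        pred = kernel_predict(X[j], X[keep], Y[keep], h)
        total += (Y[j] - pred) ** 2
    return total

grid = [0.05, 0.1, 0.2, 0.4, 0.8, 1.6]          # candidate window widths
print(min(grid, key=cv_score))                  # the width with the lowest score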

CRRA

Stands for Constant Relative Risk Aversion, a property of some utility functions, also said to have isoelastic form. CRRA is a synonym for CES.

Example 1: for any real a<1, u(c) = c^a / a is a CRRA utility function. It is a vNM utility function.

Source: econterms

CRS

Stands for Constant Returns to Scale.

Source: econterms

CRSP

Center for Research in Security Prices, a standard database of finance information at the University of Chicago. Has daily returns on NYSE, AMEX, and NASDAQ stocks.

Started in the early 1970s by Eugene Fama among others. The data there was so much more convenient than the alternatives that it drove the study of security prices for decades afterward. It did not have volume data, which meant that volume/volatility tests were rarely done.

Source: econterms

cubic spline

A particular nonparametric estimator of a function. Given a data set {X_i, Y_i}, it estimates values of Y for X's other than those in the sample. The process is to construct a function that balances the twin needs of (1) proximity to the actual sample points and (2) smoothness, so a 'roughness penalty' is defined. See Hardle's equation 3.4.1, near p. 56, for the exact equation. The cubic spline seems to be the most common kind of spline smoother.

Source: econterms
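
Editor's note, paraphrasing the standard form of the criterion referenced above (this is the usual textbook statement, not a quotation from Hardle): the cubic smoothing spline is the function m minimizing

sum over i of (Y_i - m(X_i))^2 + L * integral of (m''(x))^2 dx

where the first term rewards proximity to the sample points, the integral is the roughness penalty, and the smoothing parameter L controls the trade-off between the two.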

current account balance

The difference between a country's savings and its investment. "[If] positive, it measures the portion of a country's saving invested abroad; if negative, the portion of domestic investment financed by foreigners' savings."

Defined as the sum of the value of exports of goods and services plus net returns on investments abroad, minus the value of imports of goods and services, where all these elements are measured in the domestic currency.

Source: econterms

D

DARA

decreasing absolute risk aversion

Source: econterms

DataDesk

Data analysis software, discussed at http://www.datadesk.com.

Source: econterms

Deadweight Loss

Deadweight loss is the loss of consumer and producer surplus that is caused by inefficiency in a market. It may be caused by taxes, monopoly pricing, or other factors that cause inefficiency.

Source: EconPort

Debiasing strategies

All kinds of strategies to test the robustness of an observed bias by attempting to eliminate it under controlled conditions (destructive testing).

Source: SFB 504

decision rule

Either (1) a function that maps from the current state to the agent's decision or choice or (2) a mapping from the expressed preferences of each of a group of agents to a group decision. The first is more relevant to decision theory and dynamic optimization; the second is relevant to game theory.

The phrase allocation rule is sometimes used to mean the same thing as decision rule. The term strategy-proof has been defined in both contexts.

Source: econterms

Decision strategies

Decision strategies specify the type of information processed, and the order in which it is processed, to determine a choice. Examples of decision strategies include elimination by aspects, the lexicographic strategy, the equal weight strategy, and the satisficing strategy (see the entry on elimination by aspects).

Source: SFB 504

decomposition theorem

Synonym for FWL theorem or Frisch-Waugh-Lovell theorem.

Source: econterms

Decreasing Returns to Scale

If a firm exhibits decreasing returns to scale, then when it increases its use of all inputs, output increases by a smaller proportion. For example, if the firm doubles the use of all inputs, then output will increase by less than double. With decreasing returns to scale, long-run average costs increase as output increases.

Source: EconPort

deductive

Characterizing a process of logical reasoning from stated propositions. Contrast inductive.

Source: econterms

deep

A capital market may be said to be deep if it has great depth (which see).

May less formally be used to describe a market with large total market capitalization.

Source: econterms

delta

As used with respect to options: The rate of change of a financial derivative's price with respect to changes in the price of the underlying asset. Formally this is a partial derivative.

A derivative is perfectly delta-hedged if it is in a portfolio with a delta of zero. Financial firms make some effort to construct delta-hedged portfolios.

Source: econterms
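
Editor's sketch (not from the source above): the delta of a European call option under the Black-Scholes model, one standard setting in which this partial derivative has a closed form, namely delta = N(d1).

from math import log, sqrt
from statistics import NormalDist

def call_delta(S, K, r, sigma, T):
    # Black-Scholes delta of a European call: N(d1), with N the standard
    # normal cdf; S = underlying price, K = strike, r = risk-free rate,
    # sigma = volatility, T = time to expiry in years
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return NormalDist().cdf(d1)

print(call_delta(S=100, K=100, r=0.05, sigma=0.2, T=1.0))   # about 0.64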

delta method

Gives the distribution of a function of random variables for which one has a distribution. In particular, for the function g(b,l), where b and l are estimators for true values b_0 and l_0:

g(b,l) ~ N( g(b_0,l_0), g'(b,l) var(b,l) g'(b,l)' )

Source: econterms

demand

A relation between each possible price and the quantity demanded at that price.

[Aspects of the population doing the demanding are often left implicit. An actual supply is not necessary to conceive of demand because demand involves hypothetical quantities.]

Source: econterms

demand curve

For a given good, the demand curve is a relation between each possible price of the good and the quantity that would be bought at market sale at that price.

Drawn in introductory classes with this arrangement of the axes, although price is thought of as the independent variable:

Price   |  \
        |    \
        |      \
        |        \ Demand
        |________________________
                        Quantity

Source: econterms

demand deposits

The money stored in the form of checking accounts at banks.

Source: econterms

demand set

In a model, the set of the most-preferred bundles of goods an agent can afford. This set is a function of the preference relation for this agent, the prices of goods, and the agent's endowment.

Assuming the agent cannot have a negative quantity of any good, the demand set can be characterized this way:
Define L as the number of goods the agent might receive an allocation of. An allocation to the agent is an element of the space R_+^L; that is, the space of nonnegative real vectors of dimension L.
Define ≽ as a weak preference relation over goods; that is, x ≽ x' states that the allocation vector x is weakly preferred to x'.
Let e be a vector representing the quantities of the agent's endowment of each possible good, and p be a vector of prices for those goods. Let D(≽, p, e) denote the demand set. Then:
D(≽, p, e) = {x : px <= pe, and x ≽ x' for all affordable bundles x'}.

Source: econterms

democracy

Literally "rule by the people". This is a dictionary definition and is not considered sharp enough for academic use. Schumpeter (1942) contrasts these two definitions below and regards only the second one as useful and plausible enough to work with: "The eighteenth-century philosophy of democracy may be couched in the following definition: the democratic method is that institutional arrangement for arriving at political decisions which realizes the common good by making the people itself decide issues through the election of individuals who are to assemble in order to carry out its will." (p 250) This "classical" definition has the problem that the will of the people is not clearly defined here (e.g. consider voting paradoxes) or known (perhaps even to the people at the time), and this can lead to ambiguity about whether a given political system is democratic. The following definition is preferred for its clarity but has a modern feel that is at some distance from the original dictionary definition. Political representation is assumed to be necessary here. "[T]he democratic method is that institutional arrangement for arriving at political decisions in which individuals acquire the power to decide by means of a competitive struggle for the people's vote." (p 269) More clearly: the democratic method is one in which people campaign competitively for the people's votes to achieve the power to make public decisions. This definition is the sharpest.

Source: econterms

demography

The study of the size, growth, and age and geographical distribution of human populations, and births, deaths, marriages, and migrations.

Source: econterms

density function

A synonym for pdf.

Source: econterms

depreciation

The decline in price of an asset over time attributable to deterioration, obsolescence, and impending retirement. Applies particularly to physical assets like equipment and structures.

Source: econterms

depth

An attribute of a market.

In securities markets, depth is measured by "the size of an order flow innovation required to change prices a given amount." (Kyle, 1985, p 1316).

Source: econterms

derivatives

Securities whose value is derived from some other time-varying quantity. Usually that other quantity is the price of some other asset, such as bonds, stocks, currencies, or commodities. It could also be an index or the temperature. Derivatives were created to support a market for insurance against fluctuations.

Source: econterms

deterioration

The process or occurrence of an asset's declining productivity as it ages. This is a component of depreciation.

Source: econterms

determinant

An operator defined on square matrices or the value of that operator. For a matrix B the determinant is denoted |B|. Its value is a unique scalar. Calculation of the value of the determinant is discussed in linear algebra books.

Source: econterms

deterministic

Not random. A deterministic function or variable is usually one that is not random, in the context of the other variables available.

That is, those other variables determine the variable in question unerringly: a deterministic function would give the same value every time the same arguments were given to it, unlike a random one, which would give different answers with some probability.

Source: econterms

development

The study of industrialization.

Source: econterms

Dickey-Fuller test

A Dickey-Fuller test is an econometric test for whether a certain kind of time series data has an autoregressive unit root. Consider the time series model y_t = b*y_{t-1} + e_t, where t is an integer greater than zero indexing time, and the null hypothesis is b=1. Let b_OLS denote the OLS estimate of b from a particular sample, and let T be the sample size.

Then the test statistic T*(b_OLS - 1) has a known, documented distribution. Its value in a particular sample can be compared to that distribution to determine a probability that the original sample came from a unit root autoregressive process, that is, one in which b=1.

Source: econterms
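
Editor's sketch (not from the source above), assuming the Python package statsmodels is available; its adfuller function implements an augmented version of this test. A simulated random walk (b = 1) should fail to reject the unit-root null.

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(size=500))       # random walk: y_t = y_{t-1} + e_t

stat, pvalue = adfuller(y)[:2]            # first two entries of the result
print(stat, pvalue)                       # typically a large p-value: b = 1 not rejected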

dictator game

A formal game with two players: Allocator A and Recipient R. They have received a windfall of, say, $1. The allocator, moving first, proposes a split so that A would receive x and R would receive 1-x. The recipient then accepts, no matter what A proposed. In a subgame perfect equilibrium, A would offer R nothing. In experiments with human subjects, however, in which A and R do not know one another, A offers relatively large shares to R (often 50-50). See also Ultimatum Game.

Source: econterms

diffuse prior

In Bayesian statistics the investigator has to specify a prior distribution for a parameter, before the experiment or regression that is to update that distribution. A diffuse prior is a distribution of the parameter with equal probability for each possible value, coming as close as possible to representing the notion that the analyst hasn't a clue about the value of the parameter being estimated.

Source: econterms

discount factor

In a multi-period model, agents may have different utility functions for consumption (or other experiences) in different time periods. Usually in such models they value future experiences, but to a lesser degree than present ones. For simplicity the factor by which they discount next period's utility may be a constant between zero and one, and if so it is called a discount factor. One might interpret the discount factor not as a reduction in the appreciation of future events but as a subjective probability that the agent will die before the next period, and so discounts the future experiences not because they aren't valued, but because they may not occur.

A present-oriented agent discounts the future heavily and so has a LOW discount factor. Contrast discount rate and future-oriented.
In a discrete time model where agents discount the future by a factor of b, one usually lets b=1/(1+r), where r is the discount rate.

Source: econterms

discount rate

At least two meanings:

(1) The interest rate at which an agent discounts future events in preferences in a multi-period model. Often denoted r. A present-oriented agent discounts the future heavily and so has a HIGH discount rate. Contrast 'discount factor'. See also 'future-oriented'.
In a discrete time model where agents discount the future by a factor of b, one finds r=(1-b)/b, following from b=1/(1+r).

(2) The Discount Rate is the name of the rate at which U.S. banks can borrow from the U.S. Federal Reserve.

Source: econterms

discrete choice linear model

An econometric model: Pr(y_i = 1) = F(X_i'b) = X_i'b

Source: econterms

discrete choice model

An econometric model in which the actors are presumed to have made a choice from a discrete set. Their decision is modeled as endogenous. Often the choice is denoted y_i.

Source: econterms

discrete regression models

Econometric models in which the dependent variable assumes discrete values.

Source: econterms

Diseconomies of Scale

See decreasing returns to scale

Source: EconPort

diseconomies of scale

Like economies of scale but with the implication that they are negative, so larger scale would increase cost per unit.

Source: econterms

disintermediation

The prevention of banks from channeling money from savers to borrowers, as an effect of regulations; e.g. the U.S. home mortgage market is partly blocked from banks and left to savings and loan institutions.

Source: econterms

dismal science

Refers to economics, which, because it is so often about tradeoffs, is widely thought to be depressing to study.

Source: econterms

distribution function

A synonym for cdf.

Source: econterms

Divisia index

A continuous-time index number. "The Divisia index is a weighted sum of growth rates, where the weights are the components' shares in total value." -- Hulten (1973, p. 1017)

See also http://www.geocities.com/jeab_cu/paper2/paper2.htm.

Source: econterms

DOJ

Abbreviation for the U.S. national Department of Justice, which among other things investigates violations of antitrust law. See also FTC.

Source: econterms

Domar aggregation

This seems to be the principle that the growth rate of an aggregate is the weighted average of the growth rates of its components, where each component is weighted by the share of the aggregate it makes up. The idea comes up in the context of national accounts and national statistics.

Source: econterms

dominant design

After a technological innovation and a subsequent era of ferment in an industry, a basic architecture of product or process that becomes the accepted market standard. From Abernathy & Utterback (1978), cited by A&T (1991). Dominant designs may be neither better than the alternatives nor innovative. They have the benchmark features to which subsequent designs are compared. Examples include the IBM 360 computer series, Ford's Model T automobile, and the IBM PC.

Source: econterms

Dominant strategy

In some games, a player can choose a strategy that "dominates" all other strategies in his strategy set: Regardless of what he expects his opponents to do, this strategy always yields a better payoff than any other of his strategies. An example of a game where each player has a dominant strategy is a second-price auction with independent valuations of the bidders: Here bidding one's true valuation is always a best response, regardless of one's opponents' bids.

Source: SFB 504

Donsker's theorem

Synonymous with Functional Central Limit Theorem (FCLT).

Source: econterms

double coincidence of wants

phrasing from Jevons (1893). "[T]he first difficulty in barter is to find two persons whose disposable possessions mutually suit each other's wants. There may be many people wanting, and many possessing those things wanted; but to allow of an act of barter there must be a double coincidence, which will rarely happen." That is, paraphrasing Ostroy and Starr, 1990, p 26, the double coincidence is the situation where the supplier of good A wants good B and the supplier of good B wants good A.
The point is that the institution of money gives us a more flexible approach to trade than barter, which has the double coincidence of wants problem.

Source: econterms

dummy variable

In an econometric model, a variable that marks or encodes a particular attribute. A dummy variable has the value zero or one for each observation, e.g. 1 for male and 0 for female. Same as indicator variables or binary variables.

Source: econterms

dumping

An informal name for the practice of selling a product in a foreign country for less than either (a) the price in the domestic country, or (b) the cost of making the product. It is illegal in some countries to dump certain products into them, because they want to protect their own industries from such competition.

Source: econterms

Durbin's h test

An algorithm for detecting autocorrelation in the errors of a time series regression. The implicit citation is to Durbin (1970). The h statistic is asymptotically normally distributed under the hypothesis that there is no autocorrelation.

Source: econterms

Durbin-Watson statistic

A test for first-order serial correlation in the residuals of a time series regression. A value of 2.0 for the statistic indicates that there is no serial correlation. For tables to interpret the statistic, see Greene, pp. 738-743; the context discussing them is on pages 424-425.
This test is biased toward the finding that there is no serial correlation if lagged values of the regressors are in the regression. Formally, the statistic is:

d = [sum from t=2 to T of (e_t - e_{t-1})^2] / [sum from t=1 to T of e_t^2]

where the e_t are the residuals from a regression.

Source: econterms
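
Editor's sketch (not from the source above): the statistic computed directly from a vector of residuals in Python, following the formula above.

import numpy as np

def durbin_watson(e):
    # d = sum of squared first differences of the residuals, over their sum of squares
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(4)
white = rng.normal(size=1000)             # serially uncorrelated residuals
print(durbin_watson(white))               # close to 2.0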

Dutch auction

A sequential bidding game in which the standing price is gradually lowered, typically by means of an exogenous counting device (a clock, or a pointer), until it is stopped by a bidder. The first bidder to halt the clock wins the item and pays the price at which he stopped it. Dutch auctions are strategically equivalent to first-price sealed-bid auctions. The name derives from the fact that many agricultural products worldwide, and in particular Dutch flowers, are sold in this way.

Source: SFB 504

dyadic map

synonym for dyadic transformation.

Source: econterms

dyadic transformation

For whole numbers t and an initial value x_0 in [0,1], consider the mapping:

x_{t+1} = (2 * x_t) mod 1

"This law of motion is a standard example of chaotic dynamics. It is commonly known as the dyadic transformation. It is mixing (and hence also ergodic)."
-- Domowitz and Muus, 1992, p 2849

All the x_t's will be in [0,1]. Their distribution will depend on the initial value x_0. If x_0 is rational, the sequence will eventually become periodic (for large enough values of t). If x_0 is irrational, the sequence is never periodic.

Source: econterms
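
Editor's sketch (not from the source above): iterating the map in Python with exact rational arithmetic, which exhibits the periodicity for a rational x_0. (Floating point is unsuitable here: repeatedly doubling a binary fraction collapses any float to 0 within about 53 steps.)

from fractions import Fraction

x = Fraction(1, 3)                        # a rational starting value
seq = []
for _ in range(6):
    seq.append(x)
    x = (2 * x) % 1                       # the dyadic transformation

print(seq)                                # 1/3, 2/3, 1/3, 2/3, ...: period 2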

dynamic

means 'changing over time'.

Source: econterms

dynamic inconsistency

A possible attribute of a player's strategy in a dynamic decision-making environment (such as a game).
When the best plan that a player can make for some future period will not be optimal when that future period arrives, the plan is dynamically inconsistent.
In one stylized example, addicted smokers face this problem -- each day, their best plan is to smoke today and to quit (and suffer) tomorrow in order to get health benefits subsequently. But the next day, that is once again the best plan, so they do not quit then either. (In a model this can come about if the planner values the present much more than the near future -- that is, has a low short-run discount factor -- but has a higher discount factor between two future periods.)
Monetary policy is sometimes said to suffer from a dynamic inconsistency problem. Government policymakers are best off to promise that there will be no inflation tomorrow. But once agents and firms in the economy have fixed nominal contracts, the government would get seigniorage revenues from raising the level of inflation.

Source: econterms

dynamic multipliers

The impulse responses in a distributed lag model.

Source: econterms

dynamic optimizations

maximization problems to which the solution is a function; equivalently, optimization problems in infinite-dimensional spaces.

Source: econterms

dynamic programming

The study of dynamic optimization problems through the analysis of functional equations like value equations.

This phrase is normally used, analogously to 'linear programming', to describe the study of discrete problems, e.g. those for which a decision must be made at times t = 1, 2, 3, ...

Source: econterms
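
Editor's sketch (not from the source above): a minimal discrete dynamic program solved by backward induction on its value equation, for a hypothetical 'cake-eating' problem.

from math import log

K, T, beta = 20, 4, 0.95                  # cake size, horizon, discount factor

# V[t][k] = best attainable value from period t on, with k units of cake left
V = [[0.0] * (K + 1) for _ in range(T + 1)]
for t in range(T - 1, -1, -1):            # work backward from the last period
    for k in range(1, K + 1):
        # choose today's consumption c from 1..k, leaving k - c for tomorrow
        V[t][k] = max(log(c) + beta * V[t + 1][k - c] for c in range(1, k + 1))

print(V[0][K])                            # value of the whole problem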

dynamical systems

The branch of mathematics describing processes in motion. Some are predictable and others are not. Two reasons a process might be unpredictable are that it might be random, and it might be chaotic.

Source: econterms

E

EBIT

Stands for "earnings before interest and taxes" which is used as a measure of earnings performance of firms that is not clouded by changes in debt or equity types, or tax rules.

Source: econterms

EconLit

An electronic bibliography of economics literature organized by the American Economics Association, derived partly from the Journal of Economic Literature. EconLit is made available through libraries and universities. See http://www.econlit.org for more information.

Source: econterms

econometric model

An economic model formulated so that its parameters can be estimated if one makes the assumption that the model is correct.

Source: econterms

Econometrica

A journal whose web site is at http://www.econometricsociety.org/es/journal.html .

Source: econterms

Econometrics

Econometrics is the field of economics that is concerned with the application of mathematical statistics and the tools of statistical inference to the empirical measurement of relationships postulated by economic theory. That is, econometrics (hopefully) uses some clever combination of economic theory and mathematical statistics. Typically, application of econometric methods involves the following elements:


formulating an economic model appropriate to the questions to be answered;
reviewing the available statistical models and the assumptions underlying these models, and selecting the form most suitable for the problem at hand;
obtaining appropriate data, properly defined and matching the concepts of the economic model;
finding suitable computer software to enable the calculations necessary for estimating and testing the econometric model.

The ultimate goal of an econometric exercise is to see whether an economic model is consistent with empirical (observed) behavior as reflected in the data. Note that econometrics is mostly based on large samples, i.e., on observing economic relationships over a long period of time or for a large number of individuals at the same time (or both, as in the case of longitudinal or panel data). Note also that econometricians usually have to use data that were not created in a controlled experiment (unlike in the natural and some other social sciences). An important aspect of applied work is therefore to assess whether the sample used for estimation is actually a random sample drawn from the population for which the underlying model is supposed to be appropriate -- in other words, whether the relationship of interest is empirically identified. For example, this might not be the case if there are selection problems.

Source: SFB 504

Economic decision rule

A rule in economics asserting that if the marginal benefit of an action is higher than the marginal cost, then one should undertake the action; however if the marginal cost is higher than the marginal benefit of the action, one should not undertake it.

Source: EconPort

economic discrimination

in labor markets: the presence of different pay for workers of the same ability but who are in different groups, e.g. black, white; male, female.

Source: econterms

economic environment

In a model, a specification of preferences, technology, and the stochastic processes underlying the forcing variables.

Source: econterms

economic growth

Paraphrasing directly from Mokyr, 1990: Economic growth has four basic causes:
1) Investment, meaning increases in the capital stock (Solovian growth)
2) Increases in trade (Smithian growth)
3) Size or scale effects, e.g. by overcoming fixed costs, or achieving specialization
4) Increases in knowledge, most of which is called technological progress (Schumpeterian growth).

Further elaboration is in Mokyr's book.

Source: econterms

Economic profit

Profit that takes into account both explicit and implicit costs of production. It is calculated as total revenue minus both explicit and implicit costs.

Source: EconPort

Economic Rent

In the equilibrium of a market or game, the traders (or players) participate voluntarily because their payoff exceeds what they would get by abstaining from the trade (or from playing the game): in equilibrium, the participants earn profits. Part of the equilibrium profits is explained by ordinary trading or exchange activity; part of it accrues to a trader (player) from owning a fixed idiosyncratic resource which is not consumed in the transaction (interaction). The latter part is called a player's (trader's) rent. An economic rent is thus the 'wage' for some fixed resource which is necessary for and valuable in a transaction but in the monopolistic possession of some trader.

Apart from the costs of not using his outside options (i.e. turning to another partner for exchange), it is the rent on such idiosyncratic factors which must be conceded to a player in order to ensure his participation in the exchange process. For example, traders draw a rent from making accessible a fixed resource like 'land' (which is where the term comes from), from uniquely owning a patent or license protecting a technological achievement or professional activity, or from uniquely owning private information about a fact that influences all players' payoffs.

In analogy to the rents accruing to a trader from possessing idiosyncratic property rights or 'tangible' assets, the equilibrium profits accruing to a player solely from possessing payoff-relevant information are called his information rent. The familiar consumer's surplus from microeconomics is a simple example. The consumer is left a surplus from being able to buy all the quantities consumed at the price of the last consumed unit, instead of having to pay higher prices for earlier units consumed. This 'surplus' corresponds to her information rent for knowing her entire schedule of marginal willingnesses-to-pay for different quantities of the object. If the seller instead knew this schedule of marginal valuations, he could squeeze all of the profits out of the customer by selling each unit at a different price, just demanding the consumer's marginal valuation for each unit. The fact that the consumer's information is private thus guarantees her a consumer's rent.

If the seller faces a single customer whom she knows very well, having observed his choices at varying prices for a long time, she could devise a schedule of quantity discounts that extracts nearly all of the consumer's rent. This changes when the seller faces a set of competing consumers, each of whom has private information on his marginal willingness-to-pay for different quantities. The schedule of discounts must now prevent lower-valued consumers from copying the quantity demanded by higher-valued consumers (and thus getting much for a small payment). Typically, this is achieved by selling each additional unit of quantity at a slightly lower price than the previous one, thus inducing customers with higher valuations to choose quantities so large that low-valuation customers will find it too costly to mimic choosing a large quantity. (See the entry on incentive compatibility.) In this way, however, the seller increases the equilibrium profits of high-valuation customers disproportionately (relative to those of low-valuation customers), i.e. she pays larger information rents to customers with higher marginal valuations (types).

The upshot of all this is that in a game where the seller designs an incentive-compatible price schedule, so that the players implicitly 'sell' their private information by revealing it through their equilibrium choices, the players do not lose their information rents. Instead, the very possibility that lower-typed customers can mimic the choices of higher-typed customers forces the seller to leave higher information rents to higher types, which own 'more valuable' private information. In this sense, paying information rents to economic agents is intimately related to providing incentives for the revelation of private information in strategic contexts.

Source: SFB 504

economic sociology

Piore (1996) writes of two definitions of economics, a narrow one organized around optimization and a broad one organized around scarcity, and suggests that the subjects included by the larger one but not in the smaller one are the subjects of economic sociology discussed in the Handbook (1994).

More specifically, the broad definition of economics is "the study of how people employ scarce resources and distribute them over time and among competing demands" paraphrasing Paul Samuelson (1961). The narrower definition is from Gary Becker (1976): "The combined assumptions of maximizing behavior, market equilibrium, and stable preferences, used relentlessly and unflinchingly . . . [B]ehavior [of] participants who maximize their utility from a stable set of preferences and accumulate an optimal amount of information and other inputs in a variety of markets."

A bit more specifically -- optimization and formal equilibrium are not natural subjects or methods of economic sociology, but the general subjects of economics are. Economic sociology is more likely than economics to use groups or organizations rather than individuals as units of analysis. The practical definition seems to be evolving over time.

Source: econterms

Economics

The study of the allocation of scarce (limited) resources.

Source: EconPort

economies of scale

Usually one says there are economies of scale in production if cost per unit made declines with the number of units produced. It is a descriptive, quantitative term. One measure of the economies of scale is the cost per unit made. There can be analogous economies of scale in the marketing or distribution of a product or service too. The term may apply only to certain ranges of output quantity.

Source: econterms

Economies of Scale

See increasing returns to scale

Source: EconPort

ECU

European Currency Unit

Source: econterms

Editor's comment on time series

A frequent and dangerous mistake for those not familiar with this language is to think that discussions of 'time series' are about data values in a sample. Actually, they are about probability distributions. It has taken this author years to get used to that, which may just be normal.

An example of the error is to think that a discussion about E[X_t] is testable or measurable. Usually it's not; it's assumed in the discussion. A sample has a computable mean, but whether a time series has a trend, or a unit root, or heteroskedasticity are statements about a conjectured process, not statements about data.

Source: econterms

education production function

Usually a function mapping quantities of measured inputs to a school and student characteristics to some measure of school output, like the test scores of students from the school.

For empirical purposes one might assume this function is linear and generate the linear regression:

Y = X'b + S'c + e

where Y is a measure of school outputs like a vector of student test scores, X is a set of measures of student attributes (collectively or individually), S is a vector of measures of the schools those students attend, b and c are coefficients, and e is a disturbance term.

Source: econterms

EEH

An abbreviation for the journal Explorations in Economic History.

Source: econterms

EER

An abbreviation for European Economic Review.

Source: econterms

effective labor

In the context of a Solow model, if labor time is denoted L and labor's effectiveness, or knowledge, is A, then by effective labor we mean AL. In general means 'efficiency units' of labor or 'productive effort' as opposed to time spent.

Source: econterms

efficiency

Has several meanings. Sometimes used in a theoretical context as a synonym for Pareto efficiency. Below is the econometric/statistical definition. Efficiency is a criterion by which to compare unbiased estimators. For scalar parameters, one estimator is said to be more efficient than another if the first has smaller variance. For multivariate estimators, one estimator is said to be more efficient than another if the covariance matrix of the second minus the covariance matrix of the first is a positive semidefinite matrix. Sometimes properties of the most efficient estimator can be computed; see efficiency bound.

Computation of efficiency is based on assumed distributions of errors ('disturbance terms'). It is not calculated directly from sample information unless the sample information comes from a simulation where the actual error distribution was known.

Source: econterms

Efficiency

Analysis of efficiency in the context of resource allocation has always been a central concern of economics, and it is an essential element of modern microeconomic theory. The end of economic activity is the satisfaction of human needs, given resource constraints, preferences, and technological constraints. In this broad sense, an efficient use of scarce resources within a given technological environment is one that maximizes the satisfaction of aggregate needs for a given set of preferences. In a narrower sense, efficiency is a commonly agreed upon criterion for comparing the economic desirability of different allocations, or states of the economy, and of different allocation mechanisms or institutions. The incomparability of economic preferences gives rise to a criterion that is independent of the distributional characteristics of the allocations (or institutions) compared (Pareto efficiency). Whether construed as a general purpose of economic activity or as a criterion for evaluating different allocations and exchange institutions, efficiency is a purely technical notion that is related neither to justice or equality criteria, nor to any moral or ethical questions of economic activity.

Source: SFB 504

efficiency bound

The minimum possible variance for an estimator given the statistical model in which it applies. An estimator which achieves this variance is called efficient.

Source: econterms

efficiency units

Usually interpretable as "output per worker per hour."
More generally: An abstract measure of the amount produced for a constant production technology by a worker in some time period. Often the context is theoretical and the time period and production technology do not have to be specified.
But efficiency units can be conceived of (and theorized about) as a function of each worker's characteristics, of the vintage of equipment, of the date in history, of the production technology, and so forth.

Source: econterms

efficiency wage hypothesis

The hypothesis that workers' productivity depends positively on their wages. (For reasons this might be the case see the entry on efficiency wages.)
This could explain why employers in some industries pay workers more than employers in other industries do, even if the workers have apparently comparable qualifications and jobs. A contrasting explanation is that of hedonic models in which these differentials are explained by quality differences in the jobs.

Source: econterms

efficiency wages

A higher than market-clearing wage set by employers to, for example:
-- discourage shirking by raising the cost of being fired
-- encourage worker loyalty
-- raise group output norms
-- improve the applicant pool
-- raise morale

Labor productivity in efficiency wage models is positively related to wage.

By contrast, consider models in which the wage is equal to labor productivity in equilibrium, or models in which wages are set to reduce the likelihood of unionization (union threat models). In these, productivity is not a function of the wage.

Source: econterms

efficient

A description of either:
-- an allocation that is Pareto efficient
or
-- an estimator that has the minimum possible variance given the statistical model; see efficiency bound.

Source: econterms

Efficient capital market

Market efficiency is one of the major paradigms of financial economics, focusing on informational efficiency as opposed to the Pareto efficiency of microeconomic theory.

Market efficiency as applied to securities markets means that it is on average impossible to gain from trading on the basis of generally available public information (information-arbitrage efficiency) and that the valuation of an asset reflects accurately the future payments to which the asset gives title (fundamental-valuation efficiency). It is apparent that market efficiency in this sense is only part of overall market efficiency.

Fama (1970) distinguishes three forms of informational efficiency: He defines weak, semi-strong and strong form efficiency as holding when the stock market prices reflect all historical price information, all publicly available information, and all information (including insider information), respectively. In order for prices to reflect exactly all information about an asset, nothing can impede the purchase or sale of securities, such as brokerage fees, taxes and so on. To the extent that impediments exist to the trading of an asset, prices will only imperfectly reflect information of relevance to the valuation of the security.

Most financial markets have generally been shown to be efficient in the weak or semi-strong form, although not necessarily so in the strong sense.

Source: SFB 504

efficient markets hypothesis

"A market in which prices always 'fully reflect' available information is called 'efficient.'" -- Fama, p. 383

Source: econterms

EGARCH

Exponential GARCH. The EGARCH(p,q) model is attributed to Nelson (1991).

Source: econterms

eigenvalue

An eigenvalue or characteristic root of a square matrix A is a scalar L that satisfies the equation:

det [ A - LI ] = 0

where "det" is the operator that takes a determinant of its argument, and I is the identity matrix with the same dimensions as A.

Source: econterms

eigenvalue decomposition

Same as spectral decomposition.

Source: econterms

eigenvector

For each eigenvalue L of a square matrix A there is an associated right eigenvector, denoted b, which has dimension equal to the number of rows of A. The right eigenvector satisfies: Ab = Lb
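[Ed.: the defining property can be checked numerically; a Python sketch with a made-up matrix:]

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    L, B = np.linalg.eig(A)   # eigenvalues in L; right eigenvectors are the columns of B
    for i in range(len(L)):
        assert np.allclose(A @ B[:, i], L[i] * B[:, i])   # Ab = Lb for each pair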

Source: econterms

EJ

An occasional abbreviation for the British academic journal Economic Journal.

Source: econterms

elasticity

A measure of responsiveness. The responsiveness of behavior measured by variable Z to a change in environment variable Y is the change in Z observed in response to a change in Y. Specifically, this approximation is common:

elasticity = (percentage change in Z) / (percentage change in Y)

The smaller the percentage change in Y that is practical, the better the measure is, and the closer it is to the intended theoretically perfect measure.

Elasticities are often negative, but are sometimes reported in absolute value (perhaps for brevity) in which case the author is depending on the reader knowing, or quickly applying, some theory. Usually the theory is the theory of supply and demand.

Among the elasticities that show up in the economics literature are:
elasticity of quantity demanded of some product in response to a change in the price of that product -- I think this is 'elasticity of demand' or 'price elasticity of demand'. These are ordinarily negative, and when an author reports a positive figure it is usually just an absolute value. A reader has to decide whether the true value is negative; hopefully this is obvious.
elasticity of supply, which is analogous
elasticity of quantity demanded in response to a change in the potential consumer's income -- called 'income elasticity of demand'. These are normally positive.

Inventing another kind of elasticity is plausible. Doing so implies a partial theory of behavior -- e.g. that Y creates a reason for the agent to change behavior Z.
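[Ed.: the approximation above, computed directly; a Python sketch with made-up demand data:]

    # Price rises from 10 to 11; quantity demanded falls from 100 to 97.
    p0, p1 = 10.0, 11.0
    q0, q1 = 100.0, 97.0

    pct_change_q = (q1 - q0) / q0        # -0.03, i.e. -3 percent
    pct_change_p = (p1 - p0) / p0        # +0.10, i.e. +10 percent
    print(pct_change_q / pct_change_p)   # -0.3: a negative price elasticity of demand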

Source: econterms

Elimination by aspects

Tversky (1972): This rule begins by determining the most important attribute and then retrieves a cutoff value for that attribute. All alternatives with values below that cutoff are eliminated. The process continues with the most important remaining attribute(s) until only one alternative remains.
Lexicographic Strategy: This strategy first identifies the most important attribute and then selects the alternative that is best on this attribute. In the case of ties, the tied alternatives are compared on the next most important attribute and so on.
Equal Weight Strategy: It examines all alternatives and attribute values but ignores the weights (probabilities). It sums the attribute values for an alternative to get an overall score for that alternative and then selects the alternative with the highest evaluation.
Satisficing Strategy Simon (1955): This strategy considers one alternative at a time, in the order they are presented. Each attribute of the current alternative is compared to a cutoff. If an attribute fails to exceed the cutoff, then the alternative is rejected. The first alternative to pass all the cutoffs is selected.

Source: SFB 504

EMA

An occasional abbreviation for the journal Econometrica.

Source: econterms

embedding effect

The tendency of some contingent valuation survey responses to be similar across different survey questions in conflict with theories about what is valued in the utility function.

An example from Diamond and Hausman (1994): A survey might come up with a willingness-to-pay amount that was the same for either (a) one lake or (b) five lakes which include the one that was asked about individually. If lakes have some utility value to the respondent, one would have expected that five lakes would be worth more than one. Possibly the difference arises because the respondent was not expressing a specific preference for the first lake, and/or was not taking a budget constraint into account. Diamond and Hausman argue that for this reason among others contingent valuation surveys cannot arrive at good estimates for values of public goods.

Source: econterms

embodied

An attribute of the way technological progress affects productivity. In Solow (1956), any improvement in technology instantaneously affects the productivity of all factors of production. In Solow (1960), however, productivity improvements were a property only of new capital investment. In the second case we say the technologies are embodied in the new equipment; in the first case they are disembodied.

Source: econterms

EMS

European Monetary System -- founded in 1979 to reduce currency fluctuations; it evolved toward offering a common currency.

Source: econterms

EMU

European Monetary Union.

Source: econterms

endogenous

A variable is endogenous in a model if it is at least partly a function of other parameters and variables in the model. Contrast exogenous.

Source: econterms

endogenous growth model

An endogenous growth macro model is one in which the long-run growth rate of output per worker is determined by variables within the model, not an exogenous rate of technological progress as in a neoclassical growth model like those following from Ramsey (1928), Solow (1956), Swan (1956), Cass (1965), Koopmans (1965). Influential early endogenous growth models are Romer (1986), Lucas (1988), and Rebelo (1991). See the sources for this entry for more information. Hulten (2000) says 'What is new in endogenous growth theory is the assumption that the marginal product of (generalized) capital is constant, rather than diminishing as in classical theories.' Generalized capital includes the result of investments in research and development (R&D).

Source: econterms

endowment

In a general equilibrium model, an individual's endowment is a vector made up of quantities of every possible good that the individual starts out with.

Source: econterms

energy intensity

energy consumption relative to total output (GDP or GNP).

Source: econterms

Engel curve

On a graph with good 1 on the horizontal axis and good 2 on the vertical axis, envision a convex indifference curve, and a diagonal budget constraint that meets it at one point. Now move the budget constraint in and out and mark the points where the tangencies with indifference curves are. The locus of such points is the Engel curve -- it's the mapping from wealth into the space of the two goods. That is, the Engel curve is (x(w), y(w)) where w is wealth and x() and y() are the amounts of each of the goods purchased at those levels of wealth.

Hardle (1990) p 18 defines the Engel curve as the graph of average expenditure (e.g. on food) as a function of income. And on p 118, defines food expenditure as a function of total expenditure.

The name refers to 19th century Prussian statistician Ernst Engel, according to Fogel (1979).

Source: econterms

Engel effects

Changes in commodity demands by people because their incomes are rising. A generalization of Engel's law.

Source: econterms

Engel's law

The observation that "the proportion of a family's budget devoted to food declines as the family's income increases."

See also Engel effects.

Source: econterms

English open bid auction

Sequential bidding game in which the standing bid wins the item unless another, higher bid is submitted. Bidders can submit bids as often as they want, and they observe (hear) all previous bids. Often a new bid has to increase the standing bid by some minimal amount (advance). The English auction is known to have been in use since antiquity; the word 'auction' derives from this format: the Latin word augere means 'to increase'. With statistically independent private valuations, an English auction is equivalent in terms of payoffs to a second price sealed bid auction.

Source: SFB 504

entrenchment

A possible description of the actions of managers of firms. Managers can make investments that are more valuable under themselves than under alternative managers. Those investments might not maximize shareholder value. So shareholders have a moral hazard in contracting with managers.

Or, in the phrasing of Weisbach (1988): "Managerial entrenchment occurs when managers gain so much power that they are able to use the firm to further their own interests rather than the interests of shareholders."

The abstract to Shleifer and Vishny, 1989, p 123, is nicely explicit: "By making manager-specific investments, managers can reduce the probability of being replaced, extract higher wages and larger perquisites from shareholders, and obtain more latitude in determining corporate strategy."

Source: econterms

EOE

European Options Exchange

Source: econterms

Epanechnikov kernel

The Epanechnikov kernel is the function (3/4)(1 - u^2) for -1 < u < 1, and zero for u outside that range. Here u = (x - x_i)/h, where h is the window width, the x_i are the values of the independent variable in the data, and x is the value of the scalar independent variable for which one seeks an estimate.
For kernel estimation.
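[Ed.: a direct transcription of that definition; a Python sketch:]

    def epanechnikov(x, x_i, h):
        """Epanechnikov kernel weight of observation x_i at evaluation point x, window width h."""
        u = (x - x_i) / h
        return 0.75 * (1.0 - u**2) if -1.0 < u < 1.0 else 0.0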

Source: econterms

epistemic

"Of, relating to; or involving knowledge or the act of knowing." An economic theory might take aspects of human understanding or belief as fundamental to economic processes or outcomes.

Source: econterms

epistemology

"1. The division of philosophy that investigates the nature and origin of knowledge. 2. A theory of the nature of knowledge."

Source: econterms

epsilon-equilibrium

(Usually written with a true epsilon character.)

In a noncooperative game, for any small positive number epsilon, an epsilon-equilibrium is a profile of totally mixed strategies such that each player gives more probability weight than epsilon only to strategies that are best responses to the profile of strategies the others are playing.

For a more formal definition see sources. This is a rough paraphrase.

Source: econterms

epsilon-proper equilibrium

In a noncooperative game, a profile of strategies is an epsilon-proper equilibrium if "every player is giving his better responses much more probability weight than his worse responses (by a factor 1/epsilon), whether or not those 'better' responses are 'best'."
-- Myerson (1978), p 78.

For a more formal definition see sources. This is a rough paraphrase.

Source: econterms

equilibrium

Some balance that can occur in a model, which can represent a prediction if the model has a real-world analogue. The standard case is the price-quantity balance found in a supply and demand model. If the term is not otherwise qualified it often refers to the supply and demand balance. But there also exist Nash equilibria in games, search equilibria in search models, and so forth.

Source: econterms

Equilibrium

In economics, an equilibrium is a situation in which no agent has an incentive to change any of her choices, given the constraints she faces (constraints being interpreted in a broad sense here):


her perceptions of the behavior of other agents;
the terms of trade (prices);
the strategic environment;
her individual characteristics such as preferences (or production technologies), wealth, and computing capabilities.

In addition to this central property, it is also required that every agent makes optimal choices based on correct expectations of these constraints. Important applications of the concept of an (economic) equilibrium are the formation of prices on markets (the competitive market equilibrium), and the strategic equilibria used in game theory.

Source: SFB 504

equity premium puzzle

Real returns to investors from the purchases of U.S. government bonds have been estimated at one percent per year, while real returns from stock ("equity") in U.S. companies have been estimated at seven percent per year (Kocherlakota, 1996). General utility-based theories of asset prices have difficulty explaining (or fitting, empirically) why the first rate is so low and the second rate so high, not only in the U.S. but in other countries too. The phrase equity premium puzzle comes from the framing of this problem (why is the difference so great?) and the attention focused on it by Mehra and Prescott (1985); sometimes the phrase risk free rate puzzle is used to describe the closely related question: why is the bonds rate so low? The problem can be inverted to ask: why do investors not reject the low-returning bonds in order to buy stocks, which would then raise the price of stocks and lower their subsequent returns?

The above is drawn from the excellent review by Kocherlakota (1996), which surveys the substantial literature on this subject. Abbreviating further from it: the theories against which the evidence constitutes a "puzzle" (or paradox, which see) tend to have these aspects in common: (1) standard preferences described by standard utility functions, (2) contractually complete asset markets (against possible time- and state-of-the-world contingencies), and (3) costless asset trading (in terms of taxes, trading fees, and presumably information).

Overwhelmingly the discussion in the economics literature has focused on expansions to the formal theory and on refinements and expansions of data sources, rather than survey evidence. A survey of U.S. households would answer (has answered?) the question of why they invest so little in stocks.

[Editorial comment follows.] It is likely (but this is conjecture) that large fractions of the population do not seriously consider investing in stocks, and are thus not rejecting stocks because their returns are low, but rather because they do not know how and think there are some barriers to learning how; and/or they perceive the risks of stocks to be higher than they have historically been; and/or they believe their savings are insufficient to invest. These explanations suggest that as stock trading becomes easier (e.g. over the Web, with heavy marketing and easy interfaces) the theories will fit better because more of the population will buy stocks. Indeed, this has been observed over the last few years. Another class of likely explanations is that people are highly impatient to spend their income (which would conflict with standard constant-discount-rate utility functions, but agree with the evidence; see hyperbolic discounting). Seen this way, the puzzle is not why the evidence looks the way it does, but the hard theoretical problem of getting these factors into the asset pricing models.

Source: econterms

ergodic

Informally: a stochastic process is ergodic if no sample helps meaningfully to predict values that are very far away in time from that sample. Another way to say that is that the time path of the stochastic process is not sensitive to initial conditions.

Two events A and B (e.g. possible sets of states of the process) are ergodic iff, taking the limit as h goes to infinity:
lim (1/h) * SUM from i=1 to h of |Pr(A ∩ L^(-i)B) - Pr(A)Pr(B)| = 0
Here L is the lag operator. This definition is like that of 'mixing on average'. A stochastic process is ergodic, I believe, if all possible events in it are ergodic by this definition.

If a random process is mixing, it is ergodic.

Priestley, p 340: A process is ergodic iff 'time averages' over a single realization of the process converge in mean square to the corresponding 'ensemble averages' over many realizations.

Example 1: Let x_t (for integer t = 0 to infinity) be drawn iid from a standard normal distribution. Then knowing the value of x_1 doesn't help predict the value of x_2, because they are independently drawn. This time series process is ergodic.

Example 2: Suppose the process is x_t = k + sin(t) + e_t, where k is unknown and e_t is a white noise error. Then any sample of x_t for a known t gives information about k, and that is enough information to make predictions at remote times in the future that are just as good as predictions at nearby times. This process is not ergodic.
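[Ed.: Priestley's characterization is easy to see by simulating Example 1; a Python sketch with an arbitrary sample size:]

    import numpy as np

    # Example 1: iid standard normals. The time average over one long realization
    # converges to the ensemble mean of 0, consistent with ergodicity.
    x = np.random.default_rng(0).standard_normal(100_000)
    print(x.mean())   # close to 0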

Source: econterms

ergodic properties

means persistent properties

Source: econterms

ergodic set

In the context of a stochastic process {x_t}, a set E is an ergodic set if:
(i) it is a subset of the state space S of possible values of x_t,
(ii) if x_t is in E, then Pr(x_(t+1) is in E) = 1, and
(iii) no proper subset of E has the property in (ii).

Source: econterms

ERISA

The Employee Retirement Income Security Act of 1974, a major U.S. law which guaranteed certain categories of employees a pension after some period at their employer; before it, there had been more ambiguity about what rules an employer could put on which employees could get a pension. ERISA also changed the perceived rules about whether pensions could be invested in venture capital.

Source: econterms

error-correction model

A dynamic model in which "the movement of the variables in any period is related to the previous period's gap from long-run equilibrium."

Source: econterms

essentially stationary

A time series process {x_t} is essentially stationary iff E[x_t^2] is uniformly bounded. (from Wooldridge)

This definition may not be standard or widely used.

I believe this means that even if the variance wanders around and is different for different t, there is a finite bound to those variances. The variance of the distribution of x_t is never infinite for any t and indeed never exceeds that finite bound. Thus an ARCH-type process might be essentially stationary even though its variance is not constant for all t.

Note that there are strictly stationary processes that have infinite second moments; such processes are not essentially stationary.

Source: econterms

estimator

A function of data that produces an estimate for an unknown parameter of the distribution that produced the data.
The way estimators are often discussed, they can be thought of as chosen before the data are seen. This can be hard to understand for the person new to the term. Properties of estimators (such as unbiasedness in finite samples, asymptotic unbiasedness, efficiency, and consistency) are discussed without considering any particular sample, by making assumptions about the distribution of the data, and considering the estimator in the context of the distributions.

Source: econterms

Euler equation

A first order condition that is across a time or state boundary. (Across a state boundary means a tradeoff between uncertain events.) That is, a first order condition that is a relation between a variable that has different values in different periods or different states. E.g. k_t = b(1+r)k_(t+1) is an Euler equation, but 2n_t^2 - 3k_t = 0 is not.

Source: econterms

Euler's constant

May refer either to the natural logarithm base e, approximately 2.71828, or to the Euler-Mascheroni constant, which is approximately 0.57721566.

Source: econterms

Eurodollar

"Originally, it was a dollar-denominated deposit created either in a European bank or in the European subsidiary of an American bank, usually located in London." Here's why: (1) Americans overseas might want their deposits in dollars; (2) the dollar being the most common international currency, borrowers and lenders internationally may want to make their accounts in it; (3) the Eurodollar market was "exempt from reserve requirements and other regulatory costs imposed on domestic American banks. Superior terms in the Eurodollar market attracted American borrowers and depositors who would have otherwise patronized domestic institutions." An example of such regulation was the US Regulation Q which limited interest banks could pay.

Source: econterms

Eurosclerosis

a name for the 'disease' of rigid, slow-moving labor markets in Europe in contrast to fast-moving markets, e.g. in North America.

Source: econterms

even function

A function f() is even iff f(x)=f(-x).

Source: econterms

event studies

Empirical study of prices of an asset just before and after some event, like an announcement, merger, or dividend. Can be used to discuss whether the market priced the information efficiently, whether there was private information, etc.

This method was developed by Fama, Fisher, Jensen, and Roll (1969), according to Weisbach, 1988, p 455.

Source: econterms

evolutionary game theory

Describes game models in which players choose their strategies through a trial-and-error process in which they learn over time that some strategies work better than others.

Source: econterms

ex ante

Latin for "beforehand". In models where there is uncertainty that is resolved during the course of events, the ex antes values (e.g. of expected gain) are those that are calculated in advance of the resolution of uncertainty.

Source: econterms

ex dividend date

Firms pay dividends to those who are shareholders on a certain date. The next day is called the ex dividend date. Investors who do not own shares until the ex dividend date do not receive the dividend. The price of the stock is often adjusted downward before the start of trading on the ex dividend date to compensate for this.

Source: econterms

ex post

Latin for "after the fact". In models where there is uncertainty that is resolved during the course of events, the ex post values (e.g. of expected gain) are those that are calculated after the uncertainty has been resolved.

Source: econterms

Excess chance measures

The starting point of the excess chance measures is a target return, for example a one-month market return, defined by the investor. Chance is then considered to be the possibility of beating the target return. Special cases of excess chance measures are the excess probability, the excess expectation, and the excess variance.

Source: SFB 504

excess kurtosis

Sample kurtosis minus 3, which means when 'excess kurtosis' is positive, there is greater kurtosis than in the normal distribution.
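[Ed.: in practice, note that scipy's kurtosis function reports excess kurtosis by default; a Python sketch:]

    import numpy as np
    from scipy import stats

    x = np.random.default_rng(0).standard_normal(100_000)
    print(stats.kurtosis(x))   # excess kurtosis (Fisher's definition); near 0 for normal data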

Source: econterms

excess returns

Asset returns in excess of the risk-free rate. Used especially in the context of the CAPM. Excess returns are negative in those periods in which returns are less than the risk-free rate. Contrast abnormal returns.

Source: econterms

Excess Supply

A situation in which the quantity supplied exceeds the quantity demanded. It occurs when the market price is greater than the equilibrium price.

Source: EconPort

exclusion restrictions

In a simultaneous equation system -- the restriction that some of the exogenous variables are not in some of the equations; often this idea is expressed by saying the coefficient next to that exogenous variable is zero. This way of putting it may make the restriction (hypothesis) testable, and may make a simultaneous equation system identified.

Source: econterms

exclusive dealing

A requirement in a contract that the buyer will only buy goods of a certain type from the stated seller.

Source: econterms

ExecuComp

A data set from Standard and Poor's on compensation of American corporate executives, including stock and options ownership.

Source: econterms

existence value

The value that individuals may attach to the mere knowledge of the existence of something, as opposed to having direct use of that thing. Synonymous with nonuse value.

For example, knowledge of the existence of rare and diverse species and unique natural environments may have value to environmentalists who do not actually see them.

Source: econterms

exogenous

A variable is exogenous to a model if it is not determined by other parameters and variables in the model, but is set externally and any changes to it come from external forces. Contrast endogenous.

Source: econterms

expectation

There are several, overlapping definitions:
1) The mean of a probability distribution. If the probability distribution function is F(x), then the mean is calculated by integrating dF(x) over the domain of the probability distribution function. The expectation operator, E[], is a linear operator per Hogg and Craig, 1995, page 55.
2) In a model, the agents may have to anticipate the value of variables whose realizations may occur in the future. The values they anticipate are often called their expectations. The agents may generalize only from past realizations, in a way that we can call "adaptive expectations", or they may have other information from which they hypothesize a distribution from which the realization will be drawn. From such a distribution they can calculate the mean value, variance, and so forth. This process is one of "rational expectations."
Note: the notation E_X[] means the expectation of the expression taken over the random variable X. The result of the expression could still be a random variable if there are other random variables in the expression.

Source: econterms

expected utility hypothesis

That the utility of an agent facing uncertainty is calculated by considering utility in each possible state and constructing a weighted average, where the weights are the agent's estimates of the probability of each state. Arrow (1963) attributes to Daniel Bernoulli (1738) the earliest known written statement of this hypothesis.

Source: econterms

Expected utility / von Neumann-Morgenstern utility

An axiomatic extension of the ordinal concept of utility to uncertain payoffs. An agent possesses a von Neumann-Morgenstern utility function if she ranks uncertain payoffs according to (higher) expected value of her utility of the individual outcomes that may occur.

Source: SFB 504

expected value

The expected value of a random variable is the mean of its distribution.
In its technical use this word does not have exactly the same meaning as in ordinary English. For example, people buying a lottery ticket that has a 1/10,000 chance of paying $10,000 can expect to get zero since that is overwhelmingly the likely outcome. They can be certain they won't get $1. But the expected value of their winnings is $1.
Having said this, it is a standard implementation of 'rational expectations' to assume that agents behave in response to the expected values of the distributions they face.
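[Ed.: the lottery arithmetic, spelled out; a Python sketch:]

    p_win, prize = 1.0 / 10_000, 10_000.0
    expected_value = p_win * prize + (1.0 - p_win) * 0.0
    print(expected_value)   # 1.0: expected winnings are $1, though the usual outcome is $0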

Source: econterms

expenditure function

e(p,u) -- the minimum income necessary for a consumer to achieve utility level u given a vector of prices for goods p. (The consumer is presumed to get utility from the goods.)

Source: econterms

experience

In the context of studies of employees, length of time employed anywhere. Sometimes narrowed to include only length of time employed in relevant jobs. Contrast tenure.

Source: econterms

Experiment

An empirical research method used to examine a hypothesized causal relationship between independent and dependent variables. The antecedent event in a proposed causal sequence is called the independent variable. The measured effect in the causal sequence is called the dependent variable. The main methodological rule of experimentation is that the experimenter must have precise control over the experimental situation. Control involves the creation and variation (manipulation) of the independent variables. The values of the independent variable(s) define the experimental conditions or the design of the experiment. At a minimum, the design involves the application of a treatment to one group of participants and the withholding of the treatment from a comparison or control group. Aside from this manipulation, the situation in both groups must be held identical. Accordingly, the dependent variable(s) has (have) to be assessed by consistent measures. In order to render the group samples comparable, participants must be randomly assigned to conditions. Differences that then occur on the measures of the dependent variable can be attributed to the factor which differentiates the groups systematically: the presence, absence or level of the independent variable. Without randomization the method is quasi-experimental (e.g. if gender is used as a factor in the design). Experiments can be conducted in the laboratory or in natural settings (the field). Because it is easier to precisely control the experimental situation in the laboratory, laboratory experiments allow the experimenter to achieve a higher level of internal validity than field experiments. However, if it is possible to sufficiently control the experimental situation in natural conditions, field experiments are more likely to be externally valid.

Source: SFB 504

Experimental design

A plan for collecting and treating the data of a proposed experiment. It is important that the experimental design provides the opportunity to make appropriate inferences and decisions relating to the hypothesis from the data.

Source: SFB 504

Experimental group

In an experimental design, which is contrasting two or more groups, the experimental group of subjects is given the treatment whose effect is under investigation.

Source: SFB 504

Explicit Costs

The accounting costs involved in the production of a good or service. Explicit costs include fixed costs and variable costs, but do not include opportunity costs.

Source: EconPort

exponential distribution

A particular functional form for a continuous distribution, with parameter k, a scalar real greater than zero. Has pdf f(x) = k*e^(-kx) for x >= 0.
The mean is E[x] = 1/k, and the variance is var(x) = 1/k^2. The moment-generating function is (1 - t/k)^(-1), defined for t < k.
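[Ed.: these moments can be checked numerically; a Python sketch using scipy, which parameterizes the distribution by the scale 1/k rather than the rate k:]

    from scipy import stats

    k = 2.0
    dist = stats.expon(scale=1.0 / k)
    print(dist.mean(), dist.var())   # 0.5 and 0.25, i.e. 1/k and 1/k^2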

Source: econterms

exponential family

A distribution is a member of the exponential family of distributions if its log-likelihood function can be written in the form below.

ln L(q | X) = a(X) + b(q) + c_1(X)s_1(q) + c_2(X)s_2(q) + ... + c_K(X)s_K(q), where a(), b(), and the c_j() and s_j() for each j = 1 to K are functions; q is the vector of all parameters; X is the matrix of observable data; and L() is the likelihood function as defined by the maximum likelihood procedure.

The members of the exponential family vary from each other in a(), b(), and the c_j()'s and s_j()'s. Most common named distributions are members of the exponential family.

Quoting from Greene, 1997, page 149: "If the log-likelihood function is of this form, then the functions c_j() are called sufficient statistics [and] the method of moments estimator(s) will be functions of them." Those estimators will be the maximum likelihood estimators, which are asymptotically efficient here.
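[Ed.: a worked instance, using the exponential distribution with rate q as an example: for an iid sample x_1, ..., x_n, the log-likelihood is ln L(q | X) = n*ln(q) - q*(x_1 + ... + x_n), which fits the form above with a(X) = 0, b(q) = n*ln(q), c_1(X) = x_1 + ... + x_n, and s_1(q) = -q. The sum of the observations is the sufficient statistic.]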

Source: econterms

exponential utility

A particular functional form for the utility function. Some versions of it are used often in finance.

Here is the simplest version. Define U() as the utility function and w as wealth. a is a positive scalar parameter.
U(w) = -e^(-aw)

is the exponential utility function.

Now consider events over time. An agent might have a utility function mapping possible streams of consumption into utility values. Here is one way this is often parameterized:
Define b as a constant discount factor known to the agent. It's a scalar that is between zero and one, and usually thought of as near one.
Define t as a time subscript that starts at zero and increases over the integers, either to some fixed T or to infinity.
Define c(t) as the amount the agent gets to consume at each t, and {c(t)} as the series of consumptions for all relevant t. c(t) is random here; its value is not known, but its distribution is assumed known to the agent.
Let E[] be the expectations operator that takes means of distributions.

Using this notation a common dynamic version of exponential utility is:
u({c(t)}) = the sum over all t of b^t * E[-e^(-a*c(t))]

Whether this utility function describes observed investment decisions is discussable and testable. It is not often discussed, however. If clear information on that becomes known to this author, it will be added here.
Most uses of the exponential utility function in finance are driven by these aspects: (a) its analytic tractability; e.g. that it can be differentiated with respect to choice variables that affect future wealth w or consumption c(t); (b) for some applications it aggregates usefully, meaning that if every agent has this exact utility function and they can buy securities then a representative agent can be defined which also has this analytically convenient form and for whom the securities prices would be the same. It's convenient for computing securities prices in some abstract economies to use that representative agent. There are 'no wealth effects' -- that is, the amount of risky securities that the agent wants to hold is not a function of his own wealth, as long as he can borrow infinitely (which is often assumed for tractability in these models.)
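[Ed.: a small numerical illustration of the static form; a Python sketch in which the risk-aversion parameter and the gamble are made up:]

    import numpy as np

    a = 0.5                      # coefficient of absolute risk aversion

    def u(w):
        return -np.exp(-a * w)   # U(w) = -e^(-aw)

    # A 50/50 gamble between wealth 0 and wealth 10:
    eu = 0.5 * u(0.0) + 0.5 * u(10.0)
    ce = -np.log(-eu) / a        # invert U to get the certainty equivalent
    print(ce)                    # about 1.37, well below the expected wealth of 5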

Source: econterms

Exports

Goods and services that are produced in the home country and sold in other countries.

Source: EconPort

extended reals

Or, extended real numbers, or extended real line. The set of reals plus the elements +infinity and -infinity. Addition and multiplication can generally be extended to this set; see Royden, p. 36.

Source: econterms

extensive margin

Refers to the range to which a resource is utilized or applied. Example: the number of hours worked by an employee. Contrast intensive margin.

Source: econterms

External validity

Or criterion-related validity: a type of validity that is assessed by the relationship between test scores and an independent, non-test criterion.

Source: SFB 504

externality

An effect of a purchase or use decision by one set of parties on others who did not have a choice and whose interests were not taken into account.
Classic example of a negative externality: pollution, generated by some productive enterprise and affecting others who had no choice and were probably not taken into account.
Example of a positive externality: purchase of a car of a certain model increases demand for, and thus the availability of, mechanics who know that kind of car, which improves the situation for others owning that model.

Source: econterms

Externality

In a general sense, a technological externality is the indirect effect of a consumption activity or a production activity on the consumption or production possibilities available to some other consumer or producer. By the term indirect it is meant that the effect concerns an agent other than the one exerting this economic activity and that this effect does not work through the price system. This effect has first been analyzed by Pigou (1920).

In a non-cooperative game, the utility payoff to one player usually depends only on the profile of strategies taken by all the players, and not on the identity of the players that undertake certain actions or that face certain outcomes. If payoffs do depend on such identities, the game is said to contain externalities. In an auction with externalities, for example, the final valuation for the object on sale of each bidder depends on the identity of the player who wins the auction and receives the object. The modification of payoffs due to externalities in a game can be thought of as an immediate consequence of the actions taken during the play of the game, as in production externalities, or as a reduced-form description of expected future interactions among the players (i.e., their equilibrium behavior) after the end of the game, as in the case of an auction with resale possibilities of the object obtained.

Source: SFB 504

F

F distribution

The F distribution is defined in terms of two independent chi-squared variables. Let u and v be independently distributed chi-squared variables with u_1 and v_1 degrees of freedom, respectively.
Then the statistic F = (u/u_1)/(v/v_1) has an F distribution with (u_1, v_1) degrees of freedom. As can be computed from the definition of the t distribution, the square of a t statistic may be written t^2 = (z^2/1)/(v/v_1), where z^2, being the square of a standard normal variable, has a chi-squared distribution. Thus the square of a t variable with v_1 degrees of freedom is an F variable with (1, v_1) degrees of freedom; that is, t^2 = F(1, v_1).
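[Ed.: the construction can be verified by simulation; a Python sketch with arbitrary degrees of freedom:]

    import numpy as np
    from scipy import stats

    u1, v1 = 3, 20
    rng = np.random.default_rng(0)
    u = stats.chi2(u1).rvs(size=100_000, random_state=rng)
    v = stats.chi2(v1).rvs(size=100_000, random_state=rng)
    F = (u / u1) / (v / v1)
    print(F.mean(), stats.f(u1, v1).mean())   # both near v1/(v1 - 2) = 1.11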

Source: econterms

F test

Normally a test for the joint hypothesis that a number of coefficients are zero. Large values (greater than two?) generally reject the hypothesis, depending on the level of significance required.

Source: econterms

f.o.b.

Indicates which services come with a price. Stands for 'free on board.' Describes a price which includes goods plus the services of loading those goods onto some vehicle or vessel at a named location, sometimes put in parentheses after the f.o.b.

Source: econterms

factor loadings

"A security's factor loadings are the slopes in a multiple regression of its return on the factors."

Source: econterms

factor price equalization

An effect observed in models of international trade -- that the prices of inputs to ("factors of") production in different countries, like wages, are driven towards equality in the absence of barriers to trade. This happens among other reasons because price incentives cause countries to choose to specialize in the production of goods whose factors of production are abundant there, which raises the prices of the factors towards equality with the prices in countries where those factors are not abundant. Shocks to factor availability in a country would cause only a temporary departure from factor price equality.

The basic theorem of this kind is attributed to Samuelson (1948) by Hanson and Slaughter (1999) who also cite Blackorby, Schworm, and Venables (1993). The context of the theorem is a Heckscher-Ohlin model.

Source: econterms

factory system

factories may have been more efficient by reducing transactions costs, as argued by Oliver Williamson (1980).

Source: econterms

fads

The conjecture that market prices for securities take long swings away from their fundamental values and tend to return to them.
In a time series of data this suggests that "the market price differs from the fundamental price by a highly serially correlated fad." This formulation is attributed to Shiller (1981, 1994), Summers (1986), and Poterba and Summers (1988) by Bollerslev and Hodrick (1992), p. 13.

Source: econterms

fair trader

Contrasted with free trader, a holder of the point of view that one's country's government must prevent foreign companies from having artificial advantages over domestic ones.

The term dates at least as far back as 1886 Britain, where tariffs were recommended by one point of view expressed in a Royal Commission report 'not to countervail any natural and legitimate advantage which foreign manufacturers may possess, but simply to prevent our own industries being placed at an artificial disadvantage by the interference of either home or foreign legislation....' (Carr and Taplin, p 122)

Source: econterms

Fama-MacBeth regression

A panel study of stocks to estimate CAPM or APT parameters.

Source: econterms

family

two or more persons related by blood, marriage, or adoption, and residing together.

Source: econterms

FASB

Financial Accounting Standards Board, a private body which sets accounting rules for the US.

Source: econterms

fat-tailed

describes a distribution with excess kurtosis.

Source: econterms

Fatou's lemma

Let {X_n} for n = 1, 2, 3, ... be a sequence of nonnegative real random variables.
Then lim inf (as n -> infinity) of E[X_n] ≥ E[lim inf (as n -> infinity) of X_n].

Source: econterms

FCLT

stands for 'functional central limit theorem', and is synonymous with Donsker's theorem.

Briefly: if {e_t} is a series of independent and mean zero random variables, suitably scaled partial sums (from 1 to T) of the e's converge to a standard Brownian motion process on [0,1] as T goes to infinity. See other sources for a proper formal statement.

Source: econterms

FDI

Foreign Direct Investment, a component of a country's national financial accounts. Foreign direct investment is investment of foreign assets into domestic structures, equipment, and organizations. It does not include foreign investment into the stock markets. Foreign direct investment is thought to be more useful to a country than investments in the equity of its companies because equity investments are potentially "hot money" which can leave at the first sign of trouble, whereas FDI is durable and generally useful whether things go well or badly.

Source: econterms

FE

stands for Fixed Effects estimator. That is, a linear regression in which certain kinds of differences are subtracted out so that one can estimate the effects of another kind of difference.

Source: econterms

Fed Funds Rate

The interest rate at which U.S. banks lend to one another their excess reserves held on deposit at the U.S. Federal Reserve.

Source: econterms

FGLS

Feasible GLS. That is, the generalized least squares estimation procedure (see GLS), but with an estimated covariance matrix, not an assumed one.

Source: econterms

fiat money

Money that is intrinsically useless; it is used only as a medium of exchange.

Source: econterms

filter

A filter is a way of treating or adjusting data before it is analyzed. Examples are the Hodrick-Prescott filter or Kalman filter.

More exactly, a filter is an algorithm or mathematical operation that is applied to a time series sample to get another sample, often called the 'filtered' data. For example a filter might remove some high-frequency effects from the data; or detrend it; or remove seasonal frequencies but leave monthly frequencies in.

Source: econterms

FIML

Full Information Maximum Likelihood, an approach to the estimation of simultaneous equations.

As portrayed in Johnston's book: Define A as the matrix of coefficients in the multiple-equation model, u as the vector of residuals for each choice of A, and s as the covariance matrix E(uu'). FIML consists of maximizing ln(L(A, s)) with respect to the elements of A and s.

Source: econterms

finance

The study of securities, borrowing, and ownership.

Source: econterms

FIPS

Federal Information Processing Standards. These are encodings defined by the U.S. government and used to encode some data (like states and counties) in U.S. data sets. Listings can be found at the NIST FIPS site.

Source: econterms

firm

Defined by Alchian and Demsetz (1972) this way: "The essence of the classical firm is identified here as a contractual structure with: 1) joint input production [see team production]; 2) several input owners [e.g. the workers]; 3) one party [the firm or its owners] who is common to all the contracts of the joint inputs; 4) who has rights to renegotiate any input's contract independently of contracts with other input owners; 5) who holds the residual claim; and 6) who has the right to sell his central contractual residual status. The central agent is called the firm's owner and the employer. No authoritarian control is involved; the arrangement is simply a contractual structure subject to continuous renegotiation with the central agent. The contractual structure arises as a means of enhancing efficient organization of team production." More briefly: a firm is a hierarchical organization attempting to make profits.

Source: econterms

First price sealed bid auction

Simultaneous bidding game in which the bidder who has submitted the highest bid is awarded the object and pays his own bid (which is the 'first highest' bid). The multi-object form of the first price auction is called the discriminatory auction. The equilibrium bid functions of first price auctions balance the trade-off that a higher winning probability is 'bought' by a higher expected payment. As a result, the bidders' private information is revealed in the bids in shaded form only. Oligopolistic competition among price-setting firms under incomplete information (Bertrand competition) is an instance of a first price procurement auction.
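[Ed.: for the textbook special case of risk-neutral bidders whose private values are iid uniform on [0, 1] (an assumption not made in the entry above), the equilibrium bid function has a simple shaded form; a Python sketch:]

    def equilibrium_bid(v, n):
        """Equilibrium bid for value v with n bidders, iid uniform [0,1] private values."""
        return (n - 1) / n * v

    print(equilibrium_bid(0.8, 2))   # 0.4: with one rival, bid half your value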

Source: SFB 504

First Welfare Theorem

The statement that a Walrasian equilibrium is weakly Pareto optimal. Such a theorem is true in a large and important class of general equilibrium models (usually static ones). The standard case: if every agent has a positive quantity of every good, and every agent has a utility function that is convex, continuous, and strictly increasing, then the First Welfare Theorem holds.

Source: econterms

first-order stochastic dominance

Usually means stochastic dominance.

Source: econterms

fiscalist view

An extreme Keynesian view, that money doesn't matter at all as aggregate demand policy. Assumes that investment demand does not respond to interest rate changes. Relevant only in depression conditions (Branson, p 386).

Source: econterms

Fisher consistency

This is a necessary condition for maximum likelihood estimation to be consistent. Maximizing the likelihood function L gives an estimate for parameter b that is Fisher-consistent if E[d(ln L)/db] = 0 at b = b_0, where b_0 is the true value of b.

Another interpretation or phrasing: "An estimation procedure is Fisher consistent if the parameters of interest solve the population analog of the estimation problem." (Wooldridge).

Source: econterms

Fisher effect

That in a model where inflation is expected to be steady, the nominal interest rate changes one-for-one with the inflation rate; see Fisher equation. The empirical analogy is the Fisher hypothesis.

Source: econterms

Fisher equation

nominal rate of interest = real rate of interest + inflation

Source: econterms

Fisher hypothesis

That the real rate of interest is constant. So the nominal rate moves with inflation.
The real rate of interest would be determined by the time preferences of the public and technological constraints determining the return on real investment.

Source: econterms

Fisher Ideal Index

The 'geometric mean of the fixed-weighted Paasche and Laspeyres indexes.' Proposed as a price index by Irving Fisher in 1922. This is a superlative index number formula. -- Triplett, 1992.

Source: econterms

Fisher index

A price index, computed for a given period by taking the square root of the product of the Paasche index value and the Laspeyres index value.

Source: econterms

Fisher information

The Fisher information is an attribute or property of a distribution with known form but uncertain parameter values. It is only well-defined for distributions satisfying certain assumptions. It is a (k x k) matrix, where k is the number of elements in the vector of parameters b. Thus, for parameter b of pdf f(x):
I(b) = E{ [f'(x)/f(x)]^2 | b }
That's from DeGroot. I think this is the same as in Greene p 96:
I(b) = E[ {d/db (ln L(b))}^2 ] = -E[ d^2/db^2 (ln L(b)) ]
If the Fisher information is 'large' then the estimated distribution will change radically as new data (x) are incorporated into the estimate of the distribution by maximum likelihood. The Fisher information is the main ingredient in the Cramer-Rao lower bound, and in some maximum likelihood estimators.
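[Ed.: the definition can be checked by simulation for a simple case; a Python sketch using the exponential density purely as an example:]

    import numpy as np

    # For f(x) = b*exp(-b*x), the score is d/db ln f(x) = 1/b - x,
    # so I(b) = E[(1/b - x)^2] = 1/b^2.
    b = 2.0
    x = np.random.default_rng(0).exponential(scale=1.0 / b, size=1_000_000)
    print(((1.0 / b - x) ** 2).mean(), 1.0 / b**2)   # both near 0.25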

Source: econterms

Fisher transformation

Hypotheses about the value of rho, the correlation coefficient between variables x and y in the underlying population, can be tested using the Fisher transformation of a sample's correlation coefficient r. Let N be the sample's size. The transformation is defined by z = 0.5 * ln( (1+r)/(1-r) ). Then z is approximately normally distributed with mean 0.5 * ln( (1+rho)/(1-rho) ) and standard error 1/((N-3)^0.5). This is a common way of testing whether a correlation coefficient is significantly different from 0, and hence of ascribing a p-value. [Editor: We suspect that for x and y bivariate normal the distribution works exactly in all sample sizes, otherwise only asymptotically.] [See Kennedy, p 369. Bickel and Doksum, 'Mathematical Statistics: Basic Ideas and Selected Topics', page 221, also gives a derivation, but makes no mention of any distribution requirements.]
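[Ed.: a sketch of the resulting test in Python; the sample values are made up:]

    import numpy as np
    from scipy import stats

    r, N = 0.30, 50
    z = 0.5 * np.log((1 + r) / (1 - r))   # Fisher transformation of the sample correlation
    se = 1.0 / np.sqrt(N - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z) / se))   # two-sided test of zero correlation
    print(p_value)                        # about 0.03 here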

Source: econterms

Fisherian criterion

for optimal investment by a firm -- that it should invest in real assets until their marginal internal rate of return equals the appropriately risk-adjusted rate of return on securities

Source: econterms

Fixed Cost

A cost of production that is independent of the quantity produced; a fixed cost must be paid even if nothing is produced. For example, if a trucking company buys a new semi-truck by taking out a 5-year loan, the loan must be paid (i.e., the truck must be paid for) even if the company decides to shut down and not operate at all. This fixed cost of production must be paid whether the company shuts down, reduces its production, or operates at full capacity.

Fixed costs such as this are the opposite of variable costs, which depend on the amount produced. In the example above, gasoline would be a variable cost -- if the company does not operate, it will not have to purchase any gasoline, and the amount of gasoline purchased depends on the amount produced.

Source: EconPort
See also: Variable Cost.

fixed effects estimation

A method of estimating parameters from a panel data set. The fixed effects estimator is obtained by OLS on the deviations from the means of each unit or time period. This approach is relevant when one expects that the averages of the dependent variable will be different for each cross-section unit, or each time period, but the variance of the errors will not. In such a case random effects estimation would give inconsistent estimates of b in the model: y = Xb + e
The fixed effects estimator is: (X'QX)^(-1)X'Qy
where Q is the matrix that "partials out" the group averages.
Example: Define L as I_N ⊗ 1_T, where ⊗ is the Kronecker product operator, T is the number of time periods, and N is the number of cross-section units (individuals, say); the columns of L are then dummy variables for the units. With Q = I - L(L'L)^(-1)L', the matrix that annihilates the unit dummies, individual effects can be screened out by premultiplying the model's equation by Q and running OLS, or equivalently by using the estimator equation above. Thus estimating b.
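[Ed.: the within transformation is easy to demonstrate by simulation; a Python sketch in which all parameter values are made up:]

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 100, 5
    alpha = rng.normal(size=(N, 1))                # unobserved individual effects
    x = rng.normal(size=(N, T))
    y = alpha + 2.0 * x + rng.normal(size=(N, T))

    # Demean within each unit (the Q transformation), then OLS on the deviations.
    x_dm = x - x.mean(axis=1, keepdims=True)
    y_dm = y - y.mean(axis=1, keepdims=True)
    print((x_dm * y_dm).sum() / (x_dm**2).sum())   # close to the true coefficient 2.0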

Source: econterms

flexible-accelerator model

A macro model in which there is a variable relationship between the growth rate of output and the level of net investment. The relation between the change in output and the level of net investment is the accelerator principle.

Source: econterms

fob

An occasional compressed form of f.o.b..

Source: econterms

Folk theorem

The theorem that in repeated games with sufficiently patient players, Pareto optimal payoffs can be achieved in a Nash equilibrium. (Fudenberg and Tirole, p 150, describe the achievable payoffs as the individually rational ones, not the Pareto optimal ones.) The strategies that achieve this often have the pattern that they 'punish' the other player at length for any defection from the Pareto optimal choice. In equilibrium that encourages the other player not to defect for a short-term gain.

Source: econterms

Frame / framing effect

A decision-frame is the decision-maker's subjective conception of the acts, outcomes and contingencies associated with a particular choice. The frame that a decision maker adopts is controlled partly by the formulation of the problem and by the norms, habits, and personal characteristics of the decision maker. It is often possible to frame a given decision problem in more than one way. A framing effect is a change of preferences between options as a function of the variation of frames, for instance through variation of the formulation of the problem. For example, a problem can be presented as a gain (200 of 600 threatened people will be saved) or as a loss (400 of 600 threatened people will die), in the first case people tend to adopt a gain frame, generally leading to risk-aversion, and in the latter people tend to adopt a loss frame, generally leading to risk-seeking behavior.

Source: SFB 504

Frechet derivative

Informally: A derivative (slope) defined for mappings from one vector space to another.

The first e in Frechet should have an accent aigu.

Formally (this taken more or less directly from Tripathi, 1996):
Let T be a transformation defined on an open domain U in a normed space X and mapping to a range in a normed space Y.
(Does normed space mean normed vector space? Or might it not?)

Holding fixed an x in U and for each h in X, if a linear and continuous operator L (mapping from X to Y) exists such that:

lim (as ||h|| falls to 0) of ||T(x+h) - T(x) - L(h)|| / ||h|| = 0

Then the operator L, often denoted T'(x), is the Frechet derivative of T() and we can say T is Frechet differentiable at x. (Ed.: I believe any such L is unique.)

Source: econterms

Frechet differentiable

Informally: A possible property of mappings from one space to another. For such a transformation, a Frechet derivative may exist at each point and if so we say the transformation is Frechet differentiable at that point.

Properly the first e in Frechet should have an accent aigu.

See the entry at Frechet derivative for a formal definition.

Source: econterms

Freddie Mac

Shorthand for U.S. Federal Home Loan Mortgage Corporation.

Source: econterms

free cash flow

cash flow to a firm in excess of that required to fund all projects that have positive net present values when discounted at the relevant cost of capital.
Free cash flow can be a source of principal-agent conflict between shareholders and managers, since shareholders would probably want it paid out in some form to them, and managers might want to control it, e.g. to use it for unprofitable projects, for perquisites, to make acquisitions, to create jobs for friends and allies, and so forth. A possible partial solution to the conflict for the shareholders is for the company to have heavy debts on which frequent, heavy payments are due. Those payments keep the managers focused on delivering consistent revenues and clear out the extra cash.

Source: econterms

free entry condition

An assumption posited in a search and matching model of a market. The assumption is that there is no institutional constraint on firms entering the market (e.g. to hire workers). There is no fixed number of firms. The number of firms is determined in equilibrium, by the costs of starting up.

Source: econterms

free reserves

excess reserves minus borrowed reserves (Branson, p 353).

Source: econterms

free trader

Holder of the political point of view that the best policy is to allow free trade into one's own country.

Source: econterms

frequency function

The frequency function is the probability of drawing each particular value from a discrete distribution: p(x) = Pr(X=x). Here X is the random variable and x is one of its possible values.

Source: econterms

frictional unemployment

Unemployment that comes from people moving between jobs, careers, and locations. Contrast structural unemployment.

Source: econterms

Friedman rule

In a cash-in-advance model of a monetary system, the Friedman rule for monetary policy is to deflate so that it is not costly to those who have money to continue to hold it. Then the cash-in-advance constraint isn't binding on them.

Source: econterms

FTC

Abbreviation for the U.S. national Federal Trade Commission, which rules in some circumstances on some antitrust regulations. See also FTC Act.

Source: econterms

FTC Act

A 1914 U.S. law creating a regulatory body for antitrust, price discrimination, and regulation. Section five says "Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful."

Source: econterms

functional

a mapping from functions (e.g. paths) to the reals; for example, a value function defined on possible paths of choices

Source: econterms

functional equation

an equation where the unknown is a function. Example: a value function is the solution to the equation that sets the value function equal to the present discounted value of the current period's utility and the discounted value function of next period's state.

Source: econterms

fungible

"Being of such a nature or kind that one unit or part may be exchanged or substituted for another unit or equal part to discharge an obligation."
Examples: money or grain. Not examples: works of art.

Source: econterms

future-oriented

A future-oriented agent discounts the future lightly and so has a LOW discount rate, or equivalently a HIGH discount factor. See also present-oriented, discount rate, and discount factor.

Source: econterms

FWL theorem

Given a statistical model y = X1*b1 + X2*b2 + e
where
y is a vector of values of a dependent variable,
the X's are linearly independent matrices of predetermined variables, and
the e's are errors, we could premultiply the equation by M1 = I - X1(X1'X1)^(-1)X1', which projects vectors in the space spanned by X1 to zero, and run OLS on the resulting equation M1*y = M1*X2*b2 + M1*e,
and (the theorem says) we would get exactly the same estimate of b2 that OLS on the first equation would have given.
This use of premultiplying is used in the derivation of many estimators: notably IV estimators and FE estimators.
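A minimal numerical check of the theorem in Python with numpy (a sketch; the simulated data and coefficient values are illustrative only):

import numpy as np

rng = np.random.default_rng(0)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
y = X1 @ np.array([1.0, 2.0]) + X2 @ np.array([3.0, -1.0]) + rng.normal(size=n)

# OLS on the full equation; the last two coefficients estimate b2
b_full = np.linalg.lstsq(np.column_stack([X1, X2]), y, rcond=None)[0]

# Premultiply by M1 = I - X1(X1'X1)^(-1)X1' and run OLS on the transformed equation
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
b2_fwl = np.linalg.lstsq(M1 @ X2, M1 @ y, rcond=None)[0]

print(b_full[2:], b2_fwl)  # the two estimates of b2 agree (up to rounding)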

Source: econterms

G

game

A game is a model with (1) players who make (2) strategy (or action) choices in a (3) predefined time order, and then (4) receive payoffs, which are usually conceived of in money or utility terms. Classic games are the Prisoner's Dilemma, Matching Pennies, the Battle of the Sexes, the dictator game, the ultimatum game, the Bertrand game, and the Cournot game.

Source: econterms

Game theory

Theory of rational behavior for interactive decision problems. In a game, several agents strive to maximize their (expected) utility index by choosing particular courses of action, and each agent's final utility payoffs depend on the profile of courses of action chosen by all agents. The interactive situation, specified by the set of participants, the possible courses of action of each agent, and the set of all possible utility payoffs, is called a game; the agents 'playing' a game are called the players.

In degenerate games, the players' payoffs only depend on their own actions. For example, in competitive markets (competitive market equilibrium), it is enough that each player optimizes regardless of the behavior of other traders. As soon as a small number of agents is involved in an economic transaction, however, the payoffs to each of them depend on the other agents' actions. For example, in an oligopolistic industry or in a cartel, the price or the quantity set optimally by each firm depends crucially on the prices or quantities set by the competing firms. Similarly, in a market with a small number of traders, the equilibrium price depends on each trader's own actions as well as those of his fellow traders (see auctions).

Whenever an optimizing agent expects a reaction from other agents to his own actions, his payoff is determined by other players' actions as well, and he is playing a game. Game theory provides general methods of dealing with interactive optimization problems; its methods and concepts, particularly the notions of strategy and strategic equilibrium, find a vast number of applications throughout the social sciences (including biology). Although the word 'game' suggests peaceful and 'kind' behavior, most situations relevant in politics, psychology, biology, and economics involve rather strong conflicts of interest, competition, and cheating, apart from leaving room for cooperation or mutually beneficial actions.

Based on a model of optimizing agents that plan individually optimal courses of play, knowing that their opponents will do so as well, the basic objects of interest in strategic (or 'non-cooperative') game theory are the players' strategies. A player's strategy is a complete plan of actions to be taken when the game is actually played; it must be completely specified before the actual play of the game starts, and it prescribes the course of play for each decision that a player might be called upon to take, for each possible piece of information that the player may have at each time he might be called upon to act. A strategy may also include random moves. It is generally assumed that the players evaluate uncertain payoffs according to von Neumann-Morgenstern utility. In addition to the strategic branch of game theory, there is another branch that focuses on the interactions of groups of players that jointly strive to maximize their surplus. While this second branch represents the analysis of coalitional games, which centers around notions of 'coalitionally stable' payoff configurations, we focus here on strategic game theory (from which coalitional games are derived).

Given a strategic game, a profile of strategies results in a profile of (expected) utility payoffs. A certain payoff allocation, or a profile of final moves of the players is called an outcome of the game. An outcome is called an equilibrium outcome if no player can unilaterally improve the outcome (in terms of his own payoff) given that the other players stick to their equilibrium strategies. A profile of strategies is called a (strategic) equilibrium if, given that all players conform to the prescribed strategies, no player can gain from unilaterally switching to another strategy. Alternatively, a profile of strategies forms an equilibrium if the strategies form best responses to one another. (Unfortunately, it is impossible to describe what is an equilibrium other than in such a self-referential way. The best way to understand this definition is then to take it literally.) Only equilibrium outcomes are reasonable outcomes for games, because outside an equilibrium there is at least one player that can improve by playing according to another strategy. An implicit assumption of game theory is that the players, being rational, are able to reproduce any equilibrium calculations of anybody else. In particular, all the equilibrium strategies must be known to (as they are computed by) the players. Similarly, it is assumed that the whole structure of the game, in much the same way as the players' social context, is known by each player (and that this knowledge itself is known etc.)
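To make the equilibrium definition concrete, here is a minimal sketch in Python that enumerates the pure-strategy equilibria of a 2x2 game by checking the best-response property directly. The payoff matrices (a Prisoner's Dilemma) are illustrative only:

import numpy as np

# Payoffs for the row and column player; action 0 = cooperate, 1 = defect
A = np.array([[3, 0],
              [5, 1]])  # row player's payoffs
B = np.array([[3, 5],
              [0, 1]])  # column player's payoffs

def is_pure_nash(i, j):
    # Neither player can gain by a unilateral deviation
    return A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()

print([(i, j) for i in range(2) for j in range(2) if is_pure_nash(i, j)])
# [(1, 1)]: mutual defection is the unique pure-strategy equilibrium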

Source: SFB 504

game theory

See the Game theory entry.

Source: econterms

Game tree

Time structure of possible moves describing an extensive form game. A game tree is a set of nodes, some of which are linked by edges. A tree is a connected graph with no cycles. The first move of the game is identified with a distinguished node called the root of the tree. A play of the game consists of a connected chain of edges starting at the root of the tree and ending, if the game is finite, at a terminal node. The nodes in the tree represent the possible moves in the game. The edges leading away from a node represent the choices or actions available at that move. Each node other than a terminal node is assigned a player's name so that it is known who makes the choice at that move. Each terminal node must be labeled with the consequences for each player if the game ends in the outcome corresponding to that terminal node.
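A minimal sketch of how such a tree might be represented in code; the field names and the two-player example are hypothetical, chosen only to mirror the definition (players at decision nodes, actions as edges, payoffs at terminal nodes):

from dataclasses import dataclass, field

@dataclass
class Node:
    player: str = ""                              # mover at this node; empty if terminal
    actions: dict = field(default_factory=dict)   # action label -> child Node
    payoffs: tuple = ()                           # consequences, set at terminal nodes

# Player 1 moves first (L or R), then player 2 (l or r):
root = Node(player="1", actions={
    "L": Node(player="2", actions={"l": Node(payoffs=(2, 1)),
                                   "r": Node(payoffs=(0, 0))}),
    "R": Node(player="2", actions={"l": Node(payoffs=(1, 2)),
                                   "r": Node(payoffs=(3, 0))}),
})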

Source: SFB 504

gamma (of options)

As used with respect to options: The rate of change of the portfolio's delta with respect to the price of the underlying asset. Formally this is a partial derivative.

A portfolio is gamma-neutral if it has zero gamma.

Source: econterms

gamma distribution

A distribution relevant to, for example, waiting times. Its pdf is expressed in terms of the gamma function, written GAMMA(a) here. The distribution has parameters a>0 and b>0, and its support is x>0:

f(x) = x^(a-1) * e^(-x/b) / [GAMMA(a) * b^a]
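A quick numerical check of this pdf against scipy's implementation (a sketch; scipy's gamma distribution uses a as the shape and b as the scale, matching the parameterization above):

import numpy as np
from scipy.stats import gamma
from scipy.special import gamma as GAMMA

a, b, x = 2.0, 3.0, 4.0
by_formula = x**(a - 1) * np.exp(-x / b) / (GAMMA(a) * b**a)
by_scipy = gamma.pdf(x, a, scale=b)
print(by_formula, by_scipy)  # the two values coincide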

Source: econterms

gamma function


A function of a real a>0, defined as the integral over y from zero to infinity of y^(a-1) * e^(-y) dy. This integral is the gamma function of a, written GAMMA(a). The pdf of the gamma distribution is expressed in terms of the gamma function.

Source: econterms

GARCH

Generalized ARCH. The first paper may have been Bollerslev (1986), in the Journal of Econometrics.

Source: econterms

GARP

abbreviation for the Generalized Axioms of Revealed Preference.

Source: econterms

Gauss

A matrix programming language and programming environment. Made by Aptech.

Source: econterms

Gaussian

an adjective that describes a random variable, meaning it has a normal distribution.

Source: econterms

Gaussian kernel

The Gaussian kernel is the function (2*pi)^(-1/2) * exp(-u^2/2), where u = (x - xi)/h, h is the window width, the xi are the values of the independent variable in the data, and x is the value of the independent variable for which one seeks an estimate. Unlike most kernel functions this one is unbounded in x, so in theory every data point enters every estimate, although points more than about three standard deviations away make hardly any difference.
Used in kernel estimation.
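A minimal kernel density estimation sketch using this kernel; the simulated data and the window width h = 0.3 are illustrative:

import numpy as np

def gaussian_kernel(u):
    return (2 * np.pi) ** -0.5 * np.exp(-u**2 / 2)

def kernel_density(x, data, h):
    # Average the kernel over u = (x - xi)/h for each data point xi
    return gaussian_kernel((x - data) / h).mean() / h

data = np.random.default_rng(0).normal(size=500)
print(kernel_density(0.0, data, h=0.3))  # near 0.40, the N(0,1) density at zero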

Source: econterms

Gaussian white noise process

A white noise process with a normal distribution.

Source: econterms

GDP

Gross domestic product. For a region, the GDP is "the market value of all the goods and services produced by labor and property located in" the region, usually a country. It equals GNP minus the net inflow of labor and property incomes from abroad. -- Survey of Current Business

A key example helps. A Japanese-owned automobile factory in the US counts in US GDP but in Japanese GNP.

Source: econterms

GDP deflator

A measure of the cost of goods purchased by U.S. households, government, and industry. Differs conceptually from the CPI measure of inflation, but not by much in practice.

Source: econterms

GEB

An abbreviation for the journal Games and Economic Behavior.

Source: econterms

general equilibrium

A simultaneous equilibrium of all the markets in an economy: a set of prices such that, given optimizing behavior by all agents, every market clears at once. Contrast partial equilibrium analysis, which studies one market in isolation.

Source: econterms

generalized linear model

A model of the form y=g(b'x) where y is a vector of dependent variables, x is a column vector of independent variables, b' is a row vector of parameters (that is, b is not a function of x) and g() is a possibly random function called a link function.

Examples: linear regression (y = b'x + errors) and logistic regression (y = 1/(1 + e^(-b'x)) + errors).

An example that is not in the class of generalized linear models is: y=x1*x2.
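A sketch of the two link functions named above, in Python (simulated regressors; this illustrates the functional forms only, not an estimation procedure):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))   # rows of independent variables
b = np.array([0.5, -1.0])       # parameter vector

identity_link = x @ b                      # linear regression: y = b'x + errors
logistic_link = 1 / (1 + np.exp(-x @ b))   # logistic regression: y = 1/(1+e^(-b'x)) + errors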

Source: econterms

Generalized Method of Moments

See GMM.

Source: econterms

generalized Tobit

Synonym for Heckit.

Source: econterms

generalized Wiener process

A continuous-time random walk with a drift and random jumps at every point in time (roughly speaking). Algebraically, the increment

dx = a(x,t)*dt + b(x,t)*c*(dt)^(1/2)

describes a generalized Wiener process, where:
a and b are deterministic functions,
t is a continuous index for time,
x is a set of exogenous variables that may change with time,
dt is a differential in time, and
c is a random draw from a standard normal distribution at each instant.
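A discretized simulation sketch (Euler steps of length dt; a and b are held constant here for simplicity, although the definition allows them to vary with x and t):

import numpy as np

rng = np.random.default_rng(0)
a, b = 0.1, 0.3          # drift and diffusion coefficients (constants in this sketch)
dt, T = 0.01, 1.0
n = int(T / dt)

x = np.empty(n + 1)
x[0] = 0.0
for t in range(n):
    c = rng.standard_normal()                    # standard normal draw each instant
    x[t + 1] = x[t] + a * dt + b * c * dt**0.5   # the increment defined above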

Source: econterms

generator function

In a dynamical system, the generator function maps the old state N(t) into the new state N(t+1); e.g. N(t+1) = F(N(t)).
A steady state is an N* such that F(N*) = N*.

Source: econterms

geometric mean

The geometric mean is a kind of average of a set of numbers, different from the arithmetic average. It is well defined only for sets of positive real numbers. The geometric mean of A and B is the square root of (A*B); the geometric mean of A, B, and C is the cube root of (A*B*C); and so forth. Contrast the arithmetic means, which are (1/2)*(A+B) and (1/3)*(A+B+C).

Source: econterms

GEV

Abbreviation for the Generalized Extreme Value distribution. The difference between two draws of GEV type 1 variables has a logistic distribution, which is why a GEV distribution for errors is assumed in certain binary econometric models.

Source: econterms

GGH preferences

Refers to a paper by Greenwood, Hercowitz, and Huffman (1988) in which utility, across agents and across time, is given by:

u(C_it, N_it) = C_it - a*N_it^b

where a>0 and b>1 are constants, and C_it and N_it stand for consumption and hours worked by each agent i at date t.
-- This utility function has Gorman form and so it aggregates.
-- It has been successful at matching cross-section data relative to other commonly used functional forms.

Source: econterms

Gibbs sampler

A way to generate empirical distributions of two variables from a model. Say the model defines probability distributions F(X|Y) and G(Y|X). Then start with a random set of possible X's, draw Y's from G(), then use those Y's to draw X's, and so on indefinitely. Keep track of the X's and Y's seen, and this will give samples enough to find the unconditional distributions of X and Y.
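A sketch for the standard bivariate normal with correlation rho, where both conditional distributions are known normals (a textbook special case, not from the original entry):

import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
x, y = 0.0, 0.0
xs, ys = [], []
for _ in range(10_000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw X | Y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw Y | X
    xs.append(x)
    ys.append(y)

print(np.corrcoef(xs, ys)[0, 1])  # approaches rho as the number of draws grows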

Source: econterms

Gibrat's law

A descriptive relationship between size and growth -- that the size of units and their growth percentage statistics are statistically independent. Sometimes Gibrat's law is thought to apply to large firms, and sometimes to cities (Gabaix, May 1999 American Economic Review, page 130).

Source: econterms

Gini coefficient

A number between zero and one that is a measure of inequality. An example is the concentration of suppliers in a market or industry.

The Gini coefficient is the ratio of the area between the diagonal and the Lorenz curve to the whole area under the diagonal, on a graph of the Lorenz curve (that area under the diagonal is 5000 if both axes have percentage units). The meaning of the Gini coefficient: if the suppliers in a market have near-equal market shares, the Gini coefficient is near zero. If most of the suppliers have very low market share but one or a few suppliers provide most of the market share, then the Gini coefficient is near one.

In labor economics, inequality of the wage distribution can be discussed in terms of a Gini coefficient, where the wages of subgroups are fractions of the total wage bill.
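A sketch of the computation from a list of shares, via the Lorenz-curve areas described above (shares expressed as fractions; the function name is illustrative):

import numpy as np

def gini(shares):
    s = np.sort(np.asarray(shares, dtype=float))             # ascending shares
    lorenz = np.concatenate([[0.0], np.cumsum(s) / s.sum()])
    p = np.linspace(0, 1, len(lorenz))
    gap_area = 0.5 - np.trapz(lorenz, p)                     # between diagonal and Lorenz curve
    return gap_area / 0.5

print(gini([0.25, 0.25, 0.25, 0.25]))  # 0.0: equal shares
print(gini([0.97, 0.01, 0.01, 0.01]))  # 0.72: highly concentrated (max 0.75 with four units)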

Source: econterms

Glass-Steagall Act

A 1933 United States national law separating investment banking and commercial banking firms. Also prohibited banks from owning corporate stock. It was designed to confront the problem that banks in the Great Depression collapsed because they held a lot of stock.

Source: econterms

GLS

Generalized Least Squares. A generalization of the OLS procedure that makes an efficient linear regression estimate of a parameter from a sample in which the disturbances are heteroskedastic. That is, in

y = Xb + e    (equation 1)

the e's vary in magnitude with the X's.
The estimator of b is:

b = (X'O^(-1)X)^(-1) X'O^(-1) y    (equation 2)

where O, standing for omega, is the covariance matrix of the errors. (As the estimator shows, the covariance matrix is assumed to be invertible.)
The derivation is to multiply equation 1 through by the square root of the inverse of the covariance matrix, which is assumed to be known (if it is estimated, the procedure is called FGLS, for feasible GLS), and then to run OLS on the resulting equation.
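A sketch of the estimator in numpy, with omega assumed known (here diagonal, from simulated heteroskedastic data; the coefficient values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
var = np.exp(X[:, 1])                        # error variance grows with the regressor
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * np.sqrt(var)

omega_inv = np.diag(1.0 / var)               # inverse of the (diagonal) covariance matrix
b_gls = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print(b_gls)                                 # close to the true (1, 2)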

Source: econterms

GMM

Stands for Generalized Method of Moments, an econometric framework of Hansen, 1982. It is an approach to estimating parameters in an economic model from data. Used often to figure out what standard errors on parameter estimates should be.

Source: econterms

GNP

Gross national product. The GNP is the market value of all the goods and services produced by labor and property belonging to the region, usually a country. It equals GDP plus the net inflow of labor and property incomes from abroad. A Japanese-owned automobile factory in the US counts in US GDP but in Japanese GNP.

Source: econterms

Golden Rule capital rate

f'(k*)=(1+n) where k* is optimal capital stock, f() is the aggregate production function, and n is population growth rate. f(k)-k is consumed by the population. 'Golden Rule' may refer to a Solow fairy tale.

Source: econterms

good

A good is a desired commodity.

Source: econterms

goodwill

The accounting term to describe the premium that acquiring companies pay over the book value of the firm being acquired. Goodwill can include value for R&D and trademarks.

Source: econterms

Gordon model

Of a stock price. From Myron J. Gordon (1962). This model is sometimes used as a baseline for comparison or for intuition.
Assume a constant rate of return r and a constant dividend growth rate g. Define Pt to be the price of the stock in period t, and Dt to be its dividend in period t. The implication is that the price of the stock is Pt = Dt/(r-g).

Source: econterms

Gorman form

A utility function or indirect utility function is in Gorman form if it is affine with respect to some argument. Which argument should be clear from context. E.g.:
Ui(xi, z) = A(z)xi + Bi(z)
Here the utility Ui for individual i is affine in argument xi. A critical implication is that the sum of Gorman form utility functions for individuals is a well-defined aggregate utility function under some conditions....

Source: econterms

government failure

A situation, usually discussed in a model not in the real world, in which the behavior of optimizing agents in a market with a government would not produce a Pareto optimal allocation. The point is not that a particular government had, or would have, failed at something, but that the problem abstractly put cannot be perfectly solved by the government. The most common source of government failures in models is private information among the agents.

Source: econterms

Granger causality

Informally, if one time series helps predict another, we say it Granger-causes the other. The original definition, for linear predictors, is in Granger (1980). From Sargent: a stochastic process z_t is said NOT to Granger-cause a random process x_t if E(x_{t+1} | x_t, x_{t-1}, ..., z_t, z_{t-1}, ...) = E(x_{t+1} | x_t, x_{t-1}, ...). See also J. Pehkonen, Applied Economics, 1991, 23, 1559-1568, p. 1560. Expert treatment of this subject, and more formal, less ambiguous definitions, are in Chamberlain, Econometrica, May 1982.
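In practice the definition is often checked by regressing x on its own lags, with and without lags of z; statsmodels packages such tests. A sketch on simulated data in which z leads x (the variable names and coefficients are illustrative):

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)
x = np.concatenate([[0.0], 0.8 * z[:-1]]) + 0.1 * rng.normal(size=n)

# Column order (x, z): tests whether the second column Granger-causes the first
grangercausalitytests(np.column_stack([x, z]), maxlag=2)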

Source: econterms

Grenander conditions

Conditions on the regressors under which the OLS estimator will be consistent.

The Grenander conditions are weaker than the assumption on the regressor X that the limit of (X'X)/n as n goes to infinity is a fixed positive definite matrix, which is a common starting assumption.

See Greene, 2nd ed, 1993, p 295.

Source: econterms

Gresham's Law

Some version of "Bad money will drive out good." I think the context is that if there are two suppliers of the same money (e.g. if one of them is a counterfeiter), or of two monies with a fixed exchange rate between them (per Hayek, Denationalization of Money, 1976, p. 39), there will be a tendency toward overproduction, and the actual money stock will come to be made up of the bad, or less valuable, one. (Another situation: if one supplier makes coins that are 90% gold and the other has the option of making coins with less gold, Bertrand competition for coins would drive the gold fraction down over time.)

Source: econterms

GSOEP

German Socio-Economic Panel. A German government database going back to at least 1984.

Source: econterms

H

H index

Stands for Herfindahl-Hirschman index, which is a way of measuring the concentration of market share held by particular suppliers in a market. It is the sum of squares of the percentages of the market shares held by the firms in a market. If there is a monopoly -- one firm with all sales, the H index is 10000. If there is perfect competition, with an infinite number of firms with near-zero market share each, the H index is approximately zero. Other industry structures will have H indices between zero and 10000.
Tirole's version is bounded between zero and one because each market share is expressed there as a fraction between zero and one.
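A sketch of the computation on percentage shares (Tirole's variant would use fractional shares instead):

def h_index(shares_in_percent):
    # Sum of squared percentage market shares
    return sum(s**2 for s in shares_in_percent)

print(h_index([100]))             # 10000: monopoly
print(h_index([25, 25, 25, 25]))  # 2500: four equal firms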

Source: econterms

Habakkuk thesis

That high wages and labor scarcity stimulated technological progress in the U.S. in the 1800s, and in particular brought about the American system of manufacturing based on interchangeable parts. (This description from Mokyr, 1990; idea from Habakkuk, 1962).

Source: econterms

Habit

Generally, habits are conceptualized as learned sequences of acts that have become automatic responses to specific situations, and that may be functional in achieving a given result or obtaining certain goals or end states (e.g. James, 1890; Triandis, 1977; Watson, 1914). Habits thus comprise a goal-directed type of automaticity; they are instigated by a specific goal-directed state of mind in the presence of triggering stimulus cues, for instance taking the car to travel to the supermarket. Once evoked, the behavior runs to completion without the need for attentional control of the process. Habit strength is proposed to increase as a result of repeated positive reinforcement.

Source: SFB 504

Hahn problem

Hahn's (1965) question: when does there exist an equilibrium in a model in which money has positive value?

Source: econterms

Hansen's J test

See J statistic

Source: econterms

Harrod-neutral

A synonym for labor-augmenting, in practice.

Source: econterms

Hausman test

Given a model and data in which fixed effects estimation would be appropriate, a Hausman test tests whether random effects estimation would be almost as good. In a fixed-effects kind of case, the Hausman test is a test of H0: that random effects estimation would be consistent and efficient, versus H1: that random effects estimation would be inconsistent. (Note that fixed effects estimation would certainly be consistent.) The test statistic is a quadratic form in the difference between the two estimated coefficient vectors, which have dimension k = dim(b); the statistic is distributed chi-square(k) under H0. So if the Hausman statistic is large, one must use FE; if the statistic is small, one may get away with RE.
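In standard notation the statistic takes the quadratic-form shape below (a sketch; b_FE and b_RE denote the fixed- and random-effects coefficient estimates and V-hat their estimated covariance matrices):

H = (\hat{b}_{FE} - \hat{b}_{RE})' \, [\hat{V}(\hat{b}_{FE}) - \hat{V}(\hat{b}_{RE})]^{-1} \, (\hat{b}_{FE} - \hat{b}_{RE}) \sim \chi^2(k)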

Source: econterms

hazard rate

escape rate; rate of transition out of current state

Source: econterms

Heaviside function

Is a mapping from the real line to {0, 1}, denoted (at least sometimes) hv(x). hv(x) is zero for x<0, and is one for x>=0.

Source: econterms

Heckit

An occasional name for generalized Tobit. This approach allows a different set of explanatory variables to predict the binary choice from those which predict the continuous choice. (The data environment is one in which the continuous choice is measured only when the binary choice is nonzero -- e.g., if we have data on people, whether they bought a car, and how expensive it was, we can estimate a statistical model of how expensive a car other people would buy, but only on the basis of the ones who did buy a car in the data sample.) A regular, non-generalized Tobit constrains the two sets of variables to be the same, and the signs of their effects to be the same in the two estimated equations. 'Heck' is for James Heckman.

-- Christopher Baum, Boston College economics department, 20 May 2000, in a broadcast to the statalist, the email list of people interested in the software Stata.

Source: econterms

Heckman two-step estimation

A way of estimating treatment effects when the treated sample is self-selected and so the effects of the treatment are confounded with the population that chose it because they expected it would help -- the classic example is that college educations may be selected by those most likely to benefit.

Taking that example, we wish to advance past the following regression:
w_i = a + b*X_i + d*C_i + e_i

where i indexes people, w_i is the wage (or other outcome variable) for agent i, X_i are variables predicting i's wage, and C_i is 1 if i went to college and 0 if not. e_i is the remaining error after least squares estimation of a, b, and d. The two-step procedure advances past this regression by first estimating a model of the selection itself (e.g. a probit predicting C_i), and then including a correction term derived from that first stage (the inverse Mills ratio) as an extra regressor in the outcome equation, so that the estimate of d is purged of selection bias.

Source: econterms

Heckscher-Ohlin model

A model of the effects of international trade. "The Heckscher-Ohlin framework typically is presented as a two-country, two-good, two-factor model. The two countries are assumed to share identical, homothetic tastes for the two substitutable goods and identical, constant-returns-to-scale technologies with some factor substitutability. Perfect competition prevails in each market with zero transport costs and no artificial barriers to international trade in goods, although factors are internationally immobile. In this framework, each country will (incompletely) specialize in production and export the good using intensively in production the factor that the country has in relative abundance." That effect is called factor-price equalization across countries, and is used sometimes to explain how rising international trade would lead to greater income inequality in the most developed countries. (from Bergstrand, Cosimano, Houck, and Sheehan, 1994, p 3)
The reference in the name is to "Scandinavian economists Eli Heckscher and Bertil Ohlin early in [the twentieth century]" in work that is rarely cited directly. (from Bluestone, 1994, p 336).

Source: econterms

Hedge strategies

A hedge strategy is the intentional reduction of the loss risk (downside risk) of an underlying asset, at the cost of giving up some of the chance of gain. Hedge strategies are usually implemented with derivative securities, e.g. options.

Source: SFB 504

hedonic

of or relating to utility. (Literally, pleasure-related.) A hedonic econometric model is one where the independent variables are related to quality; e.g. the quality of a product that one might buy or the quality of a job one might take.

A hedonic model of wages might correspond to the idea that there are compensating differentials -- that workers would get higher wages for jobs that were more unpleasant.

"A product that meets several needs, or has a variety of features ... generates a number of hedonic services. Each one of these services can be thought of as generating its own demand, along with a resulting hedonic price. Although each separate component is not observable, the aggregation of all the components results in the observed product demand and equilibrium price.... [Q]uality improvements will appear to an observer as an outward shift of the product demand curve, as consumers are willing to purchase more at the prevailing price." -- William J. White, "A Hedonic Index of Farm Tractor Prices: 1910-1955", Ohio State University working paper, October 1998, pp. 3-4.

Source: econterms

Herfindahl-Hirschman index

See 'H index'.

Source: econterms

Hermite polynomials

The Hermite polynomials are a series of polynomials defined for each natural number r, used for statistical approximations I believe.

Source: econterms

Hessian

The matrix of second derivatives of a multivariate function; that is, the gradient of the gradient of a function. Properties of the Hessian matrix at an optimum of a differentiable function are relevant in many places in economics. For example, in maximum likelihood estimation, the information matrix is (-1) times the expected Hessian of the log-likelihood.

Source: econterms

heterogeneous process

A stochastic process is heterogeneous if it is not identically distributed every period.

Source: econterms

heteroscedastic

An alternate spelling of heteroskedastic. McCulloch (1985) argues that the spelling with the k is preferred, on the basis of the pronunciation and etymology (Greek not French derivation) of the term.

Source: econterms

heteroskedastic

An adjective describing a data sample or data-generating process in which the errors are drawn from different distributions for different values of the independent variables.
Most commonly this takes the form of changes in variance with the magnitude of X. That is, in
y = Xb + e
that the e's vary in magnitude with the X's. (An example is that variance of income across individuals is systematically higher for higher income individuals.)
If the errors are drawn from different distributions, or if higher moments of the error distributions vary systematically, these are also forms of heteroskedasticity.

Source: econterms

Heuristic

A heuristic is a strategy that can be applied to a variety of problems and that usually, but not always, yields a correct solution. People often use heuristics (or shortcuts) that reduce complex problem solving to simpler judgmental operations. Three of the most popular heuristics, discussed by Tversky and Kahneman (1974), are representativeness, availability, and anchoring and adjustment.

Source: SFB 504

Hicks-Kaldor criterion

For whether a cost-benefit analysis supports a public project. The criterion is that the gainers from the project could in principle compensate the losers. That is, that total gains from the project exceed the losses. The criterion does not go so far as the Pareto criterion, according to which the gainers would in fact have to compensate the losers.

Source: econterms

Hicks-neutral

An attribute of an effectiveness variable in a production function. The attribute is that it does not affect labor differently from the way it affects capital.

The canonical example is the Solow model production function Y=AF(K,L). There Y is output, L labor, K capital, F a production function, and A some kind of effectiveness variable. In Y=F(AK,L) the effectiveness variable affects capital but not labor; in Y=F(K,AL) it affects labor but not capital. These two cases can be described as Hicks-biased. In Y=AF(K,L) it is Hicks-neutral.

Source: econterms

Hicks-neutral technical change

Given a production function AF(K,L) changes in A are Hicks-neutral, meaning that they do not affect the optimal choice of K or L. The subject comes up in practice only for aggregate production functions.

Uzawa, H., 'Neutral Inventions and the Stability of Growth Equilibrium,' The Review of Economic Studies 28:2 (Feb. 1961), 117-124, contains the first known published use of the adjective 'Harrod-neutral'. According to it, the criterion of Harrod-neutrality comes from:

Harrod, Roy F., 'Review of Joan Robinson's Essays in the Theory of Employment,' Economic Journal, vol. 47 (1937), 326-330.

Uzawa also proves that AF(K,L) and F(K,AL) are the right functional forms to meet Hicks and Harrod-neutrality, and that only the Cobb-Douglas form accomplishes both.

Source: econterms

Hicksian demand function

h(p,u) -- the amount of a good demanded by a consumer, given that it costs p per unit and that the consumer attains utility u from all goods. h(p,u) is the cost-minimizing amount.

Source: econterms

High School and Beyond

A panel data set on U.S. high school students.

Source: econterms

high-powered money

reserves plus currency

Source: econterms

Hilbert space

A complete inner-product space, with the norm induced by the inner product; so the Hilbert spaces are also Banach spaces. L2 is an example of a Hilbert space. Any R^n with n finite is another.

Source: econterms

Hindsight bias

It is a common observation that events in the past appear simple, comprehensible, and predictable in comparison to events in the future. Everyone has had the experience of believing that they knew all along the outcome of a football game, a political election or a business investment. The hindsight bias is the tendency for people with outcome knowledge to believe falsely that they would have predicted the reported outcome of an event. After learning of the occurrence of an event, people tend to exaggerate the extent to which they had foreseen the likelihood of its occurrence.

Source: SFB 504

Hindsight bias biased reconstruction

Contemporary models suggest that the hindsight bias arises from reconstructive processes when people try to regenerate their original estimates.
One model is loosely based on the response bias hypothesis, originally developed in eyewitness-testimony research: people are assumed either to have remembered or to have forgotten their original judgements. Those who do remember their original estimates are likely to reproduce them. Those who have forgotten them are forced to guess and, in the presence of outcome information, are likely to use this information as an anchor, assuming that their estimates must have been somewhere in the proximity of the true outcome. But since people are generally optimistic about their capacities, they will locate their presumed prior estimates closer to the real outcome than they actually were, resulting in the hindsight bias.
Other authors concentrate their assumptions on three different stages in the reconstructive process: selective retrieval, prejudiced interpretation, and weighing of different cues.
Finally, the hindsight distortion may be triggered by the heuristic of anchoring and adjustment.

Source: SFB 504

Hindsight bias implications for further research

Hindsight bias is a strong phenomenon which has been observed in many circumstances and appears to influence every-day decision-making. Researchers are trying to develop hindsight bias models that provide a better theoretical explanation. Another aspect is practical relevance: real-life cases need to be examined for evidence of the distortion's influence.

Source: SFB 504

Hindsight bias memory impairment

Fischhoff's original explanation of the hindsight bias, the Immediate Assimilation Hypothesis, states that memory for original predictions is altered by subsequent outcome knowledge (Fischhoff, 1975). When learning about the actual or alleged outcome, the person re-interprets the original evidence in the light of the outcome, thereby inadvertently modifying what had been previously stored in memory. Subsequent outcome knowledge is integrated immediately into the existing knowledge structure. This results in a permanent modification of the person's prior representation of the event. Other variations of the memory impairment hypothesis suggest that the origins of hindsight biases lie in the retrieval stage. The Selective Retrieval Hypothesis maintains that the known outcome serves as a retrieval cue for relevant case material. Once an outcome has been learned, information congruent with this outcome will become highly accessible; incongruent information cannot be retrieved with the same ease. The authors of the Dual Memory Traces Model (Hell, Gigerenzer, Gauggel, Mall & Müller, 1988) suggested an extension of Fischhoff's model. They assume two separate memory traces for own judgements and subsequent outcome information. The strength of hindsight biases is determined by the relative strength of the memory traces.

Source: SFB 504

Hindsight bias motivational explanations

Motivational explanations suggest that judgement and decision processes are not only affected by rational cognitions. They are also influenced by actual needs and motives, including the need for control, need for cognition, self-relevance and, most importantly, self-presentation concerns. In the latter approach, people are motivated to make others believe that their predictions were close to the actual outcome, in an attempt to maintain a high level of public self-esteem. Contrary to the memory impairment hypothesis, this explanation interprets hindsight distortions as adjustments during the response generation stage. Several authors have shown that empirical evidence for motivational underpinnings of the knew-it-all-along effect is rather weak.

Source: SFB 504

Hindsight bias practical relevance

Every-day life provides numerous examples of the practical relevance of hindsight bias. This was shown in a medical context: a GP's second opinion does not differ completely from another GP's opinion if he/she is aware of the first opinion. This seems to be of serious consequence if one considers that a second opinion is only required when serious illnesses have been diagnosed.

In a legal context, hindsight bias was found to occur when a jury makes a final decision in court. In the course of a trial, the judge is empowered to order the jury to ignore certain testimonials, by disallowing them. It has been proven impossible to ignore such information.

Hindsight bias also plays a role in an economic context. An economic expert may, for example, analyze certain share activity as if he/she had always known what would happen. This results in them forming a higher opinion of their own judgement, which in turn can have a long-term, restrictive effect on their individual learning capability as well as on future decision-making. In addition to this intra-personal effect, it is possible for inter-personal effects to arise. Example: a supervisor may no longer be able to make an undistorted judgement of his employees' decision-making if he/she has received information about some results of their performance. This is a special problem in the case of a poor outcome, because the poor outcome could have occurred despite the employees having acted correctly on the information they were given at the time.

Source: SFB 504

Hindsight bias response bias hypothesis

This model was originally conceived during eyewitness-testimony research, to account for the fact that witnesses receiving misleading information about a previously observed event show a poorer memory for it. Similar to the hindsight bias, the misleading information effect was originally attributed to memory impairment.

Subsequent interpretations offered the following explanation: Rather than altering existing memory traces, the new information may be used as a reference point by those who cannot remember the original information and therefore need to guess. Misleading information does not alter the original representation, but simply serves as an anchor to perceivers who are unable to retrieve it. Parallels between the hindsight bias and the misleading information paradigms are obvious. Both lines of research query whether information stored in the memory might be less accessible after being confronted with inconsistent new information. The experimental designs of both research traditions show strong similarities. Important differences: In the misleading information paradigm, the original information is presented by the experimenter. In hindsight bias studies however, the original estimates are generated by the subjects. Misleading information is presented unobtrusively without the subjects being aware of its misleading nature. Outcome information given in hindsight studies is explicitly labeled as the correct information.

Source: SFB 504

Hindsight bias theoretical and empirical work

More recent studies have come up with models that enable precise forecasting of the strength and direction of the hindsight distortion, using a quantitative basis. Researchers are also attempting to determine how far hindsight decisions affect a person's trust in their own judgements. They are also trying to determine how far such decisions affect a person's impression of their competency in future decision-making on the same subject. Another interesting point to be examined is the relationship between hindsight bias and the attribution of responsibility.

Source: SFB 504

Hindsight bias theoretical explanations

Although there is a wealth of literature on hindsight distortions, the underlying mechanisms are not yet fully understood. Attempts of explanation were given in three major theoretical areas: Models of memory impairment state that outcome knowledge affects memory for previous judgements by either altering or erasing existing memory traces, or by rendering them less accessible. Other attempts to explain the hindsight bias are based on assumptions that these distortions are driven by motivations. An alternative explanation is that people use distorting heuristics while reconstructing their original judgements.

Source: SFB 504

history

The subject of economic history is anything in history that is subject to economic explanations. Application of formal theory or statistical analysis of data may be relevant, although it is possible to make a contribution without either, e.g. with a case study or a contextual reinterpretation. Historians tend to be focused on what happened, how, and why, not on the question of whether a model fits the evidence.

Source: econterms

HLM

Statistical software for Hierarchical Linear Modeling, from Scientific Software International.

Source: econterms

Hodrick-Prescott filter

Algorithm for choosing smoothed values for a time series. The H-P filter chooses smooth values {s_t} for the series {x_t} of T elements (t = 1 to T) that solve the following minimization problem:

min over {s_t} of:  sum for t=1..T of (x_t - s_t)^2  +  lambda * sum for t=2..T-1 of [(s_{t+1} - s_t) - (s_t - s_{t-1})]^2

The parameter lambda > 0 is the penalty on variation, where variation is measured by the average squared second difference. A larger value of lambda makes the resulting {s_t} series smoother, with less high-frequency noise. The commonly applied value of lambda is 1600. For the study of business cycles one uses not the smoothed series but the jagged series of residuals from it. See Cooley, 1995, pp. 27-29. H-P filtered data shows less fluctuation than first-differenced data, since the H-P filter pays less attention to high-frequency movements. H-P filtered data also shows more serial correlation than first-differenced data. For lambda=1600: "if the series were stationary, then [this choice] would eliminate fluctuations at frequencies lower than about thirty-two quarters, or eight years."
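statsmodels implements the filter; a sketch on an artificial series (lamb is the penalty parameter called lambda above):

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))    # an artificial quarterly series
cycle, trend = hpfilter(x, lamb=1600)  # 'cycle' is the jagged residual series
# For business-cycle work one studies 'cycle', not the smooth 'trend'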

Source: econterms

hold-up problem

One of a certain class of contracting problems.

Imagine a situation where there is profit to be made if agents A and B work together, so they consider an agreement to do so after A buys the necessary equipment. The hold-up problem (in this context) is that A might not be willing to enter that agreement, even though the outcome would be Pareto efficient, because after A has made the investment, B might demand a larger share of the profits than before: A is now deeply invested in the project but B is not, so B has bargaining power that wasn't there before the investment. B could in fact demand all of the profits, since A's alternative is to lose the investment entirely.

Other hold-up problems are analogous to this one.

Source: econterms

Holder continuous

An attribute of a function g: R^d -> R. g is said to be Holder continuous if there exist constants C and E, with 0 <= E <= 1, such that for all u and v in R^d:
|g(u) - g(v)| <= C * ||u - v||^E

If g is Holder continuous with exponent E=1 then it is Lipschitz continuous; and any Holder continuous function is continuous.

Source: econterms

homoscedastic

An alternate spelling of homoskedastic. McCulloch (1985) argues that the spelling with the k is preferred, on the basis of the pronunciation and etymology (Greek not French derivation) of the term.

Source: econterms

homoskedastic

An adjective describing a statistical model in which the errors are drawn from the same distribution for all values of the independent variables. Contrast heteroskedastic.
This is a strong assumption, and includes in particular the assumption in a linear regression, for example,
y = Xb + e
that the variance of the e's is the same for all X's.

(The observed variance will differ in almost any sample. But if one believes that the data-generating process does not in principle have greater variances for different values of the independent variable, one would describe the sample as homoskedastic anyway.)

Source: econterms

homothetic

Let u(x) be a function homogeneous of degree one in x. Let g(y) be a function of one argument that is monotonically increasing in y. Then g(u(x)) is a homothetic function of x.

So a function is homothetic if it can be decomposed into an inner function that is homogeneous of degree one and an outer function that is monotonically increasing in its argument.

In consumer theory there are some useful analytic results that come from studying homothetic utility functions of consumption.

Source: econterms

Household behavior

In traditional microeconomics, household behavior is understood narrowly as the theory of consumer demand for commodities, i.e., household consumption. There are, however, other aspects of household behavior that have also been investigated in the microeconomics literature, such as the household's supply of labor, the production of commodities (mainly services) within the household (household production), saving decisions, retirement decisions, and many more.

Source: SFB 504

HRS

Health and Retirement Study, a longitudinal panel of older Americans studied by the Survey Research Center at the University of Michigan. Their Web site is at http://www.umich.edu/~hrswww.

Source: econterms

HSB

High School and Beyond, a panel data set on U.S. high school students.

Source: econterms

Huber standard errors

Same as Huber-White standard errors.

Source: econterms

Huber-White standard errors

Standard errors which have been adjusted for specified assumed-and-estimated correlations of error terms across observations.

The implicit citations are to Huber, 1967, White, 1980, and White, 1982.

Source: econterms

human capital

The attributes of a person that are productive in some economic context. Often refers to formal educational attainment, with the implication that education is investment whose returns are in the form of wage, salary, or other compensation. These are normally measured and conceived of as private returns to the individual but can also be social returns.

"'Human capital' was invented by the economist Theodore Schultz in 1960 to refer to all those human capacities, developed by education, that can be used productively -- the capacity to deal in abstractions, to recognize and adhere to rules, to use language at a high level. Human capital, like other forms of capital, accumulates over generations; it is a thing that parents 'give' to their children through their upbringing, and that children then successfully deploy in school, allowing them to bequeath more human capital to their own children." -- Traub (2000)

Source: econterms

hyperbolic discounting

A way of accounting in a model for the difference in the preferences an agent has over consumption now versus consumption in the future.

For a and g scalar real parameters greater than zero, under hyperbolic discounting events t periods in the future are discounted by the factor (1 + a*t)^(-g/a).

That expression describes the "class of generalized hyperbolas". This formulation comes from a 1999 working paper of C. Harris and D. Laibson, which cites Ainslie (1992) and Loewenstein and Prelec (1992).

In dynamic models it is common to use the more convenient assumption that agents have a common discount rate applying for any t-period forecast, starting now or starting in the future. Hyperbolic discounting is less convenient but fits the psychological evidence better, and when contrasted to the constant-discount-rate assumption can get models to fit the noticeable fall in consumption that U.S. workers are observed to experience when they retire. In a constant-discount-rate model the worker would usually have forecast the fall in income and their consumption expenses would be smooth.

One reason hyperbolic preferences are less convenient in a model is not only that there are more parameters but that the agent's decisions are not time-consistent as they are with a constant discount rate. That is, when planning for time two (two periods ahead) the agent might prepare for what looks like the optimal consumption path as seen from time zero; but at time two his preferences would be different.

Contrast quasi-hyperbolic discounting.
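A sketch comparing hyperbolic and constant-rate discount factors over twenty periods; the parameter values (a=4, g=1, and the 0.9 constant factor) are illustrative only:

import numpy as np

a, g = 4.0, 1.0
t = np.arange(21)
hyperbolic = (1 + a * t) ** (-g / a)   # the discount factor defined above
exponential = 0.9 ** t                 # a constant-discount-factor benchmark
# Hyperbolic discounting falls quickly at short horizons, slowly at long ones
print(np.round(hyperbolic[:5], 3))
print(np.round(exponential[:5], 3))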

Source: econterms

hysteresis

a hypothesized property of unemployment rates -- that there is a ratcheting effect, so a short-term rise in unemployment rates tends to persist.
Theories that would lead to hysteresis:
-- an insider/outsider model of decisionmaking about employment; insiders such as the unionized workers ratchet up wage rates beyond where it is profitable to hire the unemployed; outsiders who are unemployed don't get to be part of the negotiation process.
-- behavioral and human capital changes among the unemployed, such as forgetting the details of work or work behavior, or losing interest or skill in getting new jobs, could lead to declining chances of becoming employed.

Source: econterms

I

IARA

increasing absolute risk aversion

Source: econterms

IC constraint

IC stands for "incentive compatible".
When solving a principal-agent maximization problem for a contract that meets various criteria, the IC constraints are those that require agents to prefer to act in accordance with the solution. If the IC constraint were not imposed, the solution to the problem might be economically meaningless, insofar as it produced an outcome that met some criterion of optimality but which an agent would choose not to act in accord with.
See also IR constraint.

Source: econterms

ICAPM

Intertemporal CAPM. From Merton, 1973.

Source: econterms

idempotent

A matrix M is idempotent if MM=M. (M times M equals M.)
Example: the identity matrix, denoted I.

Source: econterms

identification

A parameter in a model is identified if and only if complete knowledge of the joint distribution of the observed variables gives enough information to calculate the parameter exactly.

If the model has been written in such a way that its parameters can be consistently estimated from the observables, then the parameters are identified. There exist cases (mostly obscure) where parameters are identified but consistent estimators are not possible. (See, e.g. Gabrielsen, 1978)
A model is identified if there is no observationally equivalent model. That is, potentially observable random variables in the model have different distributions for different values of the parameter.

Formally:
Let h* be a vector of unknown functions and distributions in an econometric model.
Let H denote a set to which h* is known to belong. H is defined by the model's restrictions.
Let P(h) denote the joint distribution of observable variables of the model for the various elements h of H. The distribution for the actual data will be assumed to be P(h*).
Now, the vector h* is identified within H if for all h in H such that h ≠ h*, it is true that P(h) ≠ P(h*).
Note: Linear models are either globally identified or there are an infinite number of observably equivalent ones. But for models that are nonlinear in parameters, "we can only talk about local properties." Thus the idea of locally identified models, which can be distinguished in data from any other 'close by' model.

"An identification problem occurs when a specifed set of assumptions combined with unlimited observations drawn by a specified sampling process does not reveal a distribution of interest." -- Manski, Charles F. "Identification problems and decisions under ambiguity: empirical analysis of treatment response and normative analysis of treatment choice" Northwestern University Department of Economics and Institute for Policy Research, September 1998, p. 2

Source: econterms

Identification

Charles Manski gave a brilliant example of what identification is all about:

Suppose that you observe the almost simultaneous movement of a man and his image in a mirror. Does the mirror image cause the man's movements or reflect them? If you don't understand something of optics and human behavior, you will not be able to tell. (Manski, 1995, p. 1)

Methodological research in the social sciences uses statistical theory. The empirical problem is to infer some feature of a population described by a probability distribution. The data available to the researcher (be it field or experimental data) are observations extracted from the population by some sampling process. In this framework, the statistical and identification problems can be separated:


Identification is about the conclusions that could be drawn if one could use the sampling process to obtain an unlimited number of observations.


Statistical inference is about the (generally weaker) conclusions that can be drawn from a finite number of observations.

Identification problems cannot be solved by gathering more of the same kind of data. They can be alleviated only by invoking stronger assumptions or by initiating new sampling processes that yield different kinds of data.

Source: SFB 504

identity matrix

An identity matrix is a square matrix of any dimension whose elements are ones on its northwest-to-southeast diagonal and zeroes everywhere else. Any square matrix multiplied by the identity matrix with those dimensions equals itself. One usually says 'the' identity matrix since in most contexts the dimension is unambiguous. It is standard to denote the identity matrix by I.

Source: econterms

idle

Sometimes used to name the state of people who are not in school but also not working. Context is usually industrialized countries with established labor markets, and the idle are often poor.

Source: econterms

IER

An abbreviation for the journal International Economic Review.

Source: econterms

iff

abbreviation for "if and only if"

Source: econterms

IGARCH

Integrated GARCH, a kind of econometric model of a stochastic process in which there is a unit root in a GARCH environment.
The IGARCH(p,q) process was proposed in Engle and Bollerslev (1986).

Source: econterms

IIA

Stands for Independence of Irrelevant Alternatives, an assumption in a model. In a discrete choice setting, the multinomial logit model is appropriate only if the introduction or removal of a choice has no effect on the proportions of probability assigned to each of the other choices.
This is a strong assumption; a standard example where IIA is not an appropriate assumption is if one compares a model of transportation choices between a car and a red bus, then introduces a blue bus. The blue bus is functionally like the red bus, so presumably its introduction draws ridership more heavily from the red bus than from the car.

Source: econterms

iid

An abbreviation for "independently and identically distributed." One would say this about two or more random variables to describe their joint distribution. A common use is to describe ongoing disturbances to a stochastic process, indicating that they are not correlated to one another.

Source: econterms

IJIO

An occasional abbreviation for the academic journal International Journal of Industrial Organization.

Source: econterms

ILS

Indirect Least Squares, an approach to the estimation of simultaneous equations models. Steps:
1) Rearrange the structural form equations into reduced form.
2) Estimate the reduced form parameters.
3) Solve for the structural form parameters in terms of the reduced form parameters, and substitute in the estimates of the reduced form parameters to get estimates of the structural ones.

Source: econterms

IMF

International Monetary Fund -- an international organization that provides liquidity services to maintain financial stability.

Source: econterms

implementable

A decision rule (a mapping from expressed preferences by each of a group of agents to a common decision) "is implementable (in Nash equilibrium) if there exists a game form whose Nash equilibrium outcome is the desired outcome for the true preferences."

Source: econterms

implicit contract

A non-contractual agreement that corresponds to a Nash equilibrium to the repeated bilateral trading game other than the sequence of Nash equilibria to the one-shot trading game. In the labor market -- an implicit contract is formally represented by a series of games in which the firm pays a salary and the employee works effectively because they expect to play the game again (continue the agreement) if it goes well, not because they have an explicit, enforceable contract. That is, "by implicit contracts is meant nonbinding commitments from employers to offer ... continuity of wages, employment, and working conditions, and from employees to forgo such temptations as shirking and quitting for better opportunities." -- Granovetter, Ch 9

Source: econterms

Imports

Goods and services that are produced abroad and purchased in one's home country.

Source: EconPort

impossibility theorem

One of a class of theorems following Arrow (1951) showing that social welfare functions cannot have certain collections of desirable attributes in common.

Source: econterms

impulse response function

Consider a shock to a system. A graph of the response of the system over time after the shock is an impulse response function graph. One use is in models of monetary systems. One graphs for example the percentage deviations in output or consumption over time after a one-time one percent increase in the money stock.
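A sketch for an AR(1) process y_t = 0.8*y_{t-1} + shock, whose impulse response to a one-time unit shock decays geometrically (the coefficient 0.8 is illustrative):

import numpy as np

phi, horizon = 0.8, 12
irf = phi ** np.arange(horizon + 1)  # response at each horizon to a unit shock at t = 0
print(np.round(irf, 3))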

Source: econterms

Inada conditions

A function f() satisfies the Inada conditions if: f(0) = 0, f'(0) = infinity, and f'(infinity) = 0. f() is usually a production function in this context.

Source: econterms

inadmissible

A possible action by a player in a game may be said to be inadmissible if it is dominated by another feasible action.
The term comes from the view of a game as a math problem. An action is or is not admissible as a candidate solution to the problem of choosing a utility-maximizing strategy for the game player.

As used in Manski, Charles F. "Identification problems and decisions under ambiguity: empirical analysis of treatment response and normative analysis of treatment choice" Northwestern University Department of Economics and Institute for Policy Research, September 1998, p. 2

Source: econterms

Incentive compatibility

In typical strategic interactions under incomplete information, different types (of a player) can choose from among a menu of different actions (strategies), which includes the possibility that they mimic the behavior of other types (of the same, or of another, player). Incentive compatibility conditions ensure that different types (of each player) align themselves such that they can be identified by their equilibrium choices. Typically, they are used to prevent some type from profiting by copying another type's action (given that the other types do not disguise themselves behind others' choices). More generally, incentive compatibility conditions force a desired constellation of choices to form a strategic equilibrium for a given array of types. In particular, they might as well ensure that it be worthwhile for different types to choose the same action (the types pool on an action). Yet in most economic problems, incentive compatibility conditions serve to induce a strategic equilibrium which reveals the players' private information by having them choose different 'characteristic' equilibrium actions, i.e. they have the types 'sort themselves out'.

For a simple example, suppose several buyers who differ in their private (marginal) valuations of some economic good (their types) select the quantity to buy from a seller. If the seller maximizes her proceeds from the sale, she will want the buyers with higher valuations to buy higher quantities, i.e. she will want to separate her customers (or market segments) according to their marginal willingness-to-pay. To have such choices form a strategic equilibrium, the seller has to provide incentives for the high-valuation customers to buy higher quantities and to prevent market segments with lower valuations from profitably mimicking the choices of high-valuation customers. To meet both ends, it is often enough to simply offer a monotonic reward scheme that links choices to rewards. In the example, it is enough that the seller rewards buying higher quantities by offering a schedule of ever larger price discounts for larger quantities bought.

The basic idea is that with a monotonic reward scheme, lower types cannot find it worthwhile to mimic the choices of higher types, because higher types themselves find it worthwhile to choose even more extreme actions which, in the end, are too costly for lower types to copy. Thus, the price of having the types self-select through monotonic rewards is that higher types must be granted progressively higher rents for revealing their information, relative to lower types. (For an illustration of this point, see the paragraph on information rents in the entry rents.)

More generally, suppose the types are 'naturally ordered' in terms of increasing marginal profitability (or costliness) of some economic action; e.g., the schedule of marginal willingnesses-to-pay is differently steep for different customers; different candidates have differently steep marginal cost schedules in investing varying amounts in some activity, etc. Then, to put it in jargon, any monotonic reward scheme "provides incentives to the types to separate themselves", i.e. it has the players self-select levels of actions which reveal the natural ordering of their types.

Incentive compatibility conditions occur throughout economics with incomplete information because, as we have tried to argue, they are closely related to strategic equilibria with certain features. Most often, incentive compatibility constraints are used to frame the interaction so as to create equilibria where the players' private information is revealed by their equilibrium choices. Among important applications are the theory of optimal taxation of unobservable behavioral characteristics in public finance, optimal selling schemes in non-linear pricing and auctions, the optimal regulation of firms under incomplete information, and incentive wage contracts eliciting effort from employees who can hide shirking under favorable conditions.

Source: SFB 504

Income

The Concise Oxford Dictionary defines income as "receipts from one's lands, work, investment etc." This definition has been adapted by economic theory, where, for instance, a consumer may be said to maximize utility subject to an income constraint. The meaning of income is somewhat modified in the construction of income statistics, as generalized to national income.

Income in a microeconomic sense is the sum of earnings of all factors of production of an individual. It includes all benefits to consumers as part of income, even benefits arising from non-market activities -- such as the monetary value of the services of owner-occupied housing or food grown and consumed on farms. This definition of income also includes periodic and one-time transfers such as pensions, unemployment benefits, and bequests.

Source: SFB 504

income elasticity

When used without another referent, appears to mean 'of consumption'. That is, for income I and consumption C:
income elasticity = (I/C)*(dC/dI)
One paper reported estimates of .2 to .6 for a random sample of middle-class people in industrialized countries.
For more details see elasticity.
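As a small added illustration (the numbers are made up), the elasticity can be approximated from two observations of income and consumption by a finite difference:

    I1, I2 = 100.0, 110.0    # income before and after
    C1, C2 = 80.0, 84.0      # consumption before and after
    elasticity = (I1 / C1) * ((C2 - C1) / (I2 - I1))
    print(elasticity)        # 0.5: a 1% rise in income raises consumption by 0.5%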

Source: econterms

Increasing Returns to Scale

A firm exhibits increasing returns to scale if, when it increases the use of all inputs, output increases by a greater proportion. For example, if the firm doubles the use of all inputs, then output will more than double. With increasing returns to scale, long-run average costs decrease as output increases.

Source: EconPort

indemnity

A kind of insurance, in which payment is made (often in previously determined amounts) for injuries suffered, not for the costs of recovery. The payment is designed not to depend on anything the patient can control. From the point of view of the insurer, this mechanism avoids the moral hazard problem of the victim spending too much on recovery.

Source: econterms

independent

Two random variables X and Y are statistically independent if and only if their joint density (pdf) is the product of their marginal densities, that is, if f(x,y) = f_x(x) f_y(y).

If two random variables are independent they are also uncorrelated.

Source: econterms

independent private value

If a bidder has independent private values, it means that he is not influenced by the estimates of the other bidders when determining how much an object is worth to him.

indicator variable

In a regression, a variable that is one if a condition is true, and zero if it is false. Approximately synonymous with dummy variable, binary variable, or flag.

Source: econterms

indifference curve

Represented for example on a graph whose horizontal and vertical axes are quantities of goods an individual might consume, an indifference curve represents a contour along which utility for that individual is constant. The curve represents a set of possible consumption bundles between which the individual is indifferent. Normally, with desirable goods on both axes (say, income today and income tomorrow), the curve is bowed in toward the origin, lying further from the origin when both quantities are positive than when either one is zero.

Source: econterms

indirect utility function

Denoted v(p, m) where p is a vector of prices for goods, and m is a budget in the same units as the prices. This function takes the value of the maximum utility that can be achieved by spending the budget m on the consumption goods with prices p.

Source: econterms

individually rational

An allocation is individually rational if no agent is worse off in that allocation than with his endowment.

Source: econterms

inductive

Characterizing a reasoning process of generalizing from facts, instances, or examples. Contrast deductive.

Source: econterms

Industrial Revolution

A period commonly dated 1760-1830 in Britain (as in Mokyr, 1993, p 3 and Ashton, 1948). Characterized by: "a complex of technological advances: the substitution of machines for human skills and strength; the development of inanimate sources of power (fossil fuels and the steam engine); the invention, production, and use of new materials (iron for wood, vegetable for animal matter, mineral for vegetable matter); and the introduction and spread of a new mode of production, known by contemporaries as the factory system." -- Landes (1993b) p 137.

Source: econterms

industrialization

A historical phase and experience. The overall change in circumstances accompanying a society's movement of population and resources from farm production to manufacturing production and associated services.

Source: econterms

inf

Stands for 'infimum'. A value is an infimum with respect to a set if all elements of the set are at least as large as that value. An infimum exists in contexts where a minimum does not, because (say) the set is open; e.g. the set (0,1) has no minimum but 0 is an infimum.

inf is a mathematical operator that maps from a set to a value that is syntactically like the members of that set, although the value may not actually be a member of the set.

Source: econterms

inflation

Reduction in value of a currency. Measured often by percentage increases in the general price level per year.

Source: econterms

Information

A game is of complete information if the payoffs of each player are common knowledge among all the players, and it is of incomplete information if the utility payoffs of each player, or certain parameters of those payoffs, remain private information of each player. Games with incomplete information require the players to form beliefs about their opponents' private information, and to evaluate uncertain streams of payoffs according to a von Neumann-Morgenstern utility function (or some other concept of expected utility).

A game with perfect information is a game in which at each move in the game, the player with the move knows the full history of the play of the game thus far. Otherwise the game is called a game with imperfect information. In a game of perfect recall, nobody ever forgets something they once knew. An event A is common knowledge if all the players know that A occurred, and all the players know that all the players know that A occurred, and all the players know that all the players know that all the players know that A occurred, and so on, ad infinitum.

Source: SFB 504

information matrix

In maximum likelihood estimation, the variance of the score vector. It's a k x k matrix, where k is the dimension of the vector of parameters being estimated. The vector of parameters is denoted q here:
I(q) = var S(q) = E[(S(q) - E S(q))^2] = E[S(q)^2]
where the score is S(q) = d ln L(q)/dq and L() is the likelihood function.

The information matrix can also be calculated by multiplying the expected Hessian of the log-likelihood function by (-1).
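As an added illustration (a simulation sketch with assumed values, not from the source): for a single Bernoulli(p) observation the score is S(p) = (x - p)/(p(1-p)) and the information number is 1/(p(1-p)); the simulated variance of the score matches that value.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.3
    x = rng.binomial(1, p, size=200_000)
    score = (x - p) / (p * (1 - p))
    print(score.var())           # simulated variance of the score
    print(1 / (p * (1 - p)))     # theoretical information, about 4.76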

Source: econterms

information number

Synonym for Fisher information (which see).

Source: econterms

Information processing

The information processing approach constitutes an important paradigm in psychology, which evolved from computer science and communication science as an alternative to behaviorism, which was the most influential paradigm from the early decades until the middle of the 20th century. Compared to behaviorist theories, in which observable responses are conceptualized as a function of observable stimuli, theories in the information processing domain focus on mental operations intervening between stimulus and response. Within social psychology this approach emphasizes the cognitive mediation of social behavior, and, vice versa, the social impact on cognitive processes. Personal involvement, affective states, or environmental factors can have a considerable influence on logical thinking, stereotyping, social judgments and decisions. The sequence of cognitive processes is typically decomposed into various stages, such as perception, encoding, organization, inference making, retrieval, and judgment. These stages are highly interdependent and characterized by various feedback loops. At all stages, the individual's expectations or older knowledge structures come to interact with new input information. It is quite typical for human information processing that data-driven processes ("bottom up") and conceptually driven processes ("top down") mesh (see Fiedler, 1996).

Source: SFB 504

informational cascade

"An informational cascade occurs when it is optimal for an individual, having observed the actions of those ahead of him, to follow [that is, imitate] the behavior of the preceding individual without regard to his own information." -- Bikhchandani, Hirshleifer, and Welch, 1992, p 992

Source: econterms

INSEAD

An American-style business school near Paris. Operates in English.

Source: econterms

inside money

Any debt that is used as money. It is a liability to the issuer, so the total amount of inside money in an economy, netting assets against liabilities, is zero. Contrast outside money.

Source: econterms

institution

There are several definitions. Here's one: 'An institution is a social mechanism through which men work together for common or like ends. It is a necessary arrangement wherever regulated group behavior over a broad field of activity is found. It is opposed in sociological thought to 'face to face' grouping and to local community forms of life ...' (Ware, p. 6)

For more see new institutionalism.

Source: econterms

Institutionalism

(Neo-)Institutionalism is a theoretical school which is concerned with the question of which organizational actions and routines become taken for granted. Within the school of Neo-Institutionalism distinctions between the micro and the macro level of this theory are made.

Source: SFB 504

instrumental variables

Either (1) an estimation technique, often abbreviated IV, or (2) the exogenous variables used in the estimation technique.
Suppose one has a model:
y = Xb + e
Here y is a T x 1 vector of dependent variables, X is a T x k matrix of independent variables, b is a k x 1 vector of parameters to estimate, and e is a T x 1 vector of errors. OLS can be imagined, but suppose in the environment being modelled that the matrix of independent variables X may be correlated to the e's. Then using a T x k matrix of exogenous variables Z, correlated to the X's but uncorrelated to the e's, one can construct an IV estimator that will be consistent:
b_IV = (Z'X)^(-1) Z'y
The two stage least squares estimator is an important extension of this idea.

In that discussion above, the exogenous variables Z are called instrumental variables, and the instruments Z(Z'Z)^(-1) Z'X are estimates of the part of X that is not correlated to the e's.
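A minimal simulation sketch of the estimator above (the data-generating process and coefficient values are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    T = 10_000
    z = rng.normal(size=T)             # instrument: correlated with x, not with e
    e = rng.normal(size=T)
    x = 0.8 * z + 0.5 * e              # regressor is endogenous (correlated with e)
    y = 2.0 * x + e                    # true b = 2.0

    X = np.column_stack([np.ones(T), x])
    Z = np.column_stack([np.ones(T), z])
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)   # inconsistent: picks up the x-e correlation
    b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)    # b_IV = (Z'X)^(-1) Z'y, consistent
    print(b_ols, b_iv)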

Source: econterms

instruments

When regressors are correlated to errors in a model, one may be able to replace the regressors by estimates for these regressors that are not correlated to the errors. This is the technique of instrumental variables, and the replacement regressors are called instruments.

The replacement regressors are constructed by running regressions of the original regressors on exogenous variables that are called the instrumental variables.

Source: econterms

integrated

Said in reference to a random process. A random process is said to be 'integrated of order d' (sometimes denoted I(d)) for some natural number d if the series would be stationary after being first-differenced d times.
Example: a random walk is I(1).
Example: "Most macroeconomic flows and stocks that relate to population size, such as output or employment, are I(1)." They are growing.
Example: "An I(2) series [might] be growing at an ever-increasing rate."

Source: econterms

intensive margin

Refers to the degree (intensity) to which a resource is utilized or applied. For example, the effort put in by a worker or the number of hours the worker works. Contrast extensive margin.

Source: econterms

inter alia

"Among other things"

Source: econterms

inter vivos

From Latin, 'between lives'. Used to describe gifts between people, usually from one generation to the next, which are like bequests except that both parties are alive. Quantities and timing of such gifts are studied empirically in the same way that quantities and purposes of bequests are subjects of empirical study.

Source: econterms

interim efficient

Defined, apparently, in Holmstrom and Myerson (1983) with reference to Rothschild and Stiglitz (1976). In Inderst (2000) this term is used to characterize the set ('family') of Rothschild-Stiglitz contracts in a particular model setting.

Source: econterms

interior solution

A choice made by an agent that can be characterized as an optimum located at a tangency of two curves on a graph.

A classic example is the tangency between a consumer's budget line (characterizing the maximum amounts of good X and good Y that the consumer can afford) and the highest attainable indifference curve. At that tangency:

(marginal utility of X)/(price of X) = (marginal utility of Y)/(price of Y)

Contrast corner solution.

Source: econterms

internal knowledge spillover

positive learning or knowledge externalities between programs or plants within a production organization.

Source: econterms

Intertemporal decision making

Many economic decisions are intertemporal in the sense that current decisions affect also the choices available in the future. Examples are saving and retirement decisions of households, and investment decisions of firms. In the case of saving, the saving decision made today affects not only the household's current consumption but also his future consumption possibilities. If someone saves more today, he can consume less today and hence his current utility declines, but he can consume more in the future, and his future utility increases.

As can be seen from the savings example, intertemporal decisions are characterized by some kind of intertemporal trade-off: If I give up something today, I want to be compensated for the resulting utility loss in the future. The optimal intertemporal decision requires that current and future changes of utility implied by current behavior correspond to the individual's intertemporal preferences -- formally, that the intertemporal rate of substitution be equal to the rate of time preference.

A number of studies have shown that the standard theory of intertemporal choice is frequently violated in experimental settings, just as the standard (static) expected utility (EU) theory of choice is systematically violated (see Camerer, 1995). Intertemporal decisions are therefore an important area of research in behavioral economics.

Source: SFB 504

inverse demand function

A function p(q) that maps from a quantity of output to a price in the market; one might model the demand a firm faces by positing an inverse demand function and imagining that the firm chooses a quantity of output.

Source: econterms

inverse Mills ratio

Usually denoted l(Z), and defined by l(Z)=phi(Z)/PHI(Z), where phi() is the standard normal pdf and PHI() is the standard normal cdf.
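A one-line implementation, added for illustration, using scipy's standard normal pdf and cdf:

    from scipy.stats import norm

    def inverse_mills(z):
        return norm.pdf(z) / norm.cdf(z)   # phi(z)/PHI(z)

    print(inverse_mills(0.0))   # pdf(0)/0.5, about 0.798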

Source: econterms

invertibility

In context of time series processes, represented for example by a lag polynomial, inverting means to solve for the e's (epsilons) in terms of the y's.
One inverts moving average (MA) processes to get AR representations.

Source: econterms

investment

Any use of resources intended to increase future production output or income.

Source: econterms

IO

stands for 'Industrial Organization', the field of industry structure, conduct, and performance. By structure we usually mean the size of the firms in the industry -- e.g. whether firms have monopoly power.

Source: econterms

IPO

Stands for "initial public offering", the event of a firm's first sale of stock shares.

Source: econterms

IPUMS

Integrated Public Use Microdata Series. These are collections of U.S. Census data, adapted for easy use by the University of Minnesota Social History Research Laboratory, at its Web site http://www.ipums.umn.edu.

Source: econterms

IR constraint

IR stands for "individually rational".
When solving a principal-agent maximization problem for a contract that meets various criteria, the IR constraints are those that require agents to prefer signing the contract to not signing it. If the IR constraint were not imposed, the solution to the problem might be economically meaningless, insofar as it was a contract that met some criterion of optimality but which an agent would refuse to sign.
See also IC constraint.

Source: econterms

IRS

The United States national tax collection agency, called the Internal Revenue Service.

Source: econterms

is consistent for

means 'is a consistent estimator of'

Source: econterms

isoquant

Given a production function, an isoquant is 'the locus of input combinations that yield the same output level.' (Chiang, p. 360) There is an isoquant set for each possible output level. Mathematically the isoquant is a level curve of the production function.

Examples and discussion are at Martin Osborne's web page: http://www.chass.utoronto.ca/~osborne/2x3/tutorial/ISOQUANT.HTM.

Source: econterms

Ito process

A stochastic process: a generalized Wiener process in which the drift and variance parameters are allowed to be functions of the current state and time.

Source: econterms

IV

abbreviation for Instrumental Variables, an estimation technique

Source: econterms

J

J statistic

In a GMM context, when there are more moment conditions than parameters to be estimated, a chi-square test can be used to test the overidentifying restrictions. The test statistic can be called the J statistic.
In more detail: Say there are q moment conditions and p parameters to be estimated. Let the weighting matrix be the inverse of the asymptotic covariance matrix. Let T be the sample size. Then T times the minimized value of the objective function, T*J_T(b_T), is asymptotically distributed with a chi-square distribution with (q-p) degrees of freedom.

Source: econterms

jackknife estimator

Has multiple, overlapping definitions, numbered below: (1) a kind of nonparametric estimator for a regression function. A jackknife estimator is a linear combination of kernel estimators with different window widths. Jackknife estimators have higher variance but less bias than kernel estimators. (Hardle, p. 145.) (2) creates a series of statistics, usually parameter estimates, from a single data set by recomputing the statistic repeatedly on the data set, leaving one data value out each time. This produces a mean estimate of the parameter and a standard deviation of the estimates of the parameter. (Nick Cox, in an email broadcast to Stata users on statalist, circa 7/5/2000.)
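An added sketch of definition (2), jackknifing the sample mean of a small made-up data set:

    import numpy as np

    data = np.array([2.0, 4.0, 4.5, 3.0, 5.0])
    n = len(data)
    loo = np.array([np.delete(data, i).mean() for i in range(n)])  # leave-one-out means
    jack_mean = loo.mean()
    jack_se = np.sqrt((n - 1) / n * ((loo - jack_mean) ** 2).sum())
    print(jack_mean, jack_se)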

Source: econterms

JE

An occasional abbreviation for the academic journal Journal of Econometrics.

Source: econterms

JEH

An abbreviation for the Journal of Economic History.

Source: econterms

JEL

Journal of Economic Literature. See also JEL classification codes.

Source: econterms

JEL classification codes

These define a classification system for books and journal articles relevant to the economic researcher. The list has three levels of precision: categories A-Z, subcategories like A0-A2 (these are used to classify books), and sub-subcategories like A10-A14 (which are used to classify journal articles). The second level is detailed here; for the complete set of possible JEL codes see any issue of the JEL, e.g. the Sept 1997 issue, pages 1609-1620. The list below comes from that same issue, pages 1437-1439. A more up-to-date list is online at http://www.aeaweb.org/journal/elclasjn.html

A. General Economics and Teaching (A0 General, A1 General Economics, A2 Teaching of Economics)
B. Methodology and History of Economic Thought (B0 General, B1 History of Economic Thought through 1925, B2 History of Economic Thought since 1925, B3 History of Thought: Individuals, B4 Economic Methodology)
C. Mathematical and Quantitative Methods (C0 General, C1 Econometric and Statistical Methods: General, C2 Econometric and Statistical Methods: Single Equation Models, C3 Econometric and Statistical Methods: Multiple Equation Models, C4 Econometric and Statistical Methods: Special Topics, C5 Econometric Modeling, C6 Mathematical Methods and Programming, C7 Game Theory and Bargaining Theory, C8 Data Collection and Data Estimation Methodology; Computer Programs, C9 Design of Experiments)
D. Microeconomics (D0 General, D1 Household Behavior and Family Economics, D2 Production and Organizations, D3 Distribution, D4 Market Structure and Pricing, D5 General Equilibrium and Disequilibrium, D6 Economic Welfare, D7 Analysis of Collective Decision-Making, D8 Information and Uncertainty, D9 Intertemporal Choice and Growth)
E. Macroeconomics and Monetary Economics (E0 General, E1 General Aggregative Models, E2 Consumption, Saving, Production, Employment, and Investment, E3 Prices, Business Fluctuations, and Cycles, E4 Money and Interest Rates, E5 Monetary Policy, Central Banking and the Supply of Money and Credit, E6 Macroeconomic Aspects of Public Finance, Macroeconomic Policy, and General Outlook)
F. International Economics (F0 General, F1 Trade, F2 International Factor Movements and International Business, F3 International Finance, F4 Macroeconomic Aspects of International Trade and Finance)
G. Financial Economics (G0 General, G1 General Financial Markets, G2 Financial Institutions and Services, G3 Corporate Finance and Governance)
H. Public Economics (H0 General, H1 Structure and Scope of Government, H2 Taxation and Subsidies, H3 Fiscal Policies and Behavior of Economic Agents

Source: econterms

JEMS

An abbreviation for the Journal of Economics and Management Strategy.

Source: econterms

Jensen's inequality

If X is a real-valued random variable with E(|X|) finite and the function g() is convex, then E[g(X)] >= g(E[X]).
One application: By Jensen's inequality, E[X^2] >= (E[X])^2. Since the difference between these is the variance, we have just shown that any random variable for which E[X^2] is finite has a variance and a mean.
This is the inequality one can refer to when showing that an investor with a concave utility function prefers a certain return to the same expected return with uncertainty.
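A numeric illustration, added here, of the application above with the convex function g(x) = x^2 and exponentially distributed draws:

    import numpy as np

    x = np.random.default_rng(2).exponential(scale=1.0, size=100_000)
    print((x ** 2).mean())    # E[X^2], about 2.0 for an Exponential(1)
    print(x.mean() ** 2)      # (E[X])^2, about 1.0, so E[X^2] >= (E[X])^2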

Source: econterms

JEP

An abbreviation for the Journal of Economic Perspectives.

Source: econterms

JET

An abbreviation for the Journal of Economic Theory.

Source: econterms

JF

Journal of Finance

Source: econterms

JFE

Journal of Financial Economics

Source: econterms

JFI

Journal of Financial Intermediation, at http://www.bus.umich.edu/jfi/

Source: econterms

JHR

Journal of Human Resources

Source: econterms

JIE

An abbreviation for the Journal of Industrial Economics.

Source: econterms

JLE

An abbreviation for the Journal of Law and Economics.

Source: econterms

JLEO

An abbreviation for the Journal of Law, Economics and Organization.

Source: econterms

job lock

Describes the situation of a person with a U.S. job who is not free to leave for another job because the first job has medical benefits associated with it that the person needs, and the second one would not, perhaps because 'pre-existing conditions' are often not covered under U.S. health insurance.

Source: econterms

JOE

The monthly US publication Job Openings for Economists.

Source: econterms

journals

In the context of research economics these are academic periodicals, usually with peer-reviewed contents. An amazingly complete list of hyperlinks to journals is at the WebEc web site. Some are also in this glossary directly, below.

Source: econterms

JPAM

Journal of Policy Analysis and Management

Source: econterms

JPE

Abbreviation for the Journal of Political Economy

Source: econterms

JPubE

Journal of Public Economics

Source: econterms

JRE

An abbreviation for the Journal of Regulatory Economics.

Source: econterms

K

k percent rule

A monetary policy rule of keeping the growth of money at a fixed rate of k percent a year. This phrase is often used as stated, without specifying the percentage.

Source: econterms

k-nearest-neighbor estimator

A kind of nonparametric estimator of a function. Given a data set {Xi, Yi} it estimates values of Y for X's other than those in the sample. The process is to choose the k values of Xi nearest the X for which one seeks an estimate, and average their Y values. Here k is a parameter to the estimator. The average could be weighted, e.g. with the closest neighbor having the most impact on the estimate.
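A bare-bones sketch, added for illustration, with k=3 and a made-up sample:

    import numpy as np

    def knn_estimate(x0, X, Y, k=3):
        idx = np.argsort(np.abs(X - x0))[:k]   # the k sample points nearest x0
        return Y[idx].mean()                   # unweighted average of their Y values

    X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    Y = np.array([0.1, 0.9, 2.1, 2.9, 4.2])
    print(knn_estimate(1.8, X, Y))             # averages the Y's at X = 1.0, 2.0, 3.0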

Source: econterms

Kalman filter

The Kalman filter is an algorithm for sequentially updating a linear projection for a dynamic system that is in state-space representation.

Application of the Kalman filter transforms a system of the following two-equation kind into a more solvable form:
x_{t+1} = A x_t + C w_{t+1}
y_t = G x_t + v_t
in which:
A, C, and G are matrices known as functions of a parameter q about which inference is desired (this is the PROBLEM to be solved),
t is a whole number, usually indexing time,
x_t is a true state variable, hidden from the econometrician,
y_t is a measurement of x with scalings G and measurement errors v_t,
w_t are innovations to the hidden x_t process,
E w_{t+1} w_{t+1}' = I by normalization,
E v_t v_t' = R, an unknown matrix, estimation of which is necessary but ancillary to the problem of interest, which is to get an estimate of q.

The Kalman filter defines two matrices S_t and K_t such that the system described above can be transformed into the one below, in which estimation and inference about q and R are more straightforward, possibly even by OLS:
z_{t+1} = A z_t + K a_t
y_t = G z_t + a_t
where z_t is defined to be E_{t-1} x_t,
a_t is defined to be y_t - E_{t-1} y_t, and
K is defined to be lim K_t as t goes to infinity.

The definition of those two matrices S_t and K_t is itself most of the definition of the Kalman filter:
K_t = A S_t G' (G S_t G' + R)^(-1)
S_{t+1} = (A - K_t G) S_t (A - K_t G)' + C C' + K_t R K_t'
K_t is called the Kalman gain.

It's not yet clear to me what specific examples there are of problems that the Kalman filter solves.
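One concrete example, added here as a sketch: filtering a scalar hidden AR(1) state from noisy measurements, with assumed values for A, C, G, and R.

    import numpy as np

    A, C, G, R = 0.9, 1.0, 1.0, 0.5    # assumed system matrices (scalars here)
    rng = np.random.default_rng(3)

    T = 200
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = A * x[t - 1] + C * rng.normal()    # hidden state x_t
    y = G * x + np.sqrt(R) * rng.normal(size=T)   # measurements y_t

    z, S = 0.0, 1.0                    # z_t = E_{t-1} x_t and its variance S_t
    z_path = np.zeros(T)
    for t in range(T):
        z_path[t] = z
        K = A * S * G / (G * S * G + R)                        # Kalman gain K_t
        S = (A - K * G) * S * (A - K * G) + C * C + K * R * K  # update of S_t
        z = A * z + K * (y[t] - G * z)                         # z_{t+1} = A z_t + K_t a_t
    print(np.corrcoef(z_path[1:], x[1:])[0, 1])    # filtered estimate tracks the state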

Source: econterms

Kalman gain

One of the two equations that characterize the application of the Kalman filter process defines an expression sometimes denoted K_t, which is called the Kalman gain.

That equation, using notation from Sargent's lectures, is:

K_t = A S_t G' (G S_t G' + R)^(-1)

Source: econterms

keiretsu system

The framework of relationships among postwar Japan's big banks and big firms. Related companies organized around a big bank (like Mitsui, Mitsubishi, and Sumitomo), which own a lot of equity in one another and in the bank and do much business with one another. This system has the virtue of maintaining long term business relationships and stability in suppliers and customers. It has the disadvantage of reacting slowly to outside events since the players are partly protected from the external market. (p 412)

Source: econterms

kernel estimation

Kernel estimation means the estimation of a regression function or probability density function. Such estimators are consistent and asymptotically normal if as the number of observations n goes to infinity, the bandwidth (window width) h goes to zero, and the product nh goes to infinity. In practice, means use of the Nadaraya-Watson estimator, which see.
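A sketch of the Nadaraya-Watson estimator with a Gaussian kernel, added for illustration; the bandwidth h = 0.5 is an arbitrary choice:

    import numpy as np

    def nadaraya_watson(x0, X, Y, h=0.5):
        w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
        return (w * Y).sum() / w.sum()           # locally weighted average of Y

    X = np.linspace(0, 4, 50)
    Y = np.sin(X) + 0.1 * np.random.default_rng(4).normal(size=50)
    print(nadaraya_watson(2.0, X, Y))            # near sin(2.0), about 0.91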

Source: econterms

kernel function

A weighting function used in nonparametric function estimation. It gives the weights of the nearby data points in making an estimate. In practice kernel functions are piecewise continuous, bounded, symmetric around zero, concave at zero, real valued, and for convenience often integrate to one. They can be probability density functions. Often they have a bounded domain like [-1,1].

Source: econterms

Keynes effect

As prices fall, a given nominal amount of money will be a larger real amount. Consequently the interest rate would fall and the quantity of investment demanded would rise. This Keynes effect disappears in the liquidity trap. Contrast the Pigou effect. Another phrasing: a change in interest rates affects spending more than it affects saving.

Source: econterms

kitchen sink regression

Describes a regression where the regressors are not in the opinion of the writer thoroughly 'justified' by an argument or a theory. Often used pejoratively; other times describes an exploratory regression.

Source: econterms

KLIC

Kullback-Leibler Information Criterion. An unpublished paper by Kitamura (1997) describes this as a distance between probability measures. It is defined in that paper thus. The KLIC between probability measures P and Q is:

I(P||Q) = [integral of] ln(dP/dQ) dP, if P << Q
I(P||Q) = infinity, otherwise

Source: econterms

Knightian uncertainty

Unmeasurable risk. Contrast risk, which is measurable.

Source: econterms

knots

If a regression will be run to estimate different linear slopes for different ranges of the independent variables, it's a spline regression, and the endpoints of the ranges are called knots.

The spline regression is designed so that the resulting spline function, estimating the dependent variable, is continuous at the knots.

Source: econterms

Kolmogorov's Second Law of Large Numbers

If {w_t} is a sequence of iid draws from a distribution and E w_t exists (call it mu), then the average of the w_t's goes 'almost surely' to mu as t goes to infinity.
Same as strong law of large numbers, I believe.

Source: econterms

Kronecker product

This is an operator that takes two matrix arguments. It is denoted by a small circle with an x in it, but will be denoted here by 'o'. Let A be an M x N matrix, and B be an R x S matrix. Then AoB is an MR x NS matrix, formed from A by multiplying each element of A by the entire matrix B and putting the result in the place of that element of A, e.g.:
a_11 B   a_12 B   ...   a_1N B
 ...      ...     ...    ...
a_M1 B   a_M2 B   ...   a_MN B
Kronecker products have the following useful properties:
(AoB)(CoD) = AC o BD
(AoB)^(-1) = A^(-1) o B^(-1)
(AoB)' = A' o B'
(AoB) + (AoC) = A o (B+C)
(AoC) + (BoC) = (A+B) o C
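The first property can be checked numerically with numpy's kron function (an added illustration):

    import numpy as np

    rng = np.random.default_rng(5)
    A, B, C, D = (rng.normal(size=(2, 2)) for _ in range(4))
    lhs = np.kron(A, B) @ np.kron(C, D)
    rhs = np.kron(A @ C, B @ D)
    print(np.allclose(lhs, rhs))   # True: (AoB)(CoD) = AC o BD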

Source: econterms

Kruskal's theorem

Let X be a set of regressors, y be a vector of dependent variables, and the model be y = Xb + e, where E[ee'] is the matrix OMEGA. The theorem is that if the column space of (OMEGA)X is the same as the column space of X (that is, if there is heteroskedasticity but no cross-correlation), then the GLS estimator of b is the same as the OLS estimator of b.

Source: econterms

kurtosis

An attribute of a distribution, describing 'peakedness'. Kurtosis is calculated as E[(x-mu)^4]/s^4, where mu is the mean and s is the standard deviation.
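Computing it directly from a sample (an added illustration); for normal draws the value is close to 3:

    import numpy as np

    x = np.random.default_rng(6).normal(size=100_000)
    mu, s = x.mean(), x.std()
    print(((x - mu) ** 4).mean() / s ** 4)   # about 3.0 for a normal sample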

Source: econterms

Kuznets curve

A graph with measures of increased economic development (presumed to correlate with time) on the horizontal axis, and measures of income inequality on the vertical axis, hypothesized by Kuznets (1955) to have an inverted-U shape. That is, Kuznets made the proposition that when an economy is primarily agricultural it has a low level of income inequality, that during early industrialization income inequality increases over time, and that at some critical point it starts to decrease over time. Kuznets (1955) showed evidence for this.

Source: econterms

Kyklos

A journal, whose Web site is at http://www.kyklos-review.ch/kyklos/index.html.

Source: econterms

L

L1

The set of Lebesgue-integrable real-valued functions on [0,1].

Source: econterms

L2

A Hilbert space with inner product (x,y) = integral of x(t)y(t) dt.
Equivalently, L2 is the space of real-valued random variables that have variances. This is an infinite dimensional space.

Source: econterms

Ln

is the set of continuous bounded functions with domain R^n

Source: econterms

labor

"[L]abor economics is primarily concerned with the behavior of employers and employees in response to the general incentives of wages, prices, profits, and nonpecuniary aspects of the employment relationship, such as working conditions."

Source: econterms

labor market outcomes

Shorthand for worker (never employer) variables that are often considered endogenous in a labor market regression. Such variables, which often appear on the right side of such regressions, include wage rates, employment dummies, and employment rates.

Source: econterms

labor productivity

Quantity of output per time spent or numbers employed. Could be measured in, for example, U.S. dollars per hour.

Source: econterms

labor theory of value

"Both Ricardo and Marx say that the value of every commodity is (in perfect equilibrium and perfect competition) proportionaly to the quantity of labor contained in the commodity, provided this labor is in accordance with the existing standard of efficiency of production (the 'socially necessary quantity of labor'). Both measure this quantity in hours of work and use the same method in order to reduce different qualities of work to a single standard." And neither accounts well for monopoly or imperfect competition. (Schumpeter, p 23)

Source: econterms

labor-augmenting

One of the ways in which an effectiveness variable could be included in a production function in a Solow model. If effectiveness A is multiplied by labor L but not by capital K, then we say the effectiveness variable is labor-augmenting.

Source: econterms

LAD

Stands for 'Least absolute deviations' estimation.

LAD estimation can be used to estimate a smooth conditional median function; that is, an estimator for the median of the process given the data. Say the data are stationary {x_t, y_t}. The dependent variable is y and the independent variable is x. The criterion function to be minimized in LAD estimation for each observation t is:
q(x_t, y_t, q) = |y_t - m(x_t, q)|

where m() is a guess at the conditional median function.

Under conditions specified in Wooldridge, p 2657, the LAD estimator here is Fisher-consistent for parameters of the estimator of the median function.
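A sketch of LAD estimation for a linear conditional median function m(x, q) = q0 + q1*x, added for illustration; the data-generating values are assumptions:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    x = rng.normal(size=500)
    y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=500)   # heavy-tailed errors

    objective = lambda q: np.abs(y - q[0] - q[1] * x).sum()   # sum of |y_t - m(x_t, q)|
    result = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
    print(result.x)   # close to the true parameters (1.0, 2.0)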

Source: econterms

lag operator

Denoted L. Operates on an expression by moving the subscripts on a time series back one period, so: L e_t = e_{t-1}. Why? Well, it can help manipulability of some expressions. For example it turns out one could write an MA(2) process (which see) to look like this, in lag polynomials (which see): e_t = (1 + p_1 L + p_2 L^2) u_t, and then divide both sides by the lag polynomial, and get a legal, meaningful, correct expression.

Source: econterms

lag polynomial

A polynomial expression in lag operators (which see). Example: (1 - p_1 L + p_2 L^2), where L^2 = LL, or the lag operator L applied twice. These are useful for manipulating time series. For example, one can quickly show an AR(1) is equivalent to an MA(infinity) by dividing both sides by the lag polynomial (1 - pL).

Source: econterms

Lagrangian multiplier

An algebraic term that arises in the context of problems of mathematical optimization subject to constraints, which in economics contexts is sometimes called a shadow price.

A long example: Suppose x represents a quantity of something that an individual might consume, u(x) is the utility (satisfaction) gained by that individual from the consumption of quantity x. We could model the individual's choice of x by supposing that the consumer chooses x to maximize u(x):

x = arg max_x u(x)

Suppose however that the good is not free, so the choice of x must be constrained by the consumer's income. That leads to a constrained optimization problem ............ [Ed.: this entry is unfinished]

Source: econterms

LAN

stands for 'locally asymptotically normal', a characteristic of many ('a family of') distributions.

Source: econterms

large sample

Usually a synonym for 'asymptotic' rather than a reference to an actual sample magnitude.

Source: econterms

Laspeyres index

A price index following a particular algorithm.

It is calculated from a set ('basket') of fixed quantities of a finite list of goods. We are assumed to know the prices in two different periods. Let the price index be one in the first period, which is then the base period. Then the value of the index in the second period is equal to this ratio: the total price of the basket of goods in period two divided by the total price of exactly the same basket in period one.

As for any price index, if all prices rise the index rises, and if all prices fall the index falls.
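The algorithm in a few lines, with a made-up basket and prices (an added illustration):

    quantities = [10, 5, 2]             # fixed basket of three goods
    prices_period1 = [1.0, 2.0, 5.0]    # base period prices
    prices_period2 = [1.1, 2.4, 5.0]

    cost1 = sum(q * p for q, p in zip(quantities, prices_period1))
    cost2 = sum(q * p for q, p in zip(quantities, prices_period2))
    print(cost2 / cost1)   # the index in period two, period one normalized to 1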

Source: econterms

Law of iterated expectations

Often exemplified by E_t E_{t+1}(.) = E_t(.). That is, "one cannot use limited information [at time t] to predict the forecast error one would make if one had superior information [at t+1]." -- Campbell, Lo, and MacKinlay, p 23.

Source: econterms

LBO

Leveraged buy-out. The act of taking a public company private by buying it with the proceeds of bond issues, and using the revenues of the company to pay off the bonds.

Source: econterms

Learning process

Consider a repeated play of a finite game. In each period, every player observes the history of past actions, and forms a belief about the other players' strategies. He then chooses a best response according to his belief about the other players' strategies. We call such a process a learning process.

Source: SFB 504

least squares learning

The kind of learning that an agent in a model exhibits by adapting to past data by running least squares on it to estimate a hypothesized parameter and behaving as if that parameter were correct.

Source: econterms

leisure

In some models, individuals spend some time working and the rest is lumped into a category called leisure, the details of which are usually left out.

Source: econterms

lemons model

Describes models like that of Akerlof's 1970 paper, in which the fact that a good is available suggests that it is of low quality. For example, why are used cars for sale? In many cases because they are "lemons," that is, they were problematic to their previous owners.

Source: econterms

Leontief production function

Has the form q=min{x1,x2} where q is a quantity of output and x1 and x2 are quantities of inputs or functions of the quantities of inputs.

Source: econterms

leptokurtic

An adjective describing a distribution with high kurtosis. 'High' means the fourth central moment is more than three times the square of the second central moment; such a distribution has greater kurtosis than a normal distribution. This term is used in Bollerslev-Hodrick 1992 to characterize stock price returns.
Lepto- means 'slim' in Greek and refers to the central part of the distribution.

Source: econterms

Lerman ratio

A government benefit to the underemployed will presumably reduce their hours of work. The ratio of the actual increase in income to the benefit is the Lerman ratio, which is ordinarily between zero and one. Moffitt (1992) estimates it in regard to the U.S. AFDC program at about .625.

Source: econterms

Lerner index

A measure of the profitability of a firm that sells a good: (price - marginal cost) / price.

One estimate, from Domowitz, Hubbard, and Petersen (1988) is that the average Lerner index for manufacturing firms in their data was .37.

Source: econterms

leverage ratio

Meaning differs by context. Often: the ratio of debts to total assets. Can also be the ratio of debts (or long-term debts in particular, excluding for example accounts payable) to equity.

Normally used to describe a firm's accounts, but could describe the accounts of some other organization, or an individual, or a collection of organizations.

Source: econterms

Leviathan

The all-powerful kind of state that Hobbes thought "was necessary to solve the problem of social order." -- Cass R. Sunstein, "The Road from Serfdom" The New Republic Oct 20, 1997, p 37.

Source: econterms

Liability of newness

The liability of newness phenomenon describes how an organization's risk of dying changes during its life course. It states that at the point of founding of an organization the risk of dying is highest and decreases with the growing age of the organization. There are basically three reasons why this might be the case (see Stinchcombe, 1965):
New organizations which are acting in new areas ask for new roles to be performed by their members. The learning of the new roles takes time and leads to economic inefficiencies.
Trust among the organizational members has yet to be developed since in most cases the new employees of a firm do not know each other when the organization is founded.
New organizations have not yet built stable portfolios of clients.

These considerations can - at least in some respects - also apply to the new rules of an organization. A new rule also implies new roles that have to be learned, and members have to develop trust towards the new rule. According to this theoretical concept a new organizational rule should also have its highest risk of being abolished just after its founding (see Schulz, 1993).

Source: SFB 504

Lifecycle hypothesis

The life-cycle hypothesis presents a well-defined linkage between the consumption plans of an individual and her income and income expectations as she passes from childhood, through the work participating years, into retirement and eventual decease. Early attempts to establish such a linkage were made by Irving Fisher (1930) and again by Harrod (1948) with his notion of hump saving, but a sharply defined hypothesis which carried the argument forward both theoretically and empirically with its range of well-specified tests for cross-section and time series evidence was first advanced by Modigliani & Brumberg (1954). Both their paper and advanced copies of the permanent income theory of Milton Friedman (1957) were circulating in 1953. Both the Modigliani-Brumberg and the Friedman theories are referred to as life-cycle theories.

The main building block of life-cycle models is the saving decision, i.e., the division of income between consumption and saving. The saving decision is driven by preferences between present and future consumption (or the utility derived from consumption). Given the income stream the household receives over time, the sequence of optimal consumption and saving decisions over the entire life can be computed. Note that the standard life-cycle model as presented here is firmly grounded in expected utility theory and assumes rational behavior.

The typical shape of the income profile over the life cycle starts with low income during the early years of the working life, then income increases until a peak is reached before retirement, while pension income during retirement is substantially lower. To make up for the lower income during retirement and to avoid a sharp drop in utility at the point of retirement, individuals will save some fraction of their income during their working life and dissave during retirement. This results in a hump-shaped savings profile over the life cycle -- the main prediction of the life-cycle theory.

Unfortunately, this prediction does not hold in actual household behavior. It is fair to say the reasons for this failure of the simple life-cycle model are still not understood. Rodepeter & Winter (1998) provide empirical evidence for Germany and discuss some extensions of the life-cycle model that might help to understand actual savings behavior. An important direction of current research tries to apply elements of behavioral economics to life-cycle savings decisions.

Source: SFB 504

Lifecycle hypothesis: a review of the literature

This review of the literature on life-cycle consumption and saving decisions is adapted from Fisher (1987).

The life-cycle hypothesis presents a well-defined linkage between the consumption plans of an individual and her income and income expectations as she passes from childhood, through the work participating years, into retirement and eventual decease. Early attempts to establish such a linkage were made by Irving Fisher (1930) and again by Harrod (1948) with his notion of hump saving, but a sharply defined hypothesis which carried the argument forward both theoretically and empirically with its range of well-specified tests for cross-section and time series evidence was first advanced by Modigliani & Brumberg (1954). Both their paper and advanced copies of the permanent income theory of Milton Friedman (1957) were circulating in 1953 and led M.R. Fisher (1956) to carry out tests of the theories even preceding publication of Friedman's work. Both the Modigliani-Brumberg and the Friedman theories are referred to as life-cycle theories and they certainly have many similar implications, but the one that is more closely related to the life cycle with emphasis on age -- Modigliani and Brumberg -- is the one on which the following review concentrates.

The key which rendered the multi-period analysis tractable under subjective certainty was the specification that the life-time utility function be homothetic -- this permitted planned consumption for each future period to be written as a function of expected wealth as seen at the planning date, the functional parameters being in no way dependent upon wealth, but upon age and tastes. The authors further sharpened their hypothesis. They specified that an individual would plan to consume the same amount in real discounted terms each year. Throughout, desired bequest and initial assets were set to zero. However, the authors did show that bequests could be accounted for within the homothetic utility function itself if that became necessary.

From the outset, such a sharp hypothesis was desired for empirical testing. For Modigliani at least, a propelling influence had been the debate about the explanatory power of the Keynesian consumption function for forecasting postwar consumption and income. The inadequacies revealed had led already to several refined theories, notably by Duesenberry (1949) and by Modigliani (1949) himself. In the 1940s, cross-section studies had been carefully carried through at the National Bureau of Economic Research (NBER), and empirical results from these studies were promoting theoretical insights. Any new theory had to be consistent with these findings. The tighter specification of the hypothesis enabled the spelling out of the pattern of accumulating savings in the working years to finance the retirement years -- hump savings. Assuming that real income of each member of the population-wide sample remained the same throughout working life, it was shown that the saving-income ratio is independent of the age and income distribution and dependent only on the proportion of retirement years to expected lifetime. This alerted economists to the fact that cross-section results do not directly translate into estimates of the marginal propensity to save of an individual planning function. This insight is of broader significance not confined to the simple hypothesis. The implications of the hypothesis for time series analysis were disseminated much more slowly as the companion paper to that on cross section interpretation was never published, accounts not being freely available until 1963 and the original text itself not until 1980.

Real consumption, including the depreciation of durable goods, is a proportion of expected real wealth, and wealth is the addition of initial assets at the planning date, current income and expected (discounted) future income. By then assuming that the proportionality factor referred to is identical across individuals, they devised an aggregate relation for each and every age group. Next they proceeded to aggregate across age groups. Here the proportionality factor, depending as it does on age, is not independent of assets, and bias may be introduced if the strictest set of assumptions used in the cross-section analysis is employed. The authors show that when aggregated real income follows an exponential growth trend, the parameters of the aggregate relation remain constant over time. They are, however, sensitive to the magnitude of the growth rate of real income (a sum of growth rates in productivity and population), the saving-income ratio being larger the greater the rate of income growth.

If income and/or assets at any time move out of line with previous planning expectations, plans can be revised. Suppose income rises, yet income expectations are not revised, the change being viewed as a one-off event. Then the individual marginal propensity to save at that date would rise to finance subsequent consumption at a higher level until death. If income expectations were revised upwards permanently, then the marginal propensity to save would also rise but to a lesser degree than in the one-off case as higher consumption can more easily be provided for out of later-period incomes. Allowance for income variability is straightforward in cross section; with time series expected income, here labor income, may be set equal to a weighted average of aggregated past and expected future income, or subdivided according to whether the reference is to employed or unemployed consumers at any time (Modigliani & Ando (1963)).

Source: SFB 504

LIFT

Acronym for "Let It Function Today" - a concept very comparable to rationality (for a repeated discussion see Bogart, 1985):


    Everyone believes it exists, although some pessimist critics say it exists only theoretically and has no everyday value whatsoever (e.g. Lotterbottel, 1983);
    It has just one entry (the economist view), but still people can get into it coming from very different places or levels. Superficially, these levels all look the same (red), but they really are dependent on context factors (the psychologist view);
    It is at the core of the SonderForschungsBereich. However, there will never be more than four people being able to use it at the same time. The probability is high that this also is the time when the bell rings and the concept breaks down (Hausmeister, 1952);
    People are really into it and they talk a lot about it (Funk & Stoer, 1997). Behavioral observation has proved, however, that in fact nobody gets in (although some people report spiritual experiences of "being in a flight-like state" or "getting closer to the heavenly Geschäftsstelle" or being "lifted up", while others believe in "the key"). Instead people circle around it using the dissatisficing strategy of climbing the stairs of experimental simulation;
    It is supposed to work perfectly, but it could happen that at some point it wouldn't. Therefore it is not worked with preventively. As one result the concept just never works, as a second result people sweat a whole lot;
    There is some speculation about what would happen if it worked at some point, but empirical evidence for these theories is still weak (Autorenkollektiv, 1997).

    Source: SFB 504

likelihood function

In maximum likelihood estimation, the likelihood function (often denoted L()) is the joint probability function of the sample, given the probability distributions that are assumed for the errors. For independent observations, that function is constructed by multiplying the pdf of each of the data points together:
L(q) = L(q; X) = f(X; q) = f(X_0; q) f(X_1; q) ... f(X_N; q)

Source: econterms

Limdep

A program for the econometric study of limited dependent variables. Limdep's web site is at 'http://www.limdep.com'.

Source: econterms

limited dependent variable

A dependent variable in a model is limited if it is discrete (can take on only a countable number of values) or if it is not always observed because it is truncated or censored.

Source: econterms

LIML

stands for Limited Information Maximum Likelihood, an estimation idea

Source: econterms

Lindeberg-Levy Central Limit Theorem

For {w_t} an iid sequence with E w_t = mu and var(w_t) = s^2:
Let W be the average of the T w_t's. Then T^(1/2)(W - mu)/s converges in distribution, as T goes to infinity, to a N(0,1) distribution.
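A simulation sketch, added for illustration, using uniform draws (mean 1/2, variance 1/12):

    import numpy as np

    rng = np.random.default_rng(8)
    T, reps = 500, 5_000
    mu, s = 0.5, np.sqrt(1 / 12)
    W = rng.uniform(size=(reps, T)).mean(axis=1)   # many sample averages
    z = np.sqrt(T) * (W - mu) / s
    print(z.mean(), z.std())                       # near 0 and 1, as N(0,1) predicts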

Source: econterms

linear model

An econometric model is linear if it is expressed in an equation which the parameters enter linearly, whether or not the data require nonlinear transformations to get to that equation.

Source: econterms

linear pricing schedule

Say the number of units, or quantity, paid for is denoted q, and the total paid is denoted T(q), following the notation of Tirole. A linear pricing schedule is one that can be characterized by T(q)=pq for some price-per-unit p.

For alternative pricing schedules see nonlinear pricing or affine pricing schedule.

Source: econterms

linear probability models

Econometric models in which the dependent variable is a probability between zero and one. These are easier to estimate than probit or logit models but usually have the problem that some predictions will not be in the range of zero to one.

Source: econterms

Linear separability

The method typically used to combine the attribute weights was adapted from Tversky's (1977) contrast model of similarity. The attribute weights are assumed to be independent and combined by adding (that means they are linearly separable).

Source: SFB 504

link function

Defined in the context of the generalized linear model, which see.

Source: econterms

Lipschitz condition

A function g:R->R satisfies a Lipschitz condition if
|g(t_1) - g(t_2)| <= C|t_1 - t_2|
for some constant C. For a fixed C we could say this is "the Lipschitz condition with constant C."

A function that satisfies the Lipschitz condition for a finite C is said to be Lipschitz continuous, which is a stronger condition than regular continuity; it means that the slope is never so steep as to be outside the range (-C, C).

Source: econterms

Lipschitz continuous

A function is Lipschitz continuous if it satisfies the Lipschitz condition for a finite constant C. Lipschitz continuity is a stronger condition than regular continuity. It means that the slope is never outside the range (-C, C).

Source: econterms

liquid

A liquid market is one in which it is not difficult or costly to buy or sell.

More formally, Kyle (1985), following Black (1971), describes a liquid market as "one which is almost infinitely tight, which is not infinitely deep, and which is resilient enough so that prices eventually tend to their underlying value."

Source: econterms

liquidity

A property of a good: a good is liquid to the degree it is easily convertible, through trade, into other commodities. Liquidity is not a property of the commodity itself but something established in trading arrangements.

Source: econterms

liquidity constraint

Many households, e.g. young ones, cannot borrow to consume or invest as much as they would want, but are constrained to current income by imperfect capital markets.

Source: econterms

liquidity trap

A Keynesian idea. When expected returns from investments in securities or real plant and equipment are low, investment falls, a recession begins, and cash holdings in banks rise. People and businesses then continue to hold cash because they expect spending and investment to be low. This is a self-fulfilling trap.

See also Keynes effect and Pigou effect.

Source: econterms

Ljung-Box test

Same as portmanteau test.

Source: econterms

locally identified

Linear models are either globally identified or there are an infinite number of observably equivalent ones. But for models that are nonlinear in parameters, "we can only talk about local properties." Thus the idea of locally identified models, which can be distinguished in data from any other 'close by' model. "A sufficient condition for local identification is that" a certain Jacobian matrix is of full column rank.

Source: econterms

locally nonsatiated

An agent's preferences are locally nonsatiated if every neighborhood of any consumption bundle contains another bundle that the agent strictly prefers. Preferences that are continuous and strictly increasing in all goods are locally nonsatiated.

Source: econterms

log

In the context of economics, log always means 'natural log', that is log_e, where e is the natural constant that is approximately 2.718281828. So x = log y <=> e^x = y.

Source: econterms

log utility

A utility function. Some versions of this are used often in finance.
Here is the simplest version. Define U() as the utility function, w as wealth, and a as a positive scalar parameter:
U(w) = a·ln(w)

is the log utility function.

Source: econterms

log-concave

A function f(w) is said to be log-concave if its natural log, ln(f(w)), is a concave function; that is, assuming f is twice differentiable, f''(w)f(w) - f'(w)^2 <= 0. Since log is a strictly concave function, any concave function is also log-concave.

A random variable is said to be log-concave if its density function is log-concave. The uniform, normal, beta, exponential, and extreme value distributions have this property. If pdf f() is log-concave, then so are its cdf F() and 1-F(). The truncated version of a log-concave function is also log-concave.

In practice the intuitive meaning of the assumption that a distribution is log-concave is that (a) it doesn't have multiple separate maxima (although it could be flat on top), and (b) the tails of the density function are not "too thick".

An equivalent definition, for vector-valued random variables, is in Heckman and Honore, 1990, p 1127. Random vector X is log-concave iff its density f() satisfies the condition that f(a·x_1 + (1-a)·x_2) >= [f(x_1)]^a · [f(x_2)]^(1-a) for all x_1 and x_2 in the support of X and all a satisfying 0 <= a <= 1.

Source: econterms

log-convex

A random variable is said to be log-convex if its density function is log-convex. Pareto distributions with finite means and variances have this property, and so do gamma densities with a coefficient of variation greater than one. [Ed.: I do not know the intuitive content of the definition.] A log-convex random vector X is one whose density f() satisfies the condition that f(a·x_1 + (1-a)·x_2) <= [f(x_1)]^a · [f(x_2)]^(1-a) for all x_1 and x_2 in the support of X and all a satisfying 0 <= a <= 1.

Source: econterms

Logic of conversation

Inferring the pragmatic meaning of a semantic utterance requires going beyond the information given. "In making these inferences, speakers and listeners rely on a set of tacit assumptions that govern the conduct of conversation in everyday life" (Schwarz, 1994, p. 124). According to Grice (1975) these assumptions can be expressed by four maxims which constitute the "co-operative principle". "First, a maxim of quantity demands that contributions are as informative as required, but not more informative than required. Second, a maxim of quality requires participants to provide no information they believe is false or lack adequate evidence for. Third, according to a maxim of relation, contributors need to be relevant for the aims of the ongoing interaction. Finally, a maxim of manner states that contributors should be clear, rather than obscure or ambiguous" (Bless, Strack & Schwarz, 1993, p. 151). These maxims have been demonstrated to have a pronounced impact on how individuals perceive and react to semantically presented social situations and problem scenarios.

Source: SFB 504

logistic distribution

Has the cdf F(x) = 1/(1+e^{-x}).
This distribution is very similar to the normal distribution but quicker to calculate, because its cdf has a closed form. The pdf is f(x) = e^x·(1+e^x)^{-2} = F(x)F(-x).
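
The closed-form cdf and the identity f(x) = F(x)F(-x) can be checked directly; a small Python sketch:

    import numpy as np

    def F(x):   # logistic cdf
        return 1.0 / (1.0 + np.exp(-x))

    def f(x):   # logistic pdf
        return np.exp(x) / (1.0 + np.exp(x))**2

    x = np.linspace(-3.0, 3.0, 7)
    print(np.allclose(f(x), F(x) * F(-x)))   # True: f(x) = F(x)F(-x)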

Source: econterms

logit model

A univariate binary model. That is, for a dependent variable y_i that can be only one or zero, and a continuous independent variable x_i:
Pr(y_i = 1) = F(x_i'b)
Here b is a parameter to be estimated, and F is the logistic cdf. The probit model is the same but with a different cdf for F.
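
A minimal sketch of logit estimation by maximum likelihood on simulated data (in practice one would use a packaged routine; the parameter values here are illustrative):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    b_true = np.array([0.5, -1.0])                  # illustrative values
    p = 1.0 / (1.0 + np.exp(-X @ b_true))
    y = (rng.uniform(size=n) < p).astype(float)

    def neg_loglik(b):
        # negative log-likelihood of the logit model
        q = 1.0 / (1.0 + np.exp(-X @ b))
        return -(y * np.log(q) + (1.0 - y) * np.log(1.0 - q)).sum()

    print(minimize(neg_loglik, x0=np.zeros(2)).x)   # should be near b_true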

Source: econterms

lognormal distribution

Let X be a random variable with a standard normal distribution. Then the variable Y = e^X has a lognormal distribution.
Example: Yearly incomes in the United States are roughly log-normally distributed.

Source: econterms

longitudinal data

a synonym for panel data

Source: econterms

Lorenz curve

used to discuss concentration of suppliers (firms) in a market. The horizontal axis is divided into as many pieces as there are suppliers, often on a percentage scale going from 0 to 100, with the firms in order of increasing size. On the vertical axis are the market sales in cumulative percentage terms from 0 to 100. The Lorenz curve graphs, for each point on the horizontal axis, the combined sales of all the firms up to and including that point.

So (0,0) and (100,100) are the endpoints of the Lorenz curve, and it is weakly convex, and piecewise linear, between them. If all firms were the same size it would be the 45-degree line. See also Gini coefficient.
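
A minimal sketch in Python of the computation (the sales figures are made up):

    import numpy as np

    sales = np.array([5.0, 20.0, 1.0, 10.0, 4.0])    # hypothetical firm sales
    shares = np.sort(sales) / sales.sum()             # smallest firms first
    lorenz = np.concatenate([[0.0], np.cumsum(shares)])
    pts = np.linspace(0.0, 1.0, len(lorenz))          # share of firms, 0 to 1
    print(np.column_stack([pts, lorenz]))             # runs from (0,0) to (1,1)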

Source: econterms

loss function

Or, 'criterion function.' A function that is minimized to achieve a desired outcome. Often econometricians minimize the sum of squared errors in making an estimate of a function or a slope; in this case the loss function is the sum of squared errors. One might also think of agents in a model as minimizing some loss function in their actions that are predicated on estimates of things such as future prices.

Source: econterms

lower hemicontinuous

Informally: 'no appearing points'. A correspondence is lower hemicontinuous at a point if no element of its image there appears out of nowhere: every element of the image at that point can be approximated by elements of the images at nearby points. Formally, a correspondence F is lower hemicontinuous at x if for every open set U intersecting F(x), there is a neighborhood of x such that F(x') intersects U for every x' in that neighborhood.

Source: econterms

LRD

Longitudinal Research Database, at the U.S. Bureau of the Census. Used in the study of labor and productivity. The data is not publicly available without special certification from the Census. The LRD extends back to 1982.

Source: econterms

Lucas critique

A criticism of econometric evaluations of U.S. government policy as they existed in 1973, made by Robert E. Lucas. "Keynesian models consisted of collections of decision rules for consumption, investment in capital, employment, and portfolio balance. In evaluating alternative policy rules for the government,.... those private decision rules were assumed to be fixed.... Lucas criticized such procedures [because optimal] decision rules of private agents are themselves functions of the laws of motion chosen by the government.... policy evaluation procedures should take into account the dependence of private decision rules on the government's ... policy rule." In Cochrane's language: "Lucas argued that policy evaluation must be performed with models specified at the level of preferences ... and technology [like discount factor beta and permanent consumption c* and exogenous interest rate r], which presumably are policy invariant, rather than decision rules which are not." [I believe the canonical example is: what happens if government changes marginal tax rates? Is the response of tax revenues linear in the change, or is there a Laffer curve to the response? Thus stated, this is an empirical question.]

Source: econterms

M

m-estimators

Estimators that maximize a sample average. The 'm' means 'maximum-likelihood-like'. (from Newey-McFadden)

The term was introduced by Huber (1967). "The class of M-estimators included the maximum likelihood estimator, the quasi-maximum likelihood estimator, multivariate nonlinear least squares" and others. (from Wooldridge, p 2649)

I think all m-estimators have scores.

Source: econterms

M1

A measure of total money supply. M1 includes currency held by the public and checkable demand deposits, but not savings or other time deposits.

Source: econterms

M2

A measure of total money supply. M2 includes everything in M1 and also savings and other time deposits.

Source: econterms

MA

Stands for "moving average." Describes a stochastic process (here, et) that can be described by a weighted sum of a white noise error and the white noise error from previous periods. An MA(1) process is a first-order one, meaning that only the immediately previous value has a direct effect on the current value:
et = ut + put-1
where p is a constant (more often denoted q) that has absolute value less than one, and ut is drawn from a distribution with mean zero and finite variance, often a normal distribution.
An MA(2) would have the form:
et = ut + p1ut-1 + p2ut-2
and so on. In theory a process might be represented by an MA(infinity).
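
A short simulation of an MA(1) process in Python (theta = 0.5 is an arbitrary illustrative value):

    import numpy as np

    rng = np.random.default_rng(3)
    T, theta = 5000, 0.5
    u = rng.normal(size=T + 1)          # white noise
    e = u[1:] + theta * u[:-1]          # e_t = u_t + theta * u_{t-1}

    # An MA(1) has nonzero autocorrelation at lag 1 only:
    print(np.corrcoef(e[1:], e[:-1])[0, 1])   # near theta/(1+theta^2) = 0.4
    print(np.corrcoef(e[2:], e[:-2])[0, 1])   # near 0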

Source: econterms

MA(1)

A first-order moving average process. See MA for details.

Source: econterms

macro

Short for macroeconomics: the study of the economy as a whole -- aggregate output, employment, inflation, and growth.

Source: econterms

MacroInstitutionalism


Generally, macroinstitutionalism is seen as the tendency of organizations to arrange their formal structure not in response to the technical needs of the organization but in accordance with certain widely accepted rules. This is done in order not to lose legitimacy with important stakeholders like banks, clients, etc. (Scott, 1987; Scott, 1995; Zucker, 1991; Meyer & Rowan, 1977). Organizations are expected to conform to the institutionalized rules, so firms react to these expectations of good practice rather than looking for the most rational solutions. An example of this notion would be the implementation of computing facilities in organizational settings just because competing firms use computing facilities in similar settings too. "We arrive at the conclusion that formal organization, as it expands in a domain or society, becomes less explicitly rational in its structure. ... Every aspect of rationalized organizational structure comes under exogenous institutional control ..." (Meyer, 1992: 268).

The consequence of this tendency is that within an organization the institutionalized routines might be decoupled from the actual practice of the organization. The formal rules signal to the environment that the organization complies with the institutionalized norms of organizing. However, strict application of the rules would lead to inconsistencies. Therefore the organizational members (OM) have the freedom to arrange their tasks in the way they consider most efficient -- thereby violating the official rules (Meyer & Rowan, 1977: 357).

Source: SFB 504

Magic cards

" the Gathering game scenario, players assume the roles of dueling wizards, each with their own libraries of magic spells(represented by decks of cards) that may potentially be used against the player's opponent. Cards are sold in random assortments, just like baseball cards, at retail stores. Launched in August 1993, this product has already grossed hundreds of millions of retail dollars, and now has over a million players worldwide." - description from Reily (1999), " Using Field Experiments to Test Equivalence Between Auction Formats: Magic on the Internet." AER, 89

main effect

As contrasted to interaction effect.

In the regression

y_i = a·X_i + b·X_iZ_i + c·Z_i + errors

The b·X_iZ_i term measures the interaction effect. The main effect of Z_i is c·Z_i, and likewise a·X_i is the main effect of X_i.

This term is usually used in an ANOVA context, where its meaning is presumably analogous but this editor has not verified that.

Source: econterms

maintained hypothesis

Synonym for 'alternative hypothesis'. "The hypothesis that the restriction or set of restrictions to be tested does NOT hold." Often denoted H1.

Source: econterms

Malmquist index

An index number enabling a productivity comparison between economy A and economy B. Imagine that we have an aggregate production function Q_AA = f_A(K_A, L_A) that describes economy A and an aggregate production function Q_BB = f_B(K_B, L_B) that describes economy B. K and L stand for capital and labor inputs. We substitute the inputs of B into the production function of A to compute Q_AB = f_A(K_B, L_B). We also compute Q_BA = f_B(K_A, L_A) with the inputs from country A.

The Malmquist index of A with respect to B is the geometric mean of Q_AA/Q_BA and Q_AB/Q_BB -- the output ratios obtained when the two technologies are applied to the same inputs. It will be greater than one if A's aggregate production technology is better than B's.
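
A sketch of the computation in Python, assuming Cobb-Douglas aggregate production functions (an illustrative functional form, not part of the definition; all numbers hypothetical):

    import numpy as np

    # Hypothetical Cobb-Douglas technologies: Q = A * K^0.3 * L^0.7
    def f_A(K, L): return 1.2 * K**0.3 * L**0.7
    def f_B(K, L): return 1.0 * K**0.3 * L**0.7

    K_A, L_A, K_B, L_B = 100.0, 50.0, 80.0, 60.0
    Q_AA, Q_AB = f_A(K_A, L_A), f_A(K_B, L_B)
    Q_BA, Q_BB = f_B(K_A, L_A), f_B(K_B, L_B)

    index = np.sqrt((Q_AA / Q_BA) * (Q_AB / Q_BB))
    print(index)   # 1.2 here: technology A is 20 percent more productive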

Source: econterms

mantissa

Fractional part of a real number.

Source: econterms

MAR

a rare abbreviation, for moving-average representation

Source: econterms

March CPS

Also known as the Annual Demographic File. Conducted in March of each year by the Census Bureau in the U.S. Gets the information from the regular monthly CPS survey, plus additional data on work experience, income, noncash benefits, and migration.

Source: econterms

marginal significance level

a synonym for 'P value'

Source: econterms

market

An organized exchange between buyers and sellers of a good or service.

Source: econterms

market capitalization

Total number of shares times the market price of each. May be said of a firm's shares, or of all the shares on an equity market.

Source: econterms

market failure

A situation, usually discussed in a model not in the real world, in which the behavior of optimizing agents in a market would not produce a Pareto optimal allocation. Sources of market failures:
-- monopoly. Monopoly or oligopoly producers have incentives to underproduce and to price above marginal cost, which then gives consumers incentives to buy less than the Pareto optimal allocation.
-- externalities

Source: econterms

market for corporate control

Shares of public firms are traded, and in large enough blocks this means control over corporations is traded. That puts some pressure on managers to perform, otherwise their corporation can be taken over.

Source: econterms

market power

Power held by a firm over price, and the power to subdue competitors.

Source: econterms

market power theory of advertising

That established firms use advertising as a barrier to entry through product differentiation. Such a firm's use of advertising differentiates its brand from other brands to a degree that consumers see its brand as a slightly different product, not perfectly substitutable by existing or potential competitors. This makes it hard for new competitors to gain consumer acceptance.

Source: econterms

market price of risk

Synonym for Sharpe ratio.

Source: econterms

Markets

In a very abstract sense, a market is where private goods are exchanged. A good can be anything for which a well-defined property right exists. This abstract definition is related to the concepts of an allocation, the competitive market equilibrium, and an economic equilibrium in general.

More intuitively, the concept of a market describes the idea that the suppliers of a product (or a service) meet the demand side, and both sides negotiate over the price until an optimal combination of price and quantity is reached. Typically, the supply side offers a higher quantity the higher the price, whereas the demanded quantity falls the higher the price. In equilibrium, suppliers and consumers trade at a price at which the supplied quantity equals the demanded quantity.

Source: SFB 504

Markov chain

A stochastic process is a Markov chain if:
(1) time is discrete, meaning that the time index t has a finite or countably infinite number of values;
(2) the set of possible values of the process at each time is finite or countably infinite; and
(3) it has the Markov property of memorylessness.

Source: econterms

Markov perfect

A characteristic of some Nash equilibria. "A Markov perfect equilibrium (MPE) is a profile of Markov strategies that yields a Nash equilibrium in every proper subgame." A Markov strategy is one that does not depend at all on variables that are functions of the history of the game except those that affect payoffs.
A tiny change to payoffs can discontinuously change the set of Markov perfect equilibria, because a state variable with a tiny effect on payoffs can be part of a Markov perfect strategy, but if its effect drops to zero, it cannot be included in a strategy; that is, such a change makes many strategies disappear from the set of Markov perfect strategies.

Source: econterms

Markov process

A stochastic process where all the values are drawn from a discrete set. In a first-order Markov process, only the most recent draw affects the distribution of the next one; all such processes can be represented by a Markov transition density matrix. That is,
Pr{x_{t+1} is in A | x_t, x_{t-1}, ...} = Pr{x_{t+1} is in A | x_t}
Example 1: x_{t+1} = a + b·x_t + e_t is a Markov process.
For a=0, b=1 it is a martingale.


A Markov process can be periodic only if it is of higher than first order.

Source: econterms

Markov property

A property that a set of stochastic processes may have. The system has the Markov property if the present state predicts future states as well as the whole history of past and present states does -- that is, the process is memoryless.

Source: econterms

Markov strategy

In a game, a Markov strategy is one that does not depend at all on state variables that are functions of the history of the game except those that affect payoffs.
[Ed.: I believe random elements can be in a Markov strategy: e.g. a mixed strategy could be a Markov strategy.]

Source: econterms

Markov transition matrix

A square matrix describing the probabilities of moving from one state to another in a dynamic system. In each row are the probabilities of moving from the state represented by that row to the other states. Thus the rows of a Markov transition matrix each add to one. Sometimes such a matrix is denoted something like Q(x' | x), which can be understood this way: Q is a matrix, x is the existing state, x' is a possible future state, and for any x and x' in the model, the probability of going to x' given that the existing state is x is in Q. An example is sketched below.
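
A small two-state example in Python (the states and transition probabilities are hypothetical):

    import numpy as np

    # Two states, 0 = 'boom' and 1 = 'bust'; numbers are hypothetical.
    # Rows are the current state, columns the next; each row sums to one.
    Q = np.array([[0.9, 0.1],
                  [0.4, 0.6]])

    rng = np.random.default_rng(4)
    state, path = 0, [0]
    for _ in range(20):
        state = rng.choice(2, p=Q[state])   # draw next state from row Q[state]
        path.append(int(state))
    print(path)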

Source: econterms

Markov's inequality

Quoting almost strictly from Goldberger, 1994, p 31:

If Y is a nonnegative random variable, that is, if Pr(Y<0)=0, and k is any positive constant, then E(Y) ≥ kPr(Y ≥ k).

The proof is amazingly quick. See Goldberger page 31 or Hogg and Craig page 68.
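
A quick numerical illustration in Python (the exponential distribution is an arbitrary nonnegative example):

    import numpy as np

    rng = np.random.default_rng(5)
    Y = rng.exponential(scale=2.0, size=100_000)   # a nonnegative random variable
    k = 3.0
    print(Y.mean(), k * (Y >= k).mean())   # E(Y) = 2.0 exceeds k*Pr(Y >= k) = 0.67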

Source: econterms

markup

In macro, the ratio of price to marginal cost. Can be used as a measure of market power across firms, industries, or economies.

Source: econterms

Marshallian demand function

x(p,m) -- the amount of each good that a consumer demands, as a function of the price vector p and the consumer's income m; it is the solution to the consumer's problem of maximizing utility subject to the budget constraint. p and x can be vectors.

Source: econterms

martingale

A stochastic process {X_t} whose expected next value, conditional on the history of the process, equals its current value: E[X_{t+1} | X_t, X_{t-1}, ...] = X_t. The first differences of a martingale form a martingale difference sequence (which see).

Source: econterms

martingale difference sequence

A stochastic process {X_t} is a martingale difference sequence with respect to information {Y_t} if and only if:
(i) E|X_t| < infinity
(ii) E[X_{n+1} | Y_0, Y_1, ..., Y_n] = 0 for all n.
Equivalently, {X_t} is a martingale difference sequence if it is the sequence of first differences of a martingale {g_t}, a process satisfying E[g_{t+1} | Y_0, ..., Y_t] = g_t.

Martingale differences are uncorrelated but not necessarily independent.

Source: econterms

mass production

'A production system characterized by mechanization, high wages, low prices, and large-volume output.' (Hounshell, p.305) Usually refers to factory processes on metalwork, not to textiles or agriculture. The term came into use in the 1920s and referred to production approaches analogous to those of the Ford Motor Company in the US.

Source: econterms

Matching pennies

Extremely simple, symmetric, two-player 2x2 game (said to be played by children), in which each player chooses either Head or Tail. If the choices differ, player 1 pays a dollar to player 2; if they are the same, player 2 pays player 1 a dollar. This game does not have an equilibrium in pure strategies; the unique equilibrium involves each player selecting each of the two actions with equal probability. The game illustrates that interactively optimizing behavior may create the need to take actions randomly, in order not to be predictable by the opponent. For the exact determination of mixed equilibrium strategies, the assumption of expected utility is important. For a real-world situation closely resembling this game, think of penalty shooting in sports: both the goal-keeper and the player who shoots the ball play randomized strategies. They randomize their actions (left or right, upper corner or not) in such a way that the other player cannot improve by either action he takes, given the opponent's probabilities of selecting the actions.

Source: SFB 504

Matching Pennies

A zero-sum game with two players. Each shows either heads (H) or tails (T) of a coin. If both are heads or both are tails then player One wins; otherwise player Two wins. The payoff matrix is:

                     Player Two
                     H         T
    Player One  H   1,-1     -1,1
                T  -1,1       1,-1

There is no Nash equilibrium in pure strategies for this game; in the unique mixed-strategy equilibrium each player plays H and T with equal probability.

Source: econterms

Matlab

A matrix programming language and programming environment. Used more by engineers but increasingly by economists. There's a very brief tutorial at Tutorial: Matlab.
The software is made by The Mathworks, Inc.

Source: econterms

maximin principle

A justice criterion proposed by the philosopher Rawls. A principle about the just design of social systems -- e.g., rights and duties. According to this principle the system should be designed to maximize the position of those who will be worst off in it.

"The basic structure is just throughout when the advantages of the more fortunate promote the well-being of the least fortunte, that is, when a decrease in their advantages would make the least fortunate even worse off than they are. The basic structure is perfectly just when the prospects ofthe least fortunate are as great as they can be." -- Rawls, 1973, p 328

Source: econterms

maximum score estimator

A nonparametric estimator of certain coefficients of a binary choice model. Avoids assumptions about the distribution of errors that would be made by a probit or logit model in the same circumstances.

In the econometric model: the dependent variable y_i is either zero or one; the regressors X_i are multiplied by a parameter vector b. y_i often represents which of two choices was selected by a respondent. b is estimated to maximize the objective function:

max_b sum_{i=1 to N} (y_i - .5)·sign(X_i·b)

where i indexes observations, of which there are N, and the function sign() has value one if its argument is greater than or equal to zero, and value zero otherwise.

b chosen this way has the property that it maximizes the correct prediction of yi given the information in X. Notice that although the maximum value of the maximand may be well defined, b is not usually uniquely estimated in a finite data set, because values of b near betahat would make the same predictions. Often, however, b is estimated within a narrow range.

Source: econterms

MBO

Stands for Management Buy-Out, the purchase of a company by its management. Sometimes means Management By Objectives, a goal-oriented personnel evaluation approach.

Source: econterms

mean square error

A criterion for an estimator: the choice is the one that minimizes the sum of squared errors due to bias and due to variance.

Source: econterms

mean squared error

The mean squared error of an estimator b of true parameter vector B is:
MSE(b) = E[(b - B)(b - B)']
which is also
MSE(b) = var(b) + (bias(b))(bias(b))'
For a scalar parameter this reduces to MSE(b) = E[(b - B)^2] = var(b) + bias(b)^2.
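
A quick simulation in Python of the scalar decomposition (the shrunken sample mean is just an illustrative biased estimator):

    import numpy as np

    rng = np.random.default_rng(6)
    B, n, reps = 2.0, 20, 20_000
    samples = rng.normal(B, 1.0, size=(reps, n))
    b = 0.9 * samples.mean(axis=1)      # a deliberately biased estimator of B

    mse = ((b - B)**2).mean()
    bias, var = b.mean() - B, b.var()
    print(mse, var + bias**2)           # the two numbers should nearly agree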

Source: econterms

measurable

If (S, A) is a measurable space, elements of A are A-measurable.

Source: econterms

measurable space

(S, A) is a measurable space if S is a set and A is a sigma-algebra of S. Elements of A are said to be A-measurable.

Source: econterms

measure

A noun, in the mathematical language of measure theory: a measure is a function from sets to the real line. Probability is a common kind of measure in economic models. Other measures are the counting measure, which is the number of elements in the set, the length measure, the area measure, and the volume measure. Length, area, and volume are defined along lines, planes, and spaces just as one would expect, and they have the natural meanings.
Formally: a measure is a mapping m from a sigma algebra A to the extended real line such that
(i) m(null) = 0
(ii) m(B) >= 0 for all B in A
(iii) m(any countable union of disjoint sets in A) = the sum of m(each of those sets)
The third property is called the countable additivity property. An example: imagine probability mass distributed evenly on a unit square. Probability is then defined on any area within the square. The measure (probability, here) is the size (area) of the subset.
The kinds of subsets on which measures such as probability are defined are called sigma-algebras (which see).

Source: econterms

measure theory

The branch of mathematics that studies measures (which see); it provides the foundation for modern probability theory and integration.

Source: econterms

measurement error

The data used in an econometric problem may have been measured with some error, and if so this violates a basic condition of the abstract environment in which OLS is validly derived. This turns out not to be seriously problematic if the dependent variable is affected by an iid mean-zero measurement error, but if the regressors have been measured with a mean-zero iid error the estimates can be biased. There are standard approaches to this problem, notably the use of instrumental variables. Paraphrasing from Schennach, 2000, p 1: In a linear econometric specification, a measurement error on the regressors can be viewed as a particular type of endogeneity problem causing the disturbance to be correlated with the regressors, which is precisely the problem addressed by standard IV techniques.

Source: econterms

mechanism design

A certain class of principal-agent problems are called mechanism design problems. In these, a principal would like to condition her own actions on the private information of agents. The principal must offer incentives for the agents to reveal information.
Examples from the theoretical literature are auction design, monopolistic price discrimination, and optimal taxation. In an auction the seller would like to set a price just below the highest valuation of a potential buyer, but does not know that price, and an auction is a mechanism to at least partially reveal it. In price discrimination, the seller would like to offer the product at different prices to groups with different valuations, but may not be able to identify in advance which group an agent belongs to.

Source: econterms

Mediator variable

In general, a given variable may be said to function as a mediator to the extent that it accounts for the relation between the predictor and the criterion. Mediators explain how external physical events take on internal psychological significance. Whereas moderator variables specify when certain effects will hold, mediators speak to how or why such effects occur (Baron & Kenny, 1986, p. 1176).

[Path diagram omitted. Legend: IV = independent variable, OV = dependent (outcome) variable.]

The authors clarify the meaning of mediation by introducing a path diagram as a model for depicting the basic causal chain involved in mediation. "This model assumes a three-variable system such that there are two causal paths feeding into the outcome variable: the direct impact of the independent variable (Path c) and the impact of the mediator (Path b). There is also a path from the independent variable to the mediator (Path a).

A variable functions as a mediator when it meets the following conditions:
(a) variations in levels of the independent variable significantly account for the variations in the presumed mediator (i.e., Path a),
(b) variations in the mediator significantly account for variations in the dependent variable (i.e., Path b), and (c) when Paths a and b are controlled, a previously significant relation between the independent and dependent variable is no longer significant, with the strongest demonstration of mediation occurring when Path c is zero.
In regard to the last condition we may envisage a continuum. When Path c is reduced to zero, we have strong evidence for a single, dominant mediator. If the residual Path c is not zero, this indicates the operation of multiple mediating factors. Because most areas of psychology, including social, treat phenomena that have multiple causes, a more realistic goal may be to seek mediators that significantly decrease Path c rather than eliminating the relation between the independent and the dependent variables altogether. From a theoretical perspective, a significant reduction demonstrates that a given mediator is indeed potent, albeit not both a necessary and a sufficient condition for an effect to occur (Baron & Kenny, 1986, p. 1176).

Source: SFB 504

medium of exchange

A distinguishing characteristic of money is that it is taken as a medium of exchange, that is, in the language of Wicksell (1935) p. 17, that it is "habitually, and without hesitation, taken by anybody in exchange for any commodity."

Source: econterms

meet

Given a space of possible events, the meet is the finest common coarsening of the information sets of all the players. The meet is the finest partition of the space of possible events such that all players have beliefs about the probabilities of the elements of the partition.

Source: econterms

mesokurtic

An adjective describing a distribution with kurtosis of 3, like the normal distribution. See by contrast leptokurtic and platykurtic.

Source: econterms

metaproduction function

Means best-practice production function -- depending on context, either the most efficient feasible practice, or the most efficient actual practice of the existing entities converting inputs X into output y. Often in practice y is an agricultural output, the data come from a sample of farms, and the meta-production function could be estimated by estimating production functions for the farms and choosing among the most efficient ones. In the (macro) context of the quote below, the entities are not farms but countries, producing GDP.

'The term 'meta-production function' is due to Hayami and Ruttan (1970, 1985). For an exposition of the meta-production function approach, see Lau and Yotopoulos (1989) and Boskin and Lau (1990).... The two most important maintained hypotheses [of this approach] are: (1) that the aggregate production functions of all countries are identical in terms of 'efficiency-equivalent' units of output and inputs; and (2) that technical progress in all countries can be represented in the commodity-augmentation form, with constant geometric augmentation factors....' The framework allows 'the researcher to consider and potentially to reject the maintained hypotheses of traditional growth accounting [such as] (1) constant returns to scale, (2) neutrality of technical progress; and (3) profit maximization.' (p66)

An assumption related to the second maintained hypothesis above, which the theory depends on (p69), is that 'the measured outputs and inputs of the different countries may be converted into unobservable standardized, or 'efficiency-equivalent,' quantities of output and inputs by multiplicative country- and output- and input-specific time-varying augmentation factors....' (where 'time-varying' seems to conflict with the requirement, above, that the augmentation factors be 'constant'.) (p69) In this approach 'countries may differ in the quantities of their factor inputs and intensities and possibly in the qualities and efficiencies of their inputs and outputs, but they do not differ with regard to the technological opportunities .... [T]hey are assumed to have equal access to technologies.' From p66, 69, 73 of Lau (1996).

Source: econterms

metatheorem

An informal term for a proposition that can be proved in a class of economic model environments.

Source: econterms

method of moments

A way of generating estimators: set the distribution moments equal to the sample moments, and solve the resulting equations for the parameters of the distribution.
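
A minimal sketch in Python, matching the first two moments of a normal distribution -- the simplest case, in which the equations solve directly:

    import numpy as np

    rng = np.random.default_rng(7)
    data = rng.normal(loc=3.0, scale=2.0, size=1000)

    # Set distribution moments equal to sample moments and solve:
    #   E[X]   = mu          ->  mu_hat = sample mean
    #   E[X^2] = mu^2 + s^2  ->  s2_hat = sample second moment - mu_hat^2
    mu_hat = data.mean()
    s2_hat = (data**2).mean() - mu_hat**2
    print(mu_hat, np.sqrt(s2_hat))   # should be near 3.0 and 2.0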

Source: econterms

MFP

Abbreviation for multi-factor productivity.

Source: econterms

MGF

stands for 'moment generating function', which see.

Source: econterms

Microeconomics

Microeconomics is the analysis of individual economic units and their interactions. It includes the theories of the consumer (i.e., of households), the producer (i.e., firms), and the markets in which they interact. The tools of microeconomic analysis are also employed in other fields, such as the theory of optimal taxation in public economics. Microeconomics is often contrasted with macroeconomics which is concerned with economic aggregates, such as aggregate consumption and aggregate production, unemployment, or with economic growth in general.

Source: SFB 504

Minitab

Data analysis software, discussed at http://www.minitab.com.

Source: econterms

Mixed strategy

In contrast to pure strategies, mixed strategies are strategies that involve random draws. A mixed strategy is a probability distribution over a player's (pure) strategies. For example in penalty shooting, the kicker typically does not expect the goalkeeper to jump to the right corner for sure, but regards the goalkeeper's behaviour as a mix between the pure strategies "jump to the right" and "jump to the left".

Source: SFB 504

mixing

In the context of stochastic processes, events A and B (that is, subsets of possible outcomes of the process) "are mixing" if they are asymptotically independent in the following way.

Let L be a lag operator that moves all time subscripts back by one (e.g. replacing t by t-1). Iff A and B are mixing, then taking the limit as h goes to infinity:
lim Pr(A intersected with L^h B) = Pr(A)Pr(B).

The event L^h B is the event B, but h periods ago; it's NOT some kind of stochastic ancestor of B.

If two events are independent, they are mixing.
If two events are mixing, they are ergodic.

I *believe* that a stochastic process is mixing iff all pairs of possible values it can take, taken as events, are mixing.

Source: econterms

MLE

maximum likelihood estimator

Source: econterms

MLRP

Abbreviation for monotone likelihood ratio property of a statistical distribution.

Source: econterms

models

Generally means theoretical or structural models. Can also mean econometric models, which in this glossary are listed separately.

Source: econterms

Models of microeconomic decisions

Economics proceeds by building models of behavior. These models are supposed to be simplified representations of reality which specify how variables in a system relate to each other. Economists use many techniques in the construction and analysis of economic models, but most of the techniques fall into the categories of optimization analysis and equilibrium analysis.

Nearly all models of individual behavior in microeconomics are models of optimizing behavior which can broadly be interpreted as rational behavior. In building a model of behavior, economists are naturally led to identify agents that make the choices, the kinds of choices that are feasible for them, how the choices of other agents constrain them, and so on. Once the economist is able to write down an optimizing problem describing an economic choice, he or she can apply the standard mathematical methods of microeconomic analysis.

Once we have understood the nature of the optimal choice problem facing individual agents, we can investigate how these choices fit together. In general, some of the variables that influence a given agent's behavior -- such as prices -- will be determined, at least in part, by the behavior of other agents. An economic equilibrium is a situation of consistent optimal choices: No agent has an incentive to change any of his choices, given his perceptions of the behavior of other agents.

Source: SFB 504

Moderator variable

Moderator variables are important, because specific factors (e.g. context information) are often assumed to reduce or enhance the influence that specific independent variables have on specific responses in question (dependent variable).

Specifically within a correlational analysis framework, a moderator is a third variable that affects the zero-order correlation between two other variables. For example, Stern, McCants & Pettine (1982) found that the positivity of the relation between changing life events and severity of illness was considerably stronger for uncontrollable events (e.g., death of a spouse) than for controllable events (e.g., divorce). A moderator effect within a correlational framework may also be said to occur where the direction of the correlation changes. Such an effect would have occurred in the Stern et al. study if controllable life changes had reduced the likelihood of illness, thereby changing the direction of the relation between life-event change and illness from positive to negative.

"In the more familiar analysis of variance (ANOVA) terms, a basic moderator effect can be represented as an interaction between a focal independent variable and a factor that specifies the appropriate conditions for its operation" (Baron & Kenny, 1986, p. 1174).

Source: SFB 504

modernization

Quoting from Landes: "Modernization comprises such developments as urbanization (the concentration of the population in cities that serve as nodes of industrial production, administration, and intellectual and artistic activity); a sharp reduction in both death rates and birth rates from traditional levels (the so-called demographic transition); the establishment of an effective, fairly centralized bureaucratic government; the creation of an educational system capable of training and socializing the children of the society to a level compatible with their capacities and best contemporary knowledge; and, of course, the acquisition of the ability and means to use an up-to-date technology."

Source: econterms

Modigliani-Miller theorem

that the total value of the bonds and equities issued by a firm in a model is independent of the number of bonds outstanding or their interest rate.

The theorem was shown by Modigliani and Miller, 1958 in a particular context with no fixed costs, transactions costs, asymmetric information, and so forth. Analogous theorems are shown in various contexts. The assumptions made by such theorems offer a way of organizing what it would be that makes corporations choose to offer various levels of bonds. The choice of numbers and types of bonds and stocks a corporation offers is the choice of capital structure. Among the factors affecting the capital structure of a firm are taxes, bankruptcy costs, agency costs, signalling, bargaining position in litigation, and differences between firms and investors in access to capital markets.

Source: econterms

moment-generating function

Denoted M(t) or M_X(t), and describes a probability distribution. A moment-generating function is defined for any random variable X with a pdf f(x). M(t) is defined to be E[e^{tX}], which is the integral from minus infinity to infinity of e^{tx}f(x) dx. A use for these is that the nth moment of X is M^(n)(0), that is, the nth derivative of M() evaluated at zero.

Source: econterms

monetarism

The view that monetary policy is a prime source of the business cycle, and that the time path of the money stock is a good index of monetary policy. As presented by Milton Friedman and Anna Schwartz, monetarism emphasizes the relation between the level of the money stock and the level of output without a detailed theory of why changes in the money stock are not neutral in the short run. Later versions posed an explicit basis for noneutrality in the form of barriers to information flow about prices.

In policy terms monetarists, notably Friedman, advocated a monetary rule, that is, a steady growth in the money supply to match economic growth, without allowing central banks room for discretion. If the rule is credible, public expectations of inflation would be low, and thus inflation itself, if high, would fall almost immediately.

Source: econterms

monetarist view

In extreme form: that only the quantity of money matters by way of aggregate demand policy. Relevant only in an overheated economy (Branson p 391).

Source: econterms

monetary base

In a modern industrialized monetary economy, the monetary base is made up of (1) the currency held by individuals and firms and (2) bank reserves kept within a bank or on deposit at the central bank.

Source: econterms

monetary regime

"A monetary regime can be thought of as a set of rules governing the objectives and the actions of the monetary authority."

Examples: (1) "A gold standard is one example of a monetary regime -- the monetary authority is obligated to maintain instant convertibility between its liabilities and the gold. Th monetary authority may have considerable room to maneuver in that monetary regime, but it can do nothing that would cause it to violate its commitment."
(2) "The same remarks would apply to a monetary regime obligating the monetary authority to maintain a fixed exchange rate between its own and another currency." (3) "A monetary regime of a very different sort could be based on a Monetarist rule specifying the rate of growth of some monetary aggregate. The basic distinction is between regimes based on a convertibility or redemption principle and those based on a quantity principle."

Source: econterms

monetary rule

See the policy discussion in monetarism.

Source: econterms

monetized economy

A model economy that has a medium of exchange: money

Source: econterms

money

A good that acts as a medium of exchange in transactions. Classically it is said that money acts as a unit of account, a store of value, and a medium of exchange. Most authors find that the first two are nonessential properties that follow from the third. In fact, other goods are often better than money at being intertemporal stores of value, since most monies degrade in value over time through inflation or the overthrow of governments.

Theory: Ostroy and Starr, 1990, p. 25, define money in certain models "as a commodity of positive price and zero transaction cost that does not directly enter in production or consumption."

History: See this Web site on the History of Money.

Source: econterms

money illusion

'the belief that money [that is, a particular currency] represents a constant value'

Source: econterms

money-in-the-utility-function models

A modeling idea. In a basic Arrow-Debreu general equilibrium there is no need for money because exchanges are automatic, through a 'Walrasian auctioneer'. To study monetary phenomena, a class of models was made in which money was a good that brought direct utility to the agent holding it; e.g., a utility function took the form u(x,m) where x is a vector of other commodities, and m is a scalar quantity of real money held by the agent. Using this mechanism money can have a positive price in equilibrium and monetary effects can be seen in such models. Contrast 'cash-in-advance constraint' for an alternative approach.

Source: econterms

monopoly

If a certain firm is the only one that can produce a certain good, it has a monopoly in the market for that good.

Source: econterms

monopoly power

The degree of power held by the seller to set the price for a good. In U.S. antitrust law monopoly power is not measured by market share. (Salon magazine, 1998/11/11)

Source: econterms

monopsony

A state in which demand comes from one source. If there is only one customer for a certain good, that customer has a monopsony in the market for that good. Analogous to monopoly, but on the demand side not the supply side. A common theoretical implication is that the price of the good is pushed down near the cost of production. The price is not predicted to go to zero, because if it fell below the level at which suppliers are willing to produce, they would not produce. Market power is a continuum from perfectly competitive to monopsony, and there is an extensive practice of measuring the degree of market power.

Examples: For workers in an isolated company town, created by and dominated by one employer, that employer is a monopsonist for some kinds of employment. For some kinds of U.S. medical care, the government program Medicare is a monopsony.

Source: econterms

monotone likelihood ratio property

A property of a set of pdfs which is assumed in theoretical models to characterize risk and uncertainty because it makes more conclusions feasible and is often plausible.

Example: Let e ('effort') be an input variable into a stochastic production function, and y be the random variable that represent output. Let f(y | e) be the pdf of y for each e. Then the statement that f() has the monotone likelihood ratio property (MLRP) is the same as the statement that:
for e2>e1, f(y|e2)/f(y|e1) is increasing in y.
This says that output is positively related to effort, and something stronger, something like: of two outcomes or ranges of outcomes, the worse one will not become relatively more likely than the better one if effort were to rise. By relatively more likely is meant that the likelihood ratio, above, rises.

The set of pdfs for which the MLRP is assumed above is the set of f()'s indexed by values of e. Each holds that specified relationship to the others. In practice the MLRP assumption tends to rule out multimodal classes of distributions, and this is its main effect. (By multimodal we mean those with multiple-peaked pdfs.)

Normally e is scalar, taking on either discrete or continuous sets of values. An analogous definition, for a multidimensional (vector) e, is feasible. Whether it is used in existing models is not known to this author.

Source: econterms

monotone operator

An operator that preserves inequalities of its arguments. That is, if T is a monotone operator, then x>y implies Tx>Ty, and x<y implies Tx<Ty.

Same basic meaning as monotone transformation.

The most common monotone operator is the natural log function. For example in maximum likelihood estimation, one usually maximizes the log of the likelihood function, not the likelihood function itself, because this is more tractable and the log is a monotone operator so it doesn't change the answer.

Source: econterms

monotone transformation

A transformation that preserves inequalities of its arguments. That is, if T is a monotone transformation, then x>y implies Tx>Ty, and x<y implies Tx<Ty.

Same basic meaning as monotone operator.

Source: econterms

Monte Carlo simulations

These are data obtained by simulating a statistical model in which all parameters are numerically specified.

One might use Monte Carlo simulations to test how an estimation procedure would behave, for example under conditions when exact analytic descriptions of the performance of the estimation are not algebraically feasible, or when one wants to verify that one's analytic calculation for a confidence interval is correct.

Source: econterms

Moore-Penrose inverse

Same as pseudoinverse.

Source: econterms

morbidity

Incidence of ill health. It is measured in various ways, often by the probability that a randomly selected individual in a population at some date and location would become seriously ill in some period of time. Contrast to mortality.

Source: econterms

mortality

Incidence of death in a population. It is measured in various ways, often by the probability that a randomly selected individual in a population at some date and location would die in some period of time. Contrast to morbidity.

Source: econterms

MSA

Same as SMSA.

Source: econterms

MSE

mean squared error (which see)

Source: econterms

multi-factor productivity

Same as total factor productivity, a certain type of Solow residual.

MFP = d(ln f)/dt = d(ln Y)/dt - s_L·d(ln L)/dt - s_K·d(ln K)/dt
where f is the global production function; Y is output; t is time; s_L is the share of input costs attributable to labor expenses; s_K is the share of input costs attributable to capital expenses; L is a dollar quantity of labor; K is a dollar quantity of capital.

Source: econterms

multinomial

In the context of discrete choice models, multinomial means there are more than two possible values of the dependent variable, the choice, which is a scalar.

For specific constructions see multinomial logit and multinomial probit.

Source: econterms

multinomial logit

Relatively easy to compute but has the problematic IIA property by construction. Multinomial probit with correlation between structural residuals does not suffer from the IIA problem but is computationally expensive. (Ed.: I don't know why the IIA problem gets sucked into this when the actual difference between logit and probit is the functional form.) Multinomial logit is available in more software packages than is multinomial probit.

Source: econterms

multinomial probit

Multinomial probit with correlation between structural residuals does not suffer from the IIA problem but is computationally expensive. Multinomial logit, which solves a similar problem, is relatively easy to compute but has the problematic IIA property by construction. (Ed.: I don't know why the IIA problem gets sucked into this when the actual difference between logit and probit is the functional form.) Multinomial logit is available in more software packages than is multinomial probit.

Source: econterms

multivariate

A discrete choice model in which the choice is made from a set with more than one dimension is said to be a multivariate discrete choice model.

Source: econterms

Mundell-Tobin effect

That nominal interest rates would rise less than one-for-one with inflation because in response to inflation the public would hold less in money balances and more in other assets, which would drive interest rates down.

Source: econterms

mutatis mutandis

"The necessary changes having been made; substituting new terms."

Source: econterms

MVN

An abbreviation for 'multivariate normal' distribution.

Source: econterms

N

Nadaraya-Watson estimator

Used to estimate regression functions based on data {X_i, Y_i}. The estimator produces an estimate of Y at any requested value x of X (not only the ones in the data) as a kernel-weighted average of the observed Y_i:
Yhat(x) = [sum_{i=1 to N} K((x - X_i)/h)·Y_i] / [sum_{i=1 to N} K((x - X_i)/h)]
using as inputs (1) the data set {X_i, Y_i}, and (2) a kernel function K() (which see) describing the weights to be put on values in the data set near x in estimating Y. The kernel function itself can be parameterized by the choice of its functional form and its 'bandwidth' h, which scales its width in the X-direction. See the equation in the middle of Hardle's page 25.
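
A minimal Python sketch of the estimator with a Gaussian kernel (the data and bandwidth are illustrative choices):

    import numpy as np

    def nadaraya_watson(x, X, Y, h):
        # Kernel-weighted average of the Y_i, with a Gaussian kernel of bandwidth h.
        w = np.exp(-0.5 * ((x - X) / h)**2)
        return (w * Y).sum() / w.sum()

    rng = np.random.default_rng(8)
    X = rng.uniform(0.0, 10.0, size=200)
    Y = np.sin(X) + rng.normal(0.0, 0.3, size=200)
    print(nadaraya_watson(5.0, X, Y, h=0.5))   # near sin(5.0), about -0.96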

Source: econterms

NAICS

North American Industry Classification System, a set of industry categories standardized between the U.S. and Canada. In the U.S. it is taking over from the SIC code system.

Source: econterms

NAIRU

Non-Accelerating Inflation Rate of Unemployment. That is, a steady state unemployment rate above which inflation would fall and below which inflation would rise. By some estimates the NAIRU is 6% in the U.S. NAIRU is approximately a synonym for the natural rate of unemployment.

Paraphrased from Eisner's article: The essential hypotheses of the theory that there is a stable NAIRU are that (1) an existing rate of inflation self-perpetuates by generating expectations of future inflation; (2) higher unemployment reduces inflation and lower unemployment raises inflation.

Source: econterms

narrow topology

Synonym for weak topology.

Source: econterms

NASDAQ

National Association of Securities Dealers Automated Quotations market. A mostly-electronic market for stocks in the United States. There is no 'pit' -- market makers in each stock offer buy and sell prices, which differ.

Source: econterms

Nash Equilibrium

A profile of strategies such that given the other players conform to the (hypothesized) equilibrium strategies, no player has an incentive to unilaterally deviate from his (hypothesized) equilibrium strategy. The self-reference in this definition can be made more explicit by saying that a Nash equilibrium is a profile of strategies that form 'best responses' to one another, or a profile of strategies which are 'optimal reactions' to 'optimal reactions'. Nash equilibrium is the pure form of the basic concept of strategic equilibrium; as such, it is useful mainly in normal form games with complete information. When allowing for randomized strategies, at least one Nash equilibrium exists in any game (unless the players' payoff functions are irregular); for an example, see the game of matching pennies in the entry on game theory. Typically, a game possesses several Nash equilibria, and the number of these is odd.

Source: SFB 504

Nash equilibrium

Sets of strategies for players in a noncooperative game such that no single one of them would be better off switching strategies unless others did.

Formally: Using the normal form definitions, let utility functions as functions of payoffs for the n players u_1() ... u_n(), and sets of possible actions A = A_1 x ... x A_n, be common knowledge to all the players. Also define a_{-i} as the vector of actions of the other players besides player i. Then a Nash equilibrium is an array of actions a* in A such that u_i(a*) >= u_i(a_i, a*_{-i}) for all i and all a_i in A_i.
In a two-player game that can be expressed in a payoff matrix, one can generally find Nash equilibria, if there are any, by first crossing out strictly dominated strategies for each player. After crossing out any strategy, consider again all the strategies for the other player. When done crossing out strategies, consider which of the remaining cells fail to meet the criteria above, and cross them out too. At the end of the process, each player must be indifferent among his remaining choices, GIVEN the actions of the others.

In most noncooperative games of interest, each player has to calculate what the strategies of the others will be before his own Nash equilibrium strategy can become clear. Introspection may also be needed to envision his own payoffs. This approach tends to presume that the payoffs are known, or knowable, and that the players are rational. An alternative line of thought, with its own detailed theory, is that the players can arrive at Nash equilibria by repeated experimentation, searching for an optimal strategy. Theories of learning and evolutionary game theory are related.

A Nash equilibrium represents a prediction if there is a real world analog to the game.
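
A brute-force sketch in Python that finds the pure-strategy Nash equilibria of a two-player game from its payoff matrices; on Matching Pennies (see that entry) it correctly finds none:

    import numpy as np

    def pure_nash(U1, U2):
        # Cell (i, j) is an equilibrium iff i is a best response to column j
        # and j is a best response to row i.
        return [(i, j)
                for i in range(U1.shape[0])
                for j in range(U1.shape[1])
                if U1[i, j] == U1[:, j].max() and U2[i, j] == U2[i, :].max()]

    U1 = np.array([[1, -1], [-1, 1]])   # Matching Pennies payoffs for player One
    print(pure_nash(U1, -U1))           # [] -- no pure-strategy equilibrium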

Source: econterms

Nash product

The maximand of the Nash Bargaining Solution: (s_1 - d_1)(s_2 - d_2)
where d_1 and d_2 are the threat points, and s_1 and s_2 are the shares of the good to be divided.

Source: econterms

Nash strategy

The strategy of a player in a Nash equilibrium.

Source: econterms

national accounts

A measure of macroeconomic categories of production and purchase in a nation. The production categories are usually defined to be output in currency units by various industry categories, plus imports. (Output is usually approximately the same as industry revenue.) The purchase categories are usually government, investment, consumption, and exports, or subsets of these. The amount produced is supposed to be approximately equal to the amount purchased. Measures are in practice made by national governments.

a different definition, by Peter Wardley:

national accounts: a measure of all the income received by economic actors within an economy. It can be measured as expenditure (on investment and consumption), income (wages, salaries, profits and rent) or as the value of output (expenditure of all goods and services). Inevitably these three different methods of estimating national accounts will produce different results but these discrepancies are usually relatively small.

Source: econterms

natural experiment

If economists could experiment they could test some theories more quickly and thoroughly than is now possible. Sometimes an isolated change occurs in one aspect of the economic environment and economists can study the effects of that change as if it were an experiment; that is, by assuming that every other exogenous input was held constant.
An interesting example is that of the U.S. ban on the television and radio broadcast of cigarette advertising which took effect on Jan 2, 1971. The ban seems to have had substantial effects on industry profitability, the rate of new entrants, the rate of consumers switching brands and types of cigarettes, and so forth. The ban can be used as a natural experiment to test theories of the effects of advertising.

Source: econterms

natural increase

population increase due to an excess of births over deaths

Source: econterms

natural rate of unemployment

"The natural rate of unemployment is the level which would be ground out by the Walrasian system of general equilibrium equations, provided that there is [e]mbedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the cost of gathering information about job vacancies and labor availiabilities, the costs of mobility, and so on." -- Milton Friedman, "The Role of Monetary Policy" AER March 1968 1-21 This is a long-run rate. Transitory shocks could move unemployment away from the natural rate. Real wages would increase with productivity as long as unemployment were kept at the natural rate.

Source: econterms

Natural sampling

In research on probabilistic inference the "natural sampling" approach offers an explanation for people's poor performance on tasks involving probability estimates, such as the base-rate fallacy. Natural sampling refers to the sequential acquisition of information. It is assumed that, as humans evolved, the "natural" format of information was frequencies as actually experienced in a series of events, rather than probabilities or percentages. Because of this "evolutionary advantage" of frequencies, people have fewer problems processing frequencies than probabilities. Thus, a frequency presentation format leads to better performance in a number of tasks in judgment and decision making.

Source: SFB 504

NBER

The U.S. National Bureau of Economic Research. At 1050 Massachusetts Avenue, Cambridge, MA 02138, USA. Focuses on macroeconomics. Data available by ftp from nber.harvard.edu; see also the NBER web site.

Source: econterms

NBS

Nash Bargaining Solution

Source: econterms

NE

Nash Equilibrium

Source: econterms

NELS

National Educational Longitudinal Survey, a U.S. survey administered to 24,599 eighth grade students from 1052 schools in 1988, with follow-up surveys to the same students every two years afterward. Many similar questions were asked of the parents of the students as well to obtain more accurate information.

Source: econterms

neoclassical growth model

A macro model in which the long-run growth rate of output per worker is determined by an exogenous rate of technological progress, like those following from Ramsey (1928), Solow (1956), Swan (1956), Cass (1965), and Koopmans (1965).

Source: econterms

neoclassical model

Often means Walrasian general equilibrium model.

Describes a model in which firms maximize profits and markets are perfectly competitive.

Source: econterms

neoclassical

According to Lucas (1998), neoclassical theory has explicit reference to preferences. Contrast classical.

Source: econterms

nests

We say "model A nests model B" if every version of model B is a special case of model A. This can be said of either structural (theoretical) or estimated (econometric) models.

Example: Model B is "Nominal wage is an affine function of the age of the worker." Model A is "Nominal wage is an affine function of the age and education of the worker." Here model A nests model B.

Source: econterms

netput

Stands for "net output". A quantity, in the context of production, that is positive if the quantity is output by the production process and negative if it is an input to the production process. A technology is often be defined in a model by restrictions on the vector of netputs with the dimension of the number of goods.

Source: econterms

network externalities

The effects on a user of a product or service of others using the same or compatible products or services. Positive network externalities exist if the benefits are an increasing function of the number of other users. Negative network externalities exist if the benefits are a decreasing function of the number of other users.

Katz and Shapiro, 1985 consider two types of positive network externalities. A communication externality or direct externality describes a communication network in which the more subscribers there are the greater the services provided by the network (e.g. the telephone system or the Internet). An indirect externality or hardware-software externality exists if a durable good (e.g. computer) is compatible with certain complementary goods or services (e.g. software) and the owner of the durable good benefits if their system is compatible with a large pool of such complementary goods. Liebowitz and Margolis, 1994 have an insightful commentary on this subject, and offer among other things the following example: "if a group of breakfast-eaters joins the network of orange juice drinkers, their increased demand raises the price of orange juice concentrate, and thus most commonly effects a transfer of wealth from their fellow network members to the network of orange growers." The new group negatively affects the old group without compensation, but it is through the price system and is therefore a pecuniary externality. These authors strongly make the case that big network externalities are not often observed, and cite evidence against two common examples, the QWERTY and VHS standards.

Source: econterms

neutral technological change

Refers to the behavior of technological change in models. Barro and Sala-i-Martin (1995), page 33, refer to three types:

A technological innovation is Hicks neutral (following Hicks (1932)) if the ratio of capital's marginal product to labor's marginal product is unchanged for a given capital to labor ratio. That is: Y=T(t)F(K,L).

A technological innovation is Harrod neutral (following Harrod) if the technology is labor-augmenting. That is: Y=F(K,T(t)L). A technological innovation is Solow neutral if the technology is capital-augmenting. That is: Y=F(T(t)K,L).

Source: econterms

neutrality

"Money is said to be neutral [in a model] if changes in the level of nominal money have no effect on the real equilibrium." -- Blanchard and Fisher, p. 207
.
Money might not be neutral in a model if changes in the level of nominal money induce self-fulfilling expectations or interact with real frictions like fixed nominal wages, fixed nominal prices, information asymmetries, or slow reactions by households to adjust their money holding quickly. (This list from a talk by Martin Eichenbaum, 11/11/1996.)

Source: econterms

New Classical view

On policy -- that no systematic (that is, predictable) monetary policy matters.

Source: econterms

New Economy

A proper noun, describing one of several aspects of the late 1990s. Lipsey (2001) has discerned these meanings:
(1) An economy characterized by the absence of business cycles or inflations.
(2) The industry sectors producing computers and related goods and presumably services such as e-commerce.
(3) An economy characterized by an accelerated rate of productivity growth.
(4) The 'full effects on social, economic, and political systems of the [information and communications technologies] revolution' centered on the computer. This is Lipsey's meaning.

Source: econterms

new growth theory

Study of economic growth. Called 'new' because unlike previous attempts to model the phenomenon, the new theories treat knowledge as at least partly endogenous. R&D is one path. Hulten (2000) says that the new growth theories have the new assumption that the marginal product of capital is constant, rather than diminishing as in the neoclassical theories of growth. Capital in the new growth models often includes investments in knowledge, research and development of products, and human capital.

Source: econterms

new institutionalism

A school of thought in economic history, linked to the work of Douglass North.

New institutionalist. 'This body of literature has claimed that, in history, institutions matter, and in empirical analyses of history, institutions typically refer to those provided by the state: a currency, stock market, property rights, legal system, patents, insurance schemes, and so on.' The literature Hopcroft cites includes: North 1990b; North 1994; North and Thomas 1973; North and Weingast 1989; Bates 1990, p. 52; Campbell and Lindberg 1990; Eggertson 1990, pp. 247-8; Cameron 1993, p. 11. p. 35: 'Using the terminology of the new institutionalism, field systems in preindustrial Europe were products of local institutions. Institution is defined as a system of social rules, accompanied by some sort of enforcement mechanism. Rules may be formal in nature -- for example, legislation, constitutions, legal specifications of property rights, and so on (Coase 1960; Barzel 1989; North 1982: 23) -- or informal in nature -- for example, cultural norms, customs, and mores (North 1990a: 192; Knight 1992) . . . .' All these are from Hopcroft, Rosemary L. 'Local Institutions and Rural Development in European History' Social Science History 27:1 (spring 2003), pp. 25-74.

Source: econterms

NIPA

Stands for the National Income and Product Accounts. This is a GDP account for the United States.

Source: econterms

NLLS

Stands for nonlinear least squares, an estimation technique. The technique is to choose the parameter vector b of an assumed regression function f() to minimize this expression: the sum over all i of (yi - f(xi, b))^2,
where the xi's are the independent data and the yi's are the dependent data.
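
As a hedged illustration added by the editor (the exponential functional form and the data below are hypothetical, not part of the entry), scipy's curve_fit carries out exactly this minimization:

import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b):
    # hypothetical nonlinear regression function f(x; a, b)
    return a * np.exp(b * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 50)
y = 1.5 * np.exp(0.8 * x) + rng.normal(0, 0.1, size=x.size)

# choose (a, b) to minimize the sum over i of (yi - f(xi; a, b))^2
(a_hat, b_hat), _ = curve_fit(f, x, y, p0=(1.0, 1.0))
print(a_hat, b_hat)        # close to the true values 1.5 and 0.8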

Source: econterms

NLREG

Stands for Nonlinear Statistical Regression program, discussed at http://www.sandh.com/sherrod/nlreg.html.

Source: econterms

NLS

National Longitudinal Survey, done at the U.S. Bureau of Labor Statistics.

Source: econterms

NLSY

"The National Longitundinal Survey of Youth is a detailed survey of more than 12,000 young people from 1979 through 1987. The original 1979 sample contained 12,686 youths age 14 to 21, of whom 6,111 represented the entire population of youths and 5,295 represented an oversampling of civilian Hispanic, black, and economically disadvantages non-Hispanic, nonblack youth. An additional 1,280 were in the military. [ed.: meaning, their parents were?] The survey had a remarkably low attrition rate -- 4.9 percent through 1984 -- and thus represents the largest and best available longitudinal data set on youths in the period under study."

NLS web site.

Source: econterms

NLSYW

National Longitudinal Survey of Young Women, done at the U.S. Bureau of Labor Statistics.

Source: econterms

NNP

Net National Product. "Net national product is the net market value of the goods and services produced by labor and property located in [a nation]. Net national product equals GNP [minus] the capital consumption allowances, which are deducted from gross private domestic fixed investment to express it on a net basis." -- Survey of Current Business

Source: econterms

no-arbitrage bounds

Describes the outer limits on a price in a model where that price must meet a no-arbitrage condition.
In many models a price is completely determined by a no-arbitrage condition, but if some frictions are modeled -- transactions costs or liquidity constraints, for example -- then a no-arbitrage condition defines a range of possible prices, because tiny variations from the theoretical no-arbitrage price are not large enough to make arbitrage profits feasible. The range of possible prices is bounded by the "no-arbitrage bounds."

Source: econterms

noise trader

In models of asset trading, a noise trader is one who doesn't have any special information but trades for exogenous reasons; e.g., to raise cash.

Such trades make a market liquid for other traders; that is, they give a given trader someone to exchange with.

Source: econterms

noncentral chi-squared distribution

If n random values z1, z2, ..., zn are drawn from normal distributions with known nonzero means mi and constant variance s^2, then standardized, squared, and summed, the resulting statistic is said to have a noncentral chi-squared distribution with n degrees of freedom: (z1/s)^2 + (z2/s)^2 + ... + (zn/s)^2 ~ X^2(n, q). This is a two-parameter family of distributions. Parameter n is conventionally labeled the degrees of freedom of the distribution. Parameter q is the noncentrality parameter. It is related to the means mi and variance s^2 of the normal distributions thus: q = (sum for i=1 to n) of (mi^2 / s^2). The mean of a distribution that is X^2(n, q) is (n+q). The variance of that distribution is (2n+4q).
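
A check by simulation, added by the editor (the means and variance below are hypothetical):

import numpy as np

rng = np.random.default_rng(0)
n, s = 3, 2.0
m = np.array([1.0, 0.5, -1.5])          # hypothetical means
q = (m**2 / s**2).sum()                 # noncentrality parameter

z = rng.normal(m, s, size=(1_000_000, n))
stat = ((z / s) ** 2).sum(axis=1)       # the statistic defined above

print(stat.mean(), n + q)               # both approximately n + q
print(stat.var(), 2*n + 4*q)            # both approximately 2n + 4q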

Source: econterms

noncooperative game

A game structure in which the players do not have the option of planning as a group in advance of choosing their actions. It is not the players who are uncooperative, but the game they are in.

Source: econterms

nondivisibility of labor

If one models labor as contractible in continuous units, workers as identical, and workers' utility functions as concave in leisure and income, an optimal outcome is often for all workers to work some fraction of the time. Then none are unemployed. We do not observe this.

If instead one presumes that labor cannot be effectively contracted in continuous units but must be purchased in blocks (e.g. of eight hours per day, or forty per week), this aspect can generate unemployed workers in the model while others work long schedules, even if the workers are otherwise identical. Labor may have to be sold in such blocks for several observed reasons: (a) because there are fixed costs to the employer of employing each worker; (b) because there are fixed costs (e.g. transportation; dressing for work) to the employee of each job. This idea of labor as nondivisible has been used in macro models by Gary Hansen (1985) and Richard Rogerson (1988).

Source: econterms

nonergodic

A time series process {xt} is nonergodic if it is so strongly dependent that it does not satisfy the law of large numbers. (Paraphrased straight from Wooldridge.)

Source: econterms

nonlinear pricing

A pricing schedule where the mapping from quantity purchased to total price is not a strictly linear function. An example is affine pricing.

Source: econterms

nonparametric

In the context of production theory (e.g. Hulten 2000, circa p. 21) a nonparametric index number would not be derived from a specific functional form of the production function.

See also nonparametric estimation.

Source: econterms

nonparametric estimation

Allows the functional form of the regression function to be flexible. Parametric estimation, by contrast, makes assumptions about the functional form of the regression function (e.g. that it is linear in the independent variables) and the estimate is of those parameters that are free.

Source: econterms

nonprofit

A nonprofit organization is one that has committed legally not to distribute any net earnings (profits) to individuals with control over it such as members, officers, directors, or trustees. It may pay them for services rendered and goods provided.

Source: econterms

nonuse value

Synonym for existence value.

Source: econterms

NORC

National Opinion Research Center at the University of Chicago.

Source: econterms

normal distribution

A continuous distribution of major importance. The cdf is often denoted by capital F(x) and the pdf by lower-case f(x).
The distribution has two parameters, mean m and variance s^2. The pdf is f(x) = (1/(s*sqrt(2*pi))) * exp(-(x-m)^2 / (2s^2)), and the moment-generating function is M(t) = exp(m*t + .5*s^2*t^2).

Source: econterms

normal form

A way of writing out a game.
Formally:
let n be the number of players,
let Ai be the set of possible actions (or strategies) of player i,
and let ui: A1 x A2 x ... x An -> R represent the payoff function (or utility function) for player i. That is, once all players have chosen their actions, the payoff for player i is the value of that function at the chosen profile of actions.
Then the normal form of the game is characterized by G = (A1, A2, ..., An; u1, ..., un).

Source: econterms

Normal form vs extensive form game

In normal (or strategic) form games, the players move (choose their actions) simultaneously. Whenever the strategy spaces of the players are discrete (and finite), the game can be represented compactly as an NxM-game (see below). By contrast, a game in extensive form specifies the complete order of moves (along the direction of time), typically in a game tree (see below), in addition to the complete list of payoffs and the available information at each point in time and under each contingency. As any normal form can be 'inflated' to an extensive form game, concepts of strategic equilibrium in general relate to extensive form games. Whenever the exact timing of actions is irrelevant to the payoffs, however, a game is represented with more parsimony in normal form.

Source: SFB 504

notation

Unusual notation, hard to put in glossary for definition, is listed here:

2^A has a particular meaning. For a finite set A, the expression 2^A means "the set of all subsets of A." If, as is standard, we denote the number of elements in set A by |A|, the number of elements in 2^A is 2^|A|.
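
An illustration in Python, added by the editor:

from itertools import chain, combinations

def powerset(A):
    # all subsets of A, i.e. the elements of 2^A
    s = list(A)
    return list(chain.from_iterable(combinations(s, r)
                                    for r in range(len(s) + 1)))

A = {'x', 'y', 'z'}
print(len(powerset(A)))    # 8, i.e. 2**|A|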

Source: econterms

NPV

Net Present Value. Same as PDV (present discounted value).

Source: econterms

NSF

The U.S. National Science Foundation, which funds much economic research.

Source: econterms

null hypothesis

The hypothesis being tested. "The hypothesis that the restriction or set of restrictions to be tested does in fact hold." Often denoted H0.

Source: econterms

numeraire

The money unit of measure within an abstract macroeconomic model in which there is no actual money or currency. A standard use is to define one unit of some kind of goods output as the money unit of measure for wages.

Source: econterms

NxM game

A normal form game for two players, where one player has N possible actions and the other one has M possible actions. In such a game, the payoff pairs for any strategy combination can be neatly arranged in a matrix, and the game is easily analyzable. NxM games thus provide an easy way to gain an idea of what the structure of a more complex game looks like.

Source: SFB 504

NYSE

New York Stock Exchange, the largest physical exchange in the U.S. Is in New York City.

Source: econterms

O

Objectivity

A test is considered to be objective if different independent researchers obtain the same results.

Source: SFB 504

Obsolescence

If the organizational routines and structures have not been altered for a long time, the probability that these structures lose their fit with the environmental conditions increases when the environment is turbulent. This means that these routines become obsolescent (Schulz, 1993). The consequence on the organizational level is that old organizations in a highly turbulent environment with obsolescent (core) routines should have a higher probability of dying than young ones (Barron, West and Hannan, 1994).

Source: SFB 504

obsolescence

An object's attribute of losing value because the outside world has changed. This is a source of price depreciation.

Source: econterms

ocular regression

A term, generally intended to be amusing, for the practice of looking at the data to estimate by eye how data variables are related. Contrast formal statistical regressions like OLS.

Source: econterms

ODE

Abbreviation for 'ordinary differential equation'.

Source: econterms

OECD

Organisation for Economic Co-operation and Development; includes about 25 industrialized democracies.

Source: econterms

offer curve

Consider an agent in a general equilibrium (e.g., an Edgeworth box). Assume that agent has a fixed known budget and known preferences which predict what set (or possible sets) of quantities that agent will demand at various relative prices. The offer curve is the union of those sets, for all relative prices, and can be drawn in an Edgeworth box.

Source: econterms

OLG

Abbreviation for overlapping generations model, in which agents live a finite length of time, long enough to overlap for at least one period with the next generation of agents.

Source: econterms

oligopsony

The situation in which a few, possibly collusive, buyers are the only ones who buy a certain good.
Has the same relation to monopsony that oligopoly has to monopoly.

Source: econterms

OLS

Ordinary Least Squares, the standard linear regression procedure. One estimates the parameter vector b from data by applying the linear model
y = Xb + e
where y is the vector of dependent variable observations, X is a matrix of independent variables, b is a vector of parameters to be estimated, and e is a vector of errors with mean zero that make the equations hold.
The estimator of b is: (X'X)^-1X'y
A common heuristic derivation of this estimator from the model equation is:
y = Xb + e
Multiply through by X': X'y = X'Xb + X'e
Now take expectations. Since the e's are assumed to be uncorrelated with the X's, the last term has expectation zero and drops out:
E[X'y] = E[X'Xb]
Now multiply through by (X'X)^-1:
E[(X'X)^-1X'y] = E[b]
Since the X's and y's are data, the estimate of b, namely (X'X)^-1X'y, can be calculated.
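
The estimator computed directly in Python, with hypothetical data (an illustration added by the editor):

import numpy as np

rng = np.random.default_rng(0)
N = 200
X = np.column_stack([np.ones(N), rng.normal(size=N)])  # intercept plus one regressor
b = np.array([2.0, -0.5])                              # true parameters
y = X @ b + rng.normal(size=N)                         # errors with mean zero

b_hat = np.linalg.solve(X.T @ X, X.T @ y)              # (X'X)^-1 X'y
print(b_hat)                                           # close to [2.0, -0.5]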

Source: econterms

omitted variable bias

There is a standard expression for the bias that appears in an estimate of a parameter if the regression run does not have the appropriate form and data for other parameters.

Define: y as a vector of N dependent variable observations, X1 as an (N by K1) matrix of regressors, X2 as an (N by K2) matrix of additional regressors, and e as an (N by 1) vector of disturbance terms with sample mean zero.
Suppose the true regression is:
y = X1b1 + X2b2 + e
for fixed values of b1 and b2. (If 'true regression' seems ambiguous, imagine for the rest of the description that the values of X1, X2, b1, and b2 were chosen in advance by the econometrician and e will be chosen by a random number generator with expectation zero, and y is determined by these choices; in this framework we can be certain what the true regression is and can study the behavior of possible estimators.)

Suppose given the data above one ran the OLS regression

y = X1c1 + errors

Would E[c1]=b1 despite the absence of X2b2? It will turn out in the following derivation that in most cases the answer is no and the difference between the two values is called the omitted variable bias.

The OLS estimator for c1 will be:

c1_OLS = (X1'X1)^-1X1'y
= (X1'X1)^-1X1'(X1b1 + X2b2 + e)
= (X1'X1)^-1X1'X1b1 + (X1'X1)^-1X1'X2b2 + (X1'X1)^-1X1'e
= b1 + (X1'X1)^-1X1'X2b2 + (X1'X1)^-1X1'e

So since E[X1'e] = 0, taking expectations of both sides gives:

E[c1] = b1 + (X1'X1)^-1X1'X2b2

In general c1_OLS will be a biased estimator of b1. The omitted variable bias is (X1'X1)^-1X1'X2b2. An exception occurs if X1'X2 = 0; then the estimator is unbiased.

There is more to be learned from the omitted variable bias expression. Leaving off the final b2, the expression (X1'X1)^-1X1'X2 is the matrix of OLS coefficients from a regression of X2 on X1.
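
A simulation added by the editor (hypothetical data) showing that the short-regression coefficient is biased by exactly this expression; for simplicity the regressors are single columns and there is no intercept:

import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x2 = rng.normal(size=N)
x1 = 0.6 * x2 + rng.normal(size=N)       # x1 and x2 are correlated
b1, b2 = 1.0, 2.0
y = b1 * x1 + b2 * x2 + rng.normal(size=N)

c1 = (x1 @ y) / (x1 @ x1)                # short regression of y on x1 alone
bias = (x1 @ x2) / (x1 @ x1) * b2        # (X1'X1)^-1 X1'X2 b2
print(c1 - b1, bias)                     # approximately equal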

Source: econterms

Op(1)

statistical abbreviation for "converges in distribution" or, equivalently, "the average is bounded in probability."
That is Xt/n is bounded in probability.

Source: econterms

open

An economy is said to be open if it has trade with other economies. (Implicitly these are usually assumed to be countries.)
One measure of a country's openness is the fraction of its GDP devoted to imports and exports.

Source: econterms

option

A contract that gives the holder the right, but not the duty, to make a specified transaction for a specified time.

The most common option contracts give the holder the right to buy a specific number of shares of the underlying security (equity or index) at a fixed price (called the exercise price or strike price) for a given period of time. Other option contracts allow the holder to sell.

This is its most common practical business meaning, and the use in theoretical economics is analogous -- e.g. that owning a plant gives a firm the option to manufacture in it at any time or to sell it at any time.

Source: econterms

Options and hedging

Options are contracts which give the owner the right, but not the obligation, to buy (call option) or to sell (put option) a specific amount of an underlying asset for a specific price (the exercise price), either only at the end of the contract (European option) or at any time prior to the specified expiration date (American option). For this option right, the owner pays a premium, the option price, at the conclusion date. The counterparty to the contract, the seller of the option, has the obligation to sell (call option) or to buy (put option) the underlying asset at the exercise price if the owner exercises the option.

Source: SFB 504

order condition

In an econometric system of simultaneous equations, each equation may satisfy the order condition, or not do so. If it does not, its parameters are not all identified.

The order condition is often easy to verify. Often the econometrician verifies that the order condition is satisfied and assumes with this justification that the equation is identified, although formally a stronger requirement, the rank condition, must be satisfied. For each equation there must be enough instrumental variables available for the equation to have as many instruments as there are parameters.

The system can satisfy a form of the order condition: that there be as many exogenous variables in the reduced form of the system as there are parameters.

Source: econterms

order of a kernel

The order of a kernel function is defined as the order of its first nonzero moment.

Source: econterms

order of a sequence

Two relevant concepts are denoted O() and o().

Let cn be a random sequence. Quoting from Greene, p 110: "cn is of order 1/n, denoted O(1/n), if plim ncn is a nonzero constant."
And
"cn is of order less than 1/n, denoted o(1/n), if plim ncn equals 0."

Source: econterms

order statistic

The first order statistic of a random sample is the smallest element of the sample. The second order statistic is the second smallest. And the nth order statistic in a sample of size n is the largest element. The pdf of the order statistics can be derived from the pdf from which the random sample was drawn.

Source: econterms

organizational capital

'whatever makes a collection of people and assets more productive together than apart. Firm-specific human capital (Becker 1962), management capital (Prescott and Visscher 1980), physical capital (Ramey and Shapiro 1996), and a cooperative disposition in the firm's workforce (Eeckhout 2000 and Rob and Zemsky 1997) are examples of organizational capital.' -- from Boyan Jovanovic and Peter L. Rousseau, Sept 20 2000, 'Technology and the Stock Market: 1885-1998' NYU and Vanderbilt University, working paper

Source: econterms

Organizational learning

The notion of Organizational Learning (OL) has become very prominent in the recent past. Managers see OL as a powerful tool to improve the performance of an organization. Thus, it is not only the scholars of organization studies who are interested in the phenomenon of OL but also the practitioners who have to deal with the subject of OL.

Generally, one can distinguish between two different processes of organizational change that are associated with OL:
adaptive learning, i.e. changes that have been made in reaction to changed environmental conditions and
proactive learning, i.e organizational changes that have been made on a more willful basis. This is learning which goes beyond the simple reacting to environmental changes.

In general, it is assumed that adaptive learning comes along with a lower degree of organizational change. This means that adaptive learning is seen as a process of incremental changes. What is more, adaptive learning is also seen as more automatic and less cognitively induced than proactive learning. The inferiority of adaptive learning compared to proactive learning is also expressed by the different labels which have been used to describe these two types of OL: 'Single-Loop versus Double-Loop Learning' (Argyris and Schön, 1978), 'Lower Level versus Higher Level Learning' (Fiol and Lyles, 1985), 'Tactical versus Strategic Learning' (Dodgson, 1991), 'Adaptive versus Generative Learning' (Senge, 1990).

Cyert and March (1963) started the discussion about OL. In their view OL is mainly an adaptive process in which goals, attention rules (or standard operating procedures), e.g. which parts of the environment the organization should listen to, and search rules that steer the organization in a particular way to find problem-solutions, are adapted to the experiences that are made within the organization. Cyert and March did not concentrate on the question whether these experiences were made because of environmental changes. Rather they focus on the problem-solving quality of the attention and search rules. So even in stable environments, organizations can learn how to adjust their procedures in order to perform better.

Within the behavioral school of James March (e.g. Levitt and March, 1988; Levinthal and March, 1988; Levinthal and March, 1993) it was always emphasized that OL is executed on the basis of rules. Organizational decisions depend on certain rules. The experiences which have been made within the organization determine the contents of these rules. If the rules no longer fit the experiences they have to be altered. This process of rule change can be affected by different disturbances, e.g. false interpretation of events or the impediment of the realisation of personal insights (March and Olsen, 1975). These disturbances to the learning process reveal that OL can only be regarded as a boundedly rational process.

Source: SFB 504

Organizational studies behavioral

Within the discussion about organizational learning the expression of behavioral is used in two different respects.


The approaches within the Carnegie School and later within the March School of organization studies are regarded as behavioral approaches that contrast with the neo-classic concepts of organizing. The behavioral approaches are prominent for the notion that organizational actions are mainly rule-based because the organizational members have only limited rational abilities. Therefore they need certain definite rules (or standard operating procedures) that relieve them from the continuous task of creative problem solving. These rule-based actions have only a satisficing outcome, which contradicts the neoclassic view of a human who is able to willfully find the optimal decision by rational search.


The second meaning of behavioral within the discussion of organizational learning is quite similar to the meaning within the psychology of learning. In this respect behavioral is regarded as the opposite of cognitive, i.e. it is the automatic response to a stimulus of the environment. When there are some shifts in the environment of the organization, the behavioral reaction to these shifts is the automatic changing of routines and strategies without reflecting cognitively on what has happened and which reaction would be most appropriate (Fiol and Lyles, 1985).

Source: SFB 504

Organizational studies cognitive

The notion of cognitive activity has two different meanings within organization studies. The first meaning is the opposite of behavioral and means that decisions are made by reflective insights and not just by automatic response to certain stimuli.

The second meaning is prominent within the institutionalist debate. Herein, the expression cognitive denotes the tendency of humans within institutionalized settings to comply with the environment. Because humans have to create reliable frameworks in which they can repeatedly act in a stable way, the cognitive task of the human mind is to reassure the social structures they are living in. This has to be carried out actively, so that the categories of life are brought about by the actual human conduct. The result is a taken-for-granted world which is not reflectively questioned but which is actively constructed. An example would be a daily meeting of superiors which is seen as a basic element of organizing and is associated with many typical procedures which this meeting cannot do without. Otherwise it would not be the same essential part of organizing. However, organizational members have to actively dedicate themselves to the conduct of this meeting without questioning it in order to keep it as a stable element of the organization's activities.

Source: SFB 504

outside money

monetary base. Is held in net positive amounts in an economy. Is not a liability of anyone's. E.g., gold or cash. Contrast inside money.

Source: econterms

Overconfidence

The concept of overconfidence is based on a large body of evidence from cognitive psychological experiments and surveys showing that individuals overestimate their own abilities or knowledge as well as the precision of their information.

Svenson (1981), Taylor & Brown (1988), Tiger (1979) and Weinstein (1980) provide empirical evidence for the first category of overconfidence: Most people rate themselves above the mean on almost every positive personal trait - including driving ability, a sense of humor, managerial risk taking, and expected longevity. For instance, when a sample of U.S. students assessed their own driving safety, 82% judged themselves to be in the top 30% of the group.

The sources of overconfidence can be indirect, like computational constraints and frictions which diminish the marginal benefits of additional iterations in judgment. Or they can be linked to a different cognition and decision process. For example, individuals may think that they can interpret information better than they really do.

In behavioral finance, the concept of overconfidence might help to explain the high volume of trade observed in financial markets. If one connects the phenomenon of overconfidence with the phenomenon of anchoring, one can see the origins of differences of opinion among investors, and one possible source of the high volume of trade among them.

Source: SFB 504

overshooting

Describes "a situation where the initial reaction of a variable to a shock is greater than its long-run response."

Source: econterms

own

This word is used in a very particular way in the discussion of time series data. In the context of a discussion of a particular time series it refers to previous values of that time series. E.g. 'own temporal dependence' as in Bollerslev-Hodrick 92 p 8 refers to the question of whether values of the time series in question were detectably a function of previous values of that same time series.

Source: econterms

Ox

An object-oriented matrix language sometimes used for econometrics. Details are at http://hicks.nuff.ox.ac.uk/Users/Doornik/doc/ox/ .

Source: econterms

P

p value

Is associated with a test statistic. It is "the probability, if the test statistic really were distributed as it would be under the null hypothesis, of observing a test statistic [as extreme as, or more extreme than] the one actually observed."
The smaller the p-value, the more strongly the test rejects the null hypothesis.

A p-value of .05 or less rejects the null hypothesis "at the 5% level"; that is, the statistical assumptions used imply that only 5% of the time would the supposed statistical process produce a finding this extreme if the null hypothesis were true.

5% and 10% are common significance levels to which p-values are compared.

Source: econterms

Paasche index

A kind of index number. The official method for US price deflators computes them as a Paasche index. The algorithm is just like the Laspeyres index but the base quantities are chosen from the second, later period.

See also http://www.geocities.com/jeab_cu/paper2/paper2.htm.
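
An illustration with hypothetical prices and quantities, added by the editor: the Laspeyres index weights prices by the earlier period's quantities, while the Paasche index weights them by the later period's.

import numpy as np

p0 = np.array([1.0, 2.0, 3.0])    # period-0 prices
p1 = np.array([1.2, 2.0, 4.5])    # period-1 prices
q0 = np.array([10.0, 5.0, 2.0])   # period-0 quantities (Laspeyres basket)
q1 = np.array([9.0, 6.0, 1.0])    # period-1 quantities (Paasche basket)

laspeyres = (p1 @ q0) / (p0 @ q0)
paasche = (p1 @ q1) / (p0 @ q1)
print(laspeyres, paasche)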

Source: econterms

panel data

Data from a (usually small) number of observations over time on a (usually large) number of cross-sectional units like individuals, households, firms, or governments.

Source: econterms

par

Can be a synonym for 'face value', as in the expression "valuing a bond at par".

Source: econterms

paradox

This word is used in a particular way within the literature of economics -- not to describe a situation in which facts are apparently in conflict, but to describe situations in which apparent facts are in conflict with models or theories to which some class of people holds allegiance. This use of the word implies strong belief in the measured facts, and in the theory, and the resolutions to economic paradoxes tend to be of the form that the data do not fit the model, the data are mismeasured, or (the most common case) the model or theory does not fit the environment measured.

In some ways the term paradox is awkward in economics since the data are so poorly measured, the models so brutally simplified, and the mapping between environment and evidence so stochastic. So this editor avoids the term where possible, but often it is a compact and vigorous way of telling the reader the context of the subsequent discussion.

A list of these that an economist may be expected to recognize includes: Allais paradox, Ellsberg paradox, Condorcet voting paradox, Scitovsky paradox, and productivity paradox.

Source: econterms

parametric

adjective. A function is 'parametric' in a given context if its functional form is known to the economist.
Example 1: One might say that the utility function in a given model is increasing and concave in consumption. But it only becomes parametric once one says that u(c)=ln(c) or u(c)=c1-A/1-A. At this point only parameters such as A remain to be specified or estimated.
Example 2: In an econometric model one often imposes assumptions such as that the the relationship being estimated is linear, thence to do a linear regression. These are parametric assumptions. One might also make some estimates of the 'regression function' (the relationship) without such parametric assumptions. This field is called nonparametric estimation.

Source: econterms

Pareto chart

The message below was posted to a Stata listserv and is reproduced here without any permission whatsoever.

  
Date: Thu, 28 Jan 1999 08:59:57 -0500  
From: 'Steichen, Thomas'   
Subject: RE: statalist: Re: Pareto diagrams    

[snip]    

Pareto charts are bar charts in which the bars are arranged in descending 
order, with the largest to the left. Each bar represents a problem.  The chart 
displays the relative contribution of each sub-problem to the total problem.   

 Why:  This technique is based on the Pareto principle, which states that a  
few of the problems often account for most of the effect.  The Pareto  
chart makes clear which 'vital few' problems should be addressed first.    

How:  List all elements of interest.  Measure the elements, using the same  
unit of measurement for each element.  Order the elements according to  
their measure, not their classification.  Create a cumulative distribution  
for the number of items and elements measured and make a bar and line  
graph.  Work on the most important elements first.    

Reference:  Wadsworth, Stephens and Godfrey. Modern Methods for Quality  
Control and Improvement, New York: John Wiley, 1986 and Kaoru Ishikawa,  
Guide To Quality Control, Asian Productivity Organization, 1982, Quality  
Resources, 1990.    

(Note: above info 'borrowed' from a web page)  

Source: econterms

Pareto distribution

Has cdf H(x) = 1 - x^(-a) for x >= 1, a > 0. This distribution is unbounded above. (A slightly different version, with two parameters, is shown in Hogg and Craig on p. 207.)

Source: econterms

Pareto efficiency

An economic allocation is inefficient if there is an alternative allocation in which all agents are better off in terms of their own objective functions (utilities, profits, payoffs); it is said to be Pareto efficient if there is no such alternative allocation. Put differently, in a Pareto efficient state, it is impossible to improve one agent's position without making at least one other agent worse off. This criterion generalizes that of a maximal aggregate surplus to situations with incomparable objective functions (preferences). It is weak in that typically entire continua of Pareto efficient states exist; the criterion is therefore important mainly as a negative one, ruling out (institutions leading to) inefficient states as undesirable. With incomplete information about the agents' preferences, the notion of Pareto efficiency is ambiguous. An operational criterion of efficiency then depends on the state of the resolution of uncertainty: different allocations can be compared in terms of their ex-ante, their interim, or their ex-post efficiency.

Source: SFB 504

Pareto optimal

In an endowment economy, an allocation of goods to agents is Pareto Optimal if no other allocation of the same goods would be preferred by every agent. Pareto optimal is sometimes abbreviated as PO.

Optimal is the descriptive adjective, whereas optimum is a noun. A Pareto optimal allocation is one that is a Pareto optimum. There may be more than one such optimum.

Source: econterms

Pareto set

The set of Pareto-efficient points, usually in a general equilibrium setting.

Source: econterms

partially linear model

Refers to a particular econometric model which is between a linear regression model and a completely nonparametric model:
y=b'X+f(Z)+e
where X and Z are known matrices of independent variables, y is a known vector of the dependent variable, f() is not known but often some assumptions are made about it, and b is a parameter vector. Assumptions are often made on e, such as that e ~ N(0, s^2 I) and that E(e|X,Z) = 0.
The project at hand is to estimate b and/or to estimate f() in a non-parametric way, e.g. with a kernel estimator.

Source: econterms

partition

"[A] partition of a finite set (capital omega) is a collection of disjoint subsets of (capital omega) whose union is (capital omega)." -- Fudenberg and Tirole p 55

Source: econterms

passive measures (to combat unemployment)

unemployment and related social benefits and early retirement benefits. (contrast active)

Source: econterms

path dependence

Following David (97): describes allocative stochastic processes. Refers to the way the history of the process relates to the limiting distribution of the process. "Processes that are non-ergodic, and thus unable to shake free of their history, are said to yield path dependent outcomes." (p. 13) "A path-dependent stochastic process is one whose asymptotic distribution evolves as a consequence" of the history of the process. (p. 14) The term is relevant to the outcome of economic processes through history. For example, the QWERTY keyboard standard would not be the standard if it had not been chosen early; thus the keyboard standard evolved through a path-dependent process.

Source: econterms

path dependency

The view that technological change in a society depends quantitatively and/or qualitatively on its own past. "A variety of mechanisms for the autocorrelation can be proposed. One of them, due to David (1975) is that technological change tends to be 'local,' that is, learning occurs primarily around techniques in use, and thus more advanced economies will learn more about advanced techniques and stay at the cutting edge of progress." (Mokyr, 1990, p 163) A noted example of technological path dependence is the QWERTY keyboard, which would not be in use today except that it happened to be chosen a hundred years ago. A special interest in the research literature was taken in the question of whether technological path dependence has been observed to lead to noticeably Pareto-inferior outcomes later. Liebowitz and Margolis in a series of papers (e.g. in the JEP) have made the case that it has not -- that is that the QWERTY keyboard is not especially inferior to alternatives in productivity, and that the VHS videotapes were not especially inferior to Beta videotapes at the time consumers chose between them.

Source: econterms

payoff matrix

In a game with two players, the payoffs to each player can be shown in a matrix. The one below is from the classic Prisoner's Dilemma game:

                 Player Two
                 C        D
Player One  C    3,3      0,4
            D    4,0      1,1


Here, player one's strategy choices (shown, conventionally, on the left) are C and D, and player two's, shown on the top, are also C and D. The payoff for each possible strategy pair appears in the corresponding cell of the matrix. The first number is the payoff to player one, and the second is the payoff to player two.

Source: econterms

pdf

probability density function (sometimes written probability distribution function). This function describes a statistical distribution. For a discrete distribution it has, at each possible outcome, the value of the probability of receiving that outcome; for a continuous distribution its integral over a set of outcomes gives the probability of that set. A pdf is usually denoted in lower case letters, for example f(x) with x a real number. A particular form of f(x) will describe the normal distribution, or any other unidimensional distribution.

Source: econterms

PDV

Present Discounted Value

Source: econterms

pecuniary externality

An effect of production or transactions on outside parties through prices but not real allocations.

Source: econterms

perfect equilibrium

In a noncooperative game, a profile of strategies is a perfect equilibrium if it is a limit of epsilon-equilibria as epsilon goes to zero.

There can be more than one perfect equilibrium in a game.

For a more formal definition see sources. This is a rough paraphrase.

Source: econterms

Personality

"A global concept referring to all those relatively permanent traits, dispositions, or characteristics within the individual, which give some degree of consistency to that person?s behaviour" (Feist, 1994). In 1927 Allport found almost 50 different definitions, so for a deeper understanding it should be explained according to its role in personality theory.

Source: SFB 504

Personality psychology

Personality psychology is concerned with differences between individuals; it overlaps with developmental and social psychology to some extent. General orientations within personality psychology are the following:
Trait theories propose traits as underlying properties; as such traits account for behavioural consistencies and stable individual differences between persons.
Situationism argues that behaviour is mainly determined by the situation rather than by internal personality types or traits.
Interactionism postulates that observed behaviour is a function of the interaction between the traits of an individual and the specific characteristics of the situation including traits of other persons who are present.

Source: SFB 504

PERT

Program Evaluation and Review Technique (is this used?)

Source: econterms

phase portrait

graph of a dynamical system, depicting the system's trajectories (with arrows) and stable steady states (with dots) and unstable steady states (with circles) in a state space. The axes are of state variables.

Source: econterms

Phillips curve

A relation between inflation and unemployment. Follows from William Phillips' 1958 "The relation between unemployment and the rate of change of money wage rates in the United Kingdom, 1861-1957" in _Economica_. In the subsequent discussion the relation was thought to be a negative one -- high unemployment would correlate with low inflation. That stylized fact lost empirical support with the stagflation of the U.S. in the 1970s, in which high inflation and high unemployment occurred together. More recent evidence suggests that over the long term, across countries, there is a POSITIVE correlation between inflation and unemployment. Discussion continues on which of these is more 'causal to' the other and less 'caused by' the other. In recent use, "[T]he 'Phillips curve' has become a generic term for any relationship between the rate of change of a nominal price or wage and the level of a real indicator of the intensity of demand in the economy, such as the unemployment rate." -- Gordon, Robert G., "Foundations of the Goldilocks Economy" for Brookings Panel on Economic Activity, Sept 4, 1998.

Source: econterms

Phillips-Perron test

A test of a unit root hypothesis on a data series.

(Ed.: what follows is my best, but imperfect, understanding.) The Phillips-Perron statistic, used in the test, is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. In one example a value of -4.49 constituted rejection at the p-value of .10.

Source: econterms

physical depreciation

Decline in ability of assets to produce output. For example, computers, light bulbs, and cars have low physical depreciation; they work until they expire. Could be said to be made up of deterioration and exhaustion.

Source: econterms

Pigou effect

The wealth effect on consumption as prices fall. A lower price level raises the real value of existing nominal private wealth, leading to a rise in consumption. Contrast the Keynes effect.

Source: econterms

plant

a plant is an integrated workplace, usually all in one location.

Source: econterms

platykurtic

An adjective describing a distribution with low kurtosis. 'Low' means the fourth central moment is less than three times the square of the second central moment; such a distribution has less kurtosis than a normal distribution.
Platy- means 'broad' or 'flat' in Greek and refers to the central part of the distribution. Platykurtic distributions are not as common as leptokurtic ones.

Source: econterms

PO

Pareto Optimal

Source: econterms

Poisson distribution

A discrete distribution. Possible values for x are the nonnegative integers 0, 1, 2, 3, ...

Denoting the mean as mu, the Poisson distribution has mean mu, variance mu, and pdf e^(-mu) * mu^x / x!. The moment-generating function (mgf) is exp(mu*(e^t - 1)).
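
A check by simulation, added by the editor (mu is hypothetical):

from math import exp, factorial
import numpy as np

rng = np.random.default_rng(0)
mu = 3.5
x = rng.poisson(mu, size=1_000_000)

print(x.mean(), x.var())     # both approximately mu
# relative frequency of x = 2 versus the pdf value e^(-mu) * mu^2 / 2!
print((x == 2).mean(), exp(-mu) * mu**2 / factorial(2))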

Source: econterms

Poisson process

In such a process, events occur one at a time, independently, at a constant average rate. Let n be the number of events that occur in a given length of time; n will have a Poisson distribution.

Source: econterms

political science

The academic subject centering on the relations between governments and other governments, and between governments and peoples.

Source: econterms

polity

Group with an organized governance. Normally a politically organized population or can be a religious one.

Or, form of governance.

Examples needed. Use is very context-sensitive; that is, the definition is not too informative without examples.

Source: econterms

polychotomous choice

Multiple choice. In the context of discrete choice econometric models, means that the dependent variable has more than two possible values.

Source: econterms

pooling of interests

One of two ways to do the accounting for a U.S. firm after a merger. The alternative is purchase accounting.

A pooling of interests is the method usually taken for all-stock deals.

Source: econterms

poor

In poverty, which see.

Source: econterms

portmanteau test

a test for serial correlation in a time series, not just of one period back but of many. The standard reference is Ljung and Box (1978); the equation characterizing the test is also given on page 18, footnote 15, of Bollerslev-Hodrick 1992. The Ljung-Box statistic is Q = n(n+2) * (sum for k=1 to h) of rk^2/(n-k), where n is the sample size, h is the number of lags tested, and rk is the sample autocorrelation at lag k; under the null hypothesis of no serial correlation, Q is asymptotically chi-squared with h degrees of freedom.
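
A sketch of the statistic in Python, added by the editor as an illustration (not the exact presentation in the cited sources):

import numpy as np

def ljung_box(x, h):
    # portmanteau statistic Q = n(n+2) * sum_{k=1..h} r_k^2 / (n-k)
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    denom = (xc ** 2).sum()
    q = 0.0
    for k in range(1, h + 1):
        r_k = (xc[k:] * xc[:-k]).sum() / denom   # autocorrelation at lag k
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)
print(ljung_box(rng.normal(size=500), 10))   # near 10 for white noise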

Source: econterms

poverty

As commonly defined by U.S. researchers: the state of living in a family with income below the federally defined poverty line.

Source: econterms

power

"The power of a test statistic T is the probability that T will reject the null hypothesis when the hypothesis is not true.

Formally, it is the probability that a draw of T is in the rejection region given that the hypothesis is not true.

Source: econterms

power distribution

A continuous distribution on the interval from 0 to 1, with a parameter that we will denote k. The pdf is k*x^(k-1). Mean is k/(k+1). Variance is k/[(1+k)^2 * (2+k)].

This distribution has not been found to correspond to natural or economic phenomena, but is useful in practice problems because it is algebraically tractable.
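
An illustration added by the editor: since the cdf is F(x) = x^k, draws can be generated from uniform variates by the inverse-cdf method, and the moments above checked by simulation.

import numpy as np

rng = np.random.default_rng(0)
k = 3.0
x = rng.uniform(size=1_000_000) ** (1.0 / k)     # inverse of F(x) = x^k

print(x.mean(), k / (k + 1))                     # both approximately 0.75
print(x.var(), k / ((1 + k)**2 * (2 + k)))       # both approximately 0.0375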

Source: econterms

PPF

Short for Production Possibilities Frontier.

Source: econterms

PPP

Stands for purchasing power parity, a criterion for an appropriate exchange rate between currencies. It is a rate such that a representative basket of goods in country A costs the same as in country B if the currencies are exchanged at that rate.

Actual exchange rates vary from the PPP levels for various reasons, such as the demand for imports or investments between countries.

Source: econterms

Prais-Winsten transformation

An improvement to the original Cochrane-Orcutt algorithm for estimating time series regressions in the presence of autocorrelated errors. The implicit reference is to Prais-Winsten (1954).

The Prais-Winsten transformation makes it possible to include the first observation in the estimation; rather than dropping the first observation, as the original Cochrane-Orcutt procedure does, it weights it by sqrt(1 - rho^2), where rho is the estimated first-order autocorrelation of the errors.

Source: econterms

pre-fisc

Means before taking account of the government's fiscal policy. Usually refers to personal incomes before taxes and government transfers between people. For example a researcher might take more interest in pre-fisc income inequality than in post-fisc income inequality because the effects of government transfers are designed specifically to reduce inequality.

Source: econterms

precautionary savings

Savings accumulated by an agent to prepare for future periods in which the agent's income is low.

Source: econterms

precision

reciprocal of the variance

Source: econterms

predatory pricing

The practice of selling a product at low prices in order to drive competitors out, discipline them, weaken them for possible mergers, and/or to prevent firms from entering the market. It is an expensive strategy.

In the United States there is no legal (statutory) definition of predatory pricing, but pricing below marginal cost (the Areeda-Turner test) has been used by the Supreme Court in 1993 as a criterion for pricing that is predatory. (Salon magazine, 1998/11/11)

Source: econterms

predetermined variables

Those that are known at the beginning of the current time period. In an econometric model, means exogenous variables and lagged endogenous variables.

Source: econterms

Preferences

A statement of preference is a statement of judgement. It is subjective in the sense that it expresses somebody's preference of something over something else. It is relative because something is preferred over something else, and because a subject's pure preferences may change over time (the latter could be viewed as a change of taste).

In a narrow sense, the concept of preferences as used by economists can be understood in terms of consumer preferences over consumption goods. The consumer's preferences order the set of consumption bundles available to him. (A consumption bundle is a combination of all available commodities that are consumed, where the share of each commodity can either be zero or have some positive value.) The expression xPy means that the consumer prefers some bundle x over a bundle y, i.e., the consumer thinks that the bundle x is at least as good as the bundle y. Accordingly, preferences can be understood as a mathematical relation on the set of available consumption bundles. The following properties of this relation are assumed to hold: The preference relation is complete (i.e., any two bundles can be compared), reflexive, and transitive.

The assumption of transitivity is required for any discussion of preference maximization; if preferences were not transitive, there might be sets of bundles which had no best elements. Additionally, certain continuity assumptions may be required. Given these properties of preferences over consumption bundles, a utility function can be shown to exist. In experimental studies, however, transitivity has repeatedly been shown to be violated (see Tversky (1977) or Goldstone, Medin & Halberstadt (1997)).

Source: SFB 504

present-oriented

A present-oriented agent discounts the future heavily and so has a HIGH discount rate, or equivalently a LOW discount factor. See also 'future-oriented', 'discount rate', and 'discount factor'.

Source: econterms

price ceiling

Law requiring that a price for a certain good be kept below some level. May lead to shortage and a black market.

Source: econterms

price complements

Inputs i and j to a production function are "price complements in production" if when the price of i goes down the use of both i and j go up.

Source: econterms

price elasticity

A measure of responsiveness of some other variable to a change in price. See elasticity for the general equation.

Source: econterms

price floor

Law requiring that a price for a certain good be kept above some level.

Source: econterms

price index

A single number summarizing price levels.

A larger number conventionally represents higher prices. A variety of algorithms are possible, and a precise specification (which is rare) requires both an algorithm (an example of which is a Laspeyres index) and a basket: a set of goods in fixed, known quantities.

Source: econterms

price substitutes

Inputs i and j to a production function are "price substitutes in production" if when the price of i goes down the use of j goes up.

Source: econterms

Prices

Exchange ratio for economic goods and services, denominated in terms of a numeraire good called 'money' and determined endogenously as part of a market equilibrium. Prices are terms of trade for exchanging economic goods on markets: wages as prices for labor duties, interest rates as prices for gaining access to future liquidity today, insurance premiums as prices for bearing risk, or option prices as prices for the right to buy or sell at prespecified conditions at a later date.

Prices indicate the 'social' opportunity cost of giving up a marginal unit of goods in exchange for a marginal unit of the numeraire good (money). Thus, they reflect the relative marginal desirability (and costliness) of trading a particular good, as implied by the traders' preferences, technologies, and resources, and hence signal the marginal profitability of supplying and demanding additional units of the good. In particular in a competitive market equilibrium, equilibrium prices are then characterized by the equality of aggregate demand and supply.

Typically, all units of a specified good are traded in a market for the same (uniform) price. Uniform prices leave the market participants some rents from exchanging at fixed terms of trade, thus providing incentives for the voluntary engagement in trade. In competitive environments (see the entry on competitive market equilibrium), the free formation of prices leads to stationary market situations which even maximize the total gains from trade. More generally, equilibrium prices are characterized by the absence of incentives to change aggregate demand and aggregate supply for the particular good in question. Equilibrium prices for derivative financial assets, for example, are determined solely from the principle that no gains from trade are possible (no arbitrage principle); see the entry on option pricing.

Source: SFB 504

pricing kernel

same as "stochastic discount factor" in a model of asset prices.

Source: econterms

pricing schedule

A mapping from quantity purchased to total price paid

Source: econterms

principal strip

A bond can be split and resold in parts that can be thought of as components: a principal component, which is the right to receive the principal at the end date, and the right to receive the coupon payments. The components are called strips. The principal component is the principal strip.

Source: econterms

principal-agent

The general name for a class of games faced by a player, called the principal, who by the nature of the environment does not act directly but instead acts by giving incentives to other players, called agents, who may have different interests.

Source: econterms

principal-agent problem

A particular game-theoretic description of a situation. There is a player called a principal, and one or more other players called agents with utility functions that are in some sense different from the principal's. The principal can act more effectively through the agents than directly, and must construct incentive schemes to get them to behave at least partly according to the principal's interests. The principal-agent problem is that of designing the incentive scheme. The actions of the agents may not be observable so it is not usually sufficient for the principal just to condition payment on the actions of the agents.

Source: econterms

principle of optimality

The basic principle of dynamic programming, which was developed by Richard Bellman: that an optimal path has the property that whatever the initial conditions and control variables (choices) over some initial period, the control (or decision variables) chosen over the remaining period must be optimal for the remaining problem, with the state resulting from the early decisions taken to be the initial condition.

Source: econterms

Prisoner's Dilemma

A classic game with two players. Imagine that the two players are criminals being interviewed separately by police. If either gives information to the police, the other will get a long sentence. Either player can Cooperate (with the other player) or Defect (by giving information to the police). Here is an example payoff matrix for a Prisoner's Dilemma game:
                 Player Two
                  C      D
  Player One  C  3,3    0,4
              D  4,0    1,1


(D,D) is the Nash equilibrium, but (C,C) is the Pareto optimum. (That difference is often discussed extensively for various games in the research literature.) If this same game is repeated more than once with a high enough discount factor, there exist Nash equilibria in which (C,C) is a possible outcome of the early stages.
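
A small sketch in Python (an editor's illustration, not from the original entry) that verifies (D,D) is the unique pure-strategy Nash equilibrium of the matrix above by checking unilateral deviations:

# Enumerate pure-strategy Nash equilibria of the 2x2 payoff matrix above.
payoffs = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(r, c):
    # Neither player gains from a unilateral deviation.
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(a, c)][0] for a in actions)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, a)][1] for a in actions)
    return row_ok and col_ok

print([(r, c) for r in actions for c in actions if is_nash(r, c)])  # [('D', 'D')]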

Source: econterms

Prisoners dilemma

Consider the following story. Two suspects in a crime are put into separate cells. If they both confess, each will be sentenced to three years. If only one of them confesses, he will be freed and used as a witness against the other, who will receive a sentence of ten years. If neither confesses, they will both be convicted of a minor offense and spend just a year in prison. This game is easily put in matrix form as a 2x2 game (see above). Once this is done, it is pretty obvious that each prisoner (player) has a dominant strategy: to confess. The unique equilibrium of this game thus leads to a (Pareto) inefficient outcome (efficiency). This provides the most famous example of the point that strategic equilibrium typically implies inefficient outcomes, and can even lead to the worst possible outcome (every other outcome Pareto-dominates the equilibrium outcome). The prisoners' dilemma game illustrates the structure of interaction in an oil cartel, or any oligopolistic industry with quantity competition, where each firm has an incentive to 'spoil' the market by unilaterally increasing its own output. The same structure of interaction characterizes the problem of providing public goods (free rider problem), i.e. of voluntarily paying taxes.

Source: SFB 504

pro forma

describes a presentation of data, typically financial statements, where the data reflect the world on an 'as if' basis. That is, as if the state of the world were different from that which is in fact the case. For example, a pro forma balance sheet might show the balance sheet as if a debt issue under consideration had already been issued. A pro forma income statement might report the transactions of a group on the basis that a subsidiary acquired partway through the reporting period had been a part of the group for the whole period. This latter approach is often adopted in order to ensure comparability between financial statements of the year of acquisition with those of subsequent years.

Source: econterms

probability


Source: econterms

probability function

synonym for pdf.

Source: econterms

probit model

An econometric model in which the dependent variable y_i can take only the values one or zero, and the continuous independent variables x_i enter through:
Pr(y_i = 1) = F(x_i'b)
Here b is a parameter vector to be estimated, and F is the normal cdf. The logit model is the same but with a different cdf for F.
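
A minimal estimation sketch (an editor's illustration in Python, not from the original entry; the simulated data and starting values are arbitrary choices), maximizing the probit log-likelihood directly:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
b_true = np.array([0.5, -1.0])                         # hypothetical true parameters
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant plus one regressor
y = (X @ b_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(b):
    # Probit log-likelihood with F = normal cdf; clipped to avoid log(0).
    p = np.clip(norm.cdf(X @ b), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

b_hat = minimize(neg_loglik, x0=np.zeros(2)).x
print(b_hat)   # should land near b_true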

Source: econterms

process

see "stochastic process"

Source: econterms

product differentiation

This is a product market concept. Chamberlin (1933) defined it thus: 'A general class of product is differentiated if any significant basis exists for distinguishing the goods of one seller from those of another.'

Source: econterms

production function

Describes a mapping from quantities of inputs to quantities of an output as generated by a production process. Standard example is:

y = f(x1, x2)

Where f() is the production function, the x's are inputs, and the y is an output quantity.
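
For instance, a common parametric choice for f() is the Cobb-Douglas form; a minimal sketch (an editor's illustration, with arbitrary parameter values, not from the original entry):

# A Cobb-Douglas example of f(), with made-up scale A and output elasticity a.
def cobb_douglas(x1, x2, A=1.0, a=0.3):
    return A * x1 ** a * x2 ** (1 - a)

print(cobb_douglas(x1=8.0, x2=27.0))   # output quantity y from inputs (8, 27)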

Source: econterms

production possibilities frontier

A standard graph of the maximum amounts of two possible outputs that can be made from a given list of input resources.

A basic outline of how to draw one.

Source: econterms

production set

The set of possible input and output combinations. Often put into the notation of netputs, so that this set can be defined by restrictions on a collection of vectors with the dimension of the number of goods, one element for each kind of good, and a positive or negative real quantity in each element.

Source: econterms

productivity

A measure relating a quantity or quality of output to the inputs required to produce it.

Often means labor productivity, which can be measured by quantity of output per unit of time spent or per number of workers employed. Could be measured in, for example, U.S. dollars per hour.

Source: econterms

productivity paradox

Standard measures of labor productivity in the U.S. suggest that computers, at least until 1995, were not improving productivity. The paradox is the question: why, then, were U.S. employers investing more and more heavily in computers?

Resolving the paradox probably requires an understanding of the gap between what the productivity statistics measure and the goals of the U.S. organizations getting computers. Sichel (1990), pp 33-36 lists these six:

  • the mismanagement hypothesis is that managers underestimate the costs of new computer technology, such as training, and therefore buy more computers than is optimal for short-run profitability
  • the redistribution hypothesis is that private rates of return on computers are high enough, but the effect is only to compete over business with other firms in the same industry, which does not overall show greater productivity; the analogy is to an arms race, in which both players invest heavily but the overall effect is not to increase security
  • the long learning-lags hypothesis is that information technology will generate a substantial productivity effect when society is organized around its availability, but it is too soon for that
  • the mismeasurement hypothesis is that national economic accounts do not tend to measure the services brought by information technology such as quality, variety, customization, and convenience
  • the offsetting factors hypothesis is that other factors unrelated to computers have dragged down productivity measures
  • the small share of computers in the capital stock hypothesis is just that computers are too small a share of plant and equipment to make a difference.
Two other hypotheses on this subject are:
  • the externalities hypothesis is that computers in organization A improve the long-run productivity of organization B but this is not attributable in the national accounts to the computers in A.
  • the reorganization hypothesis is that computers in a firm do not raise much the quantity of capital stock but they cause a more productive long run organization of the capital stock within that firm and a more efficient split of tasks between that firm and other organizations.
Technophiles (such as this writer, or venture capitalists, or Silicon Valley publications) and technology historians tend to believe in the long learning-lags hypothesis, the mismeasurement hypothesis, the externalities/network-effects hypothesis, and the reorganization hypothesis. The gap in beliefs and understandings between technophiles and national accounts and pricing experts, such as Sichel and Robert J. Gordon (see e.g. the 1996 paper) is astonishing as of early 1999. They talk past one another. The national accounts experts tend to take the labor/capital models more seriously, and technology history less seriously, than do the technophiles. The Federal Reserve Bank under Greenspan has piloted between these views.

Note, March 2002: The national accounts experts have now come around to the view of the technophiles, and it is now commonly thought that the productivity measure lags the other indicators in the boom.

Source: econterms

proof

A mathematical derivation from axioms, often in principle in the form of a sequence of equations, each derived by a standard rule from the one above.

Source: econterms

propensity score

An estimate of the probability that an observed entity, like a person, would undergo the treatment. This probability is itself sometimes a predictor of outcomes.

Source: econterms

proper equilibrium

Any limit of epsilon-proper equilibria as epsilon goes to zero. -- Myerson (1978), p 78

Source: econterms

property income

Nominal revenues minus expenses for variable inputs including labor, purchased materials, and purchased services. Property income can serve as an approximation to the services rendered by capital. It contains the returns to national wealth. It can be thought to include technology and organizational components as well as 'pure' returns to capital.

Source: econterms

Prospect theory

Kahneman & Tversky (1979) developed this theory to remedy the descriptive failures of subjectively expected utility (SEU) theories of decision making. Prospect theory attempts to describe decisions under uncertainty, and has also been applied to the field of social psychology. Like SEU-theories, prospect theory assumes that the value V of an option or alternative is calculated as the summed products over specified outcomes x. Each product consists of a utility v(x) and a weight w attached to the objective probability p of obtaining x. Thus the value V of an option is the sum of the terms w(p)·v(x) over its outcomes.

Prospect theory differs from expected utility theory in a number of important respects. First, it differs from expected utility theory in the way it handles the probabilities attached to particular outcomes. Prospect theory treats preferences as a function of "decision weights", and it assumes that these weights do not always correspond to probabilities. Specifically, prospect theory postulates that decision weights tend to overweight small probabilities and underweight moderate and high probabilities.

Prospect theory also replaces the notion of "utility" with "value". Whereas utility is usually defined only in terms of net wealth, value is defined in terms of gains and losses (deviations from a reference point). The value function has a different shape for gains and losses. For losses it is convex and relatively steep, for gains it is concave and not quite so steep.

Based on these assumptions, some deviations from normative theories can be explained, such as loss aversion, the reflection effect, or the framing effect.
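
A minimal numeric sketch of the value calculation (an editor's illustration in Python, not part of the original entry). It assumes the functional forms and parameter estimates of Tversky & Kahneman (1992) and, as a simplification, uses a single weighting function for gains and losses:

# V = sum of w(p) * v(x): v is concave for gains, convex and steeper for
# losses; w overweights small probabilities. Parameters are illustrative.
def v(x, a=0.88, lam=2.25):
    return x ** a if x >= 0 else -lam * (-x) ** a

def w(p, g=0.61):
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

# A hypothetical gamble: win $100 with probability 0.10, lose $20 with 0.90.
outcomes = [(100.0, 0.10), (-20.0, 0.90)]
V = sum(w(p) * v(x) for x, p in outcomes)
print(round(V, 2))   # negative here: the loss side dominates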

Source: SFB 504

Prototype

The prototype or family resemblance view (Rosch & Mervis, 1975; Rosch, 1978) is one of five theories of conceptual structure in categorization (Komatsu, 1992). According to this view, people form summary representations - prototypes - that abstract across specific instances (e.g. several chairs) to give information about what members of the category, on average, are like. Other theories of conceptual structure are the classical view (Katz, 1972; Katz & Fodor, 1963), the exemplar view (Medin & Schaffer, 1978), the schema view and, most recently, the explanation-based or theory view (Johnson-Laird, 1983; Murphy & Medin, 1985).

Characteristics of the prototype view are:

Source: SFB 504

pseudoinverse

Also called Moore-Penrose inverse. The pseudoinverse of a matrix X always exists, is unique and satisfies four conditions shown on p 37 of Greene (93).

Perhaps the most important case is when there are more rows than columns, and X is of full column rank. Then the pseudoinverse of X is: (X'X)^(-1)X'. Notice how much this equation looks like the equation for the OLS estimator.
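
That full-column-rank case is easy to verify numerically; a quick sketch (an editor's illustration, not from the original entry):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))             # more rows than columns; full column rank
formula = np.linalg.inv(X.T @ X) @ X.T   # the (X'X)^(-1) X' formula above
print(np.allclose(np.linalg.pinv(X), formula))   # True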

Source: econterms

PSID

Panel Study of Income Dynamics. A data set often used in labor economics studies. The data are from the U.S. and are put together at the University of Michigan.
Since 1968 the PSID has followed and interviewed annually a national sample that began with about 5000 families. Low-income families were over-sampled in the original design. Interviews are usually conducted with the 'head' of each family.
Includes a lot of income and employment variables, and continues to track children who grow up and move out. For more information see the PSID's Web site at http://www.isr.mich.edu/src/psid/index.html

Source: econterms

public finance


Source: econterms

purchase accounting

One of two ways to do the accounting for a U.S. firm after a merger. The alternative is the pooling of interests.

Source: econterms

Put hedge (protective put)

A put hedge is a hedge strategy consisting of holding an underlying asset and simultaneously buying a put option (long put) on the same asset. This combination leads to an asymmetric gain/loss position at the expiration date: on the one hand there is an effective hedge against losses, and on the other hand it is possible to participate in the gains of the underlying asset, reduced by the amount of the option price.

Source: SFB 504

put option

A put option is a security which conveys the right to sell a specified quantity of an underlying asset at or before a fixed date.

Source: econterms

put-call parity

A relationship between the price of a put option and a call option on a stock according to a standard model.
Define:
r as the risk-free interest rate, constant over time, in an environment with no liquidity constraints
S as a stock's price
t as the current date
T as the expiration date of a put option and a call option
K as the strike price of the put option and call option
C(S,t) as the price of the call option when the current stock price is S and the current date is t
P(S,t) as the price of the put option when the current stock price is S and the current date is t
Then the relationship is:
P(S,t) = C(S,t) - S + K·e^(-r(T-t))
The relationship is derived from the fact that combinations of options can make portfolios that are equivalent to holding the stock through time T, and that they must return exactly the same amount or an arbitrage would be available to traders.
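
A numeric sketch of the relationship (an editor's illustration in Python, not from the original entry; the input values are made up):

import math

def put_from_parity(call, S, K, r, tau):
    # P = C - S + K * exp(-r * tau), with tau = T - t the time to expiration.
    return call - S + K * math.exp(-r * tau)

print(put_from_parity(call=10.45, S=100.0, K=100.0, r=0.05, tau=1.0))  # about 5.57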

Source: econterms

putting-out system

'A condition for the putting-out system to exist was for labor to be paid a piece wage, since working at home made the monitoring of time impossible.' -- Joel Mokyr, NU working paper: 'The rise and fall of the factory system: technology, firms, and households since the industrial revolution', Carnegie-Rochester Conference on macroeconomics, Nov 17-19, 2000.

Source: econterms

putty-putty

As in Romer, JPE, Oct 1990. This describes an attribute of capital in some models. Putty-putty capital can be transformed into durable goods then back into general, flexible capital. This contrasts with putty-clay capital which if I understand correctly can be converted into durable goods but which cannot then be converted back into re-investable capital. The algebraic modeler chooses one of these to make an argument or arrive at a conclusion within the model. The term is not normally interpreted empirically although empirical analogues to each kind of capital exist.

Source: econterms

Q

Q ratio

Or, "Tobin's Q". The ratio of the market value of a firm to the replacement cost of everything in the firm. In Tobin's model this was the driving force behind investment decisions.

Source: econterms

Q-statistic

Of Ljung-Box. A test for higher-order serial correlation in residuals from a regression.

Source: econterms

QJE

Quarterly Journal of Economics

Source: econterms

QLR

quasi-likelihood ratio statistic

Source: econterms

QML

Stands for quasi-maximum likelihood.

Source: econterms

quango

Stands for quasi-non-governmental organization, such as the U.S. Federal Reserve. The term is British.

Source: econterms

quartic kernel

The quartic kernel is this function: (15/16)(1-u^2)^2 for -1<u<1 and zero for u outside that range. Here u = (x - x_i)/h, where h is the window width, the x_i are the values of the independent variable in the data, and x is the value of the independent variable for which one seeks an estimate.
For kernel estimation.
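
In code, the kernel and its weights might look like this (an editor's sketch, not from the original entry; data and window width are made up):

import numpy as np

def quartic_kernel(u):
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) < 1, (15.0 / 16.0) * (1 - u ** 2) ** 2, 0.0)

xi = np.array([0.2, 0.5, 1.1, 2.0])   # made-up data values
x, h = 0.6, 1.0                       # evaluation point and window width
print(quartic_kernel((x - xi) / h))   # weights drop to zero once |u| >= 1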

Source: econterms

quasi rents

returns in excess of the short-run opportunity cost of the resources devoted to the activity

Source: econterms

quasi-differencing

a process that makes GLS easier, computationally, in a fixed-effects kind of case. One generates a parameter (delta) with an equation [see B. Meyer's notes, installment 2, page 3], then subtracts delta times the average of each individual's x from the list of x's, and delta times the average of each individual's y from the list of y's, and can run OLS on that. The calculation of delta requires some estimate of the idiosyncratic (epsilon) error variance and the individual-effects (mu) error variance.

Source: econterms

quasi-hyperbolic discounting

A way of accounting in a model for the difference in the preferences an agent has over consumption now versus consumption in the future.

Let b and d be scalar real parameters greater than zero and less than one. Events t periods in the future are discounted by the factor b·d^t.
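
A minimal sketch contrasting these quasi-hyperbolic factors with pure exponential discounting (an editor's illustration in Python, not from the original entry; parameter values are arbitrary):

b, d = 0.7, 0.95   # made-up parameter values
for t in range(5):
    quasi = 1.0 if t == 0 else b * d ** t         # t = 0 taken as undiscounted
    print(t, round(quasi, 3), round(d ** t, 3))   # beta-delta vs exponential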

This formulation comes from a 1999 working paper of C. Harris and D. Laibson which cites Phelps and Pollak (1968) and Zeckhauser and Fels (1968) for this function.

Contrast hyperbolic discounting, and see more information on discount rates at that entry.

Source: econterms

quasi-maximum likelihood

Often abbreviated QML. Maximum likelihood estimation can't be applied to an econometric model which has no assumption about error distributions, and may be difficult if the model has assumptions about error distributions but the errors are not normally distributed. Quasi-maximum likelihood is maximum likelihood applied to such a model with the alteration that errors are presumed to be drawn from a normal distribution. QML can often produce consistent estimates.

QML estimators converge to what can be called a quasi-true estimate; they have a quasi-score function which produces quasi-scores, and a quasi-information matrix. Each has maximum likelihood analogues.

Source: econterms

quasiconcave

A function f(x) mapping from the reals to the reals is quasiconcave if it is nondecreasing for all values of x below some x0 and nonincreasing for all values of x above x0. x0 can be infinity or negative infinity: that is, a function that is everywhere nonincreasing or nondecreasing is quasiconcave.

Quasiconcave functions have the property that for any two points in the domain, say x1 and x2, the value of f(x) on all points between them satisfies:
f(x) >= min{f(x1), f(x2)}.

Equivalently, f() is quasiconcave iff -f() is quasiconvex.

Equivalently, f() is quasiconcave iff for any constant real k, the set of values x in the domain of f() for which f(x) >= k is a convex set.

The most common use in economics is to say that a utility function is quasiconcave, which (by the level-set characterization above) means that the set of bundles at least as good as any given bundle is convex; that is, preferences are convex.

A function that is concave over some domain is also quasiconcave over that domain. (Proven in Chiang, p 390).

A strictly quasiconcave utility function is equivalent to a strictly convex set of preferences, according to Brad Heim and Bruce Meyer (2001) p. 17.

Source: econterms

quasiconvex

A function f(x) mapping from the reals to the reals is quasiconvex if it is nonincreasing for all values of x below some x0 and nondecreasing for all values of x above x0. x0 can be infinity or negative infinity: that is, a function that is everywhere nonincreasing or nondecreasing is quasiconvex.

Quasiconvex functions have the property that for any two points in the domain, say x1 and x2, the value of f(x) on all points between them satisfies:
f(x) <= max{f(x1), f(x2)}.

Equivalently, f() is quasiconvex iff -f() is quasiconcave.

Equivalently, f() is quasiconvex iff for any constant real k, the set of values x in the domain of f() for which f(x) <= k is a convex set.

A function that is convex over some domain is also quasiconvex over that domain. (Proven in Chiang, p 390).

Source: econterms

R

R&D intensity

Sometimes defined to be the ratio of expenditures by a firm on research and development to the firm's sales.

Source: econterms

R-squared

Usually written R^2. It is the square of the correlation coefficient between the dependent variable and the estimate of it produced by the regressors, or equivalently defined as the ratio of regression variance to total variance.
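
Both definitions can be computed directly; a sketch on simulated data (an editor's illustration in Python, not from the original entry; the two agree for OLS with an intercept):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 + 1.5 * x + rng.normal(size=100)            # made-up linear data
X = np.column_stack([np.ones_like(x), x])           # regressors with intercept
y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]    # OLS fitted values

r2_corr = np.corrcoef(y, y_hat)[0, 1] ** 2          # squared correlation
r2_var = np.var(y_hat) / np.var(y)                  # regression / total variance
print(round(r2_corr, 6), round(r2_var, 6))          # identical here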

Source: econterms

Ramsey equilibrium

Results from a government's choice in certain kinds of models. Suppose that the government knows how private sector producers will respond to any economic environment, and that the government moves first, choosing some aspect of the environment. Suppose further that the government makes its choice in order to maximize a utility function for the population. Then the government's choice problem is a Ramsey problem, and its solution generates the Ramsey outcome.

Source: econterms

Ramsey outcome

The payoffs from a Ramsey equilibrium.

Source: econterms

Ramsey problem

See Ramsey equilibrium.

Source: econterms

random

Not completely predetermined by the other variables available.

Examples: Consider the function plus(x,y) which we define to have the value x+y. Every time one applies this function to a given x and y, it would give the same answer. Such a function is deterministic, that is, nonrandom.

Consider by contrast the function N(0,1) which we define to give back a draw from a standard normal distribution. This function does not return the same value every time, even when given the same parameters, 0 and 1. Such a function is random, or stochastic.

Source: econterms

random effects estimation

The GLS procedure in the context of panel data.

Fixed effects and random effects are forms of linear regression whose understanding presupposes an understanding of OLS.

In a fixed effects regression specification there is a binary variable (also called dummy or indicator variable) marking cross section units and/or time periods. If there is a constant in the regression, one cross section unit must not have its own binary variable marking it.

From Kennedy, 1992, p. 222:
'In the random effects model there is an overall intercept and an error term with two components: eit + ui. The eit is the traditional error term unique to each observation. The ui is an error term representing the extent to which the intercept of the ith cross-sectional unit differs from the overall intercept. . . . . This composite error term is seen to have a particular type of nonsphericalness that can be estimated, allowing the use of EGLS for estimation. Which of the fixed effects and the random effects models is better? This depends on the context of the data and for what the results are to be used. If the data exhaust the population (say observations on all firms producing automobiles), then the fixed effects approach, which produces results conditional on the units in the data set, is reasonable. If the data are a drawing of observations from a large population (say a thousand individuals in a city many times that size), and we wish to draw inferences regarding other members of that population, the fixed effects model is no longer reasonable; in this context, use of the random effects model has the advantage that it saves a lot of degrees of freedom. The random effects model has a major drawback, however: it assumes that the random error associated with each cross-section unit is uncorrelated with the other regressors, something that is not likely to be the case. Suppose, for example, that wages are being regressed on schooling for a large set of individuals, and that a missing variable, ability, is thought to affect the intercept; since schooling and ability are likely to be correlated, modeling this as a random effect will create correlation between the error and the regressor schooling (whereas modeling it as a fixed effect will not). The result is bias in the coefficient estimates from the random effect model.'

[Kennedy asserts, then, that fixed and random effects often produce very different slope coefficients.]

The Hausman test is one way to distinguish which one makes sense.

Source: econterms

random process

Synonym for stochastic process.

Source: econterms

random variable

A nondeterministic function. See random.

Source: econterms

random walk

A random walk is a random process y_t like:
y_t = m + y_{t-1} + e_t
where m is a constant (the trend, often zero) and e_t is white noise.

A random walk has infinite variance and a unit root.
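
A simulation sketch (an editor's illustration in Python, not from the original entry; drift and noise scale are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
m, T = 0.1, 200                      # made-up drift and sample length
e = rng.normal(size=T)               # white noise
y = np.zeros(T)
for t in range(1, T):
    y[t] = m + y[t - 1] + e[t]
print(y[-5:])                        # end of one simulated path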

Source: econterms

Rao-Cramer inequality

defines the Cramer-Rao lower bound, which see. In its simplest form: if t is an unbiased estimator of a scalar parameter theta, based on n iid observations with Fisher information I(theta) per observation, then var(t) >= 1/(n·I(theta)). (See Hogg and Craig, p 372.)

Source: econterms

rational

An adjective. Has several definitions:
(1) characterizing behavior that purposefully chooses means to achieve ends (as in Landes, 1969/1993, p 21).

(2) characterizing preferences which are complete and transitive, and therefore can be represented by a utility function (e.g. Mas-Colell).

(3) characterizing a thought process based on reason; sane; logical. Can be used in regard to behavior. (e.g. American Heritage Dictionary, p 1028)

Source: econterms

Rational behavior

In economics, rational behavior means that individuals maximize some objective function (e.g., their utility function) under the constraints they face. The concept of rational behavior has, in addition to making the analysis of individual behavior a good deal more tractable than a less structured assumption would permit, two interpretations. First, it allows one to derive optimal economic behavior in a normative sense. Second, models of rational behavior can be used to explain and predict actual (i.e., observed) economic behavior.

Source: SFB 504

rational expectations

An assumption in a model: that the agent under study uses a forecasting mechanism that is as good as is possible given the stochastic processes and information available to the agent.

Often in essence the rational expectations assumption is that the agent knows the model, and fails to make absolutely correct forecasts only because of the inherent randomness in the economic environment.

Source: econterms

rational ignorance

The option of an agent not to acquire or process information about some realm. Ordinarily used to describe a citizen's choice not to pay attention to political issues or information, because paying attention has costs in time and effort, and the effect a citizen would have by voting per se is usually zero.

Source: econterms

rationalizable

In a noncooperative game, a strategy of player i is rationalizable iff it is a best response to a possible set of actions of the other players, where those actions are best responses given beliefs that those other players might have.

By rationalizable we mean that i's strategy can be justified in terms of the other players choosing best responses to some beliefs (subjective probability distributions) that they may be conjectured to have.

Nash strategies are rationalizable.

For a more formal definition see sources. This is a rough paraphrase.

Source: econterms

rationalize

verb, meaning: to take an observed or conjectured behavior and find a model environment in which that behavior is an optimal solution to an optimization problem.

Source: econterms

RATS

A computer program for the statistical analysis of data, especially time series. Name stands for Regression Analysis of Time Series. First chapter of its manual has a nice tutorial.
The software is made by Estima Corp.

Source: econterms

RBC

stands for Real Business Cycle (which see) -- a class of macro theories

Source: econterms

real analysis


Source: econterms

real business cycle theory

A class of theories explored first by John Muth (1961), and associated most with Robert Lucas. The idea is to study business cycles with the assumption that they were driven entirely by technology shocks rather than by monetary shocks or changes in expectations.

Shocks in government purchases are another kind of shock that can appear in a pure real business cycle (RBC) model. Romer, 1996, p 151

Source: econterms

real externality

An effect of production or transactions on outside parties that affects something entering their production or utility functions directly.

Source: econterms

Recent developments in theory and practice

Emphasizing that the young and old coexist at any time, overlapping generation models (of which Modigliani and Brumberg are now seen to be a special case) have been fruitful in depicting the equilibrium pattern of growth in an economy over time, in bringing into sharp relief the role of interest rates, and in weighing the welfare contribution of security and private market saving schemes. They have also sharpened up the treatment of bequests, both anticipated and accidental (Abel (1985)). They have lent themselves to simulation studies but have not proved rewarding for tests against empirical data. Also, models of dynamic labor supply have been developed in a life-cycle hypothesis framework; see Fisher (1971), Getz & Becker (1975) and MaCurdy (1981).

Recent applications and extensions have related to the rapid development of social security and its effects on private savings, and variation of dates of retirement (Feldstein (1974) and Kotlikoff (1984)) on the one hand and effects of switch from income or capital taxes to consumption taxes on the other (Seidman (1984)). The social security studies have necessitated the use of more carefully defined wealth and income figures. For Germany, these problems are discussed in Börsch-Supan et al. (1999).

The empirical research on the life-cycle hypothesis has raised questions as to the adequacy of the life-cycle model without much more attention to bequest issues or allowance for uncertainty as to date of death (e.g., Rodepeter & Winter, 1998). In part it is argued that the life cycle may apply to a large section of the population but the big savers and even the lowest earners may obey different criteria (Kotlikoff & Summers (1981)). Repeatedly in well-defined samples, though not in all, the decline in wealth with age was not significant; in more finely grouped data by cohorts it even rises with age.

Source: SFB 504

recession

A recession is defined to be a period of two consecutive quarters of negative GDP growth.

Thus: a recession is a national or world event, by definition. And statistical aberrations or one-time events can almost never create a recession; e.g. if there were to be movement of economic activity (measured or real) around Jan 1, 2000, it could create the appearance of only one quarter of negative growth. For a recession to occur the real economy must decline.

Source: econterms

reduced form

The reduced form of an econometric model has been rearranged algebraically so that each endogenous variable is on the left side of one equation, and only predetermined variables (exogenous variables and lagged endogenous variables) are on the right side.

Source: econterms

Reference point

The reference point is the individual's point of comparison, the "status quo" against which alternative scenarios are contrasted. This can be today's wealth or whatever measure of wealth is psychologically important to the individual.

The reference point is an important element of the value function in Kahneman and Tversky's prospect theory. Taking value as a function of wealth, the Kahneman-Tversky value function is upward sloping everywhere, but with an abrupt decline in slope at the reference point. For wealth levels above the reference point, the value function is concave downward, just as are conventional utility functions. At the reference point, the value function may be regarded, from the fact that its slope changes abruptly there, as infinitely concave downward. For wealth levels below the reference point, Kahneman and Tversky found evidence that the value function is concave upward (Shiller, 1997).

As a consequence of such a functional form, the risk attitude of decision makers will depend on whether they are in a win or a loss situation relative to their reference point. People become risk lovers in loss situations and risk averters in win situations.

In behavioral finance this kind of value function is utilized to explain the so called disposition effect: the phenomenon that investors are reluctant to realize their losses but sell winners too early.

Source: SFB 504

Refinement

Either a sharpening of the concept of strategic (or, Nash) equilibrium, or another criterion to discard implausible and to select plausible equilibria when a game exhibits multiple equilibria. For example, symmetric or Pareto efficient equilibria may be more plausibly played than asymmetric or inefficient equilibria. Likewise, equilibrium outcomes that are 'focal' in the cultural and psychological context in which the game is played might be more plausible than those which lack such salient features. Preferring symmetric outcomes in many games leads to the selection of an equilibrium in mixed strategies. In the following, we give an idea of the basic modifications of Nash equilibrium in more complex games.

Source: SFB 504

Reflection effect

The reflection effect (Tversky & Kahneman, 1981) refers to having opposite preferences for gambles differing in the sign of the outcomes (i.e. whether the outcomes are gains or losses). Reflection effects involve gambles whose outcomes are opposite in sign, although they do have the same magnitude. For example, most people would choose a certain gain of $20 over a one-third chance of gaining $60. But they would choose a one-third chance of losing $60 (and two-thirds chance of losing nothing) over a certain loss of $20. The outcomes actually involve different domains (gain versus loss), that is, they differ in sign (+$20 versus -$20).

The difference between reflection and framing effect is that in the framing effect the actual domain does not change (Fagley, 1993); the same outcome is phrased to appear to involve the other domain. So a loss of $20 might be framed to seem like a gain (as when an even larger loss was expected). Framing may cause it to seem like a gain, but it remains, objectively, a loss.

Reflection and framing effects are both predicted in prospect theory by the S shape of the value function: concave for gains indicating risk aversion and convex for losses indicating risk seeking.

Source: SFB 504

regression function

A regression function describes the relationship between a dependent variable Y and explanatory variable(s) X. One might estimate the regression function m() in the econometric model
Y_i = m(X_i) + e_i
where the e_i are the residuals or errors. As presented, that is a nonparametric or semiparametric model, with few assumptions about m(). If one were to assume also that m(X) is linear in X, one would get to a standard linear regression model:
Y_i = X_i'b + e_i
where the vector b could be estimated.
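
As an illustration of the nonparametric case (an editor's sketch in Python, not from the original entry; the data, the Gaussian kernel, and the bandwidth are all made-up choices), m() can be estimated at a point by kernel-weighted averaging:

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 3.0, size=300)
Y = np.sin(X) + 0.1 * rng.normal(size=300)   # made-up data with m(x) = sin(x)

def m_hat(x, h=0.3):
    # Nadaraya-Watson: a kernel-weighted average of the Y's near x.
    w = np.exp(-0.5 * ((x - X) / h) ** 2)    # Gaussian kernel weights
    return np.sum(w * Y) / np.sum(w)

print(m_hat(1.5), np.sin(1.5))               # estimate vs the true m(1.5)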

Source: econterms

regrettables

consumption items that to not directly produce utility, such as health maintenance, transportation to work, and "waiting times"

Source: econterms

Regulation Q

A U.S. Federal Reserve System rule limiting the interest rates that U.S. banks and savings and loan institutions could pay on deposits.

Source: econterms

reinsurance

Insurance purchased by an insurer, often to protect against especially large risks or risks correlated to other risks the insurer faces.

Source: econterms

rejection region

In hypothesis testing. Let T be a test statistic. Possible values of T can be divided into two regions, the acceptance region and the rejection region. If the value of T comes out to be in the acceptance region, the null hypothesis (the one being tested) is accepted, or at any rate not rejected. If T falls in the rejection region, the null hypothesis is rejected.

The terms 'acceptance region' and 'rejection region' may also refer to the subsets of the sample space that would produce statistics T that go into the acceptance region or rejection region as defined above.

Source: econterms

Reliability

Reliability refers to the accuracy and consistency of a measurement or test; i.e. if a repetition of the measurement or testing under the same conditions reveals the same results. Note that reliability contains no information whether the behaviour or characteristic that is measured is the intended one.

Source: SFB 504

rents

Rents are returns in excess of the opportunity cost of the resources devoted to the activity.

Source: econterms

Repeated game

'Super'-game where a fixed group of players plays a given game repeatedly, with the outcome of all previous plays observed before the next play begins. Repetition vastly enlarges the set of possible equilibrium outcomes in a game, as it opens possibilities to 'punish' or 'reward' earlier actions in later play, such that certain strategies form an equilibrium which would not form one in the single, unrepeated ('one-shot') game. For example, repeating the prisoners' dilemma game (often enough) gives rise to many equilibria where both prisoners never confess.

Source: SFB 504

Representation (problem representation)

The cognitive representation of (social) information has been an important concern in (social) psychology since the mid-1970s. The central assumptions are, that people often attend to exposed information about a (social) stimulus (a person, an object, or an event) selectively, focussing on some features while disregarding others. They interpret the features in terms of previously acquired concepts and knowledge. Moreover, they often infer characteristics of the stimulus that were not actually mentioned in the information, and construe relations among these characteristics that were not specified ("going beyond the information given", Bruner, 1957b). In short, the cognitive representations that people form of a stimulus differ in a variety of ways from the information on which they were based.

Yet it is ultimately these representations, and not the original stimuli, that govern subsequent thoughts, judgments, and behaviors. Consequently, it is important to understand the nature of these mediating cognitive representations, to predict the influence of information on perceivers' judgments and/or behavioral decisions about the people and objects to which it refers.

To understand the cognitive determinants of judgments and decisions one must scrutinize the cognitive operations that were performed on information when it was first received, the mental representations that are formed as a result of these operations, and the manner in which these representations were later used to produce judgments or behaviors.

Source: SFB 504

Representativeness heuristic

What is the probability that person A (Steve, a very shy and withdrawn man) belongs to group B (librarians) or C (exotic dancers)? In answering such questions, people typically evaluate the probabilities by the degree to which A is representative of B or C (Steve's shyness seems to be more representative of librarians than of exotic dancers) and sometimes neglect base rates (there are far more exotic dancers than librarians in a certain sample).

Source: SFB 504

resale price maintenance

The effect of rules imposed by a manufacturer on wholesale or retail resellers of its own products, to prevent them from competing too fiercely on price and thus driving profits down from the reselling activity. The manufacturer may do this because it wishes to keep resellers profitable. Such contract provisions are usually legal under US law but have not always been allowed since they formally restrict free trade.

Source: econterms

reservation wage property

A model has the reservation wage property if agents seeking employment in the model accept all jobs paying wages above some fixed value and reject all jobs paying less.

Source: econterms

Reserve price

Minimal amount that has to be bid in order that the bid-taker concedes his property rights for the object to the highest bidder. If the highest bid fails to reach at least the reserve price, the auctioneer keeps the object (abstains from a sale). Although reserve prices reduce the probability of a sale, they can improve the seller's expected returns because they force bidders with higher valuations to bid more than they otherwise would. Appropriately designed reserve prices thus are devices to extract more of the bidders' information rents (see the entry on rents).

Source: SFB 504

residual claimant

The agent who receives the remainder of a random amount once predictable payments are made.

The most common example: consider a firm with revenues, suppliers, and holders of bonds it has issued, and stockholders. The suppliers receive the predictable amount they are owed. The bondholders receive a predictable payout -- the debt, plus interest. The stockholders can claim the residual, that is, the amount left over. It may be a negative amount, but it may be large. The same idea of a residual claimant can be applied in analyzing other contracts. There is a historical link to theories about wages; see http://britannica.com/bcom/eb/article/9/0,5716,109009+6+106209,00.html

Source: econterms

resiliency

An attribute of a market.

In securities markets, resiliency is measured by "the speed with which prices recover from a random, uninformative shock." (Kyle, 1985, p 1316).

Source: econterms

ReStat

An abbreviation for the Review of Economics and Statistics.

Source: econterms

restricted estimate

An estimate of parameters taken with the added requirement that some particular hypothesis about the parameters is true. Note that the variance of a restricted estimate can never be as low as that of an unrestricted estimate.

Source: econterms

restriction

assumption about parameters in a model

Source: econterms

ReStud

An abbreviation for the journal Review of Economic Studies.

Source: econterms

Retention of central tendencies

A concept provides a summary of a category in terms of the central tendencies of the members of that category rather than in terms of the representations of individual instances (which is the exemplar view).

Source: SFB 504

Retirement decisions

Retirement decisions are an increasingly important aspect of household behavior from an applied point of view. In the life cycle of an individual, retirement is the point in life from which on no labor income is received anymore; the income an individual receives after retirement stems from some social security system or from her own savings. The retirement decision is important because the timing of retirement determines the amount of saving and dis-saving during both working life and old age, and hence the aggregate level of saving in an economy. Also, retirement decisions affect the financial situation of the social security system that provides pensions. The point of retirement affects the balance between the number of years during which the individual contributes to the system and the number of years during which she receives a pension.

In the most simple case (where complications such as partial retirement are ignored), retirement is a classical example for an intertemporal decision under uncertainty. The main source of uncertainty is, of course, the point of death, because the individual has to assess the remaining life-time utility that she can derive from the choices (whether to retire or not in a given year) she has. Formally, the basic intertemporal trade-off is to compare the present value of future utility when retiring now with the utility of working at least one year longer and retiring then. If the individual does not retire now, she faces the same decision again next year. Therefore, the mathematical formulation of the problem has a recursive structure, a fact that makes the problem more tractable.

Of particular interest in applied work are the incentives to retirement that are provided by the institutional arrangements of the social security systems. In empirical studies it has been shown that individuals react quite strongly to these incentives (e.g., Börsch-Supan & Schnabel, 1997), and this in turn can be seen as evidence for rational behavior in a seemingly quite complicated decision situation.

Source: SFB 504

Return to

theoretical explanations

Source: SFB 504

Revelation mechanism revelation principle

Revelation mechanism: A particular mechanism representing a game of incomplete information where the players act simultaneously, and where each player's action consists only of a report about his type, i.e. his private information. In a revealing equilibrium of a revelation mechanism, for each player the incentive constraints for each type not to mimic another one are met, as well as the individual-rationality constraints that each type earns at least his reservation utility.

Source: SFB 504

Revelation principle

To any equilibrium of a game of incomplete information, there corresponds an associated revelation mechanism that has an equilibrium where the players truthfully report their types.

Source: SFB 504

revelation principle

That truth-telling, direct revelation mechanisms can generally be designed to achieve the Nash equilibrium outcome of other mechanisms; this can be proven in a large category of mechanism design cases.

Relevant to a modelling (that is, theoretical) context with:
-- two players, usually firms
-- a third party (usually the government) managing a mechanism to achieve a desirable social outcome
-- incomplete information -- in particular, the players have types that are hidden from the other player and from the government.

Generally a direct revelation mechanism (that is, one in which the strategies are just the types a player can reveal about himself) in which telling the truth is a Nash equilibrium outcome can be proven to exist and be equivalent to any other mechanism available to the government. That is the revelation principle. It is used most often to prove something about the whole class of mechanism equilibria, by selecting the simple direct revelation mechanism, proving a result about that, and applying the revelation principle to assert that the result is true for all mechanisms in that context.

Source: econterms

Revenue equivalence

Classical result in the theory of auctions about the division of expected social surplus among risk-neutral bidders and a risk-neutral bid-taker. Whenever the bidders have independent private valuations for the resource on sale, all auction formats that award the object to the bidder who submits the highest bid lead to the same expected revenue for the bid-taker, and to the same expected profits for the bidders, regardless of the specific payment rule of the auction. In particular, the equilibrium expected payments in the first price sealed bid auction or the Dutch auction are the same as in the second price sealed bid auction, in the English auction, or in any all pay auction. The revenue equivalence theorem shows that in terms of the objective functions of risk neutral strategic traders who have independent private information, all 'reasonable' auction formats are equivalent exchange mechanisms. This equivalence extends to auctions of multiple identical goods if the bidders have unit demands. It does not hold, however, in common value auctions, with risk-averse traders, or in auction markets of multiple goods when the bidders bid for more than one item.

Source: SFB 504

Reverse hindsight bias

People who are exposed to a very surprising or unexpected event may react by expressing an "I did not expect this to happen" response. They attribute the surprise of the unexpected event to their inability to have foreseen an outcome such as the one obtained, and recall predictions opposite to their judgement of the event after its occurrence. In other words, the attempt to explain an unexpected event leads to an exaggerated adjustment in a direction opposite to the hindsight bias.

Source: SFB 504

Ricardian proposition

that tax financing and bond financing of a given stream of government expenditures lead to equivalent allocations. This is the Modigliani-Miller theorem applied to the government.

Source: econterms

ridit scoring

A way of recoding variables in a data set so that one has a measure not of their absolute values but their positions in the distribution of observed values. Defined in this broadcast to the list of Stata users:

Date: Sat, 20 Feb 1999 14:13:35 +0000
From: Ronan Conroy 
Subject: Re: statalist: Standardizing Variables

Paul Turner said (19/2/99 9:54 pm)

>I have two variables--X1 and X2--measured on ordinal scales. X1 ranges
>from 0 to 10; X2 ranges from 0 to 12. What I want to do is to standardize
>X1 and X2 to a common metric in order to explore how differences between
>the two affect the dependent variable of interest. Converting values to
>percentages of the maximum values (10 and 12) is the first approach that
>occurs to me, but I don't know if there's something I'm forgetting

This sort of thing is possible, and called ridit scoring.  You replace
each of the original scale points with the percentage (or proportion) of
the sample who scored at or below that value.  This gives the scales a
common interpretation as percentiles of the sample, and means that they
are now expressed on an interval metric, though the data are still grainy.

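A sketch of that recoding on made-up ordinal data (an editor's illustration in Python, not part of the original entry):

import numpy as np

x = np.array([0, 1, 1, 2, 2, 2, 3, 5])            # made-up ordinal scores
ridit = np.array([np.mean(x <= v) for v in x])    # proportion at or below each score
print(ridit)                                      # both 1's become 3/8 = 0.375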

Source: econterms

Riemann-Stieltjes integral

A generalization of regular Riemann integration.

Quoting from Priestly:

"...when we have two deterministic functions g(t), F(t), the Riemann-Stieltjes integral
R = ∫_a^b g(t) dF(t)
is defined as the limiting value of the discrete summation"
sum over i = 1, ..., n of g(t_i)[F(t_i) - F(t_{i-1})]
taken over a partition a = t_0 < t_1 < ... < t_n = b, as n goes to infinity and "as max(t_i - t_{i-1}) -> 0."

If F(t) is differentiable, then the above integral is the same as the regular integral R = ∫_a^b g(t) F'(t) dt, but the Riemann-Stieltjes integral can be defined in many cases even when F() is not differentiable.

One of the most common uses is when F() is a cdf.

Examples: The expectation of a random variable can be written:
mu = ∫ x f(x) dx
if f(x) is the pdf. It can also be written:
mu = ∫ x dF(x)
where F(x) is the cdf. The two are equivalent for a continuous distribution, but notice that for a discrete one (e.g. a coin flip, with X=0 for heads and X=1 for tails) the second, Riemann-Stieltjes, formulation is well defined but no pdf exists to calculate the first one.
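
A numeric sketch of the discrete (coin flip) case (an editor's illustration in Python, not from the original entry):

# E[X] = ∫ x dF(x) for a fair coin flip, computed as a Riemann-Stieltjes sum
# over the jump points of the cdf F (no pdf exists here).
jumps = [(0.0, 0.5), (1.0, 0.5)]   # (jump point x, jump size of F at x)
mu = sum(x * dF for x, dF in jumps)
print(mu)                          # 0.5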

Source: econterms

risk

If outcomes will occur with known or estimable probability the decisionmaker faces a risk. Certainty is a special case of risk in which this probability is equal to zero or one. Contrast uncertainty.

Source: econterms

Risk attitude

A decision maker's risk attitude characterizes his willingness to engage in risky prospects. Focusing on risky prospects with monetary outcomes, a decision maker displays risk aversion if and only if he strictly prefers a certain consequence to any risky prospect whose mathematical expectation of consequences equals that certain amount. Equivalently, a decision maker is said to be risk averse if and only if he strictly refuses to participate in fair games (i.e. games with an expected net outcome of zero). He is said to be a risk preferrer if and only if he strictly prefers the above mentioned risky prospect to its certain consequence. He displays risk neutrality if and only if he is indifferent between the risky prospect and the certain consequence.

Let u(x) denote a decision maker's utility function on amounts of money. Risk aversion, risk neutrality, and risk preference correspond to the strict concavity, linearity, and strict convexity of u(x), respectively.

Source: SFB 504

Risk aversion

Let u(x) denote a decision maker's utility function on amounts of money. Risk aversion is equivalent to the strict concavity of u(x), implying decreasing marginal utility of money.

For a risk averter the certainty equivalent of a risky prospect, which is the amount of money for which the individual is indifferent between the risky prospect and the certain amount, is strictly less than the mathematical expectation of the outcomes of the risky prospect. The degree of (absolute) risk aversion can be measured by means of the Arrow-Pratt coefficient of risk aversion, which is suitable for both comparisons across individuals and comparisons across wealth levels of a single decision maker. Risk aversion of investors belongs to the crucial assumptions of numerous models in finance theory (e.g., the Capital Asset Pricing Model, CAPM).

Source: SFB 504

risk free rate puzzle

See equity premium puzzle.

Source: econterms

Risk, uncertainty, and ambiguity

Many different definitions of risk, uncertainty, and ambiguity can be found in the literature. This entry follows the notion commonly used in modern decision theory, e.g. employed by Tversky and Kahneman (1992) and much earlier proposed by Knight (1921). Camerer and Weber (1992) provide a review of various definitions and formalizations.

A decision is called risky when the probabilities that certain states will occur in the future are precisely known, e.g. in a fair roulette game. In contrast, a decision is called uncertain when the probabilities are not precisely known. Examples are the outcomes of sports events, elections or most real investments. Decisions under risk can be seen as a special case of decisions under uncertainty with precisely known probabilities. Risk and uncertainty can be distinguished by the degree to which probabilities are known. In the case of uncertainty, probabilities are not precisely known but people can form more or less vague beliefs about probabilities. If people are definitely not able to form any beliefs about probabilities, this special case is termed complete ignorance. The above notion of uncertainty corresponds to the widely used term ambiguity.

Source: SFB 504

risk-neutral bidder

A bidder who cares only about the expected monetary value of the outcome, regardless of the level of uncertainty.

RJE

An abbreviation for the RAND Journal of Economics, which was previously called the Bell Journal of Economics.

Source: econterms

RMPY

Stands for a standard VAR run on standard data, with interest rates (R), money stock (M), inflation (P), and output (Y). In Faust and Irons (1996), these are operationalized by the three-month Treasury bill rate, M2, the CPI, and the GNP.

Source: econterms

Robinson-Patman Act

U.S. legislation of 1936 which made rules against price discrimination by firms. Agitation by small grocers, who were under competitive pressure and being displaced by the arrival of chain stores, was a principal cause of the law. The Act is thought by many to have prevented reasonable price competition, since it made many pricing actions illegal per se; for many of its provisions, 'good faith' was not a permitted defense. So it can be argued that it was confusing, vague, unnecessarily restrictive, and designed to prevent some competitors in retailing from being driven out rather than to further social welfare generally, e.g. by allowing pricing decisions that would benefit consumers. Another cause was glitches in an earlier law, the Clayton Act.

Source: econterms

robust smoother

A robust smoother is a smoother (an estimator of a regression function) that gives lower weights to datapoints that are outliers in the y-direction.

Source: econterms

Roll critique

That the CAPM may appear to be rejected in tests not because it is wrong but because the proxies for the market return are not close enough to the true market portfolio available to investors.

Source: econterms

roughness penalty

A loss function that one might incorporate into an estimate of a function, to penalize estimates that match the data closely only at the cost of jerkiness. See 'spline smoothing' and 'cubic spline' for example uses.
An example roughness penalty would be L ∫ [m''(u)]^2 du, where L is a 'smoothing parameter', m''() is the second derivative of the estimated function, and u is a dummy variable that ranges over the domain of the estimated function.

Source: econterms

Rybczynski theorem

Paraphrasing from Hanson and Slaughter (1999): in the context of a Heckscher-Ohlin model of international trade, open trade between regions means that changes in relative factor supplies between regions can lead to adjustments in the quantities and types of outputs between regions, adjustments that return the system toward equality of production input prices, like wages, across countries (the state of factor price equalization).

Such theorems are named this way by analogy to Rybczynski (1955), and refer to that part of the mechanism that has to do with output adjustments.

Source: econterms

S

S-Plus

Statistical software published by Mathsoft.

Source: econterms

s.t.

An abbreviation meaning "subject to" or "such that", where constraints follow.
In a common usage:

max_x f(x) s.t. g(x) = 0

The above expression, in words, means: "The value of f(x) that is greatest among all those for which the argument x satisfies the constraint that g(x)=0." (Here f() and g() are fixed, possibly known, real-valued functions of x.)

Source: econterms

saddle point

In a second-order [linear difference equation] system, ... if one root has absolute value greater than one, and the other root has absolute value less than one, then the steady state of the system is called a saddle point. In this case, the system is unstable for almost all initial conditions. The exception is the set of initial conditions that begin on the eigenvector associated with the stable eigenvalue.

Source: econterms

Sargan test

A test of the validity of instrumental variables. It is a test of the overidentifying restrictions. The hypothesis being tested is that the instrumental variables are uncorrelated with some set of residuals, and therefore that they are acceptable, healthy instruments.

If the null hypothesis is confirmed statistically (that is, not rejected), the instruments pass the test; they are valid by this criterion.

In the Shi and Svensson working paper (which shows that elected national governments in 1975-1995 had larger fiscal deficits in election years, especially in developing countries), the Sargan statistic was asymptotically distributed chi-squared if the null hypothesis were true.

See test of identifying restrictions, which is not exactly the same thing, I think.

Source: econterms

SAS

Statistical analysis software. See the SAS web site at http://www.sas.com.

Source: econterms

Saving

As an accounting concept, saving can be defined as the residual that is left from income after the consumption choice has been made as part of the household's utility maximization process. Substantially, saving is future consumption, and it is an important example of an intertemporal decision. The division of income between consumption and saving is driven by preferences between present and future consumption (or the utility derived from consumption).

The main determinants of the consumption-saving trade-off are the interest rate and the individual's rate of time preference, reflecting the intertemporal substitution from one period to a future period: income that is not used for consumption purposes can be saved and consumed one period later, earning an interest payment and hence allowing for more consumption in the future. This increase in the absolute amount available for consumption, as reflected in the interest rate, must then be compared with the individual's rate of time preference (the latter expressing her patience with respect to later consumption, or, more generally, to delayed utility derived from consumption). In the optimum, the interest rate and the rate of time preference have to be equal. This is one of the fundamentals of intertemporal choice (as a special form of rational behavior).

This intertemporal trade-off is the central building block of the life-cycle model of saving. Note that this model is firmly grounded in expected utility theory and assumes rational behavior. In recent years there has been much research on psychological aspects of saving. Wärneryd (1999) contains a good introduction to that literature.

Source: SFB 504

scale economies

Same as economies of scale.

Source: econterms

scatter diagram

A graph of unconnected points of data. If there are many of them the result may be 'clouds' of data which are hard to interpret; in such a case one might want to use a nonparametric technique to estimate a regression function.

Source: econterms

scedastic function

Given an independent variable x and a dependent variable y, the scedastic function is the conditional variance of y given x. That variance of the conditional distribution is:
var[y|x] = E[(y - E[y|x])^2 | x]

= integral (or sum) of (y - E[y|x])^2 f(y|x) dy
= E[y^2 | x] - (E[y|x])^2.

Source: econterms

SCF

Stands for Survey of Consumer Finances.

Source: econterms

Schema

A schema is an abstract or generic knowledge structure, stored in memory, that specifies the defining features and relevant attributes of some stimulus domain, and the interrelations among those attributes. Schemata are often organized in a hierarchical order and can consist of sub-schemata. These organized knowledge structures are highly important for information processing, as they guide interpretation, help control attention, and affect memory encoding and retrieval. As an example of a schema, the typical schema of "skinheads" might consist of attributes such as "bald", "aggressive" and "young".

Source: SFB 504

Schumpeterian growth

Paraphrasing from Mokyr (1990): Schumpeterian growth is economic growth brought about by increases in knowledge, most of which is called technological progress.

Source: econterms

Schwarz Criterion

A criterion for selecting among formal econometric models. The Schwarz Criterion is a number:
T ln(RSS) + K ln(T)
where T is the number of observations, RSS is the residual sum of squares, and K is the number of parameters. The criterion is minimized over choices of K to form a tradeoff between the fit of the model (which lowers the sum of squared residuals) and the model's complexity, which is measured by K. Thus an AR(K) model versus an AR(K+1) can be compared by this criterion for a given batch of data.
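
A minimal illustration in Python of such a comparison (assuming numpy; the simulated series, seed, and candidate lag orders are invented for the example, not taken from any source cited here):

import numpy as np

rng = np.random.default_rng(0)
T = 400
y = np.zeros(T)
for t in range(2, T):                       # simulate an AR(2) process
    y[t] = 0.5 * y[t-1] - 0.3 * y[t-2] + rng.normal()

def schwarz(y, K):
    """T ln(RSS) + K ln(T) for an AR(K) model fit by least squares."""
    Y = y[K:]
    X = np.column_stack([y[K - k : len(y) - k] for k in range(1, K + 1)])
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = float(((Y - X @ b) ** 2).sum())
    n = len(Y)                              # effective sample differs slightly by K
    return n * np.log(rss) + K * np.log(n)

for K in (1, 2, 3, 4):
    print(K, round(schwarz(y, K), 1))       # the minimum should tend to K = 2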

Source: econterms

Scitovsky paradox

The problem that some ways of aggregating social welfare may make it possible that a switch from allocation A to allocation B seems like an improvement in social welfare, but so does a move back. (An example may be Condorcet's voting paradox.)

Scitovsky, T., 1941, 'A Note on Welfare Propositions in Economics', Review of Economic Studies, Vol 9, Nov 1941, pp 77-88.

The Scitovsky criterion (for a social welfare function?) is that the Scitovsky paradox not exist.

Source: econterms

score

In maximum likelihood estimation, the score vector is the gradient of the log-likelihood function with respect to the parameters. So it has the same number of elements as the parameter vector does (often denoted k). The score is a random variable; it is a function of the data. It has expectation zero, and is set to zero exactly for a given sample in the maximum likelihood estimation process.

Denoting the score as S(q) and the log-likelihood function as ln L(q), where in both cases the data are also implied arguments:

S(q) = d ln L(q) / dq

Example: In OLS regression of Y_t = X_t b + e_t, the score for each possible parameter value, b, is X_t' e_t(b).

The variance of the score is E[score^2] - (E[score])^2, which is E[score^2] since E[score] is zero. E[score^2] is also called the information matrix and is denoted I(q).
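
A minimal numerical check in Python (assuming numpy; the normal-mean model and all values are invented for the example): for iid N(mu, 1) data the score with respect to mu is sum(x - mu), which has mean zero and variance n at the true parameter.

import numpy as np

rng = np.random.default_rng(1)
mu_true = 2.0
x = rng.normal(mu_true, 1.0, size=(10000, 50))   # many samples of n = 50

def score(mu, sample):
    """d/d mu of the log-likelihood of iid N(mu, 1) data: sum(x - mu)."""
    return np.sum(sample - mu, axis=-1)

s = score(mu_true, x)
print(s.mean())                # close to 0: E[score] = 0 at the true parameter
print(s.var(), x.shape[1])     # variance of the score is about n, the information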

Source: econterms

screening game

A game in which an uninformed player offers a menu of choices to the player with private information (the informed player). The selection of the elements of that menu (which might be, for example, employment contracts containing pairs of pay rates and working hours) is a choice for the uninformed player to optimize on the basis of expectations about the possible types of the informed player.

Source: econterms

Script

A script is a knowledge structure which describes the typical sequence of events in familiar situations, for instance the script of a restaurant situation. It includes information about the invariant aspects of the situation, for example, all restaurants serve food. Moreover, it has slots for variables that apply to specific restaurants, for example, how expensive a particular restaurant is. Scripts combine single scenes into an integrated sequence from the point of view of a specific actor. Besides this temporal organization, scripts, like any other schema, are organized in a hierarchical order.

Source: SFB 504

second moment

The second moment of a random variable is the expected value of the square of the draw of the random variable. That is, the second moment is E[X^2]. Same as the 'uncentered second moment', as distinguished from the variance, which is the 'centered second moment.'

Source: econterms

Second price sealed bid auction

Simultaneous bidding game where the bidder that has submitted the highest bid is awarded the object, and he pays the highest competing bid only (which is the 'second highest' bid). In second price auctions with statistically independent private valuations, each bidder has a dominant strategy: to bid exactly his valuation. The second price auction is also called the Vickrey auction; its multi-object form is the uniform price auction.

Source: SFB 504

Second Welfare Theorem

A Pareto efficient allocation can be achieved by a Walrasian equilibrium if every agent has a positive quantity of every good, and preferences are convex, continuous, and strictly increasing.
(My best understanding of 'convex preferences' is that it means 'concave utility function'.)

Source: econterms

secular

an adjective meaning "long term" as in the phrase "secular trends." Outside the research context its more common meaning is 'not religious'.

Source: econterms

seigniorage

Alternate spelling for seignorage.

Source: econterms

seignorage

"The amount of real purchasing power that [a] government can extract from the public by printing money.'" -- Cukierman 1992
Explanation: When a government prints money, it is in essence borrowing interest-free since it receives goods in exchange for the money, and must accept the money in return only at some future time. It gains further if issuing new money reduces (through inflation) the value of old money by reducing the liability that the old money represents. These gains to a money-issuing government are called "seignorage" revenues.

The original meaning of seignorage was the fee taken by a money issuer (a government) for the cost of minting the money. Money itself, at that time, was intrinsically valuable because it was made of metal.

Source: econterms

Selection problem

An important condition for empirical work (using either field or experimental data) is that the sample be drawn randomly from the population. If this is not the case, then the sample is said to be selected (i.e., drawn according to some rule, not randomly). Statistical inference might not be valid when the sample is non-randomly selected.

Formally, the selection problem can be described as follows (see Manski, 1993). Each member of the population (say, an individual, a household, or a firm) is characterized by a triple (y,z,x). Suppose that the researcher is interested in a relationship between x, the independent variable, and y, the dependent variable. (The variables y and x can be discrete or real numbers, and they can be scalars or vectors.) The variable z is a binary indicator variable that takes only the values 0 and 1; for example, it might indicate whether an individual has answered a set of survey questions about y or not.

In general, the relationship in question can be described by a probability measure P(y|x); the most common example of such a relationship is the normal regression model. To learn about this relationship, the researcher draws a random sample from the population, observes all the realizations of (z,x), but observes realizations of y only when z = 1. The selection problem is the failure of the sampling process to identify the population probability measure P(y|x).

Without going into technical details any further, note that some conditional probabilities can still be identified from the data if one takes into account that the data are drawn conditional on z = 1. Accordingly, statistical and econometric methods which deal with the selection problem combine the identified conditional probabilities with either additional a priori information or (untestable) identifying assumptions about some of the unidentified conditional probabilities involved. These methods allow one to characterize the relationship of x and y in the population (under the identifying assumptions made).

One example of commonly made assumptions is that of ignorable non-response, i.e., the assumption that y and z are statistically independent, conditional on x. Recent research concentrates on those cases in which one is unwilling to make such strong assumptions and where there is no other prior information that can be exploited. While means are difficult to handle under these circumstances, some bounds can be derived for other population statistics such as quantiles and distribution functions.

Source: SFB 504

self-generating

Given an operator B() that operates on sets, a set W is self-generating if W is contained in B(W).

This definition is in Sargent (98) and may come from Abreu, Pearce, and Stacchetti (1990).

Source: econterms

semi-nonparametric

synonym for semiparametric.

Source: econterms

semi-strong form

Can refer to the semi-strong form of the efficient markets hypothesis, which is that any public information about a security is fully reflected in its current price.
Fama (1991) says that a more common and current name for tests of the semi-strong form hypothesis is 'event studies.'

Source: econterms

semilog

The semilog equation is an econometric model:

Y = e^(a + bX + e)

or equivalently

ln Y = a + bX + e

Commonly used to describe exponential growth curves. (Greene 1993, p 239)
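
A minimal sketch in Python of fitting the log-transformed equation by OLS (assuming numpy; the data and parameter values are simulated for the example):

import numpy as np

rng = np.random.default_rng(2)
a, b = 1.0, 0.05
X = np.linspace(0, 40, 200)
Y = np.exp(a + b * X + rng.normal(0, 0.1, X.size))   # exponential growth data

# OLS on ln Y = a + bX + e
A = np.column_stack([np.ones_like(X), X])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, np.log(Y), rcond=None)
print(a_hat, b_hat)          # estimates should be close to (1.0, 0.05)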

Source: econterms

semiparametric

An adjective that describes an econometric model with some components that are unknown functions, while others are specified as unknown finite dimensional parameters.

An example is the partially linear model.

Source: econterms

senior

Debts may vary in the order in which they must legally be paid in the event of bankruptcy of the individual or firm that owes the debt. The debts that must be paid first are said to be senior debts.

Source: econterms

SES

socioeconomic status

Source: econterms

shadow price

In the context of a maximization problem with a constraint, the shadow price on the constraint is the amount by which the objective function of the maximization would increase if the constraint were relaxed by one unit.

The value of a Lagrangian multiplier is a shadow price.

This is a striking and useful fact, but takes some practice to understand.

Source: econterms

shakeout

A period when the failure rate or exit rate of firms from an industry is unusually high.

Source: econterms

sharing rule

A function that defines the split of gains between a principal and agent. The gains are usually profits, and the split is usually a linear rule that gives a fraction to the agent. For example, suppose profits are x, which might be a random variable. The principal and agent might agree, in advance of knowing x, on a sharing rule s(x). Here s(x) is the amount given to the agent, leaving the principal with the residual gain x-s(x).

Source: econterms

Sharpe ratio

Computed in context of the Sharpe-Lintner CAPM. Defined for an asset portfolio a that has mean return m_a and standard deviation s_a, with risk-free rate r_f, by:

(m_a - r_f) / s_a

Higher Sharpe ratios are more desirable to the investor in this model.
The Sharpe ratio is a synonym for the "market price of risk." Empirically, for the NYSE, the Sharpe ratio is in the range of .30 to .40.
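
A minimal sketch in Python (assuming numpy; the monthly returns and the risk-free rate are invented for the example, and the square-root-of-12 annualization is one common convention):

import numpy as np

rng = np.random.default_rng(3)
monthly = rng.normal(0.008, 0.04, 120)    # 10 years of hypothetical monthly returns
rf_monthly = 0.002                        # hypothetical monthly risk-free rate

excess = monthly - rf_monthly
sharpe_annual = np.sqrt(12) * excess.mean() / excess.std(ddof=1)
print(sharpe_annual)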

Source: econterms

SHAZAM

Econometric software published at the University of British Columbia. See http://shazam.econ.ubc.ca.

Source: econterms

Shephard's lemma

That the conditional factor demand for an input equals the partial derivative of the firm's cost function with respect to that input's price: x_i(w, y) = dc(w, y)/dw_i, where c(w, y) is the minimum cost of producing output y at input price vector w. The analogous statement in consumer theory is that the Hicksian demand for a good equals the partial derivative of the expenditure function with respect to that good's price.

Source: econterms

Sherman Act

1890 U.S. antitrust law. It has been described as vague, leading to ambiguous interpretations over the years.
Section one of the law forbids certain joint actions: "Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several states, or with foreign nations, is hereby declared illegal...."
Section two of the law forbids certain individual actions: "Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several states, or with foreign nations, shall be deemed guilty of a felony..."
The reasons for the passage of the Sherman Act:
(1) To promote competition to benefit consumers,
(2) Concern for injured competitors,
(3) Distrust of concentration of power.

Source: econterms

short rate

Abbreviation for 'short term interest rate'; that is, the interest rate charged (usually in some particular market) for short term loans.

Source: econterms

Shortfall risk measures

The starting point of the shortfall risk measures is a target return defined by the investor, for example a one-month market return. Risk is then the possibility of failing to reach this target return. Special cases of shortfall risk measures are the shortfall probability, the shortfall expectation, and the shortfall variance.

Source: SFB 504

Shubik model

A theoretical model designed to study the behavior of money. There are N goods traded in N(N-1) markets, one for each possible combination of good i and good j that could be exchanged. One assumes that only N of these markets are open; that good 0, acting as money, is traded for each of the other commodities but they are not exchanged for one another. Then one can study the behavior of the money good.

Source: econterms

SIC

Standard Industrial Classification code -- a four-digit number assigned to U.S. industries and their products. By "two-digit industries" we mean a coarser categorization, grouping the industries whose first two digits are the same.

Source: econterms

sieve estimators

Flexible basis functions used to approximate a function being estimated. Orthogonal series, splines, and neural networks may be examples. Donald (1997) and Gallant and Nychka (1987) may have more information.

Source: econterms

sigma-algebra

A collection of sets that satisfy certain properties with respect to their union. (Intuitively, the collection must include any result of complementations, unions, and intersections of its elements. The effect is to define properties of a collection of sets such that one can define probability on them in a consistent way.) Formally:
Let S be a set and A be a collection of subsets of S. A is a sigma-algebra of S if:
(i) the null set and S itself are members of A
(ii) the complement of any set in A is also in A
(iii) countable unions of sets in A are also in A.
It follows from these that a sigma-algebra is closed under countable complementation, unions, and intersections.
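
A minimal check of the three conditions in Python, on a finite example invented for illustration (the collection generated by the partition {{1,2},{3,4}} of S = {1,2,3,4}; in a finite setting, finite unions stand in for countable ones):

S = frozenset({1, 2, 3, 4})
A = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), S}

assert frozenset() in A and S in A              # (i) null set and S are members
assert all(S - a in A for a in A)               # (ii) complements are members
assert all(a | b in A for a in A for b in A)    # (iii) unions are members
assert all(a & b in A for a in A for b in A)    # intersections follow from (ii)+(iii)
print("A is a sigma-algebra on S")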

Source: econterms

signaling game

A game in which a player with private information (the informed player) sends a signal of his private type to the uninformed player before the uninformed player makes a choice. An example: a candidate worker might suggest to the potential employer what wage is appropriate for himself in a negotiation.

Source: econterms

significance

A finding in economics may be said to be of economic significance (or substantive significance) if it shows a theory to be useful or not useful, or if it has implications for scientific interpretation or policy practice (McCloskey and Ziliak, 1996). Statistical significance is a property of the probability that a given finding was produced by a stated model but at random: see significance level.

These meanings are different but sometimes overlap. McCloskey and Ziliak (1996) have a substantial discussion of them. Ambiguity is common in practice, but not hard to avoid. (Editorial comment follows.) When the second meaning is intended, use the phrase "statistically significant" and refer to a level of statistical significance or a p-value. Avoid the aggressive word "insignificant" unless it is clear whether the word is to be taken to mean substantively insignificant or not statistically significant.

Source: econterms

significance level

The significance level of a test is the probability that the test statistic will reject the null hypothesis when the [hypothesis] is true. Significance is a property of the distribution of a test statistic, not of any particular draw of the statistic.

Source: econterms

Simpson paradox

The Simpson paradox refers to a tri-variate statistical problem in which a contingency between two variables, x and y, can be accounted for by a third variable, z. Solving the problem thus calls for a cognitive routine analogous to analysis of covariance. Cognitive research on the Simpson paradox addresses the question of whether the human mind can correctly handle these statistical tools.

An example of the Simpson paradox: one supermarket may be more expensive on aggregate (a correlation between supermarket (x) and price (y)), but only because that supermarket sells more high-quality products (a correlation between supermarket (x) and the percentage of high-quality products (z), and between price (y) and the percentage of high-quality products (z)). Considering high- and low-quality products separately, the seemingly more expensive supermarket may turn out to be cheaper at any quality level.

Source: SFB 504

simulated annealing

A method of finding optimal values numerically. Simulated annealing is a search method, as opposed to a gradient-based algorithm. It chooses a new point, and (for maximization) all uphill points are accepted while some downhill points are accepted depending on a probabilistic criterion.

Unlike the simplex search method provided by Matlab, simulated annealing may allow "bad" moves, thereby allowing escape from a local max. The value of a move is evaluated according to a temperature criterion (which essentially determines whether the algorithm is in a "hot" area of the function).
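
A minimal sketch of the idea in Python, maximizing a bumpy one-dimensional function; the objective, cooling schedule, and tuning constants are all invented for illustration:

import math, random

random.seed(4)

def f(x):                        # multimodal objective with a global max near x = 0
    return math.cos(3 * x) - 0.1 * x * x

x = 4.0                          # deliberately start near a local, not global, max
temp = 1.0
while temp > 1e-3:
    candidate = x + random.gauss(0, 0.5)
    delta = f(candidate) - f(x)
    # accept all uphill moves; accept a downhill move with probability exp(delta/temp)
    if delta > 0 or random.random() < math.exp(delta / temp):
        x = candidate
    temp *= 0.999                # geometric cooling schedule

print(x, f(x))                   # should end near the global max around x = 0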

Source: econterms

simultaneous equation system

By 'system' is meant that there are multiple, related, estimable equations. By simultaneous is meant that two quantities are jointly determined at time t by one another's values at time t-1 and possibly at t also.
Example, from Greene, (1993, p. 579), of market equilibrium:

q_d = a_1 p + a_2 + e_d   (demand equation)
q_s = b_1 p + e_s         (supply equation)
q_d = q_s = q

Here the quantity supplied is q_s, the quantity demanded is q_d, the price is p, the e's are errors or residuals, and the a's and b's are parameters to be estimated. We have data on p and q, and the quantities supplied and demanded are conjectural.
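
A minimal simulation of this system in Python (assuming numpy; all parameter values are invented for the example). It shows why the simultaneity matters: p is correlated with both error terms, so an OLS regression of q on p recovers neither the demand slope nor the supply slope.

import numpy as np

rng = np.random.default_rng(5)
a1, a2, b1 = -1.0, 10.0, 1.5            # demand slope and intercept; supply slope
ed = rng.normal(0, 1, 1000)             # demand shocks
es = rng.normal(0, 1, 1000)             # supply shocks

# impose q_d = q_s and solve the two equations for the equilibrium p and q
p = (a2 + ed - es) / (b1 - a1)
q = b1 * p + es

slope = np.cov(p, q)[0, 1] / np.cov(p, q)[0, 0]
print(slope, "vs demand slope", a1, "and supply slope", b1)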

Source: econterms

single-crossing property

Distributions with cdfs F and G satisfy the single-crossing property if there is an x0 such that: F(x) >= G(x) for x<=x0 and G(x) >= F(x) for x>=x0

Source: econterms

sink

"In a second-order [linear difference equation] system, if both roots are positive and less than one, then the system converges monotonically to the steady state. If the roots are complex and lie inside the unit circle then the system spirals into the steady state. If at least one root is negative, but both roots are less than one in absolute value, then the system will flip from one side of the steady state to the other as it converges. In all of these cases the steady state is called a sink." Contrast 'source'.

Source: econterms

SIPP

The U.S. Survey of Income and Program Participation, which is conducted by the U.S. Census Bureau.

A tutorial is at: http://www.bls.census.gov/sipp/tutorial/SIPP_Tutorial_Beta_version/LAUNCHtutorial.html

Source: econterms

size

a synonym for significance level

Source: econterms

skewness

An attribute of a distribution. A distribution that is symmetric around its mean has skewness zero, and is 'not skewed'. Skewness is calculated as E[(x - mu)^3] / s^3, where mu is the mean and s is the standard deviation.
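
A minimal computation in Python (assuming numpy; the exponential distribution is chosen for the example because its skewness is known to be 2):

import numpy as np

rng = np.random.default_rng(6)
x = rng.exponential(1.0, 100000)         # draws from a right-skewed distribution

mu, s = x.mean(), x.std()
skew = np.mean((x - mu) ** 3) / s ** 3
print(skew)                              # near 2, the exponential's skewness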

Source: econterms

skill

In regular English usage means "proficiency". Sometimes used in economics papers to represent experience and formal education. (Ed.: in this editor's opinion that is a dangerously misleading use of the term; it invites errors of thought and understanding.)

Source: econterms

SLID

Stands for Survey of Labour and Income Dynamics. A Canadian government database going back to 1993 at least. Web pages on this subject can be searched from: http://www.statcan.ca/english/search/index.htm.

Source: econterms

SLLN

Stands for strong law of large numbers.

Source: econterms

SMA

Structural Moving Average model, which see

Source: econterms

Smithian growth

Paraphrasing directly from Mokyr, 1990: Economic growth brought about by increases in trade.

Source: econterms

smoothers

Smoothers are estimators that produce smooth estimates of regression functions. They are nonparametric estimators. The most common and implementable types are kernel estimators, k-nearest-neighbor estimators, and cubic spline smoothers.

Source: econterms

smoothing

Smoothing of a data set {Xi, Yi} is the act of approximating m() in a regression such as:
Yi = m(Xi) + ei
The result of a smoothing is a smooth functional estimate of m().
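
One simple way to do this is a Nadaraya-Watson kernel smoother, a kernel-weighted average of the Y's near each evaluation point. A minimal sketch in Python (assuming numpy; data, bandwidth, and the Gaussian kernel are all illustrative choices):

import numpy as np

rng = np.random.default_rng(7)
X = np.sort(rng.uniform(0, 10, 200))
Y = np.sin(X) + rng.normal(0, 0.3, X.size)   # Y_i = m(X_i) + e_i with m = sin

def m_hat(x0, h=0.5):
    """Gaussian-kernel weighted average of the Y's near x0; h is the bandwidth."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

grid = np.linspace(0, 10, 5)
print([round(m_hat(g), 2) for g in grid])    # smooth estimates of sin at the grid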

Source: econterms

SMR

Standardized mortality ratio

Source: econterms

SMSA

Stands for Standard Metropolitan Statistical Area, a U.S. term for the standard boundaries of urban regions used in academic studies.

Source: econterms

SNP

abbreviation for 'seminonparametric', which means the same thing as semiparametric.

Source: econterms

social capital

The relationships of a person which lead to economically productive consequences. E.g., they may produce something analogous to investment returns to that person, or socially productive consequences to a larger society. "Social capital refers to the benefits of strong social bonds. [Sociologist James] Coleman defined the term to take in 'the norms, the social networks, the relationships between adults and children that are of value for the children's growing up.' The support of a strong community helps the child accumulate social capital in myriad ways; in the [1990s U.S.] inner city, where institutions have disintegrated, and mothers often keep children locked inside out of fear for their safety, social capital hardly exists." -- Traub (2000)

Source: econterms

Social cognition

Social cognition is a label for processes that are instigated by social information. As a meta-theoretical perspective, social cognition has been applied to a variety of topic domains including person perception, stress and coping, survey responding, attitude-behavior relations, group processes, political decisions, and more. The basic processes that presumably operate across these domains are conceptualized within research on perception of behavior, encoding, inferring, explaining, storing, retrieving, judging, and generating overt responses. Research in this domain takes the information flow beyond a single individual into social interaction.

Source: SFB 504

social planner

One solving a Pareto optimality problem. The problem faced by a social planner will have as an answer an allocation, without prices.
Also, "the social planner is subject to the same information limitations as the agents in the economy." -- Cooley and Hansen p 185 That is, the social planner does not see information that is hidden by the rules of the game from some of the agents. If an agent happens not to know something, but it is not hidden from him by the rules of the game, then the social planner DOES see it.

Source: econterms

Social psychology

Social psychology "attempts to understand and explain how thoughts, feelings or behaviour of individuals are influenced by the actual, imagined or implicit presence of others" (Allport, 1985).
Issues studied by social psychologists range from intrapersonal processes (e.g. person perception, attitude theory, social cognition) and interpersonal relations (e.g. aggression, interdependence) to intergroup behaviour.

Source: SFB 504

social savings

A measure of the contribution of a new technology, discussed in Crafts (2002): 'How much more did [a new technology] contribute than an alternative investment might have yielded?' Crafts cites Fogel (1979).

Source: econterms

social welfare function

A mapping from allocations of goods or rights among people to the real numbers.
Such a social welfare function (abbreviated SWF) might describe the preferences of an individual over social states, or might describe outcomes of a process that made allocations, whether or not individuals had preferences over those outcomes.

Source: econterms

SOFFEX

Swiss Options and Financial Futures Exchange

Source: econterms

Solas

Software for imputing values to missing data, published by Statistical Solutions.

Source: econterms

Solovian growth

Paraphrasing from Mokyr (1990): Economic growth brought about by investment, meaning increases in the capital stock.

Source: econterms

Solow growth model

Paraphrasing pretty directly from Romer, 1996, p 7:
The Solow model is meant to describe the production function of an entire economy, so all variables are aggregates. The date or time is denoted t. Output or production is denoted Y(t). Capital is K(t). Labor time is denoted L(t). Labor's effectiveness, or knowledge, is A(t). The production function is denoted F() and is assumed to have constant returns to scale. At each time t, the production function is:
Y = F(K, AL)
which can be written: Y(t) = F(K(t), A(t)L(t))

AL is effective labor.

Note variants of the way A enters into the production function. This one is called labor-augmenting or Harrod-neutral. Others are capital-augmenting, e.g. Y = F(AK, L), or Hicks-neutral, like Y = AF(K, L).

From _Mosaic of Economic Growth_, a definition of Solow-style growth models: they come from the seminal Solow (1956). 'In Solow-style models, there exists a unique and globally stable growth path to which the level of labor productivity (and per capita output) will converge, and along which the rate of advance is fixed (exogenously) by the rate of technological progress.' Many subsequent models of aggregate growth (like Romer 1986) have abandoned the assumption that all forms of capital accumulation run into diminishing marginal returns, and get different global convergence implications. (p22)

Source: econterms

Solow residual

A measure of the change in total factor productivity in a Solow growth model. This is a way of doing growth accounting empirically either for an industry or more commonly for a macroeconomy. Formally, roughly following Hornstein and Krusell (1996):

Suppose that in year t an economy produces output quantity y_t with exactly two inputs: capital quantity k_t and labor quantity l_t. Assume perfectly competitive markets and that production has constant returns to scale. Let capital's share of income be fixed over time and denoted a. Then the change in total factor productivity between period t and period t+1, which is the Solow residual, is defined by:

Solow residual = (log TFP_{t+1}) - (log TFP_t)

= [(log y_{t+1}) - (log y_t)]
  - a [(log k_{t+1}) - (log k_t)]
  - (1-a) [(log l_{t+1}) - (log l_t)]
Analogous definitions exist for more complicated models (with other factors besides capital and labor) or on an industry-by-industry basis, or with capital's share varying by time or by industry.

The equation may look daunting but the derivations are not difficult and students are sometimes asked to practice them until they are routine. Hulten (2000) says about the residual that:
-- it measures shifts in the implicit aggregate production function.
-- it is a nonparametric index number which measures that shift in a computation that uses prices to measure marginal products.
-- the factors causing the measured shift include technical innovation, organizational and institutional changes, fluctuations in demand, changes in factor shares (where factors are capital, labor, and sometimes measures of energy use, materials use, and purchased services use), and measurement errors.

From an informal discussion by this editor, it looks like the residual contains these empirical factors, among others: public goods like highways; externalities from networks like the Internet; some externalities and losses of capital services from disasters like September 11; theft; shirking; and technical / technological change.
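
A minimal worked computation in Python of the formula above (all input values and the capital share are hypothetical):

import math

a = 0.33                                  # capital's share of income
y_t, y_t1 = 100.0, 104.0                  # output in years t and t+1
k_t, k_t1 = 300.0, 309.0                  # capital stock
l_t, l_t1 = 50.0, 50.5                    # labor input

residual = ((math.log(y_t1) - math.log(y_t))
            - a * (math.log(k_t1) - math.log(k_t))
            - (1 - a) * (math.log(l_t1) - math.log(l_t)))
print(residual)   # TFP growth: the part of output growth not explained by inputs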

Source: econterms

solution concept

Phrase relevant to game theory. A game has a 'solution' which may represent a model's prediction. The modeler often must choose one of several substantively different solution methods, or solution concepts, which can lead to different game outcomes. Often one is chosen because it leads to a unique prediction. Possible solution concepts include:

iterative elimination of strictly dominated strategies
Nash equilibrium
Subgame perfect equilibrium
Perfect Bayesian equilibrium

Source: econterms

source

"In a second-order [linear difference equation] system, ... if both roots are positive and greater than one, then the system diverges monotonically to plus or minus infinity. If the roots are complex and [lie] outside the unit circle then the system spirals out away from the steady state. If at least one root is negative, but both roots are greater than one in absolute value, then the system will flip from one side of the steady state to the other as it diverges to infinity. In each of these cases the steady state is called a source." Constrast 'sink'.

Source: econterms

sparse

A matrix is sparse if many of its values are zero. A division of sample data into discrete bins (that is into a multinomial table) is sparse if many of the bins have no data in them.

Source: econterms

spatial autocorrelation

Usually autocorrelation means correlation among the data from different time periods. Spatial autocorrelation means correlation among the data from locations. There could be many dimensions of spatial autocorrelation, unlike autocorrelation between periods. Nick J. Cox wrote, in a broadcast to a listserv discussing the software Stata, the discussion of spatial autocorrelation below. It is quoted here without any explicit permission whatsoever. (Parts clipped out are marked by 'snip'.) If 'Moran measure' and 'Geary measure' are standard terms used in economics I'll add them to the glossary.

Date: Thu, 15 Apr 1999 12:29:10 GMT
From: "Nick Cox" 
Subject: statalist: Spatial autocorrelation

[snip...]

First, the kind of spatial data considered here is data in two-dimensional
space, such as rainfall at a set of stations or disease incidence in a set of
areas, not three-dimensional or point pattern data (there is a tree or a
disease case at coordinates x, y).  Those of you who know time series might
expect from the name `spatial autocorrelation' estimation of a function,
autocorrelation as a function of distance and perhaps direction.  What is
given here are rather single-value measures that provide tests of
autocorrelation for problems where the possibility of local influences is of
most interest, for example, disease spreading by contagion. The set-up is that
the value for each location (point or area) is compared with values for its
`neighbours', defined in some way.

The names Moran and Geary are attached to these measures to honour the pioneer
work of two very fine statisticians around 1950, but the modern theory is due
to the statistical geographer Andrew Cliff and the statistician Keith Ord.

For a vector of deviations from the mean z, a vector of ones 1, and a matrix
describing the neighbourliness of each pair of locations W, the Moran measure
for example is

       (z' W z) / (z' z)
   I = -----------------
       (1' W 1) / (1' 1)

where ' indicates transpose. This measure is for raw data, not regression
residuals.

[snip; and the remainder discusses a particular implementation of a spatial
autocorrelation measuring function in Stata.]

For n values of a spatial variable x defined for various locations,
which might be points or areas, calculate the deviations
            _
    z = x - x

and for pairs of locations i and j, define a matrix

    W = ( w   )
           ij

describing which locations are neighbours in some precise sense.
For example, w   might be assigned 1 if i and j are contiguous areas
              ij
and 0 otherwise; or w   might be a function of the distance between
                     ij
i and j and/or the length of boundary shared by i and j.

The Moran measure of autocorrelation is

        n   n                      n   n         n   2
   n ( SUM SUM z  w   z  ) / ( 2 (SUM SUM w  )  SUM z  )
       i=1 j=1  i  ij  j          i=1 j=1  ij   i=1  i

and the Geary measure of autocorrelation is

             n   n               2           n   n         n   2
   (n -1) ( SUM SUM w   (z  - z )  ) / ( 4 (SUM SUM w  )  SUM z  )
            i=1 j=1  ij   i    j            i=1 j=1  ij   i=1  i

and these measures may used to test the null hypothesis of no spatial
autocorrelation, using both a sampling distribution assuming that x
is normally distributed and a sampling distribution assuming randomisation,
that is, we treat the data as one of n! assignments of the n values to
the n locations.

In a toy example, area 1 neighbours 2, 3 and 4  and has value 3
                       2            1 and 4                   2
                       3            1 and 4                   2
                       4            1, 2 and 3                1

This would be matched by the data

^_n^ (obs no)    ^value^ (numeric variable)  ^nabors^ (string variable)
- -----------    ------------------------  ------------------------
    1                      3                    "2 3 4"
    2                      2                      "1 4"
    3                      2                      "1 4"
    4                      1                    "1 2 3"

That is, ^nabors^ contains the observation numbers of the neighbours of
the location in the current observation, separated by spaces. Therefore,
the data must be in precisely this sort order when ^spautoc^ is called.

Note various assumptions made here:

1. The neighbourhood information can be fitted into at most a ^str80^
variable.

2. If i neighbours j, then j also neighbours i and both facts are
specified.

By default this data structure implies that those locations listed
have weights in W that are 1, while all other pairs of locations are not
neighbours and have weights in W that are 0.

If the weights in W are not binary (1 or 0), use the ^weights^ option.
The variable specified must be another string variable.

^_n^ (obs no)  ^nabors^ (string variable)  ^weight^ (string variable)
- -----------  ------------------------  ------------------------
    1                "2 3 4"             ".1234 .5678 .9012"
    etc.

that is, w   = 0.1234, and so forth. w   need not equal w  .
          12                          ij                 ji

[snip]

References
- ----------

Cliff, A.D. and Ord, J.K. 1973. Spatial autocorrelation. London: Pion.

Cliff, A.D. and Ord, J.K. 1981. Spatial processes: models and
applications. London: Pion.

Author
- ------
         Nicholas J. Cox, University of Durham, U.K.
         n.j.cox@durham.ac.uk

- ------------------------- end spautoc.hlp

Nick
n.j.cox@durham.ac.uk
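
A minimal computation in Python of the Moran measure for the toy four-area example in the message above, using the matrix formula I = (z'Wz / z'z) / (1'W1 / 1'1) quoted there (assuming numpy; only the code itself is new here):

import numpy as np

x = np.array([3.0, 2.0, 2.0, 1.0])          # values for areas 1..4
W = np.array([[0, 1, 1, 1],                 # area 1 neighbours 2, 3, 4
              [1, 0, 0, 1],                 # area 2 neighbours 1, 4
              [1, 0, 0, 1],                 # area 3 neighbours 1, 4
              [1, 1, 1, 0]])                # area 4 neighbours 1, 2, 3

z = x - x.mean()
ones = np.ones_like(x)
I = (z @ W @ z / (z @ z)) / (ones @ W @ ones / (ones @ ones))
print(I)   # negative here: neighbouring areas have dissimilar values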

Source: econterms

SPE

Abbreviation for: Subgame perfect equilibrium

Source: econterms

specie

A commodity metal backing money; historically specie was gold or silver.

Source: econterms

spectral decomposition

The factorization of a positive definite matrix A into A=CLC' where L is a diagonal matrix of eigenvalues, and the C matrix has the eigenvectors. That decomposition can be written as a sum of outer products:

A = (sum from i=1 to N of) L_i c_i c_i'

where c_i is the ith column of C and L_i is the ith diagonal element of L.
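
A minimal check in Python (assuming numpy; the matrix is an arbitrary symmetric positive definite example):

import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # symmetric positive definite

eigvals, C = np.linalg.eigh(A)             # columns of C are the eigenvectors
rebuilt = sum(lam * np.outer(c, c) for lam, c in zip(eigvals, C.T))
print(np.allclose(A, rebuilt))             # True: the outer products sum to A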

Source: econterms

spectrum

Summarizes the periodicity properties of a time series or time series sample x_t. Often represented in a graph with frequency, or period (often denoted omega), on the horizontal axis, and S_x(omega), which is defined below, on the vertical axis. S_x is zero for frequencies that are not found in the time series or sample, and is increasingly positive for frequencies that are more important in the data.

S_x(omega) = (2 pi)^{-1} (sum for j from -infinity to +infinity of) gamma_j e^{-ij omega}

where gamma_j is the jth autocovariance, omega is in the range [-pi, pi], and i is the square root of -1.

Example 1: If xt is white noise, the spectrum is flat. All cycles are equally important. If they were not, the series would be forecastable.

Example 2: If xt is an AR(1) process, with coefficient in (0, 1), the spectrum has a peak at frequency zero and declines monotonically with distance from zero. This process does not have an observable cycle.
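
A minimal sketch in Python of Example 2 (assuming numpy; the AR(1) coefficient and innovation variance are illustrative, and the infinite sum is truncated). It uses the fact that an AR(1) with coefficient phi has autocovariances gamma_j = sigma2 phi^|j| / (1 - phi^2).

import numpy as np

phi, sigma2 = 0.7, 1.0
omegas = np.linspace(-np.pi, np.pi, 9)
js = np.arange(-200, 201)                  # truncate the sum over j
gammas = sigma2 * phi ** np.abs(js) / (1 - phi ** 2)

S = np.array([(gammas * np.exp(-1j * js * w)).sum().real / (2 * np.pi)
              for w in omegas])
print(S.round(3))    # peaks at omega = 0 and declines away from zero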

Source: econterms

speculative demand

The speculative demand for money is inversely related to the interest rate.

Source: econterms

spline function

The kind of estimate produced by a spline regression, in which the slope varies for different ranges of the regressors. The spline function is continuous but usually not differentiable.

Source: econterms

spline regression

A regression which estimates different linear slopes for different ranges of the independent variables. The endpoints of the ranges are called knots.

Source: econterms

spline smoothing

A particular nonparametric estimator of a function. Given a data set {Xi, Yi} it estimates values of Y for X's other than those in the sample. The process is to construct a function that balances the twin needs of (1) proximity to the actual sample points, (2) smoothness. So a 'roughness penalty' is defined. See Hardle's equation 3.4.1 near p. 56 for the 'cubic spline' which seems to be the most common.

Source: econterms

SPO

stands for Strongly Pareto Optimal, which see.

Source: econterms

SPSS

Stands for 'Statistical Product and Service Solutions', a corporation at www.spss.com

Source: econterms

SSEP

Social Science Electronic Publishing, Inc.

Source: econterms

SSRN

Social Science Research Network. See their web site at http://www.ssrn.com.

Source: econterms

stabilization policy

'Macroeconomic stabilization policy consists of all the actions taken by governments to (1) keep inflation low and stable; and (2) keep the short-run (business cycle) fluctuations in output and employment small.' Includes monetary and fiscal policies, international and exchange rate policy, and international coordination. (p129 in Taylor (1996)).

Source: econterms

stable distributions

See Campbell, Lo, and MacKinlay pp 17-18, referring to the French probability theorist Lévy. The normal, Cauchy, and Bernoulli distributions are special cases. Except for the normal distribution, they have infinite variance.
There has been some study of whether continuously compounded asset returns could fit a stable distribution, given that their kurtosis is too high for a normal distribution.

Source: econterms

stable steady state

In a dynamical system with deterministic generator function F() such that N_{t+1} = F(N_t), a steady state is stable if, loosely, all nearby trajectories go to it.

Source: econterms

staggered contracting

A model can be constructed in which some agents, usually firms, cannot change their prices at will. They make a contract at some price for a specified duration, then when that time is up can change the price. If the terms of the contracts overlap, that is they do not all end at the same time, we say the contracts are staggered.

An important paper on this topic was Taylor (1980) which showed that staggered contracts can have an effect of persistence -- that is, that one-time shocks can have effects that are still evolving for several periods. This is a version of a new Keynesian, sticky-price model.

Source: econterms

standard normal

Refers to a normal distribution with mean of zero and variance of one.

Source: econterms

Standard operating procedures

Standard operating procedures are part of the formal structure of organizations. They serve to coordinate divisional labour processes. At the same time, they are part of the decision environment organizations equip their members with: SOPs limit the aspects of reality which are relevant for certain decisions. Thus, they reduce the complexity of decision problems. As well, SOPs can be the result of trial-and-error processes, and so might be regarded as an accumulation of organizational experience. According to this view, SOPs raise the level of problem handling, and hence the rationality of decision makers in organizations. They enable individuals with limited rationality to engage in more effective information gathering and processing. However, decisions in organizations are not just executions of SOPs. Rather, SOPs need to be complemented by interpretations of decision makers. Eventually, strict observance of SOPs might even result in dysfunctional or irrational decisions.

Source: SFB 504

Stata

Statistical analysis software. See the Stata web site at http://www.stata.com.

Source: econterms

state price

the price at time zero of a state-contingent claim that pays one unit of consumption in a certain future state.

Source: econterms

state price vector

the vector of state prices for all states.

Source: econterms

state-space approach to linearization

Approximating decision rules by linearizing the Euler equations of the maximization problem around the stationary steady state, and finding a unique solution to the resulting system of dynamic equations.

Source: econterms

States

are temporary conditions within an individual such as anger, stress or fear; opposed to traits that are more permanent.

Source: SFB 504

statistic

a function of one or more random variables that does not depend upon any unknown parameter.
(The distribution of the statistic may depend on one or more unknown parameters, but the statistic can be calculated without knowing them just from the realizations of the random variables, e.g. the data in a sample.)
In general a statistic could be a vector of values, but often it is a scalar.

Source: econterms

Statistica

Statistical software. See http://www.statsoft.com.

Source: econterms

statistical discrimination

A theory of why minority groups are paid less when hired. The theory is roughly that managers, who are of one type (say, white), are more culturally attuned to the applicants of their own type than to applicants of another type (say, black), and therefore they have a better measure of the likely productivity of the applicants of their own type. (There is uncertainty in the manager's predictions about blacks and probably of whites too, but more uncertainty for blacks.) Because the managers are risk averse they bid more for a white applicant of a given apparent productivity than for a black one, since their measure of the white's productivity is better. This theory predicts that white managers would offer black applicants lower starting wages than whites of the same apparent ability, even if the manager is not prejudiced against the blacks.

Source: econterms

stochastic

synonym for random.

Source: econterms

stochastic difference equation

A linear difference equation with random forcing variables on the right-hand side. Here is a stochastic difference equation in k:
k_{t+1} + k_t = w_t
where the k's and w's are scalars, and time t goes from 0 to infinity. The w's are exogenous forcing variables. Or:
A k_{t+1} + B k_t + C k_{t-1} = D w_t + e_t
where the k's are vectors, the w's and e's are exogenous vectors, and A, B, C, and D are constant matrices.

Source: econterms

stochastic dominance

An abbreviation for first-order stochastic dominance. A possible comparison relationship between two stochastic distributions. Let the possible returns from assets A and B be described by statistical distributions A and B. Payoff distribution A first-order stochastically dominates payoff distribution B if, for every possible payoff level, the probability of getting at least that payoff is never lower under A than under B.

Much more is in Huang and Litzenberger (1988), chapter 2.
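
A minimal empirical check in Python (assuming numpy; the two normal payoff distributions, with A shifted up by one, are invented for the example; the comparison is on a grid of payoff levels via empirical cdfs, with a small tolerance for sampling error):

import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(1.0, 1.0, 100000)     # payoff distribution A (higher mean)
B = rng.normal(0.0, 1.0, 100000)     # payoff distribution B

grid = np.linspace(-4, 5, 200)
F_A = np.searchsorted(np.sort(A), grid) / A.size   # empirical cdfs on the grid
F_B = np.searchsorted(np.sort(B), grid) / B.size

print(bool(np.all(F_A <= F_B + 1e-3)))   # True: A first-order dominates B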

Source: econterms

stochastic process

is an ordered collection of random variables. Discrete ones are indexed, often by the subscript t for time, e.g., yt, yt+1, although such a process could be spatial instead of temporal. Continuous ones can be described as continuous functions of time, e.g. y(t).

A stochastic process is specified by properties of the joint distribution for those random variables. Examples:

-- the random variables are independently and identically distributed (iid).
-- the process is a Markov process
-- the process is a martingale
-- the process is white noise
-- the process is autoregressive (e.g. AR(1))
-- the process has a moving average (e.g. see MA(1))

Source: econterms

Stolper-Samuelson theorem

In some models of international trade, trade lowers the real wage of the scarce factor of production, and protection from trade raises it. That is a Stolper-Samuelson effect, by analogy to their (1941) theorem in a Heckscher-Ohlin model context.

A notable case is when trade between a modernized economy and a developing one would lower the wages of the unskilled in the modernized economy because the developing country has so many of the unskilled.

Source: econterms

stopping rule

A stopping rule, in the context of search theory, is a mapping from histories of draws to one of two decisions: stop at this draw, or continue drawing.

Source: econterms

storable

A good is storable to the degree that it does not degrade or lose its value over time. In models of money, storable goods dominate less storable goods as media of exchange.

Source: econterms

straddle

An options trading strategy of buying a call option and a put option on the same stock with the same strike price and expiration date. Such a strategy would result in a profitable position if the stock price is far enough from the strike price.

Source: econterms

Strategic Equilibrium

Profile of plans leading to an outcome of a game that is stable in the sense that given the other players adhere to the equilibrium prescription, no single player wants to deviate from the prescription. Any outcome that is reached by the play of strategies which do not form an equilibrium is an implausible way of playing the game, because at least one player could improve by selecting another strategy.

The concept of strategic equilibrium is completely unrelated to (Pareto) efficiency. Correspondingly, infinitely many games have (only) inefficient strategic equilibria; for a striking example, see the Prisoners' Dilemma game. As a strategic equilibrium is a profile of strategies that is unilaterally unimprovable given that all (other) players conform to their equilibrium strategies, the concept is weak and very general, but on the other hand most games possess several strategic equilibria. One of the major achievements of game theory accordingly has been the refinement of the concept of strategic equilibrium to allow for sharper predictions.

Two major achievements in refining the concept of equilibrium center around the 'time consistency' of strategically stable plans for sequential games, and on making precise the role of the players' beliefs about other players' plans of actions and information. A more general definition of strategic equilibrium is the following: an equilibrium is a profile of strategies and a profile of beliefs such that given the beliefs, the strategies are unilaterally unimprovable given equilibrium behavior, and such that the beliefs are consistent with the actual courses of action prescribed by the equilibrium strategies.

Source: SFB 504

Strategy in game theory and economics

In non-cooperative game theory, strategies are the primitives the player can choose between. A player's strategy is the action or the plan of actions this player chooses out of his set of strategies. For example in an auction, the strategy of a player describes the way this player bids.

In the most simple games (with complete information and simultaneous moves) the strategy of a player simply specifies which action this player takes. To be general enough to cover also more complex games (like dynamic games of incomplete information), the notion of strategy is very comprehensive: A strategy of a player is a complete plan of action in the game. In particular, for each point of time where the player is called upon to act, it describes which action to choose. And it does so for every combination of previous moves (of this player and of his opponents) and for each type of player.

For example, in a dynamic game like chess, a strategy specifies not only the move of a player in the first round, but also in every consecutive round for every possible combination of previous rounds (e.g. if a strategy in a suitable game specifies that a player commits suicide today, it must also specify what he would do tomorrow if he still were alive.)

In games of incomplete information there are different types of players, e.g. in auctions there are types with different valuations for the object for sale. Here a strategy specifies a complete plan of action for every such type.

Source: SFB 504

strategy-proof

A decision rule (a mapping from expressed preferences by each of a group of agents to a common decision) "is strategy-proof if in its associated revelation game, it is a dominant strategy for each agent to reveal its true preferences."

Source: econterms

strict stationarity

Describes a stochastic process whose joint distribution of observations is not a function of time. Contrast weak stationarity.

Source: econterms

strict version of Jensen's inequality

Quoting directly from Newey-McFadden: "[I]f a(y) is a strictly concave function [e.g. a(y) = ln(y)] and Y is a nonconstant random variable, then a(E[Y]) > E[a(Y)]."

Source: econterms

Strictly dominated strategy

A strategy is strictly dominated, if there is a second strategy, such that the second strategy yields a strictly higher payoff than the first one, for every possible combination of strategies of the opponents. Rational Game Theory expects that strictly dominated strategies are never played. If for every player one strategy strictly dominates all other strategies of this player, game theorists expect the combination of these strictly dominant strategies to be the outcome of the game. Unfortunately, typically there are no strictly dominant strategies. Hence weaker equilibrium concepts have to be used to predict play in such games.

Source: SFB 504

strictly stationary

A random process {x_t} is strictly stationary if the joint distribution of elements is not a function of the index t. This is a stronger condition than weak stationarity (which see; it's easier to understand) for any random process with first and second moments, because it requires also that the third moments, etc., be stationary.

Source: econterms

strip financing

Corporate financing by selling "stapled" packages of securities that cannot be sold separately. E.g., a firm might sell bonds only in a package that includes a standard proportion of senior subordinated debt, convertible debt, preferred, and common stock. A benefit is reduced conflict: in principle, bondholders and stockholders have different interests, which can impose costs on the firm. After a strip financing, however, those groups are each made up of the same people, so their interests coincide.

Source: econterms

strips

securities made up of standardized proportions of other securities from the same firm. See strip financing.

U.S. Treasury bonds can be split into principal and interest components, and the standard name for the resulting securities is STRIPS (Separate Trading of Registered Interest and Principal of Securities). See coupon strip and principal strip.

Source: econterms

strong form

Can refer to the strong form of the efficient markets hypothesis, which is that any public or private information known to anyone about a security is fully reflected in its current price.
Fama (1991) renames tests of the strong form of the hypothesis to be 'tests for private information.' Roughly -- If individuals with private information can make trading gains with it, the strong form hypothesis does not hold.

Source: econterms

strong incentive

An incentive that encourages maximization of an objective. For example, payment per unit of output produced encourages maximum production. Useful in design of a contract if the buyer knows exactly what is desired. Contrast weak incentive.

Source: econterms

strong law of large numbers

If {Z_t} is a sequence of n iid random variables drawn from a distribution with mean MU, then with probability one, the limit of sample averages of the Z's goes to MU as the sample size n goes to infinity.

I believe that strong laws of large numbers are generally, or perhaps always, proved using some version of Chebyshev's inequality. (The proof is rarely shown; in most contexts in economics one can simply assume laws of large numbers).
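A minimal numerical sketch, in the style of the Matlab tutorial elsewhere in this glossary (the uniform distribution, the seed, and the sample sizes are illustrative assumptions, not part of the definition):

rng(0)                      % fix the seed so the run is reproducible
MU = 0.5;                   % true mean of uniform(0,1) draws
for n = [10 1000 100000]
  Z = rand(n,1);            % n iid draws from the uniform(0,1) distribution
  disp([n mean(Z)])         % the sample average approaches MU as n grows
end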

Source: econterms

strongly consistent

An estimator for a parameter is strongly consistent if the estimator goes to the true value almost surely as the sample size n goes to infinity. This is a stronger condition than weak consistency; that is, all strongly consistent estimators are weakly consistent but the reverse is not true.

Source: econterms

strongly dependent

A time series process {x_t} is strongly dependent if it is not weakly dependent; that is, if it is strongly autocorrelated, either positively or negatively.

Example 1: A random walk with correlation 1 between observations is strongly dependent.

Example 2: An iid process is not strongly dependent.

Source: econterms

strongly ergodic

A stochastic process may be strongly ergodic even if it is nonstationary. A strongly ergodic process is also weakly ergodic.

Source: econterms

Strongly Pareto Optimal

A strongly Pareto optimal allocation is one such that no other allocation would be both (a) as good for everyone and (b) strictly preferred by some.

Source: econterms

structural break

A structural change detected in a time series sample.

Source: econterms

structural change

A change in the parameters of a structure generating a time series. There exist tests for whether the parameters changed. One is the Chow test.

Examples: (planned)

Source: econterms

structural moving average model

The model is a multivariate, discrete-time, dynamic econometric model. Let y_t be an n_y x 1 vector of observable economic variables, C(L) an n_y x n_e matrix of lag polynomials, and e_t an n_e x 1 vector of exogenous unobservable shocks, e.g. to labor supply, the quantity of money, and labor productivity. Then:
y_t = C(L) e_t
is a structural moving average model.

Source: econterms

structural parameters

Underlying parameters in a model or class of models.

If a theoretical model explains two effects of variable x on variable y, one of which is positive and one negative, they are structurally separate. In another model, in which only the net effect of x on y is relevant, one structural parameter for the effect may be sufficient.

So a parameter is structural if a theoretical model has a distinct structure for its effect. The definition is not absolute, but relative to a model or class of models which are sometimes left implicit.

Source: econterms

structural unemployment

Unemployment that arises because there is no demand for the workers who are available. Contrast frictional unemployment.

Source: econterms

structure

A model with its parameters fixed. One can discuss properties of a model with various parameters, but 'structural' properties are those that are fixed unless parameters change.

Source: econterms

Student t

Synonym for the t distribution. The name came about because the original researcher who described the t distribution wrote under the pseudonym 'Student'.

Source: econterms

stylized facts

Observations that have been made in so many contexts that they are widely understood to be empirical truths, to which theories must fit. Used especially in macroeconomic theory. Considered unhelpful in economic history, where context is central.

Source: econterms

subdifferential

A class of slopes. By example -- consider the top half of a stop sign as a function graphed on the xy-plane. It has well-defined derivatives except at the corners; at the points where the derivative exists, the subdifferential is made up of only that one slope, the derivative. At the corners there are many 'tangents' which define lines that are everywhere above the stop sign except at the corner. The slopes of those lines are members of the subdifferential at those points.

In general equilibrium usage, the subdifferential can be a class of prices: the set of prices such that expanding the total endowment constraint would not cause buying and selling, because the agents have optimized perfectly with respect to those prices. So if a set of prices is possible for a Walrasian equilibrium, it is in the subdifferential of that allocation.

Source: econterms

subgame perfect equilibrium

An equilibrium in which the strategies are a Nash equilibrium, and, within each subgame, the parts of the strategies relevant to the subgame make a Nash equilibrium of the subgame.

Source: econterms

Subgame perfect equilibrium

In extensive-form games with complete information, many strategy profiles that form best responses to one another imply incredible threats or promises that a player would not actually want to carry out once he faces an (unexpected) off-equilibrium move of an opponent. If the profile of strategies is such that no player wants to amend his strategy at whatever decision node can be reached during the play of the game, the profile is called subgame perfect. In this sense, a subgame-perfect strategy profile is 'time consistent': it remains an equilibrium in whatever truncation of the original game (subgame) the players may find themselves.

Source: SFB 504

submartingale

A kind of stochastic process; one in which the expected value of next period's value, as projected on the basis of the current period's information, is greater than or equal to the current period's value.
This kind of process could be assumed for securities prices.

Source: econterms

subordinated

Adjective. A particular debt issue is said to be subordinated if it was senior but because of a subsequent issue of debt by the same firm is no longer senior. One says, 'subordinated debt'.

Source: econterms

substitution bias

A possible problem with a price index. Consumers can substitute goods in response to price changes. For example when the price of apples rises but the price of oranges does not, consumers are likely to switch their consumption a little bit away from apples and toward oranges, and thereby avoid experiencing the entire price increase. A substitution bias exists if a price index does not take this change in purchasing choices into account, e.g. if the collection ('basket') of goods whose prices are compared over time is fixed.

'For example, when used to measure consumption prices between 1987 and 1992, a fixed basket of commodities consumed in 1987 gives too much weight to the prices that rise rapidly over the timespan and too little weight to the prices that have fallen; as a result, using the 1987 fixed basket overstates the 1987-92 cost-of-living change. Conversely, because consumers substitute, a fixed basket of commodities consumed in 1992 gives too much weight to the prices that have fallen over the timespan and too little to the prices that have risen; as a result, the 1992 fixed basket understates the 1987-92 cost-of-living change.' (Triplett, 1992)

Source: econterms

SUDAAN

A statistical software program designed especially to analyze clustered data and data from sample surveys. The SUDAAN Web site is at http://www.rti.org/patents/sudaan/sudaan.html.

Source: econterms

sufficient statistic

Suppose one has samples from a distribution, does not know exactly what that distribution is, but does know that it comes from a certain set of distributions that is determined partly or wholly by a certain parameter, q. A statistic is sufficient for inference about q if and only if the values of any sample from that distribution give no more information about q than does the value of the statistic on that sample.

E.g. if we know that a distribution is normal with variance 1 but has an unknown mean, the sample average is a sufficient statistic for the mean.

Source: econterms

sunk costs

Unrecoverable past expenditures. These should not normally be taken into account when determining whether to continue a project or abandon it, because they cannot be recovered either way. It is a common instinct to count them, however.

Source: econterms

sup

Stands for 'supremum'. A value is a supremum with respect to a set if it is at least as large as any element of that set. A supremum exists in contexts where a maximum does not, because (say) the set is open; e.g. the set (0,1) has no maximum, but 1 is a supremum.

sup is a mathematical operator that maps from a set to a value that is syntactically like an element of that set, although it may not actually be a member of the set.

Source: econterms

superlative index numbers

'What Diewert called 'superlative' index numbers were those that provide a good approximation to a theoretical cost-of-living index for large classes of consumer demand and utility function specifications. In addition to the Tornqvist index, Diewert classified Irving Fisher's 'Ideal' index as belonging to this class.' -- Gordon, 1990, p. 5

From Harper (1999, p. 335): 'The term 'superlative index number' was coined by W. Erwin Diewert (1976) to describe index number formulas which generate aggregates consistent with flexible specifications of the production function.'

Two examples of superlative index number formulas are the Fisher Ideal index and the Tornqvist index. These indexes 'accommodate substitution in consumer spending while holding living standards constant, something the Paasche and Laspeyres indexes do not do.' (Triplett, 1992, p. 50)

Source: econterms

superneutrality

Money in a model 'is said to be superneutral if changes in [nominal] money growth have no effect on the real equilibrium.' Contrast neutrality.

Source: econterms

supply curve

For a given good, the supply curve is a relation between each possible price of the good and the quantity that would be supplied for market sale at that price.

Drawn in introductory classes with this arrangement of the axes, although price is thought of as the independent variable:

Price   |         / Supply
        |       /
        |     /
        |   /
        |________________________
                        Quantity

Source: econterms

support

of a distribution. Informally, the domain of the probability function; includes the set of outcomes that have positive probability. A little more exactly: a set of values that a random variable may take, such that the probability is one that it will take one of those values. Note that a support is not unique, because it could include outcomes with zero probability.

Source: econterms

SUR

Stands for Seemingly Unrelated Regressions. The situation is one where the errors across equations are thought to be correlated, and one would like to use this information to improve estimates. One makes an SUR estimate by estimating each equation separately to calculate the cross-equation covariance matrix of the errors, then running GLS.
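A minimal two-equation sketch of that procedure (the simulated data and all parameter values are illustrative assumptions):

T = 200;                                  % sample size
X1 = [ones(T,1) randn(T,1)];              % regressors, equation 1
X2 = [ones(T,1) randn(T,1)];              % regressors, equation 2
e = randn(T,2) * [1 0.5; 0 1];            % errors correlated across equations
y1 = X1*[1;2] + e(:,1);
y2 = X2*[3;4] + e(:,2);
u = [y1 - X1*(X1\y1), y2 - X2*(X2\y2)];   % residuals from equation-by-equation OLS
S = (u'*u)/T;                             % estimated cross-equation error covariance
X = blkdiag(X1,X2); y = [y1; y2];         % stack the two equations into one system
W = kron(inv(S), eye(T));                 % GLS weighting matrix for the stacked system
b = (X'*W*X)\(X'*W*y)                     % SUR (feasible GLS) coefficient estimates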

The term comes from Arnold Zellner and may have been used first in Zellner (1962).

Source: econterms

SURE

same as SUR estimation.

Source: econterms

Survey of Consumer Finances

There is a U.S. survey and a Canadian survey by this name.

The U.S. one is a survey of U.S. households by the Federal Reserve which collects information on their assets and debt. The survey oversamples high income households because that's where the wealth is. The survey has been conducted every three years since 1983.

The Canadian one is an annual supplement to the Labor Force Survey that is carried out every April.

Source: econterms

survival function

From a model of durations between events (which are indexed here by i). The probability that an event has not happened since event (i-1), as a function of time.
E.g., denote that probability by S_i():
S_i(t | t_{i-1}, t_{i-2}, ...)

Source: econterms

SVAR

Structural VAR (vector autoregression).
The SVAR representation of an SMA model comes from inverting the matrix of lag polynomials C(L) (see the SMA definition) to get: A(L) y_t = e_t
The SVAR is useful for (1) estimating A(L), and (2) reconstructing the shocks e_t if A(L) is known.

Source: econterms

symmetric

A matrix M is symmetric if for every row i and column j, element M[i,j] = M[j,i].

Source: econterms

Synonyms

Rückschau-Fehler (German for 'hindsight bias'), knew-it-all-along effect, creeping determinism

Source: SFB 504

T

t distribution

Defined in terms of a normal variable and a chi-squared variable. Let z ~ N(0,1) and v ~ chi-squared(n). (That is, v is drawn from a chi-squared distribution with n degrees of freedom.) Then t = z(n/v)^(1/2)
has a t distribution with n degrees of freedom. The t distribution is a one-parameter family of distributions; n is that parameter here. The t distribution is symmetric around zero and asymptotically (as n goes to infinity) approaches the standard normal distribution.

Mean is zero, and variance is n/(n-2) (for n > 2).
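A small Monte Carlo check of this construction (the degrees of freedom, seed, and number of draws are illustrative assumptions):

rng(1)
n = 5; N = 100000;
z = randn(N,1);                 % standard normal draws
v = sum(randn(N,n).^2, 2);      % chi-squared(n) as a sum of n squared normals
t = z .* sqrt(n ./ v);          % t with n degrees of freedom
disp([mean(t) var(t) n/(n-2)])  % sample mean near 0; sample variance near n/(n-2)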

Source: econterms

t statistic

After an estimation of a coefficient, the t-statistic for that coefficient is the ratio of the coefficient to its standard error. That can be tested against a t distribution (which see) to assess how plausible it is that the true value of the coefficient is really zero.

Source: econterms

tangent cone

Informally: a set of vectors that is tangent to a specified point.

Source: econterms

Target return

To measure the shortfall risk and the excess chance, the investor has to define a target return. This can be a deterministic one, for example a riskless attainable final wealth position or a minimum return specified by a controlling authority or by the market. Besides this, it is also interesting to consider a random target return, e.g. the return or the price of a stock or bond index.

Source: SFB 504

team production

Defined by Alchian and Demsetz (1972) this way: "Team productive activity is that in which a union, or joint use, of inputs yields a larger output than the sum of the products of the separately used inputs." (p. 794)

Source: econterms

technical change

A change in the amount of output produced from the same inputs. Such a change is not necessarily technological; it might be organizational, or the result of a change in a constraint such as regulation, prices, or quantities of inputs.

According to Jorgenson and Stiroh (May 1999 American Economic Review, p 110), total factor productivity (TFP) can sometimes be a synonym for technical change. A possible measure is output per unit of factor input. Jorgenson and Stiroh also explain how it is definitionally possible for a technological revolution not to lead to technical change as measured in these ways.

Source: econterms

technological change

A change in the set of feasible production possibilities.
Contrast technical change.

Source: econterms

technology shocks

An event, in a macro model, that changes the production function. Commonly this is modeled with an aggregate production function that has a scaling factor, e.g.:
F(K_t, N_t) = A_t K_t^a N_t^(1-a)
where A_t is a time series of technology shocks whose values can be estimated or whose stochastic process (joint distribution) might be conjectured to have certain properties.
By this definition the oil shocks of the 1970s were technology shocks -- that is, for any given aggregate capital stock or labor stock, production was more expensive after an oil shock because energy was more expensive. This interpretation explains why real business cycle theory drew interest in economics in the 1970s, after the oil shocks had such a dramatic impact on Western economies.

Source: econterms

tenure

In the context of studies of employees, length of time with current employer in current job. Contrast experience.

Source: econterms

term spreads

"long-term minus short-term interest rates"

Source: econterms

terms of trade

An index of the price of a country's exports in terms of its imports. The terms of trade are said to improve if that index rises. (Obstfeld and Rogoff, p 25)

An analogous use is when comparing relative prices. If the cost of agricultural goods in terms of industrial goods goes up, one might say the 'terms of trade ... shifted in favor of agricultural products.' (North and Thomas, p 108).

Source: econterms

tertiary sector

Literally, 'third sector'. Per Landes, 1969/1993, p 9, refers to the "administrative and service sector of the economy".

In the context of Williamson and Lindert, 1980, p 172, it is defined more specifically as the sector of production outside of agriculture and industry, and includes construction, trade, finance, real estate, private services, government, and sometimes transportation.

Source: econterms

test for structural change

An econometric test to determine whether the coefficients in a regression model are the same in separate subsamples. Often the subsamples come from different time periods. See Chow test.
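A minimal sketch of the Chow version of this test (the simulated data, break date, and sample sizes are illustrative assumptions):

T = 120; k = 2; T1 = 60;                   % sample size, regressors, candidate break date
X = [ones(T,1) randn(T,1)];
y = X*[1;2] + randn(T,1);                  % simulated data with no actual break
ssr = @(yy,XX) sum((yy - XX*(XX\yy)).^2);  % OLS sum of squared residuals
Sp = ssr(y, X);                            % pooled regression over the full sample
S1 = ssr(y(1:T1), X(1:T1,:));              % subsample before the break
S2 = ssr(y(T1+1:T), X(T1+1:T,:));          % subsample after the break
F = ((Sp - S1 - S2)/k) / ((S1 + S2)/(T - 2*k))  % compare to an F(k, T-2k) distribution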

Source: econterms

test of identifying restrictions

synonym for Hausman test, in practice. Only overidentifying restrictions (assumptions) can be tested.

Source: econterms

test statistic

"A random variable [T, in this example] of which the probability distribution is known, either exactly or approximately, under the null hypothesis. We then see how likely the observed value of T is to have occurred, according to that probability distribution. If T is a number that could easily have occurred by chance [under the tested hypothesis], then we have no evidence against the null hypothesis H0. However if it is a number that would occur by chance only rarely, we do have evidence against the null, and may well decide to reject it."

Source: econterms

TFP

Abbreviation for Total Factor Productivity.

Source: econterms

the standard model

Has a variety of meanings, and can be a confusing phrase to outsiders to a discussion. Often implicitly contrasts the model at hand to a simpler, earlier one in the same literature, sometimes with the implication that variations from the earlier one ought, in the speaker's opinion, to be justified explicitly.

A standard model of a firm is one in which it is strictly and always profit maximizing. Often 'profit' is interpreted in a short term way, but depending on context it may refer to a long run present-discounted value kind of profit.

A standard model of individuals seeking jobs is that they are strictly consumption maximizing, and therefore wage maximizing. Occasionally a long-run present discounted value of wages is the objective. If time away from work is relevant, the consumer maximizes some combination of consumption/wage and time away from work, or 'leisure'.

A standard model of international trade is one in which countries specialize toward their comparative advantages.

A standard model of a product market is one in which (1) all producers (called firms) and consumers (thought of as individuals) are price takers, and variations in any one actor's production or consumption have no effect on the price; (2) the demand curve is strictly decreasing (that is, the quantity demanded falls as the price rises); (3) the supply curve is strictly increasing (that is, the quantity supplied rises as the price rises); (4) the good is infinitely divisible.

Source: econterms

Theory of subjective expected utility

The theory of subjective (expected) utility (Savage, 1954) is the central element of the neoclassical theory of rational economic behavior. As such, it is the most important example of a theory of rational behavior. Its basic assumptions are that choices are made:


among a given, fixed set of alternatives;
with (subjectively) known probability distributions of outcomes for each alternative; and
in such a way as to maximize the expected value of a given utility function.

While these assumptions are convenient for many purposes, they may not fit empirically many situations of economic choice. This is the subject of the theory of bounded rationality and the research interest of behavioral economics.

Source: SFB 504

theory of the firm

Subject is: What are the nature, extent, and purposes of firms? This organization of the answers comes from Hart's book.

Categories of answers:

Neoclassical theories of the firm identify it with its production technology, and usually define the driving objective of the firm as maximizing its profits given its technology.

Principal-agent theories of firms -- that firms are organized to divide work among many people in ways that minimize principal agent problems.

Transaction cost theories -- that comprehensive contracts with workers are unrealistic and that the structure of a firm (e.g., a hierarchical one) is useful for efficiently doing a job. First academic paper of this kind was Coase, 1937.

Property rights theories -- that ownership is a source of power ...

From Mokyr's rise and fall paper: firm organization substitutes for contracts; firms reduce uncertainty and opportunistic behavior, and set incentives to elicit efficient responses from agents.

Source: econterms

theta

As used with respect to options: The rate of change of the value of a portfolio of derivatives with respect to time with all else held constant. Formally this is a partial derivative.

Source: econterms

tightness

An attribute of a market.

In securities markets, tightness is "the cost of turning around a position over a short period of time." (Kyle, 1985, p 1316). [Does 'cost' mean trading costs, alone? So does 'turning around' just mean 'trading'?]

A labor market is said to be tight if employers have trouble filling jobs, or if there is a long wait to fill an available job. It is not evidence that the labor market is tight if potential employees have trouble finding jobs or must wait to get one.

Source: econterms

time consistency

Opposite of time inconsistency or dynamic inconsistency.

Source: econterms

time deposits

The money stored in the form of savings accounts at banks.

Source: econterms

time inconsistency

Same as dynamic inconsistency.

Source: econterms

Time preference

Intuitively, time preference describes the preference for present consumption over future consumption. It is a key concept underlying the theory of intertemporal choice. The strength of this preference is measured by the rate of time preference or, equivalently, by a discount factor.

A number of studies have shown that the standard theory of intertemporal choice is frequently violated in experimental settings, just as the standard (static) expected utility (EU) theory of choice is systematically violated. The main findings of such experiments are (see Camerer 1995, pp. 649-51):


Implicit discount rates decline with the time horizon (formally, this implies that the discount function is hyperbolic rather than exponential);
discount rates are larger for gains than for losses of equal magnitude;
people demand more to delay consumption than to speed it up.

How alternative models of intertemporal choice (such as hyperbolic discounting) can be incorporated in applied economic research is an important field of current research.

Source: SFB 504

time preference

A utility function may or may not have the property of time preference. Time preference is an intense preference to receive goods or services immediately.

For this term to describe a utility function, the discounting applied to avoid delay must be more than multiplicatively linear in the delay time. In theory this attribute is analytically distinct from other reasons to want something sooner, such as interest rates; the bounded rationality problem of remembering how and when to consume the good later; or the discounting of future events for reasons of opportunity, risk, or uncertainty (e.g., the chance of surviving to a later time).

There is evidence that human behavior exhibits great impatience, which might be modeled well by time preference and can perhaps be distinguished from these other factors. So one may read references to empirical observations of time preference, though as far as this editor can tell the concept is quite theoretical, and some jump is required to leave all other explanations aside and link it directly to an observation.

Source: econterms

time series

A stochastic process where the time index takes on a finite or countably infinite set of values. Denoted, e.g., {X_t | for all integers t}.
See Editor's comment on time series.

Source: econterms

time-varying covariates

Means the same thing as time-dependent covariates; that the covariates (regressors, probably) change over time.

Source: econterms

tit-for-tat

A strategy in a repeated game (or a series of similar games). When a Prisoner's dilemma game is repeated between the same players, the tit-for-tat strategy is to choose the 'cooperate' action unless in the previous round, one's opponent chose to defect, in which case one responds by choosing to defect this round. This tends to induce cooperative behavior against an attentive opponent.
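A minimal simulation of the strategy against a fixed sequence of opponent moves (the payoff values and the opponent history are illustrative assumptions, not part of the definition):

R = 3; S = 0; Tp = 5; P = 1;         % assumed payoffs: reward, sucker, temptation, punishment
pay = [R S; Tp P];                   % pay(i,j): my payoff playing i (1=cooperate, 2=defect) vs j
opp = [1 1 2 2 1 2 1 1 2 1];         % an arbitrary opponent history, revealed round by round
a = 1; total = 0;                    % tit-for-tat opens with 'cooperate'
for t = 1:length(opp)
  total = total + pay(a, opp(t));    % collect this round's payoff
  a = opp(t);                        % next round: copy the opponent's last move
end
disp(total)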

Source: econterms

Tobin tax

A tax on foreign currency exchanges.

Source: econterms

Tobin's marginal q

The ratio of the change in the value of the firm to the added capital cost for a small increment to the capital stock. If the firm is in equilibrium, its marginal q is one; all investments that add more to the value of the firm than their cost have already been undertaken. If we knew the replacement cost of capital, we could look up the stock market value of a firm and calculate its average q directly.

Source: econterms

Tobin's q

This description comes from Dow and Gorton (1996): the ratio of the current market value of a firm's assets to their cost. If q is greater than 1, the firm should increase its capital stock. It follows that, according to "Fischer and Merton (1984), 'the stock market should be a predictor of the rate of corporate investment' (p 84-85)" -- that is, "rising stock prices cause higher investment [by firms]. The empirical evidence is consistent with this view: investment in plant and equipment increases following a rise in stock prices in all countries that have been studied. In fact, lagged stock returns outperform q in predicting investment [at both] the macroeconomic level and in cross-sections of firms. See Barro (1990), Bosworth (1975), and Welch (1994)."

Source: econterms

tobit model

An econometric model in which the dependent variable is censored; in the original model of Tobin (1958), for example, the dependent variable was expenditures on durables, and the censoring occurs because values below zero are not observed.
The model is:
y_i* = b x_i + u_i, where u_i ~ N(0, s^2)
But y_i* (e.g., durable goods desired by the consumer described by variables x_i) is not observed.
y_i = y_i* if y_i* > y_0, and y_i = y_0 otherwise
y_i is observed.
y_0 is known. s^2 is often treated as known. The x_i's are observed for all i.
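A minimal maximum likelihood sketch for this model (the simulated data are an illustrative assumption; the erf-based normal cdf and the OLS starting value are implementation choices, not part of the model):

n = 500; X = [ones(n,1) randn(n,1)];         % simulated regressors (an assumption)
s = 1; y0 = 0;                               % known error s.d. and censoring point
ystar = X*[0.5; 1] + s*randn(n,1);           % latent variable
y = max(ystar, y0);                          % observed, censored at y0
npdf = @(z) exp(-z.^2/2)/sqrt(2*pi);         % standard normal pdf
ncdf = @(z) 0.5*(1 + erf(z/sqrt(2)));        % standard normal cdf
obs = y > y0;                                % uncensored observations
negll = @(b) -( sum(log(npdf((y(obs) - X(obs,:)*b)/s)/s)) ...
              + sum(log(ncdf((y0 - X(~obs,:)*b)/s))) );   % censored obs contribute Pr(y* <= y0)
bhat = fminsearch(negll, X\y)                % minimize, starting from OLS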

Source: econterms

top-coded

For data recorded in groups, e.g. 0-20, 21-50, 50-100, 101-and-up, we do not know the average or distribution of the top category, just its lower bound and quantity. That data is 'top-coded.' We may adjust for it by scaling up the top-code and calling that the average.

Source: econterms

topological space

A pair of sets (X, t) such that t is a topology in X. See topology.

Source: econterms

topology

Is defined with respect to a set X. A 'topology in X' is a set of subsets of X satisfying several criteria. Let t denote a topology in X. The sets in t are by definition 'open sets' with respect to t, and sets outside of t are not. t satisfies the following:
(1) X and the null set are in t.
(2) Finite or infinite unions of open sets (that is, elements of t) are also in t.
(3) Finite intersections of open sets are in t.

Comments and related definitions:
More than one topology in X may be possible for a given set X.

The complement of a set in t is said to be a 'closed set'.

Elements of X may be called 'points'.

A 'neighborhood' of a point x is any open set containing x.

Let M be a subset of X. A point x in X is a 'contact point' of M if every neighborhood of x contains at least one point of M; and x would be a 'limit point' of M if every neighborhood of x contained infinitely many points of M. The set of all contact points of M is the 'closure' of M.

A 'topological space' is a pair of sets (X, t) satisfying the above.
All metric spaces are topological spaces. The sets one would call open in a metric space satisfy the criteria above; one could also label all subsets of X as open for the purpose of listing the members of the topology, and they would then satisfy the definition above.

Given two topologies t1 and t2 on the same set X, we say that 't1 is stronger than t2', or equivalently that 't2 is weaker than t1' if every set in t2 is in t1. A stronger topology thus has at least as many elements as a weaker one.

Source: econterms

Tornqvist index

Defined in Hulten to be a discrete-time approximation to a Divisia index, in which averages over time fill in the quantities of capital and labor.

The Tornqvist index is a superlative index formula. It was developed in the 1930s at the Bank of Finland, according to Triplett (1992).

Defined at length in Dean & Harper, 1998, pages 8-9.

See also http://www.geocities.com/jeab_cu/paper2/paper2.htm.

Source: econterms

total factor productivity

Given the macro model Y_t = Z_t F(K_t, L_t), Total Factor Productivity (TFP) is defined to be Y_t / F(K_t, L_t).

Likewise, given Y_t = Z_t F(K_t, L_t, E_t, M_t), TFP is Y_t / F(K_t, L_t, E_t, M_t).

The Solow residual is a measure of TFP. TFP presumably changes over time. There is disagreement in the literature over the question of whether the Solow residual measures technology shocks. Efforts to change the inputs, like Kt, to adjust for utilization rate and so forth, have the effect of changing the Solow residual and thus the measure of TFP. But the idea of TFP is well defined for each model of this kind.

TFP is not necessarily a measure of technology since the TFP could be a function of other things like military spending, or monetary shocks, or the political party in power.

"Growth in total-factor productivty (TFP) represents output growth not accounted for by the growth in inputs." -- Hornstein and Krusell (1996). Disease, crime, and computer viruses have small negative effects on TFP using almost any measure of K and L, although with absolutely perfects measures of K and L they might disappear. Reason: crime, disease, and computer viruses make people AT WORK less productive.

Source: econterms

totally mixed strategy

In a noncooperative game, a totally mixed strategy of a player is a mixed strategy giving positive probability weight to every pure strategy available to the player.

For a more formal definition see Pearce, 1984, p 1037. This is a rough paraphrase.

Source: econterms

Townsend inefficiency

A possible property of monetary exchange: one of the parties evaluates the value of the money he gets in the transaction, not the utility he generated in production.

Source: econterms

trace

The trace of a square matrix A is the sum of the elements on its diagonal. Has the property that tr(AB)=tr(BA).

Source: econterms

Tragedy of the commons

A metaphor for the public goods problem that it is hard to coordinate and pay for public goods. The term comes from Hardin (1968). The commons is a pasture held by a group. Each individual owns sheep and has the incentive to put more and more sheep on the pasture for private gain. The overall effect of many individuals doing this overwhelms the carrying capacity of the pasture, and the sheep cannot all survive.

Source: econterms

trajectory

A series of states in a dynamical system {N_0, N_1, N_2, ...}. For a deterministic generator function F() such that N_{t+1} = F(N_t), we have N_1 = F(N_0), N_2 = F(F(N_0)), etc.

Source: econterms

transactions costs

Made up of three types per North and Thomas (1973) p 93:

-- search costs (the costs of locating information about opportunities for exchange)
-- negotiation costs (costs of negotiating the terms of the exchange)
-- enforcement costs (costs of enforcing the contract)

Source: econterms

transactions demand

The transactions demand for money is positively related to income and negatively related to the interest rate.

Source: econterms

transient

In the context of stochastic processes, "A state is called transient if there is a positive probability of leaving and never returning." -- Stokey and Lucas, p 322

Source: econterms

transition economics

Since about 1992 this term has come to mean the subject of the transition of post-Soviet economies toward a Western free market model.

It almost never refers to other kinds of transitions that economies might undergo, nor to the subject labeled development economics.

Source: econterms

translog

The translog production function is a generalization of the Cobb-Douglas production function. The name stands for 'transcendental logarithmic'. See Greene, 2nd edition, pp. 209-210. Cited to Berndt and Christensen (1972); elsewhere said to have been introduced by Christensen, Jorgenson, and Lau (1971), pp. 255-6. Applied to a case like Y=f(K,L), where f() is replaced by the translog. Its use always seems to be in estimation, not in theory. It avoids strong assumptions about the functional form of the production function and can approximate any other production function to the second degree. The regression run is, e.g. (from Greene, p. 209):

ln Y = b1 + b2 (ln L) + b3 (ln K) + b4 (ln L)^2/2 + b5 (ln K)^2/2 + b6 (ln L)(ln K) + e

The Cobb-Douglas estimation is like this but with the restriction that b4=b5=b6=0.

Greene, p. 210, says that the elasticity of output with respect to capital in this model is:

(d ln Y)/(d ln K) = b3 + b5 (ln K) + b6 (ln L)

From Lau (1996) in _Mosaic_: flexible functional forms such as the translog production function allow 'the production [function?] elasticities to change with differing relative factor proportions.' (p 76)
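A minimal OLS estimation sketch (the simulated data, sample size, and coefficient values are illustrative assumptions; the data here are actually Cobb-Douglas, so b4, b5, b6 should come out near zero):

n = 500;
L = exp(randn(n,1)); K = exp(randn(n,1));             % simulated inputs
lnY = 1 + 0.6*log(L) + 0.3*log(K) + 0.05*randn(n,1);  % simulated Cobb-Douglas output
X = [ones(n,1) log(L) log(K) log(L).^2/2 log(K).^2/2 log(L).*log(K)];
b = X\lnY                                             % OLS estimates of b1..b6
eK = b(3) + b(5)*log(K) + b(6)*log(L);                % elasticity of output w.r.t. capital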

Source: econterms

transpose

A matrix operation. The transpose of an M x N matrix A is an N x M matrix, denoted A' or AT, in which the top row of A has been made into the first column of A', the second row of A has been made into the second column of A', and so forth.

Source: econterms

transversality condition

Limits solutions to an infinite-period dynamic optimization problem. Intuitively, it rules out those that involve accumulating, for example, infinite debt. The transversality condition (TC) can be obtained by considering a finite, T-period horizon version of the problem of maximizing present value, obtaining the first-order condition for n_{t+T}, and then taking the limit of this condition as T goes to infinity. The form is often: (TC) lim (as T goes to infinity) b^T ... = 0

Source: econterms

treatment effects

In the language of experiments, a treatment is something done to a person that might have an effect. In the absence of experiments, discerning the effect of a treatment like a college education or a job training program can be clouded by the fact that the person made the choice to be treated. The outcomes are a combined result of the person's propensity to choose the treatment, and the effects of the treatment itself. Measuring the treatment's effect while screening out the effects of the person's propensity to choose it is the classic treatment effects problem.

A standard way to do this is to regress the outcome on other predictors that do not vary with time, as well as whether the person took the treatment or not. An example is a regression of wages not only on years-of-education but also on test scores meant to measure abilities or motivation. Both years-of-education and test scores are positively correlated with subsequent wages, and when interpreting the findings the coefficient found on years of education has been partly cleansed of the factors predicting which people would have chosen to have more education.

A more advanced method is the Heckman two-step.

Source: econterms

trembling hand perfect equilibrium

Defined by Selten (1975). Now perfect equilibrium is considered a synonym.

Source: econterms

trend stationary

A time series process is trend stationary if after trends were removed it would be stationary.

Following Phillips and Xiao (1998): iff a time series process y_t can be decomposed into the sum of other time series as below, it is trend stationary:

y_t = g'x_t + s_t

where g is a k-vector of constants, x_t is a k-vector of deterministic trends, and s_t is a stationary time series. Phillips and Xiao (1998), p. 2, say that x_t may be "more complex than a simple time polynomial. For example, time polynomials with sinusoidal factors and piecewise time polynomials may be used. The latter corresponds to a class of models with structural breaks in the deterministic trend."

Whether all researchers would include statistical models with structural breaks in the class of those that are trend stationary, as Phillips and Xiao do, is not known to this writer.

Note that this definition is designed to discuss the question of whether a statistical model is trend stationary. To decide if one should think of a particular time series sample as trend stationary requires imposing a statistical model first.

Source: econterms

triangular kernel

The triangular kernel is this function: (1-|u|) for -1<u<1, and zero for u outside that range. Here u=(x-x_i)/h, where h is the window width, the x_i are the values of the independent variable in the data, and x is the value of the independent variable for which one seeks an estimate.
For kernel estimation.
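A minimal sketch of one use, a Nadaraya-Watson kernel regression estimate built on this kernel (the simulated data, window width, and evaluation point are illustrative assumptions; Nadaraya-Watson is just one kernel estimator among several):

n = 200;
xi = 2*pi*rand(n,1);                 % simulated independent variable
yi = sin(xi) + 0.3*randn(n,1);       % simulated dependent variable
h = 0.5; x = 1.0;                    % window width and evaluation point
u = (x - xi)/h;
w = max(1 - abs(u), 0);              % triangular kernel weights; zero where |u| >= 1
yhat = sum(w .* yi) / sum(w)         % estimate of E[y|x]; sin(1) is about 0.84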

Source: econterms

truncated dependent variable

A dependent variable in a model is truncated if observations cannot be seen when it takes on values in some range. That is, both the independent and the dependent variables are not observed when the dependent variable is in that range.

A natural example is that if we have data on consumption purchases, if a consumer's willingness-to-pay for a certain product is negative, we will never see evidence of it no matter how low the price goes. Price observations are truncated at zero, along with identifying characteristics of the consumer in this kind of data.

Contrast censored dependent variables.

Source: econterms

TSP

Time series econometrics software

Source: econterms

Tukey boxplot

A way of showing a distribution on a line, so that distributions can be compared easily in a single diagram. Used more in statistics than in econometrics. A thin box marks out the 25th to 75th percentiles; a dash within that box marks the median; a line marks the outer part of the distribution; and outside dots or stars mark outliers. (The exact range of the line is also derived from the location of the quartiles; its exact definition I do not understand from Quah, 1997; maybe it is clear in Cleveland 1993.)

A rough example; consider two continuous distributions that range from 0 to 4:

0    1    2    3    4
 |--[==+===]---|  * *      <= the first distribution
      |-[=+==]---|         <= the second distribution
The first distribution has a median around 1.3, and the main part of it ranges from .3 to 3.0. There are some outliers at the top. The second distribution has a median near 2.0, and is more narrowly concentrated than the first, with few outliers.

Source: econterms

tutorial: Matlab

From a Unix shell one can just type 'matlab' as a command on any computer that has it, and start to type interactive statements such as those below. One could also put them in a file with the .m extension to run them from within matlab with 'run file.m' or from the shell with 'matlab < file.m' This tutorial covers very little but you can see something of the language.

%  The percent sign begins comments.
%  The statements below can be typed interactively one per line to get
% clear responses from Matlab.  No need to type the comment part at the
% end of the lines.  Make sure to use upper and lower case in the
% same way as in the statements shown.

A=[1 2;3 4]   % defines matrix A as a 2x2 with first line [1 2]
B=A'          % transpose
B=A+A         % sum, element by element
Ainv=inv(A)   % takes inverse of a matrix
A*Ainv        % calculates and prints the result of a matrix multiplication
B=[A;A]       % stacked so B has twice as many rows as A
B=[A A]       % the A's are side by side.  B has twice as many columns as A.
B=A(1,1)      % B is a scalar now, the upper left element of A
B=A'*A        % matrix multiplication
B=A(:,1)      % B is set to the first column of A
B=A.*A        % element by element multiplication
B=B./A        % element by element division
A=zeros(3,3)  % special definition of a matrix of zeros
B=ones(3,1)   % defines a matrix of ones
A=eye(5)      % defines identity matrix
B=A(1:2,1:3)  % takes part of matrix
more on       % may not be needed; prevents help screen from scrolling off
help *        % shows sample of the help available

Source: econterms

two stage least squares

An instrumental variables estimation technique. Extends the IV idea to a situation where one has more instruments than independent variables in the model. Suppose one has a model:
y = Xb + e
Here y is a T x 1 vector of dependent variables, X is a T x k matrix of independent variables, b is a k x 1 vector of parameters to estimate, and e is a T x 1 vector of errors. The matrix of independent variables X may be correlated with the e's. Then one can use a T x r matrix of instruments Z, uncorrelated with the e's, where r > k:
Stage 1: By OLS, regress the X's on the Z's to get Xhat = Z(Z'Z)^(-1)Z'X.
Stage 2: By OLS, regress y on the Xhat's. This gives a consistent estimate of b.
The stages can be combined into one for maximum speed:
b = (X'P_z X)^(-1) X'P_z y
where P_z, the projection matrix of Z, is defined to be:
P_z = Z(Z'Z)^(-1) Z'
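A minimal numerical sketch of both routes to the estimator (the simulated data generating process is an illustrative assumption):

T = 500;
z = randn(T,2); e = randn(T,1);
x = z*[1; 0.5] + e + randn(T,1);      % x is endogenous: correlated with e
X = [ones(T,1) x]; Z = [ones(T,1) z]; % k = 2 regressors, r = 3 instruments
y = X*[1; 2] + e;
Xhat = Z*((Z'*Z)\(Z'*X));             % stage 1: fitted values of X regressed on Z
b_two_stage = Xhat\y                  % stage 2: OLS of y on Xhat
Pz = Z*((Z'*Z)\Z');                   % projection matrix of Z
b_one_step = (X'*Pz*X)\(X'*Pz*y)      % the same estimate, computed in one step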

Source: econterms

two-factor model

suggests a production model with two factors of production, labor L and capital K.

Source: econterms

tying

Tying is the vendor practice of requiring customers of one product to buy others.
Tying can be said to impede trade in that the customer's choices are restricted. If the customer were free to buy the product without further conditions, the customer would apparently be better off than if the product has strings attached. Tying could, however, be efficiency-enhancing by (1) reducing the number of market transactions (an efficiency of scale), or by (2) enabling a work-around of a regulation, such as offering a bargain in conjunction with a price-controlled product.
A historical example: years ago lessees of IBM mainframes had to agree to buy punch cards only from IBM. Those punch cards were sold at a higher price than on the open market. So the customer would have been better off with the same contract minus this clause. But one could argue that tying the products this way improved competition. It could be that IBM was trying to charge heavy users of the computer more than light users by putting a surcharge on the punch cards. If so, IBM found a way to bill customers for one of its costs, computer maintenance. The practice would theoretically encourage customers to optimize their use of the computer rather than use it excessively. In this case the practice might be pro-competitive.

Source: econterms

type I error

That is, 'type one error.' This is the error in testing a hypothesis of rejecting a hypothesis when it is true.

Source: econterms

type I extreme value distribution

Has the cdf F(x)=exp(-exp(-x)).

(Devine and Kiefer write F(x)=exp(-exp(-x)); the difference may be in the range of x? must write this out)

Source: econterms

type II error

That is, 'type two error.' This is the error in testing a hypothesis of failing to reject a hypothesis when it is false.

Source: econterms

Type of players

In a game of incomplete information, the payoff-relevant private information of a player, like the numerical level of a certain parameter in his von Neumann-Morgenstern utility function, is called his type. In a game of incomplete information, each player has at least two possible types. Types can also be represented as varying continuously.

Source: SFB 504

Type1 Arbitrage

is a trading strategy that generates a strictly positive cash flow between 0 and T in at least one state with positive probability and does not require an outflow of funds at any date; that is, a trading strategy that produces something from nothing. A simple example of this kind of arbitrage is the opportunity to borrow and lend at two different rates of interest.

Source: SFB 504

Type2 Arbitrage

generates a net future cash flow of at least zero for sure, with the arbitrageur getting his profits up front. This kind of arbitrage is referred to as a free lunch. The simultaneous purchase and sale of the same or an essentially similar security in two different markets at advantageously different prices illustrates this case.

This textbook definition of arbitrage requires no capital and entails no risk. We do not expect such strategies to exist in the equilibrium state of efficient securities markets.

Source: SFB 504

U

ultimatum game

An experiment. There are two players, an allocator A and a recipient R, who in the experiment do not know one another. They have received a windfall, e.g., of $1. The allocator, moving first, proposes to split the windfall by proposing to take share x, so that A receives x and R receives 1-x. The recipient can accept this allocation, or reject it, in which case both get nothing. The subgame perfect equilibrium outcome is that A would offer the smallest possible amount to R, e.g., the share $.99 for A and $.01 for R, and that the recipient should accept. The experimental evidence, however, is that A offers a relatively large share to R, often 50-50, and that R would often reject smaller positive amounts. We may interpret R's behavior as willingness to pay a cost to punish "unfair" splits. With regard to A's behavior -- does A care about fairness too? Or is A income-maximizing given R's likely behavior? See also Dictator Game.

Source: econterms

unbalanced data

In a panel data set, there are observations across cross-section units (e.g. individuals or firms), and across time periods. Often such a data set can be represented by a completely filled in matrix of N units and T periods. In the "unbalanced data" case, however, the number of observations per time period varies. (Equivalently one might say that the number of observations per unit is not always the same.) One might handle this by letting T be the total number of time periods and N_t be the number of observations in each period.

Source: econterms

unbiased

An estimator b of a distribution's parameter B is unbiased if the mean of b's sampling distribution is B. Formally, if: E[b] = B.

Source: econterms

uncertainty

If outcomes will occur with a probability that cannot even be estimated, the decisionmaker faces uncertainty. Contrast risk.

This meaning to uncertainty is attributed to Frank Knight, and is sometimes referred to as Knightian uncertainty.

The decisionmaker can apply game theory even in such a circumstance, e.g. the choice of a dominant strategy.

Kreps (1988), p 31, writes that three standard ways of modeling choices made under conditions of uncertainty are with von Neumann-Morgenstern expected utility over objective uncertainty, the Savage axioms for modeling subjective uncertainty, and the Anscombe-Aumann theory which is a middle course between them.

A recent ad for a new book edited by Haim Levy (Stochastic Dominance: Investment Decision Making under Uncertainty) considers three ways of modeling investment choices under uncertainty: by tradeoffs between mean and variance, by choices made by stochastic dominance, and non-expected utility approaches using prospect theory.

Source: econterms

uncorrelated

Two random variables X and Y are uncorrelated if E(XY) = E(X)E(Y). Note that this does not guarantee that they are independent.

Source: econterms

under the null

Means "assuming the hypothesis being tested is true."

Source: econterms

unemployment

The state of an individual looking for a paying job but not having one.
Does not include full-time students, the retired, children, or those not actively looking for a paying job.

Source: econterms

uniform distribution

A continuous distribution over a range which we will denote [a,b]. The pdf is 1/(b-a) on that range; the cdf is (x-a)/(b-a). The mean is .5*(a+b). The variance is (1/12)(b-a)^2.

Source: econterms

uniform kernel

The uniform kernel function is 1/2 for -1<u<1, and zero outside that range. Here u=(x-x_i)/h, where h is the window width, the x_i are the values of the independent variable in the data, and x is the value of the independent variable for which one seeks an estimate. Unlike the Gaussian kernel, this one is bounded in the x direction: only data points within one window width of x enter the estimate, and each of those receives equal weight.
For kernel estimation.

Source: econterms

uniform weak law of large numbers

See Wooldridge chapter, p 2651. The UWLLN applies to a non-random criterion function q_t(w_t, q) if the sample average of q_t() for a sample {w_t} from a random time series is a consistent estimator for E(q_t()).

A law like this is proved with Chebyshev's inequality.

Source: econterms

union threat model

"Firms may find it profitable to pay wages above the market clearing level to try to prevent unionization." In a model this could lead to job rationing and unemployment, just as efficiency wage models can.

Source: econterms

unit root

An attribute of a statistical model of a time series whose autoregressive parameter is one. In a data series y[t] modeled by:
y[t+1] = y[t] + other terms
the series y[] has a unit root.

Source: econterms

unit root test

A statistical test for the proposition that in a autoregressive statistical model of a time series, the autoregressive parameter is one. In a data series y[t], where t a whole number, modeled by:
y[t+1] = ay[t] + other terms
where a is an unknown constant, a unit root test would be a test of the hypothesis that a=1, usually against the alternative that |a|<1.
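A minimal sketch of the regression behind the Dickey-Fuller version of such a test, which rewrites the model as a regression of y[t+1]-y[t] on y[t], so that the coefficient on y[t] estimates a-1 (the simulated series is an illustrative assumption; the statistic must be compared to Dickey-Fuller critical values, not the usual t table):

T = 200;
y = cumsum(randn(T,1));             % a simulated random walk, so a = 1
dy = diff(y);                       % y[t+1] - y[t]
X = [ones(T-1,1) y(1:end-1)];       % constant and the lagged level
b = X\dy;                           % b(2) estimates a - 1
u = dy - X*b;
V = (u'*u)/(T-1-2) * inv(X'*X);     % OLS covariance estimate (T-1 obs, 2 parameters)
tstat = b(2)/sqrt(V(2,2))           % compare to Dickey-Fuller critical values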

Source: econterms

unity

A synonym for the number 'one'.

Source: econterms

univariate

A discrete choice model in which the choice is made from a one-dimensional set is said to be a univariate discrete choice model.

Source: econterms

univariate binary model

For a dependent variable y_i that can be only one or zero, and a continuous independent scalar variable x_i:
Pr(y_i=1) = F(x_i'b)
Here b is a parameter to be estimated, and F is a distribution function. See probit and logit models for examples.

Source: econterms

unrestricted estimate

An estimate of parameters taken without constraining the parameters. See "restricted estimate."

Source: econterms

upper hemicontinuous

no disappearing points.

Source: econterms

urban ghetto

As commonly defined by U.S. researchers: areas where 40 percent or more of residents are poor.

Source: econterms

utilitarianism

A moral philosophy, generally operating on the principle that the utility (happiness or satisfaction) of different people can not only be measured but also meaningfully summed over people and that utility comparisons between people are meaningful. That makes it possible to achieve a well-defined societal optimum in allocations, production, and other decisions, and achieve the goal utilitarian British philosopher Jeremy Bentham described as "the greatest good for the greatest number."

This form of utilitarianism is thought of as extreme, now, partly because it is widely believed that there exists no generally acceptable way of summing utilities across people and comparing between them. Utility functions that can be compared and summed arithmetically are cardinal utility functions; utility functions that only represent the choices that would be made by an individual are ordinal.

Source: econterms

utility

utility is the internal satisfaction that a person acts to optimize

utility curve

synonym for indifference curve.

Source: econterms

Utility, expected utility

The concept of utility enters economic analysis typically via the concept of a utility function, which itself is just a mathematical representation of an individual's preferences over alternative bundles of consumption goods (or, more generally, over goods, services, and leisure). If the individual's preferences are complete, reflexive, transitive, and continuous, then they can be represented by a continuous utility function. In this sense, utility itself is an almost empty concept: it is just a number associated with some consumption bundle. A general treatment of the existence of a utility function is due to Debreu (1964).

Source: SFB 504

UVAR

Unstructured VAR (Vector Autoregression)

Source: econterms

UWLLN

Uniform weak law of large numbers

Source: econterms

V

Validity

Validity tells us the degree to which a test really measures the behaviour it was designed to measure.

Source: SFB 504

value added

A measure of output. Value added by an organization or industry is, in principle:

revenue - non-labor costs of inputs

where revenue can be imagined to be price*quantity, and costs are usually described by capital (structures, equipment, land), materials, energy, and purchased services.

Treatment of taxes and subsidies can be nontrivial.

Value-added is a measure of output which is potentially comparable across countries and economic structures.

Source: econterms

value function

Often denoted v() or V(). Its value is the present discounted value, in consumption or utility terms, of the choice represented by its arguments.
The classic example, from Stokey and Lucas, is:
v(k) = max over k' of { u(k, k') + b v(k') }
where k is current capital,
k' is the choice of capital for the next (discrete time) period,
u(k, k') is the utility from the consumption implied by k and k',
b is the period-to-period discount factor,
and the agent is presumed to have a time-separable utility function, in a discrete time environment, and to make the choice of k' that maximizes the given function.
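A minimal value function iteration sketch in the spirit of this example (log utility, output k^alpha with full depreciation, and the grid bounds are all illustrative assumptions):

alpha = 0.3; b = 0.95; n = 100;                  % assumed technology, discounting, grid size
k = linspace(0.05, 0.5, n)';                     % grid of capital levels
c = repmat(k.^alpha, 1, n) - repmat(k', n, 1);   % c(i,j): consumption choosing k'_j at k_i
c(c <= 0) = NaN;                                 % rule out infeasible choices
v = zeros(n,1);
for it = 1:1000                                  % iterate the Bellman operator to a fixed point
  v = max(log(c) + repmat(b*v', n, 1), [], 2);   % max over the k' choice (NaNs are ignored)
end
plot(k, v)                                       % approximation to the value function v(k)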

Source: econterms

VAR

Vector Autoregression, a kind of model of related time series. In the simplest example, the vector of data points at each time t (y_t) is modeled as a parameter matrix (say, phi_1) times the previous value of the data vector, plus a vector of errors about which some distribution is assumed. Such a model may have autoregression going back further in time than t-1 too.
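A minimal sketch that simulates a bivariate VAR(1) and recovers the parameter matrix by equation-by-equation OLS (the coefficient matrix and sample size are illustrative assumptions):

T = 500; phi1 = [0.5 0.1; 0.0 0.8];      % an assumed stable coefficient matrix
y = zeros(2,T);
for t = 2:T
  y(:,t) = phi1*y(:,t-1) + randn(2,1);   % VAR(1): y_t = phi1 * y_{t-1} + e_t
end
Y = y(:,2:end)'; X = y(:,1:end-1)';      % one row per observation
phi1_hat = (X\Y)'                        % OLS estimate; close to phi1 for large T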

Source: econterms

var()

An operator returning the variance of its argument

Source: econterms

variance

The variance of a distribution is the average of the squared distances between values drawn from the distribution and the distribution's mean:
var(x) = E[(x - E[x])^2]
Also called the 'centered second moment.' Nick Cox attributes the term to R.A. Fisher, 1918.

Source: econterms

variance decomposition

In a VAR, the variance decomposition at horizon h is the set of R^2 values associated with the dependent variable y_t and each of the shocks h periods prior.

Source: econterms

variance ratio statistic

Discussed thoroughly in Bollerslev and Hodrick (1992), p. 19, where equations and estimation details are given. Roughly, the statistic compares the variance of multi-period returns to the appropriately scaled variance of one-period returns; for serially uncorrelated returns the ratio should be near one.

Source: econterms

VARs

Vector Autoregressions. "Vector autoregressive models are _atheoretical_ models that use only the observed time series properties of the data to forecast economic variables." Unlike structural models, they impose no assumptions or restrictions that theorists of different stripes would object to. But a VAR approach tests only LINEAR relations among the time series.

Source: econterms

vec

An operator. For a matrix C, vec(C) is the vector constructed by stacking all of the columns of C, the second below the first and so on. So if C is n x k, then vec(C) is nk x 1.
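
For instance, in Python with NumPy (illustrative):

    import numpy as np

    C = np.array([[1, 3],
                  [2, 4]])
    vec_C = C.flatten(order='F')   # column-major stacking: array([1, 2, 3, 4])
    # C is 2 x 2, so vec(C) is 2*2 = 4 entries long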

Source: econterms

vega

As used with respect to options: "The vega of a portfolio of derivatives is the rate of change of the value of the portfolio with respect to the volatility of the underlying asset." -- Hull (1997) p 328. Formally this is a partial derivative.

A portfolio is vega-neutral if it has a vega of zero.
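
For a European call in the Black-Scholes model, vega has a closed form; a sketch in Python (a standard textbook formula, offered here as an illustration rather than as the entry's own material; S, K, r, sigma, T are spot, strike, interest rate, volatility, and maturity):

    from math import log, sqrt, exp, pi

    def bs_vega(S, K, r, sigma, T):
        # vega = S * sqrt(T) * phi(d1), where phi is the standard normal pdf
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        phi_d1 = exp(-0.5 * d1**2) / sqrt(2 * pi)
        return S * sqrt(T) * phi_d1

    # e.g. bs_vega(S=100, K=100, r=0.05, sigma=0.2, T=1.0) is about 37.5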

Source: econterms

verifiable

Observable to outsiders, in the context of a model of information.

Models commonly assume that the values of some variables are known to both parties to a contract but are NOT verifiable, by which we mean that outsiders cannot observe them, so references to those variables in a contract between the two parties cannot be enforced by outside authorities.

Examples: .....

Source: econterms

vintage model

One in which technological change is 'embodied,' in Solow's language: each vintage of capital carries the technology of the date at which it was built.

Source: econterms

vNM

Abbreviation for von Neumann-Morgenstern, which describes attributes of some utility functions.

Source: econterms

volatility clustering

In a time series of stock prices, it is observed that the variance of returns or log-prices is high for extended periods and then low for extended periods. (E.g. the variance of daily returns can be high one month and low the next.) This occurs to a degree that makes an iid model of log-prices or returns unconvincing. This property of time series of prices can be called 'volatility clustering' and is usually approached by modeling the price process with an ARCH-type model.
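
A minimal ARCH(1) simulation sketch in Python, showing how such clustering can be generated (the parameter values are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    omega, alpha = 0.1, 0.8      # assumed ARCH(1) parameters
    T = 1000
    eps = np.zeros(T)
    for t in range(1, T):
        sigma2 = omega + alpha * eps[t-1]**2   # conditional variance
        eps[t] = np.sqrt(sigma2) * rng.normal()
    # plotting eps shows alternating stretches of high and low volatility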

Source: econterms

von Neumann-Morgenstern utility

Describes a utility function (or perhaps a broader class of preference relations) that has the expected utility property: the agent evaluates a gamble by the expected utility of its outcomes, so the utility of a lottery is the probability-weighted average of the utilities of its possible prizes.

There may be other, or somewhat stronger or weaker assumptions in the vNM phrasing but this is a basic and important one. It does not seem to be the case that such a utility representation is required to be increasing in all arguments or concave in all arguments, although these are also common assumptions about utility functions. The name refers to John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior. Kreps (1990), p 76, says that this kind of utility function predates that work substantially, and was used in the 1700s by Daniel Bernoulli.
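
A tiny numerical illustration in Python (a sketch assuming log utility, not from the entry): under expected utility, a gamble is valued by the expectation of utility, not the utility of the expectation.

    from math import log

    u = log                      # an assumed concave (risk-averse) vNM utility
    # a 50/50 gamble between 50 and 150, versus its expected value 100
    eu_gamble = 0.5 * u(50) + 0.5 * u(150)
    u_sure = u(100)
    # eu_gamble (about 4.46) < u_sure (about 4.61): this risk-averse agent
    # prefers the sure expected value to the gamble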

Source: econterms

W

WACM

abbreviation for the Weak Axiom of Cost Minimization

Source: econterms

wage curve

A graph of the relation between the local rate of unemployment, on the horizontal axis, and the local wage rate, on the vertical axis. Blanchflower and Oswald show that this relation is downward sloping. That is, locally high wages and locally low unemployment are correlated.

Source: econterms

Wallis statistic

A test for fourth-order serial correlation in the residuals of a regression, from Wallis (1972), Econometrica 40:617-636. Fourth-order serial correlation comes up in the context of quarterly data, e.g. with seasonality. Formally, the statistic is:
d4 = [sum from t=5 to T of (e_t - e_{t-4})^2] / [sum from t=1 to T of e_t^2]
where the e_t are the residuals from a regression.
Tables for interpreting the statistic are in Wallis (1972).
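
A direct translation into Python (illustrative; e is assumed to be an array of regression residuals):

    import numpy as np

    def wallis_d4(e):
        # d4 = sum_{t=5..T} (e_t - e_{t-4})^2 / sum_{t=1..T} e_t^2
        e = np.asarray(e, dtype=float)
        return np.sum((e[4:] - e[:-4])**2) / np.sum(e**2)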

Source: econterms

Walrasian auctioneer

A hypothetical market-maker who matches suppliers and demanders to get a single price for a good. One imagines such a market-maker when modeling a market as having a single price at which all parties can trade.

Such an auctioneer makes the process of finding trading opportunities perfect and cost free; consider by contrast a "search problem" in which there is a stochastic cost of finding a partner to trade with and transactions costs when one does meet such a partner.

Source: econterms

Walrasian equilibrium

An allocation-price pair (x, p), where x gives the quantities of each good held by each agent and p is a vector of prices for each good, is a Walrasian equilibrium if (a) it is feasible, and (b) each agent is choosing optimally, given that agent's budget. In a Walrasian equilibrium, if an agent prefers another combination of goods, the agent can't afford it.

Source: econterms

Walrasian model

A competitive-markets equilibrium model "without any externalities, asymmetric information, missing markets, or other imperfections." (Romer, 1996, p 151)

"In this general equilibrium model, commodities are identical, the market is concentrated at a single point [location] in space, and the exchange is instantaneous. [Individuals] are fully informed about the exchange commodity and the terms of trade are known to both parties. [No] effort is required to effect exchange other than to dispense with the appropriate amount of cash. [Prices are] a sufficient allocative device to achieve highest value uses." (North, 1990, p. 30.)

Source: econterms

WAPM

abbreviation for the Weak Axiom of Profit Maximization

Source: econterms

WARP

WARP is an acronym for the Weak Axiom of Revealed Preference. This axiom states that when a consumer selects consumption bundle 'a' when bundle 'b' is available, the consumer will not select 'b' when 'a' is available. This axiom has two extensions: the Strong Axiom of Revealed Preference (SARP) and the Generalized Axiom of Revealed Preference (GARP).
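
A small sketch of a WARP check over observed price-choice data in Python (a common revealed-preference test; the data format below is an assumption for illustration):

    import numpy as np

    def warp_violations(prices, choices):
        # prices[i], choices[i]: price vector and chosen bundle in observation i.
        # If bundle j was affordable when i was chosen (p_i . x_j <= p_i . x_i)
        # and the bundles differ, WARP requires that bundle i NOT be affordable
        # when j was chosen; pairs failing this are returned as violations.
        P, X = np.asarray(prices, float), np.asarray(choices, float)
        bad = []
        for i in range(len(X)):
            for j in range(len(X)):
                if i != j and not np.allclose(X[i], X[j]):
                    if P[i] @ X[j] <= P[i] @ X[i] and P[j] @ X[i] <= P[j] @ X[j]:
                        bad.append((i, j))
        return bad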

wavelet

A wavelet is a function which (a) maps from the real line to the real line, (b) has an average value of zero, (c) has values very near zero except over a bounded domain, and (d) is used, analogously to the sine waves of Fourier analysis, for the decomposition described in the following paragraphs.

Unlike sine waves, wavelets tend to be irregular, asymmetric, and to have values that die out to zero as one approaches positive and negative infinity. "Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet."

By decomposing a signal into wavelets one hopes not to lose local features of the signal and information about timing. These contrast with Fourier analysis, which tends to reproduce only repeated features of the original function or series.
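
As one concrete sketch, a single level of the Haar wavelet transform (the simplest wavelet, offered here purely as an illustration) splits a signal into local averages and local differences:

    import numpy as np

    def haar_level(x):
        # one level of the Haar transform: pairwise scaled averages
        # (approximation) and pairwise scaled differences (detail).
        # assumes len(x) is even.
        x = np.asarray(x, dtype=float)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return approx, detail

    # the detail coefficients pick out local features such as jumps,
    # which is the sense in which wavelets preserve timing information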

Source: econterms

WE

Walrasian Equilibrium

Source: econterms

weak form

Can refer to the weak form of the efficient markets hypothesis: that any information in the past prices of a security is fully reflected in its current price.
Fama (1991) broadens this category of tests under the name 'tests for return predictability.'

Source: econterms

weak incentive

An incentive that does not encourage maximization of an objective, because it is ambiguous or satisfice-able. For example, payment of weekly wages is a weak incentive: by construction it does not encourage maximum production, but rather the minimal performance of showing up every work day. This can be the best kind of incentive in a contract if the buyer doesn't know exactly what he wants or if output is not straightforwardly measurable. Contrast strong incentive.

Source: econterms

weak law of large numbers

Adapted from a Wooldridge chapter:
A sequence of random variables {z_t} for t=1,2,... satisfies the weak law of large numbers if these three conditions hold:
(1) E[|z_t|] is finite for all t;
(2) as T goes to infinity, the limit of the average of the first T elements of {z_t} exists, i.e. is a fixed, finite number;
(3) as T goes to infinity, the probability limit of the average of the first T elements of the series {z_t - E(z_t)} is zero.

The most important practical point is that the weak law of large numbers holds iff the sample average is a consistent estimator of the mean of the process.

Laws of large numbers are commonly proved using Chebyshev's inequality.
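
A quick numerical illustration in Python for the iid case, where the weak law certainly applies (illustrative only):

    import numpy as np

    rng = np.random.default_rng(0)
    z = rng.exponential(scale=2.0, size=1_000_000)   # iid draws with mean 2
    print(z[:100].mean(), z[:10_000].mean(), z.mean())
    # the sample averages settle down near the true mean, 2, as T grows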

Source: econterms

weak stationarity

synonym for covariance stationarity. A random process is weakly stationary iff it is covariance stationary.

Source: econterms

weakly consistent

synonym for consistent.

Source: econterms

weakly dependent

A time series process {x_t} is weakly dependent iff these four conditions hold:
(1) {x_t} is essentially stationary, that is, E[x_t^2] is uniformly bounded. For any such process the following 'variance of partial sums' is well defined and is used in the remaining conditions: define s_T^2 to be the variance of the sum from t=1 to t=T of x_t.
(2) s_T^2 is O(T).
(3) s_T^(-2) is O(1/T).
(4) The asymptotic distribution of the sum from t=1 to t=T of (x_t - E(x_t))/s_T is N(0,1).

These conditions rule out random processes that are serially correlated too positively or too negatively, i.e. whose partial sums have variance that grows too fast or collapses toward zero. Example 1: An iid process IS weakly dependent. (Domowitz, in class 4/14/97.)

Example 2: A stable AR(1) process (|rho| < 1) with iid innovations is also weakly dependent.

Source: econterms

weakly ergodic

A stochastic process may be weakly ergodic without being strongly ergodic.

Source: econterms

weakly Pareto Optimal

An allocation is weakly Pareto optimal (WPO) if no feasible reallocation would be strictly preferred by all agents.
WPO <=> SPO (strong Pareto optimality) if preferences are continuous and strictly increasing (and hence locally nonsatiated).

Source: econterms

WebEc

A Web site with indexes to World Wide Web resources in economics.

Source: econterms

wedge

The gap between the price paid by the buyer and price received by the seller in an exchange. Might be caused by a tax paid to a third party.

Source: econterms

Weibull distribution

In at least one 'standard' specification, has pdf:
f(x) = q x^(q-1) exp(-x^q) for x >= 0, and f(x) = 0 for x < 0,
where q is the shape parameter. q = 1 is the simplest case: the pdf then reduces to the exponential density exp(-x).
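
A sketch of sampling from this Weibull by inverse transform in Python (using the standard fact that the cdf is 1 - exp(-x^q); offered as an illustration):

    import numpy as np

    def weibull_draws(q, size, seed=0):
        # invert the cdf F(x) = 1 - exp(-x**q): x = (-log(1-u))**(1/q)
        u = np.random.default_rng(seed).uniform(size=size)
        return (-np.log(1.0 - u))**(1.0 / q)

    # e.g. weibull_draws(q=1, size=5) gives exponential draws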

Source: econterms

Weierstrass Theorem

That a continuous function on a closed and bounded set attains a maximum and a minimum.

This theorem is often used implicitly, in the assumption that some set is compact, meaning closed and bounded. Examples that may help clarify:

Example 1: Consider a set which is unbounded, like the real line. Say variable x has any value on the real line, and we wish to maximize the function f(x)=2x. It doesn't have a maximum or minimum because values of x further from zero have more and more extreme values of f(x).

Example 2: Consider a set which is not closed, like the open interval (0,1). Again, let f(x) be 2x. This function has no maximum or minimum on the set because there is no largest or smallest value of x in (0,1): the endpoints 0 and 1 are not members of the set, so the supremum and infimum of f are never attained.

Source: econterms

Weighted attributes

If the combined weights of a novel object's attributes (their relevance for conferring family resemblance to the category) exceed a certain level (the membership criterion), that object will be considered an instance of the category (Medin, 1983).

Source: SFB 504

weighted least squares

An estimation method: least squares in which each observation's squared residual is weighted, for example by the inverse of the variance of its error term, so that noisier observations count for less. It is a standard response to heteroskedasticity.
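
A minimal sketch in Python (illustrative; the weights w are assumed to be known):

    import numpy as np

    def wls(X, y, w):
        # minimize sum_i w_i * (y_i - x_i'b)^2, i.e. b = (X'WX)^{-1} X'Wy
        W = np.diag(w)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)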

Source: econterms

welfare capitalism

The practice of employers' voluntary provision of nonwage benefits to their blue-collar employees.

Source: econterms

WesVar

A software program for computing estimates and variance estimates from potentially complicated survey data. Made by Westat.

Source: econterms

white noise process

A random process of random variables that are uncorrelated, have mean zero, and have a finite variance (denoted s^2 below). Formally, {e_t} is a white noise process if E(e_t) = 0, E(e_t^2) = s^2, and E(e_t e_j) = 0 for t != j, where all those expectations are taken prior to times t and j. A common, slightly stronger condition is that the variables are independent of one another; this is an "independent white noise process." Often one assumes a normal distribution for the variables, in which case the distribution is completely specified by the mean and variance; these are "normally distributed" or "Gaussian" white noise processes.

Source: econterms

White standard errors

Same as Huber-White standard errors.

Source: econterms

Wiener process

A continuous-time analogue of a random walk: over any time interval of length dt the increment is normally distributed with mean zero and variance dt, and increments over non-overlapping intervals are independent (roughly speaking, a random walk with a random step at every instant).
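
A discretized simulation sketch in Python (an approximation on a grid, since a true Wiener path has increments at every instant; grid size is an assumption):

    import numpy as np

    rng = np.random.default_rng(0)
    T, n = 1.0, 1000
    dt = T / n
    increments = rng.normal(0.0, np.sqrt(dt), size=n)   # N(0, dt) steps
    W = np.concatenate([[0.0], np.cumsum(increments)])  # path with W(0) = 0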

Source: econterms

window width

Synonym for bandwidth in the context of kernel estimation

Source: econterms

winner's curse

That a winner of an auction may have overestimated the value of the good auctioned. "The winner's curse arises in an auction when the good being sold has a common value to all the bidders (such as an oil field) and each bidder has a privately known unbiased estimate of the value of the good (such as from a geologist's report): the winning bidder [may] be the one who most overestimated the value of the good; this bidder's estimate itself may be unbiased but the estimate conditional on the knowledge that it is the highest of n unbiased estimates is not." -- Gibbons and Katz

Source: econterms

within estimator

synonym for fixed effects estimator

Source: econterms

Within subjects design

In a within subjects design the values of the dependent variable for an item or a set of items (e.g., the experimental items) are compared with the values for another item or another set of items (e.g., the control items) within one person.

Source: SFB 504

WLLN

Weak law of large numbers

Source: econterms

WLOG

abbreviation for "without loss of generality". This phrase is relevant in the context of a proof or derivation in which the notation becomes simpler, or there are fewer cases to demonstrate, by making an innocuous assumption, for example that the data are in a certain order.

Source: econterms

Wold decomposition

Any zero-mean, covariance stationary process can be represented as a moving average sum of a white noise process plus a linearly deterministic component that is a function of the index t. That form of expressing the process is its Wold decomposition. In a standard statement: y_t = sum from j=0 to infinity of b_j e_{t-j} + k_t, where {e_t} is white noise, b_0 = 1, the b_j are square-summable, and k_t is the linearly deterministic component.

Source: econterms

Wold's theorem

That any covariance stationary stochastic process with mean zero has a moving average representation, called its Wold decomposition. Let {xt} be that process. See Sargent, 1987, p 286-288 for the complete theorem, assumptions, and proof.

Source: econterms

World Bank

A collection of international organizations that aid countries in their process of economic development with loans, advice, and research. It was founded in the 1940s to supply capital to Western European countries rebuilding after World War II.

The World Bank web site is at http://www.worldbank.org.

Source: econterms

world systems theory

[What follows is the editor's best understanding, but not definitive.] A category of sociological/historical description and analysis in which aspects of the world's history are thought of as byproducts of the world being an organic whole. Key categories are core and periphery. Core countries, economies, or societies are richer and have more capital-intensive industry, more skilled labor, and relatively high profits. In a way they exploit the poorer peripheral societies, though not necessarily through any deliberate collusion.

Source: econterms

WPO

stands for Weakly Pareto Optimal

Source: econterms

X

X-inefficiency model

A model in which there is a best-practice technology, and a unit (a firm or country, for example) either has that technology or one not as good. No random factor could make a firm's production function better than the best-practice one. (An organization is perfectly x-efficient if it produces the maximum output possible from its inputs; whether x-efficiency also depends on its choice of output levels and types is left open here.) Sources of x-inefficiency discussed in the academic literature:
- inertia in process; that is, doing things so as to minimize internal redesign from the way they were done last time, rather than in the most efficient way for current circumstances;
- prisoner's dilemma situations where an individual's effort is unobservable; lack of trust and lack of communication can contribute to this, and it is hard for any individual to coordinate the agreement necessary to raise effort (Leibenstein, Sept 1983 AER comment);
- absence of knowledge (not something the editor has seen discussed, but it has to be out there).

Source: econterms

Y

yellow-dog contract

A requirement by a firm that the worker agree not to engage in collective labor action. Such contracts are not enforceable in the U.S.

Source: econterms

Z

zero-sum game

A game in which total winnings and total losses sum to zero for every possible outcome.

Source: econterms

Copyright © 2006 Experimental Economics Center. All rights reserved.