The Formation of Econometrics: A Historical Perspective

Duo Qin

Print publication date: 1997

Print ISBN-13: 9780198292876

Published to Oxford Scholarship Online: November 2003

DOI: 10.1093/0198292872.001.0001


Model Construction Revisited

Chapter:
(p.149) 6 Model Construction Revisited
Source:
The Formation of Econometrics
Author(s):

Qin Duo

Publisher:
Oxford University Press
DOI:10.1093/0198292872.003.0007

Abstract and Keywords

Looks at problems associated with econometric model construction for the period immediately after the formative phase, and tries to link up the previous chapters and to show what has been left unsolved in the formation of econometrics. The structural modelling procedure explained only how to estimate and identify a priori given structural models, while many of the empirical studies involved searching for the appropriate structural models from the given data. This mismatch of the two sides gave rise to many problems and disputes, mostly in connection with the roles that modellers attributed to individual tools of testing, identification, and estimation in the integrated process of empirical model construction, as the procedure and the associated techniques spread and formed the core of orthodox econometrics. Revisits the issue of model construction with particular respect to the roles of testing, identification, and estimation, depicting how controversies arose as econometricians were swung back to more data‐based positions, away from the emphasis on a priori considerations; back to statistical results, away from reliance on economic theory; and back to dynamics, away from concerns over contemporaneous interdependency. The first section looks at modelling issues associated with hypothesis testing; the second examines problems about model formulation with respect to identification; the third turns to the estimation aspect of modelling; and the fourth leads the discourse to the focal issue of the probability approach underlying established econometrics by illustrating that most of the problems could be viewed as due to the incompleteness of the probability approach (as suggested in Chapter 1).

Keywords:   econometric modelling, econometric models, econometrics, economic theory, estimation, history, hypothesis testing, identification, model construction, probability approach, probability theory, structural modelling, testing

In the late 1940s, trial applications of the newly developed tools of estimation and identification appeared. Klein's famous first three macroeconomic models, known as Klein models I–III (Klein 1950), served as a paradigm. The models were expected to demonstrate that the structural approach ‘would revolutionize economics’, as anticipated by the Cowles group (Phillips 1986). However, the outcome looked somewhat disappointing. These preliminary applied modelling attempts revealed serious weaknesses in the model construction phase of the structural approach. As some of the Cowles people put it, before their newly invented tools were applicable, ‘we must know the form of all the equations that connect the several variables of the system; we must know which variables are endogenous and which are exogenous’ (Klein 1950: 12). They also noted, however, ‘that economic theory was not always specific enough; nor did the data always fit what theory suggested’, as recalled by Anderson (1990; Cowles Commission 1949). Feeling restricted by computing technology and data availability as well as statistical theory, the group soon turned its research focus to mathematical economics from the 1950s onwards. Meanwhile, developments in econometrics grew more diversified and contentious as the elegance and rigour of the structural modelling procedure became eroded by the substantial amount of equation search involved in applied modelling. The structural modelling procedure only explained how to estimate and identify a priori given structural models, while many of the empirical studies actually involved groping for the appropriate structural models given the data. This mismatch of the two sides gave rise to many problems (p.150) and disputes, as the procedure and the associated techniques spread and formed the core of orthodox econometrics. At the time, problems and disputes appeared mostly in connection with the roles that modellers attributed to individual tools of testing, identification, and estimation, and the ways those tools were used, in the integrated process of empirical model construction.

This chapter therefore revisits the issue of model construction with particular respect to the roles of testing, identification, and estimation. It depicts how controversies arose as econometricians were swayed back to more data‐based positions, away from the emphasis on a priori considerations; back to statistical results, away from reliance on economic theory; and back to dynamics, away from concerns over contemporaneous interdependency.1

Section 6.1 looks at modelling issues associated with hypothesis‐testing. With hindsight this appears to be the most backward area, but it was the least disputed at the time. Problems about model formulation with respect to identification are examined in Section 6.2. Section 6.3 turns to the ‘estimation’ aspect of modelling. In contrast to Section 6.1, the most acute disputes occurred in this area, despite the fact that estimation methods were then the most developed. A central discord arose over whether it was redundant or still essential for applied purposes to further the formalization efforts, as exemplified by those of the Cowles group. Section 6.4 leads the discord to the focal issue of the probability approach underlying established econometrics. By illustrating that most of the problems could be viewed as due to the incompleteness of the probability approach, as suggested in Chapter 1, this final section leaves the story with an open end.

6.1 Testing and Model Construction

As shown in the previous chapter, the practice of testing in the literature gradually polarized in two directions. The general idea of theory verification by means of econometric models was (p.151) popularized in the contiguous area of applied econometrics and economics. In the contiguous area of theoretical econometrics and mathematical statistics, testing became a much narrower concept defining a technical process in model building within the established structural framework. This narrowing encouraged the practice, in applied and theoretical econometrics alike, of using econometric tests as diagnostic tools for uncovering many of the problems of inconsistency in matching data with theoretical models built by the structural approach. The problems so uncovered led to discernible differences in the ways in which test results were explained and exploited in connection with model construction.

As mentioned above, Klein models I–III exemplified the Cowles group's new inventions. But they also exemplified a pragmatic modelling route, which retained quite a lot of Tinbergen's modelling style, i.e. modifying equations with respect to data features which emerged in estimation. Klein argued that ‘academic economic theory is only one among alternative sources for the development of hypotheses. . . . We shall by no means be so narrow as to insist that econometric work be built on this particular foundation’ (Klein 1953: 3). He pointed out that the final choice of a model depended largely on ‘pragmatic considerations’. This pragmatic position of accommodating empirical flexibility to the structural approach was strengthened through Marshall's and Christ's tests of Klein model III, others' opinions on the model, its forecasting results, and the experience of other modellers. It was further consolidated by the combined efforts of Klein and Goldberger to build better models than Klein model III (Klein and Goldberger 1955).

The construction of the Klein–Goldberger model followed roughly the same approach as the Klein models I–III, except that the scale was enlarged with many more new variables and lagged terms. Although theoretical justifications for including these new elements were given prior to the statistical analyses in the book, actual decisions on their inclusion were often made in the inverse order. For example, the final forms of many of the structural relations were settled through a great deal of statistical testing among various alternative schemes, and then wrapped up with theoretical considerations, because ‘details of a priori analysis’ were ‘rarely’ found to ‘stand up against the facts of real life’ (p.152) (Klein and Goldberger 1955: 56). In particular, whenever no statistically satisfactory theoretical relations could be found, an ‘empirical time‐series expression’ (i.e. autoregression) was adopted (p. 28). These data‐instigated modelling activities were, however, somewhat obscured by the scant reporting of the tried, but discarded, alternatives.2 Nevertheless, statistical tests here were clearly used for diagnostic purposes, and the results were used effectively for formulating working theoretical relations. This way of making use of testing in model construction was implied in an earlier statement by Klein that ‘a great deal of empirical work will be of the utmost importance in the formulation of hypotheses’ (Klein 1953: 14–17).

Such a pragmatic modelling approach was developed into a more stepwise and more explicit procedure, called ‘specification analysis’ (now referred to as ‘misspecification analysis’), by H. Theil in the late 1950s. He first noticed the problem of misspecification while looking for sufficient conditions for aggregating economic relations (Theil 1954). He then discovered how widespread it was in his extensive survey of previous macro‐econometric forecasting records, which revealed that the phenomenon of ‘underestimation of changes. . . with respect to a well‐defined variable’ prevailed (Theil 1961: 543–4). In tracing its causes, Theil soon became convinced that it was infeasible in practice to follow the structural modelling procedure rigidly, namely, relying upon economic theory to provide correct postulates in the form of a well‐defined structural model, or a ‘maintained hypothesis’ (i.e. the set of admissible hypotheses), and then obtaining structural estimates by significance tests of individual parameters. The reason for this judgement was that such an approach ‘greatly overestimates the economic theorist's knowledge and intellectual power’. He observed that the common practice in applied model revision was often to reject and change part of the original ‘maintained’ hypothesis, instead of rejecting the null hypothesis, when the original model had produced ‘unsatisfactory results’. So Theil set out to systematize (p.153) the applied modelling procedure of a trial‐and‐error type into explicit ‘specification analysis’ by ‘an experimental approach’ (pp. 206–7).

Noticeably, the problem of how to set up the maintained hypothesis had been addressed by Haavelmo (see Chapter 5), but not as acutely as Theil put it. That was mainly because the two men had different types of economic theories in mind when they discussed the problem. For Haavelmo, the economic theory underlying his maintained hypotheses was of the most general type, since he remained at the purely theoretical level. For Theil, by contrast, writing from an applied standpoint, available economic theories were far from general, and his maintained hypotheses were therefore far more fragile.

As for testing models, Theil agreed with the opinion that ‘ “the” criterion’ for testing the goodness of an econometric model is that ‘it predicts well’. But he argued that, since ‘it is neither always conclusive nor always feasible’, two additional criteria of a ‘rather subjective’ nature could be applied: ‘plausibility’ and ‘simplicity’ (Theil 1961: 205). These criteria supported his appeal for an ‘experimental’ approach to model building, and especially his disapproval of ‘the statistical theory which forbids the rejection of a “maintained” hypothesis’. It is worth quoting his argument on this point:

What is incorrect, however, is to act as if the final hypothesis presented is the first one, whereas in fact it is the result of much experimentation. Since every econometric analysis is an essay in persuasion—just as is true for any other branch of science—the line of thought leading to the finally accepted result must be expounded. It is not true that analyses which are in the end not accepted are useless. The mere fact that a certain ‘maintained’ hypothesis can be excluded raises the plausibility of its rivals. This can be compared to a large extent with the function of standard errors of parameter estimates. Just as the standard errors contribute to an appraisal of numerical outcomes within a certain ‘maintained’ hypothesis, in just the same way alternative analyses of separate ‘maintained’ hypotheses contribute to an appraisal of the hypothesis which is finally preferred. (Theil 1961: 207)

To illustrate his point, Theil criticized the conventional practice of judging parameter estimates by ‘correct’ signs or plausible orders of magnitude. He pointed out that these simple methods implied an erroneous construction of the maintained hypothesis, (p.154) since the maintained hypothesis either was incomplete, in so far as it excluded a priori known information about those signs and orders of magnitude, or violated the rules of construction, in so far as some of the information included was actually uncertain (Theil 1961: 233). He argued therefore that a more explicit model construction strategy should be adopted, accompanied by a schematic ‘analysis of specification errors’ by means of statistical inference.

Like his contemporaries, Theil started his ‘specification analysis’ from particular, simply specified structural models, using the standard regression conditions as the criteria. The analysis began with the problem of diagnosing wrong choices of explanatory variables. He dealt with it in two consecutive steps. First, he set up a measure of the consequences of erroneous choices of explanatory variables by expressing the estimated coefficients of the erroneous equation as a weighted sum of all the coefficients of the true equation, so as to express the degree of misspecification in terms of the difference between the two, denoted ‘the specification bias’ (Theil 1957; see also Griliches 1957). This measure of specification bias was applied to illustrate the consequences of omitted variables, incorrect equation forms, and errors in the variables (Theil 1957, 1961). Secondly, he established ‘the criterion of minimum residual variance’ as a primitive test criterion for selecting among several alternative equations (hypotheses) when the true one was not known. This meant that selection among competing equations was made by comparing their residual variances, provided the regressors were non‐stochastic or otherwise independent of the error term (Theil 1957). The next problem that Theil tried to tackle was multicollinearity. The problem was recognized through large sampling variances of the parameter estimates. Theil did not provide a general solution to the problem, but he opposed the thoughtless practice of dropping some explanatory variables in cases of high multicollinearity. He suggested ‘that we can rearrange our problem in such a way that the multicollinearity difficulty is avoided’ (Theil 1961: 217). He used, as an example, Koyck's transformation of a model with geometric distributed lags into a partial adjustment model (Koyck 1954) (see also Section 6.3). Theil then turned to the problem of residual autocorrelation. He adopted the diagnosis and the prescription (p.155) of the DAE people at Cambridge, i.e. to detect the autocorrelation by D–W tests and to cure it by using growth‐rate models (see below). As for the problem of simultaneity, whose well‐known symptom was Haavelmo bias, Theil recommended his own device of the 2SLS method as the prescription. Finally, he proposed a method of ‘mixed estimation’ (together with Goldberger, cf. Theil and Goldberger 1961) for the problem of a priori information omitted from the specified model. The method was to specify the extra information not already included in the model, e.g. the expected signs and orders of magnitude of parameters, explicitly in the form of constraints on the parameters concerned, so as to utilize them ‘in the same way as observational information’ (Theil 1961: 233) (see also Section 6.4).
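Theil's omitted‐variable result can be made concrete with a small numerical illustration. The sketch below is not Theil's own computation; it simulates a hypothetical two‐regressor data‐generating process (all names and parameter values are invented) and shows that the least‐squares coefficient of the included variable in the ‘erroneous’ short regression approaches the true coefficient plus the omitted coefficient weighted by the auxiliary regression of the omitted variable on the included one, which is the specification bias.

```python
# Minimal simulation of the omitted-variable specification bias.
# Hypothetical data-generating process: y = b1*x1 + b2*x2 + e, with x1 and x2 correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
b1, b2 = 1.0, 0.5

x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)     # omitted variable, correlated with x1
y = b1 * x1 + b2 * x2 + rng.normal(size=n)

X_short = np.column_stack([np.ones(n), x1])        # 'erroneous' equation: x2 left out
coef_short = np.linalg.lstsq(X_short, y, rcond=None)[0]

# auxiliary regression of the omitted variable on the included one
delta = np.linalg.lstsq(X_short, x2, rcond=None)[0]

print("coefficient on x1 in the short regression:", coef_short[1])
print("weighted-sum prediction b1 + b2*delta     :", b1 + b2 * delta[1])
```

With the values assumed here both printed numbers should come out near 1.4: the true coefficient of 1.0 plus a bias of roughly 0.5 × 0.8 transmitted through the omitted, correlated regressor.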

It is discernible that Theil's specification analysis was designed to reach, from a simple a priori theoretical model, a working structural model consistent with various statistical tests of the data. The revisions of the original theoretical model prompted by statistical tests were also regarded as part of the theoretical model, and were given economic interpretations. By openly allowing successive extensions of the maintained hypothesis, Theil reinforced the pragmatic approach in which statistical tests were used mainly for constructing the part of the theory not yet formulated a priori rather than for verifying the original model.

But the revised parts of an a priori model were not always taken as parts of the theory by all the applied modellers. For those who were more concerned with theory verification, model revisions through diagnostic tests could be interpreted somewhat differently. For instance, the issue of how to test theories by applied econometric models using the structural modelling procedure was on the research agenda of the Department of Applied Economics (DAE) at Cambridge, headed by R. Stone, at the end of 1950. Noticeably, the DAE group paid close attention to the relationship between empirical findings and established economic theories. This was especially reflected in Stone's writings of the period. In the monograph The Role of Measurement in Economics, Stone ranked ‘facts and empirical constructs’ as ‘the first‐class’ questions in empirical analyses (1951: 7), and thought of them as forming the basis for testing economic theories ‘deductively formulated’ from certain postulates (pp. 12–15). Elsewhere, he maintained that ‘the role of theory is to reduce (p.156) the number of possibilities to be examined at any one stage and to permit the investigator to interpret the results of his analysis. It is a simple device for economizing and should be used for that reason wherever possible’ (Stone 1954b: p. xxx). Stone reckoned that the purpose of testing theory lay in ‘satisfying ourselves that for practical purposes the actual world behaves as if the postulates of the theory held true in it’ (1951: 15). These statements implied a stricter view of theory formulation than that held by Klein and Theil, for example, though the emphasis on empirical knowledge and the faith in the structural modelling procedure were shared. For Stone and his group, theory should be ‘deductively formulated’ and then verified through empirical work, whereas in the pragmatic approach described earlier, theory formulation was finalized during the empirical modelling exercise. This subtle difference was revealed in the way in which the DAE people interpreted the model revisions resulting from specification analysis. A good illustration is their response to the problem of residual autocorrelation, detected originally by von Neumann ratio tests and later by D–W tests. Their prescription, due to Orcutt and Cochrane as described in Chapters 3 and 5, was to append an error autoregressive equation to the relevant structural equation. The simplest case would be:

y_t = \beta x_t + u_t    (6.1)

u_t = \rho u_{t-1} + \varepsilon_t,   |\rho| < 1,   \varepsilon_t \sim IN(0, \sigma_{\varepsilon}^2).    (6.2)

Equations (6.1) and (6.2) laid the basic form for what later appeared as ‘common factor’ (or COMFAC) analysis (see e.g. Sargan 1980). The two equations implied a quasi‐difference expression:

y_t - \rho y_{t-1} = \beta (x_t - \rho x_{t-1}) + \varepsilon_t.    (6.3)

(See Orcutt and Cochrane 1949.) When \rho \to 1, (6.3) would reduce to a first‐difference model:

\Delta y_t = \beta \Delta x_t + \varepsilon_t.    (6.4)
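The passage from (6.1)–(6.2) to (6.3), and its collapse into (6.4) as \rho approaches one, can be sketched numerically. The code below is a stylized version of the Orcutt–Cochrane iterative procedure applied to simulated data; the series and parameter values are invented for illustration, and a fixed number of iterations stands in for a formal convergence check.

```python
# Sketch of the Orcutt-Cochrane procedure for equations (6.1)-(6.3):
# estimate beta by OLS, estimate rho from the residuals, re-estimate beta
# on quasi-differenced data, and iterate.
import numpy as np

rng = np.random.default_rng(1)
T, beta, rho = 300, 2.0, 0.7

x = rng.normal(size=T)
eps = rng.normal(scale=0.5, size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + eps[t]            # AR(1) error, equation (6.2)
y = beta * x + u                              # equation (6.1)

b_hat, r_hat = 0.0, 0.0
for _ in range(20):
    # quasi-difference the data with the current estimate of rho, equation (6.3)
    y_star = y[1:] - r_hat * y[:-1]
    x_star = x[1:] - r_hat * x[:-1]
    b_hat = (x_star @ y_star) / (x_star @ x_star)
    resid = y - b_hat * x                     # residuals from the levels equation
    r_hat = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])

print("estimated beta:", b_hat, "estimated rho:", r_hat)
# When rho is estimated to be close to one, the quasi-difference regression is
# nearly the first-difference (growth-rate) model (6.4).
```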

Model (6.4) was widely used in the DAE empirical consumption studies (Stone 1948, 1954b). Their justification for using it was that ‘analyses made with the original data showed, in general, a significant amount of positive serial correlation in the observed residuals’, and that ‘this correlation could, in general, be (p.157) effectively removed by working with first differences’ (Stone 1954b: 308). However, they overlooked a loophole: substituting variables in differences for the variables in levels of the original theoretical analyses altered the theoretical meaning underlying the model. Equation (6.1) embodied a long‐term relationship, whereas (6.4) could only depict a short‐term relationship, since it was, by nature, a growth‐rate model.

What shielded the loophole was precisely the way in which model revisions made in light of the diagnostic tests were interpreted. With a strong belief in the theory (6.1), the addition of (6.2) resulting from residual correlation tests was not thought of as carrying any economic implications. It was merely a statistical adjustment to take care of the ‘inexactness’ involved in moving from pure theory to rather complicated data observations, so as to secure ‘the appropriate estimation procedure’ for identifying and obtaining estimates of the a priori formulated structural parameters (Stone 1954b: 239). In this way, the originally deductively formulated structural model was seemingly kept intact for measurement and verification, but at the expense of having some auxiliary specifications added between it and the data. Changes induced by these specifications in the economic implications of the model were likely to be overlooked, as long as they were regarded as covering up certain ‘estimation problems’, such as residual autocorrelation, multicollinearity, measurement errors, etc. (see Stone 1954b, ch. 19). But it was almost impossible to maintain this position in all the applied studies. The applied studies by the DAE people also showed instances of data‐instigated model formulation, e.g. Stone's 1945 work on market demand and his 1954 work on the linear expenditure system, as cited in the previous chapters.

Up to the end of the 1950s, test developments and applications in econometrics were aimed at diagnosing the places where a priori specified models were incompatible with data. The efforts of applied modellers to deal with such incompatibilities led to the trial‐and‐error practice of specification searches for better models with respect to ‘all‐over performance testing of the models in question’ (Koopmans 1957). Stop‐gap remedies were commonly used to patch up the incompatibility bit by bit. Modellers viewed the remedies differently, however. For those (p.158) who considered the additionally respecified part of the model as devoid of any economic meaning, theory measurement, and therefore model estimation, absorbed their central attention. Although they were concerned with theory verification, the estimated results actually had very low power for that purpose once the respecified part was added between the original theory and the data. For many who employed tests in search of data‐instigated structural models, test results actually served hypothesis‐formulation rather than hypothesis‐testing. Furthermore, since this kind of specification search involved constant changes of the model framework (i.e. the maintained hypothesis), the test results, having detected faults, seldom offered much guidance on the definite directions along which the tested model should be rebuilt. Such tests were therefore later labelled ‘non‐constructive tests’ (see Goldfeld and Quandt 1972). Model reconstruction was thus often based upon arbitrary and non sequitur decisions in practice. Few people, however, recognized or worried about these decisions at the time, for there were more obviously arbitrary decisions in model construction to occupy the worries of econometricians. One of the particular topics of concern was model identification.

6.2 Identification and Model Construction

As described in Chapter 4, the necessary condition for model identifiability (i.e. the order condition) was found to be closely connected with the specification of exogenous variables in setting up ‘complete’ models ‘for statistical purposes’ (Koopmans 1950). In applied circumstances, the identification requirement frequently resulted in the imposition of additional exogenous variables on the original structural models, since the particular economic theories underlying the structural models often fell short of the requirement of being ‘complete’. The formalized identification theory disregarded the issue of where such impositions should come from, and left the job to model construction, specification, and testing. However, these steps could never stay apart in the applied world. With little theoretical guidance at hand, applied modellers were induced to make excessive use of restrictions in model specification merely for the sake of identification. This situation gave rise to the frequent (p.159) occurrence of arbitrarily imposed identifying restrictions in applied model construction. The need to check the validity of these restrictions grew prominent and pressing when it came to applying the estimated model results to real economic problems.

Indeed, it was the practical motives of forecasting and policy‐making that induced Orcutt (1952) to ponder the common practice of specifying a priori endogenous and exogenous variables in model building. From the policy‐makers' standpoint, he held that ‘more emphasis needs to be placed on building and testing models or components of models which include as exogenous variables those variables that we know how to control, and that we contemplate using for control purposes’. He observed that in econometric modelling ‘the specification of which variables are to be considered exogenous is either done on the basis of theoretical convenience from the standpoint of limiting the field of interest or is done on the basis of some a priori knowledge of unspecified source. In any case, the specification is not subject to any test whatsoever. . . . Little or no attention has been given to evidence or lack of evidence of relationship between movements of those variables selected as exogenous.’ To spell out his points, Orcutt further criticized the situation where ‘frequently the literature is far from explicit about the difference between endogenous and exogenous variables. And sometimes the distinction is merely used for the purpose of arbitrarily setting the limits of the problem under consideration’, and where ‘the interest of econometricians has been too much preoccupied with estimating interrelationships in the economic system to the almost complete neglect of testing hypotheses about which variables are wholly or partially exogenous to the economic system.’ He then stressed the fact that ‘the interpretation of the obtained econometric models depends critically’ on the ‘choice of exogenous variables’, so as to reinforce his appeal for a ‘partial redirection of econometrics’ towards testing and the ‘discovery of exogenous variables and of as complete a specification as possible of their impact’. In particular, Orcutt called for more studies on ‘the continuity properties of economic time‐series’ in association with ‘the impact of instruments of control or policy actions’. This precautionary statement preceded the ‘Lucas critique’ (1976) by over twenty years.

(p.160) Orcutt's appeals met a sympathetic response from Koopmans (1952). Koopmans interpreted Orcutt's observations largely as concerns about ‘the problem of specification: the choice of the model, and the nature of the evidence that can be adduced in support of that choice’. Deeply involved in the structural approach, Koopmans explained, dishearteningly, the near impossibility of performing statistical tests on the specification of most exogenous variables because of the constraint of identifiability, upon which the estimability of test statistics was conditioned. He wrote that ‘assurance that a given variable is exogenous can only be obtained by qualitative knowledge of the variables causally involved in its generation. If the model can be extended by additional equations describing the generation of the presumably exogenous variables, the needed information is of the same type as that required for identifiability.’ Koopmans further attributed the difficulty to the specification of ‘sufficiently strong’ maintained hypotheses, which were prerequisite to, and exempt from, any statistical tests, in order to ensure the identifiability of the parameters to be tested. Since the maintained hypotheses inevitably included ‘a priori specification as to which variables are exogenous’ so as to enable certain parameters to be identified, ‘this specification then escapes all possibility of a test’. Koopmans therefore concluded that ‘the evidence on which the choice of exogenous variables rests must be sought primarily in qualitative knowledge about the place of the variables in question in the causal hierarchy, with slight chances of corroboration from statistical tests utilizing time series’, although he held no objection that ‘it would be very important to have a test of exogeneity’ (Koopmans 1952).

This viewpoint of Koopmans's was seconded by Tinbergen, who maintained that ‘in principle’ ‘the specification of the variables chosen as exogenous . . . should be based, in my opinion, on a priori rather than on statistical considerations’ (Tinbergen 1952). All this encouraged econometricians to lean, in model construction and specification, upon a priori theory and knowledge, which was ‘not subject to conclusive test’ (Koopmans 1952).

Noticeably, in his comments on Orcutt's paper Koopmans referred heavily to Simon's 1953 paper concerning the concepts of, and relationship between, causality, exogeneity specification, and identifiability, instead of to the (p.161) paper by Reiersøl and himself (1950). This put much emphasis upon the a priori side of the identification issue. As described already in Chapter 4, Section 4, Simon's exposition, with its ‘inverse’ interpretation of identifiability, strongly helped to insulate model construction and specification from statistical testing. In addition, Simon seemed to have included all the predetermined variables in his classification of exogenous variables, as he derived it from the imaginary process of reducing ‘self‐contained subsets’ from the original full structure (Simon 1953). His definition significantly strengthened the outlook of a simultaneous and deterministic economic mechanism at the expense of dynamics and the probability approach. But the insulation of model specification from statistical testing by identification was obscured by the purely theoretical level at which Simon proceeded with his discussion, since concerns about true correspondence with respect to reality were here of little relevance. The discussion over causality and identifiability aroused by Simon was soon converted into the dispute over simultaneity versus causal‐chain modelling strategies between H. Wold and defenders of the Cowles methodology. The dispute appeared the more difficult to settle for being conducted at a purely theoretical level where no common testing criteria could be resorted to (see the next section).

Back in the applied world, overidentified models gradually prevailed with the popularization of the structural modelling procedure. As already pointed out, identifiability even became one of the model‐building criteria in the 1950s. The first person to stand out and challenge this ‘habit’ was T.‐C. Liu. During the construction of a forecasting model for the US economy in the early 1950s, Liu noticed that overidentifying restrictions would create differences between estimates obtained by the ‘reduced‐form solutions’ via the restricted maximum‐likelihood route set by the Cowles group, and those obtained directly from regression equations by least squares (Liu 1955). This placed the burden on the soundness of the overidentifying restrictions. He noticed from his empirical modelling experience that actually ‘all structural relationships are likely to be “underidentified” ’ ‘in economic reality’ (Liu 1955), and that there was a great deal of arbitrariness in formulating overidentifying restrictions. This brought him to resist the orthodox structural route in (p.162) his modelling practice, and eventually motivated him to launch his famous critique of the Cowles structural approach (he referred to it as ‘the reduced‐form solutions approach’) (Liu 1960, 1963, written in 1957).

Liu built his argument from the specification of ‘the joint distribution of the current values of the endogenous variables, given the predetermined variables’. He noted that ‘the reduced‐form function’ was in fact a representation of this distribution function, and that the identification conditions under the Cowles methodology amounted to ‘compromising’ the maximum‐likelihood principle ‘in the reduced‐form solutions approach’ with respect to ‘the so‐called a priori restrictions’ (Liu 1955). He then contested that these so‐called a priori restrictions were ‘really mostly oversimplifications of economic reality’ rather than ‘required by “economic theory” concerned’, as ‘often expressed in the Cowles Commission literature’ (Liu 1963). Thus he reasoned that, since whether the structural model was over‐ or under‐identified did ‘not constitute any constraint’ on the unrestricted ML function, i.e. the reduced form, estimation should be based directly upon, and start from, the reduced form instead of going through the Cowles route of reduced‐form solutions (1955, 1963). It is worth noting that Liu's argument implied the generality of the reduced form over the structural model. But this point was obscured by his emphasis on demonstrating the superiority of least‐squares reduced‐form estimation over the Cowles structural approach in forecasting. It was not until two decades later that his point was readdressed in C. Sims's (1980) paper on the method of vector autoregression (VAR). At the time, attention soon focused on the more superficial question of the competing merits of the LS and ML principles in estimation (see the next section).
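Liu's contrast between estimating the reduced form directly by least squares and deriving it from restricted structural estimates can be sketched with an invented two‐equation demand–supply system; the sketch below uses 2SLS in place of full ML, so it is a simplified, limited‐information stand‐in for the Cowles route rather than a reproduction of it. Here the overidentifying restriction holds by construction, so the two routes nearly coincide; Liu's point was that in practice the a priori restrictions were often dubious, in which case the directly estimated reduced form keeps its descriptive validity while the derived one need not.

```python
# Hypothetical two-equation system used to contrast two routes to the reduced form.
#   demand: q = a*p + b*y + u1         (excludes w and r a priori, hence overidentified)
#   supply: q = c*p + d*w + f*r + u2
import numpy as np

rng = np.random.default_rng(2)
n = 2000
a, b, c, d, f = -1.0, 0.8, 1.5, 0.6, 0.4

y_inc = rng.normal(size=n)                    # exogenous income
w = rng.normal(size=n)                        # exogenous cost shifter
r = rng.normal(size=n)                        # second exogenous cost shifter
u1 = rng.normal(scale=0.3, size=n)
u2 = rng.normal(scale=0.3, size=n)
p = (b * y_inc - d * w - f * r + u1 - u2) / (c - a)   # solving the system for p and q
q = a * p + b * y_inc + u1

def ols(X, z):
    return np.linalg.lstsq(X, z, rcond=None)[0]

Z = np.column_stack([np.ones(n), y_inc, w, r])        # all predetermined variables

# (i) Liu's route: least squares directly on the reduced form
rf_p = ols(Z, p)
rf_q = ols(Z, q)

# (ii) structural route (2SLS for the demand equation), then the implied
#      reduced-form responses of q to w and r, which demand excludes a priori
p_hat = Z @ rf_p                                      # first stage
coef_demand = ols(np.column_stack([np.ones(n), p_hat, y_inc]), q)   # [const, a, b]
implied = coef_demand[1] * rf_p[2], coef_demand[1] * rf_p[3]

print("direct LS reduced form, q on (w, r)        :", rf_q[2], rf_q[3])
print("implied by structural estimates, q on (w, r):", implied)
```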

Liu's argument was in effect underpinned by acute considerations about the correspondence of the estimated relationships with the ‘true’ ones in economic reality. In the absence of a testing scheme, Liu skirted round the issue of how to detect misspecification in the a priori identifying restrictions, and sought more fundamental solutions through the reconstruction of applied models. His applied experience taught him that models with more dynamic factors, i.e. with ‘more finely divided time‐periods’ (e.g. quarterly or monthly) or further disaggregated (p.163) sectors, could get around the underidentification problem, because the chance of identification would rise with the number of lagged (predetermined) variables (1960). Here he turned his sharp eye upon the error autoregressive model form, initiated by Orcutt and Cochrane, and questioned its validity in handling residual serial correlation. His argument was simple but forceful:

To consider an autoregressive scheme for an error term as a part of structural estimation is clearly unsatisfactory. For the use of such a scheme amounts to a confession that an economic explanation for a systematic (nonrandom) part of the movement in the variable to be explained has not been found. . . . The omission of relevant variables is an important, if not the main, reason for the existence of serial correlation in the estimated residuals (Liu 1960).

From this standpoint, he advocated the strategy of introducing more lagged variables into theoretical models, recommending particularly two models in this context (1960). One was Klein and Barger's recursive quarterly US model (Klein and Barger 1954), which was regarded by Liu as ‘a fundamental reversal of the position underlying the simultaneous‐equation approach’. The other was the experimental analysis of adaptive expectations by M. Nerlove.

Nerlove's model resulted from his doctoral research in estimating farmers' responses to prices (1958; written in 1954–6). The applied problem led him to study the issue of correspondence between data information and theory. Nerlove observed that ‘insufficient attention has been devoted to the problem of identifying the price variable to which farmers react’, and one of his major objectives was to ‘identify’ the appropriate price variable connected to farmers' response with one or several observable variables (pp. 24–5). The connotation associated with his use of the term ‘identify’ differed significantly from that in the Cowles identification theory. It contained a certain reversion towards the original identification problem. It anticipated the development of a different notion of identification in the later literature of economic time‐series analysis, e.g. in Box and Jenkins (1970). Nerlove resorted to dynamic models for solutions to his identification problem. He believed that dynamic models could mimic reality better than static models (1956, 1958). In particular, he chose the hypothesis of adaptive expectations to (p.164) account for farmers' adjustment to price response and derived a theoretical relationship, later known as the ‘partial adjustment model’. Its simple form is:

y_t = \alpha x_t + \beta y_{t-1} + v_t.    (6.5)

Actually, the relationship that Nerlove developed at the time and used mainly was the type which is now labelled ‘dead‐start’:

y_t = \alpha x_{t-1} + \beta y_{t-1} + v_t.    (6.6)

Nerlove found that his model produced better estimated results than those produced by static models, and reduced the symptoms associated with residual autocorrelation at the same time. Since this apparently satisfied his main objective of estimation, Nerlove did not explore further the implications of his model specification, but simply demonstrated that his adjustment model implied ‘certain forms of distributed lags’ (Nerlove 1958) (see the next section).
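A minimal simulation, with invented series and parameter values, of fitting the partial adjustment form (6.5) and the ‘dead‐start’ form (6.6) by least squares; the data are generated from (6.6), so the second regression recovers the assumed coefficients while the first illustrates the effect of using the wrong lag of x.

```python
# Simulated comparison of the partial adjustment form (6.5) and the
# 'dead-start' form (6.6); all series and parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
T, alpha, beta = 400, 0.6, 0.5

x = rng.normal(size=T)
v = rng.normal(scale=0.3, size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = alpha * x[t - 1] + beta * y[t - 1] + v[t]    # data generated from (6.6)

def ols(X, z):
    return np.linalg.lstsq(X, z, rcond=None)[0]

# form (6.5): regress y_t on the current x_t and y_{t-1}
print("fitted as (6.5):", ols(np.column_stack([x[1:], y[:-1]]), y[1:]))
# form (6.6): regress y_t on x_{t-1} and y_{t-1}; recovers approximately [alpha, beta]
print("fitted as (6.6):", ols(np.column_stack([x[:-1], y[:-1]]), y[1:]))
```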

It is interesting to observe that there was a certain degree of similarity in people's attitudes to testing and identification with respect to model construction. Here, for those who looked at the identification problem more from the angle of data information, the part of a model instigated by identification considerations was interpreted as an inherent part of the theoretical model. Identification served hypothesis formulation. As for those who approached the problem more from the theoretical angle, the imposed identification conditions were treated largely as additional to the structural model. Identification then helped tuck the original model further away from statistical tests against data. Neither way of interpreting the identification conditions in model construction brought the issue closer to hypothesis‐testing. In fact, model estimation attracted the central interest of econometricians from both sides.

6.3 Estimation and Model Construction

The most heated controversies over model construction from the late 1940s till the early 1960s involved methods of estimation, mainly between the ML camp and the LS camp. As mentioned in Chapter 3, least‐squares estimators held on to their popularity with applied modellers, in spite of the advantages of the maximum‐likelihood (p.165) technique demonstrated by the Cowles group. Superficially, this popularity had much to do with the obvious computing advantages of the LS estimators. But a deeper and more crucial reason for the unceasing controversies lay in the fact that the choice of estimators depended upon the type of structural model specified, so that the validity of the choice depended vitally upon the validity of the specified model. This was clearly at the heart of the debate between H. Wold and proponents of the SEM.

As described in earlier chapters, Wold started his econometric career as a purely statistical time‐series analyst. When he began his econometric practice by building applied models for demand analysis (1943–4), he was especially won over by the method of regression analysis taught by H. Cramér and by the disequilibrium approach of the Swedish macroeconomic school, pursued by means of ‘sequence’ analysis.3 From this knowledge and experience, Wold developed an approach called ‘recursive’ or ‘causal‐chain’ modelling to challenge the simultaneous‐equations modelling approach (see Epstein 1987; Morgan 1991).

Wold's interest in the causal implications of models was provoked primarily by the early debate over the choice of regression direction in connection with demand analysis (Wold 1965). Looking from a dynamic viewpoint, Wold maintained that the problem of ‘regression direction’ should be solved by formulating applied models, such as demand models, in terms of non‐simultaneous systems according to the subject‐matter of research. Furthermore, he established a device called ‘conditional regression analysis’, based upon the implicit causal links provided by the subject‐matter, to solve the problem of high interdependency (or collinearity) among explanatory variables. It was designed to condition certain explanatory variables on other available information, e.g. to estimate the price elasticity of demand conditioned upon the estimated income elasticity obtained from cross‐section data, so as to avoid the collinearity between price and income. Least squares could therefore always remain a valid estimator (Wold 1943–4). From the same viewpoint, Wold interpreted Tinbergen's macroeconometric model as a recursive model, and Tinbergen's approach as a (p.166) significant endorsement of his own. (Tinbergen (1939: i. 13) explicitly referred to the ‘sequence analysis’ of Swedish economists in formulating dynamic economic theory.)

The discovery of Haavelmo bias and the consequent development of the simultaneous‐equations model by the Cowles group greatly upset Wold, because it seemed to invalidate all the results that he had worked out on the basis of the LS method. So Wold, together with R. Bentzel, investigated the degree of OLS bias in the context of a dynamic system (i.e. a recursive model) underlying all his previous applied studies (Bentzel and Wold 1946). Their original model was:

x_t^{(i)} = F_i\big(x_t^{(1)}, x_{t-1}^{(1)}, \ldots, x_t^{(i-1)}, x_{t-1}^{(i-1)}, \ldots; \; x_{t-1}^{(i)}, x_{t-2}^{(i)}, \ldots, x_{t-1}^{(n)}, x_{t-2}^{(n)}, \ldots\big) + \xi_t^{(i)}, \quad (i = 1, \ldots, n; \; t = 1, \ldots, T).    (6.7)

A simpler representation of the model was in linear matrix form:

y_t = A y_t + B z_t + v_t \quad \text{with} \quad E(y_t \mid y_t, z_t) = A y_t + B z_t,    (6.8)

where \{y_t, z_t\} = x_t (see Wold 1965).

Wold and Bentzel regarded the system (6.7) as sufficiently general to represent the gist of sequence analysis, for it described an economic structure as a clearly defined causal chain in a recursive manner, ‘i.e. x_t^{(i)} can for every i be calculated from the development of x^{(1)}, . . . , x^{(n)} up to the time point t−1’ (Bentzel and Wold 1946). They found with relief that the LS method could, under fairly general assumptions, still apply to their type of dynamic system. This discovery drew their attention from defending the LS method to defending their model system. Thus, when Wold and Bentzel's 1946 paper came out, the focus of the dispute shifted from different estimation techniques to different strategies for formulating theoretical models (see Wold 1965).
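The Bentzel–Wold finding can be illustrated with a small simulated causal‐chain system in the spirit of (6.8). The two‐equation system and its coefficients below are invented; because the matrix of coefficients on current endogenous variables is (lower) triangular and each equation's error is independent of the explanatory variables appearing in it, ordinary least squares applied equation by equation remains consistent.

```python
# A hypothetical two-equation recursive (causal-chain) system in the spirit of (6.8):
#   y1_t = b11*z_t + b12*y1_{t-1} + v1_t
#   y2_t = a21*y1_t + b21*z_t     + v2_t
# The coefficient matrix on current endogenous variables is lower triangular,
# so least squares applied equation by equation is consistent.
import numpy as np

rng = np.random.default_rng(4)
T = 1000
b11, b12, a21, b21 = 0.8, 0.5, 1.2, -0.4

z = rng.normal(size=T)
v1 = rng.normal(scale=0.3, size=T)
v2 = rng.normal(scale=0.3, size=T)
y1, y2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    y1[t] = b11 * z[t] + b12 * y1[t - 1] + v1[t]
    y2[t] = a21 * y1[t] + b21 * z[t] + v2[t]

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("equation 1 by OLS:", ols(np.column_stack([z[1:], y1[:-1]]), y1[1:]))  # ~ [b11, b12]
print("equation 2 by OLS:", ols(np.column_stack([y1[1:], z[1:]]), y2[1:]))   # ~ [a21, b21]
```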

The findings with Bentzel strengthened Wold's belief in the recursive system as the most basic structural model representing a ‘discrete process’ of ‘a joint probability distribution which specifies . . . an infinite sequence of variables’ (Wold and Juréen 1953).4 In order to maintain this position, Wold undertook a (p.167) series of studies exploring the ‘attractive features’ of his recursive model (1965). He showed that the recursive model guaranteed a straightforward causal interpretation (1949, 1954), and that any given set of time‐series could be represented formally as a causal‐chain system satisfying the basic regression requirements (1948, 1951). Meanwhile, Wold and his colleagues cast serious doubt on Haavelmo's and the Cowles simultaneous‐equations formulation, which Wold referred to as an interdependent (ID) system. He criticized it for failing to specify clear‐cut causal relations and their dynamic motion (Bentzel and Wold 1946; Wold and Juréen 1953; Wold 1954, 1956; Bentzel and Hansen 1955). This immediately became entangled with Simon's (1953) observations on causality and identifiability described above in Section 6.2 (Wold 1954, 1955; Simon 1955). The debate over causality instigated Wold to seek a ‘precise causal interpretation . . . to the coefficients of ID systems’ using his causal‐chain systems (1965). The outcome of his investigation was that if an interdependent system was causally interpretable, it was then ‘either an approximation to the recursive system or a description of its equilibrium state’ (Strotz 1960; also Strotz and Wold 1960). In present terminology, this outcome seems to suggest that any identifiable interdependent system could be ‘encompassed’ by a recursive system.

Using causal chains as the main criterion, Wold further criticized the simultaneous‐equations approach by warning that ‘specification errors’ would occur when causal relations were ill‐defined or important variables were omitted (1954, 1956). He explained that these errors were different from sampling errors and tended to be of a much larger order of magnitude than sampling errors in the case of large samples. Worse still, he wrote, ‘the presence of a specification error is not signalled by the standard error of a regression coefficient, for this accounts only for the sampling error. . . . No routine methods are available for guarding ourselves against a specification error’ (Wold 1954; see also Strotz 1960). Thus Wold saw the need to clarify further the rationale of non‐experimental modelling with respect to causal interpretation. This led him to the idea that ‘the general procedures of operation with relations’ in model building should best be ‘specified in terms of conditional expectations’, which he termed ‘eo ipso predictors’ (1965; see also Wold 1961, 1963).

(p.168) Despite his opposition to the simultaneous‐equations model formulation, Wold's overall view on modelling procedures was very much the same as that of the simultaneous‐equations camp, especially in view of his strong insistence on the priority of theory in formulating a good causal model. He relied upon economic theory for arbitration among multiple hypotheses, and believed that ‘ad hoc tests’ should be used only on the basis of ‘theoretical or empirical evidence’ to detect possible specification errors. In case an error did exist, correction could be made by adding ‘further factors as explanatory variables’ to the model (Wold 1954). His unanimity with the Cowles group on the structural modelling procedure can best be seen in his conviction that a system of statistical relations, i.e. the reduced‐form type, was ‘less general’ than structural models, and that structural models in the form of the causal‐chain system were less general than those in the form of the simultaneous‐equations system. Therefore, he adhered to the structural modelling procedure, even when he derived the same reduced form from alternative or even rival structural systems (Wold 1956). In his 1965 paper, Wold outlined three types of structural models: (A) the vector regression system:

y_t = R z_t + \varepsilon_t \quad \text{with} \quad E(y_t \mid z_t) = R z_t;    (6.9)

(B) the causal‐chain system (6.8); and (C) the interdependent system:

y_t = C y_t + D z_t + \omega_t \quad \text{with} \quad E(y_t \mid y_t, z_t) \neq A y_t + B z_t.    (6.10)

Wold showed that all three had the same reduced form. But he did not stop to ponder the implication of this, probably because of its label of ‘reduced form’. It was the original structural models that he was interested in. He asserted, from the standpoint of economic theory, that ‘models A–C represent three levels of increasing generalization’, and that the only trouble with (C) was that its ‘cause–effect specification’ and ‘chainwise forecasting’ broke down (Wold 1965). Therefore the causal‐chain system appeared to be the most appropriate type. The remedy that he suggested for (C) was to respecify it into an ‘expectational interdependent system’ using the rationale of Theil's 2SLS method. Up to this point, Wold seemed relieved at having finally justified the validity of the LS estimation method.
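The common reduced form that Wold noted, but did not dwell on, follows from elementary matrix algebra. In the notation of (6.8)–(6.10), and assuming the matrices I − A and I − C are invertible, the step can be written out as:

```latex
% Solving (6.8) and (6.10) for y_t gives the same form as the vector regression (6.9).
\begin{aligned}
\text{from (6.8):}\quad & y_t = A y_t + B z_t + v_t
  \;\Longrightarrow\; y_t = (I - A)^{-1} B z_t + (I - A)^{-1} v_t ,\\
\text{from (6.10):}\quad & y_t = C y_t + D z_t + \omega_t
  \;\Longrightarrow\; y_t = (I - C)^{-1} D z_t + (I - C)^{-1} \omega_t ,
\end{aligned}
```

so each system takes the form of (6.9), y_t = R z_t + error, with R = (I − A)^{-1}B or (I − C)^{-1}D respectively; what distinguishes the three types is the structural interpretation attached to the equations, not the reduced form itself.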

(p.169) Apparently, Wold came quite near to unveiling, from the angle of recursive analysis, the myth of the generality of an SEM and the discrepancy in modellers' concepts of economic theory. But the structural approach had deeply indoctrinated him with the generality of structural models over their statistical counterparts (reduced forms). So his argument remained somewhat confused on the issues of model generalization and of causality versus testability.

Wold was far from a lone defender of least squares in that respect. As described earlier, Theil based his procedure of constructing and specifying forecasting models mainly upon LS estimators; Liu advocated the reduced‐form LS approach, also for forecasting purposes. Moreover, Fox (1956) re‐estimated the Klein–Goldberger model by LS and found that the degree of Haavelmo bias was smaller than expected. A number of Monte Carlo studies demonstrated fairly good performance of the LS estimators as compared with the ML estimators in finite samples (cf. Christ 1960; Hildreth 1960).5 As the position of the LS methods was increasingly consolidated by empirical findings, modellers gradually shifted their attention from structural estimates to estimates of statistical relations, and hence to reduced‐form equations. In particular, this was reflected in Klein's (1960) remark that ‘in practically all the situations, the reduced‐form equations are the important ones to consider even though they are derived from the more basic structural equations. . . . This is true whether we are considering a fixed or changing structure’. Underlying this shift of attention was actually the old concern over the ‘true’ correspondence of model results with reality, and the fact that this correspondence problem had not yet been adequately tackled in the formalization of the structural modelling procedure. Without definite confirmation of the true correspondence, the status of structural models in applied circumstances was inevitably weakened.

It was in this context that Waugh (1961) presented an open opposition to the route of ‘structural analysis’. Waugh pointed out that Haavelmo bias was widely misunderstood by the profession, for the nature of the bias lay not in the statistical (p.170) sense but in the imposition of a simultaneous‐equations model as the structural model. He demonstrated that the bias arose because the parameter of the de facto conditional expectation of one variable given another, estimated by least squares, was not the same as the parameter of interest formulated a priori in an SEM; or, in his words, ‘the basic structural true equations give biased estimates of the expected value of the dependent variable’. Waugh observed from his own empirical experience that the parameters of interest in most practical circumstances were of the conditional‐expectation type, and that the LS methods were therefore adequate for applied purposes. Since the construction of an SEM was, in practice, carried through by attributing endogeneity or exogeneity to the variables in question, Waugh strongly criticized the ‘metaphysical rituals about endogenous and exogenous variables’ in the ‘routine’ practice of structural modelling for impeding empirical economists from appreciating ‘the exact meaning of the available statistical data’ (Waugh 1961).
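Waugh's distinction between the least‐squares (conditional‐expectation) parameter and the a priori structural parameter can be made concrete with a toy simultaneous system; the equations and numbers below are invented for illustration. Ordinary least squares of y on x recovers the slope of the linear projection of y on x, which differs from the structural β whenever x responds to y.

```python
# Toy simultaneous system illustrating Waugh's point (and the source of Haavelmo bias):
#   y = beta*x + u,    x = gamma*y + z + v,    with z exogenous.
# OLS of y on x estimates the slope of the linear projection of y on x,
# not the structural parameter beta.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
beta, gamma = 0.5, 0.8

z = rng.normal(size=n)
u = rng.normal(size=n)
v = rng.normal(size=n)
x = (z + v + gamma * u) / (1.0 - gamma * beta)   # solving the two equations jointly
y = beta * x + u

ols_slope = np.polyfit(x, y, 1)[0]
print("structural beta       :", beta)
print("OLS / projection slope:", ols_slope)
```

Whether the structural β or the projection slope is the ‘parameter of interest’ is exactly the question that Waugh pressed.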

The debates around LS versus ML and causal chain versus simultaneity made it clear that ‘the choice between different estimation techniques must be decided upon from case to case with regard to the theoretical model that underlies the application at issue’ (Wold 1965). But, on the other hand, a working theoretical model could rarely be deduced with certainty from a priori economic theory. In most applied circumstances, its formulation involved frequent adjustments in response to the various results of trial estimations. Such uses of estimation disrupted the role assigned to estimation in the structural approach formalized not long before, i.e. merely measuring the magnitudes of the coefficients of given structural models as accurately as possible. As in the cases of testing and identification, estimation in practice also transgressed its formal scope, and overlapped with testing and identification in exerting an indispensable ‘feedback’ effect on applied model construction. The transgression is perceivable in the debates described above. Some accounts of the feedback effect are now described.

As mentioned previously, purely theoretical models of the interdependent type were found to be conveniently transformable into the SEM framework in econometrics, whereas the dynamic part (i.e. the lagged terms) of an SEM could rarely (p.171) find a transformable counterpart in economic theory. Consequently, dynamic specification was where estimation was found in practice to serve hypothesis formulation most significantly. Innovative research of this sort during the 1950s was best represented by the works of T. M. Brown, L. M. Koyck, M. Nerlove, and J. D. Sargan. The pragmatic and inventive style of these works bore a close resemblance to that of the early applied works of the 1920s and 1930s. Their studies resulted in the formulation of the model types of ‘distributed lags’, ‘partial adjustment’ (as seen in Section 6.2 above), and ‘error correction’.

T. M. Brown (1952) started his study by estimating the aggregate Canadian consumption function. In trying to find the best fit with the data, he experimented with three types of relations: (a) a simple static equation embodying the conventional economic theory; (b) an equation with the lagged explanatory variable (i.e. a distributed lag relation):

C = a_0 + a_1 Y + a_2 Y_{-1} + u    (6.11)

and (c) an equation with the lagged explained variable (i.e. a partial adjustment relation):

C = a_0 + a_1 Y + a_2 C_{-1} + u.    (6.12)

Brown found that (b) produced a better statistical fit than (a), especially with respect to residual serial correlation, and that the statistical fit in (c) was ‘quite good’. He thus observed that ‘lagged values of some variables involved exert an important influence on current consumer behaviour’, and justified (b) and (c) as representing consumers' ‘habit persistence’ or ‘inertia’ behaviour in adjusting to income changes. Both types, and the explanation of inertia reaction, were soon adopted in modelling other areas by the profession, e.g. in the Klein–Goldberger model (Klein and Goldberger 1955).

About the same time, L. M. Koyck (1954) studied ‘distributed lags’ in their general form in relation to investment analysis:

y_t = \sum_{i=0}^{\infty} \alpha_i x_{t-i} + u_t.    (6.13)

His explanation for (6.13) was that it represented a certain ‘time‐shape of an economic reaction’ (p. 3). To circumvent the difficulty of estimating the α's in the case of high multicollinearity (p.172) (i.e. high interdependency between one variable and its lags), Koyck restricted the α's to be geometrically decreasing coefficients:

y_t = \alpha \sum_{i=0}^{\infty} \lambda^{i} x_{t-i} + u_t, \qquad (0 \le \lambda < 1).    (6.14)

This enabled him to transform the distributed lags into the following relation involving only the current exogenous variable plus the once‐lagged dependent variable:

y_t = \alpha x_t + \lambda y_{t-1} + u_t - \lambda u_{t-1}.    (6.15)
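The step from (6.14) to (6.15), the Koyck transformation, amounts to lagging (6.14) once, multiplying by λ, and subtracting; a compact restatement of the algebra:

```latex
% Koyck transformation: lag (6.14) once, multiply by lambda, and subtract.
\begin{aligned}
y_t &= \alpha \sum_{i=0}^{\infty} \lambda^{i} x_{t-i} + u_t ,\\
\lambda y_{t-1} &= \alpha \sum_{i=0}^{\infty} \lambda^{i+1} x_{t-1-i} + \lambda u_{t-1}
                 = \alpha \sum_{i=1}^{\infty} \lambda^{i} x_{t-i} + \lambda u_{t-1} ,\\
y_t - \lambda y_{t-1} &= \alpha x_t + u_t - \lambda u_{t-1} ,
\end{aligned}
```

which is (6.15). Note that the transformation turns a serially uncorrelated u_t into the moving‐average error u_t − λu_{t−1}.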

Klein immediately saw the similarity of Koyck's result to the Brown hypothesis (6.12) that he had used. Klein expressed his preference for (6.12) over (6.15) as a basic structural form, on the grounds that (6.12) did not induce additional residual autocorrelation if there was none to start with, and that it was difficult to find an acceptable theoretical rationale for imposing both geometric lags and the autocorrelated disturbances in (6.15) (Klein 1958). Concurrently, the scheme (6.12) was further linked to the theory of adaptive expectations by Nerlove, and developed formally into the partial adjustment rationale, as shown in Section 6.2.

Unlike Brown, Koyck, and Nerlove, whose research was directly concerned with applied issues, Sargan made his contribution through developing estimators for a generalized simultaneous-equations model incorporating an autoregressive error scheme (see Chapter 5). During the derivation of an ML estimator for the model, Sargan noticed that the additional autoregressive scheme amounted to imposing only an extra set of restrictions upon the ‘restricted’ ML estimation of the structural form, while the ML estimation of the reduced form remained ‘unrestricted’ (1961). Subsequently, Sargan elaborated and applied these findings in a paper called ‘Wages and Prices in the United Kingdom: A Study in Econometric Methodology’ (1964).

In that paper, Sargan showed explicitly that the error auto‐regressive equation was equivalent to ‘a set of non‐linear restrictions’ imposed upon a general equation with autoregressive and distributed lags (ADL), ‘transformed’ by combining a simultaneous‐equation model with an error autoregressive scheme. Roughly, Sargan's formulation runs as follows (in matrix form): (p.173)

  • The simultaneous-equation system: $A X_t = u_t$.

  • A first-order error autoregression: $u_t - k u_{t-1} = e_t$.

  • The combination of the two: $A X_t - k A X_{t-1} = e_t$.

This implied a ‘transformed equation’ with autoregressive and distributed lags:

\[
B \xi_t = e_t , \qquad B = (A,\; -kA) , \qquad \xi_t = \begin{pmatrix} X_t \\ X_{t-1} \end{pmatrix} .
\]

He proposed a likelihood ratio (LR) test based upon the unrestricted ML estimates of the general ‘transformed equation’ versus the restricted ML estimates of the structural equations to check ‘if the autoregressive assumption is correct’. He deduced from the unrestricted ADL equation that if the autoregressive assumption was rejected, ‘a more complicated structure of lags, or a longer lag is required in the structural equation on at least one of the variables’. Then he observed, first, that the coefficients of the general ‘transformed equation’ could ‘indicate’ the direction of ‘modification’ (i.e. respecification) of the lags if the test results so required, and, secondly, that the modification often ended up with the autoregressive coefficient ceasing to be significant (i.e. its value approaching zero). These findings led him to the conclusion ‘that its [the autoregressive coefficient] significance in the original form of the structural equation was due to the variables in the equation having the wrong lags’ (Sargan 1964). His conclusion formally verified Liu's intuitive criticism of the error autoregressive scheme, as described in Section 6.2.
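The logic of the test can be illustrated in a stripped-down, single-equation setting. The sketch below is an assumption-laden simplification (simulated data, one regressor, a grid search standing in for Sargan's ML iterations), not his multi-equation procedure: it compares the restricted model with an AR(1) error against the unrestricted transformed (ADL) equation and forms the likelihood-ratio statistic for the single nonlinear restriction involved.

```python
# Illustrative single-equation version of the likelihood-ratio check of the
# autoregressive-error ('common factor') restriction. Simulated data; the
# one-regressor setup and grid search are assumptions made for brevity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T = 200
x = np.cumsum(rng.normal(size=T))            # hypothetical explanatory series
e = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + e[t]             # AR(1) error: the restriction holds here
y = 2.0 * x + u

def ssr(dep, X):
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    resid = dep - X @ beta
    return resid @ resid

# Restricted model y_t = a + b*x_t + u_t with u_t = rho*u_{t-1} + e_t:
# concentrate rho out by quasi-differencing and searching over a grid.
ssr_r = min(
    ssr(y[1:] - r * y[:-1],
        np.column_stack([np.ones(T - 1), x[1:] - r * x[:-1]]))
    for r in np.linspace(-0.95, 0.95, 191)
)

# Unrestricted 'transformed' (ADL) equation: y_t on const, x_t, x_{t-1}, y_{t-1}.
ssr_u = ssr(y[1:], np.column_stack([np.ones(T - 1), x[1:], x[:-1], y[:-1]]))

# One nonlinear restriction (the coefficient on x_{t-1} must equal minus the
# product of the other two slope coefficients), so compare with chi-squared(1).
lr = (T - 1) * np.log(ssr_r / ssr_u)
print(f'LR = {lr:.2f}, 5% chi2(1) critical value = {stats.chi2.ppf(0.95, 1):.2f}')
```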

Sargan applied his method to modelling wages and prices in the UK. He believed from his theoretical deduction that ‘the only criterion’ regarding ‘the most appropriate form of equation’ was having non-autocorrelated errors. He therefore turned down the growth-rate model formerly recommended by the Cambridge group, and modified it by adding a ‘correction’ factor formed from the difference between lagged wages and prices. The resulting wage-determination equation in essence took the form:

\[
\Delta w_t = \alpha_0 + \alpha_1 \Delta p_t + \alpha_2 ( w_{t-1} - p_{t-1} ) + e_t .
\tag{6.16}
\]

Sargan explained the economic interpretation of (6.16) as allowing potentially for separate expressions of the ‘equilibrium’ wage level $E(w_t)$ and the wage ‘dynamic adjustment’ process (p.174) $\Delta w_t$. An alternative explanation, which he pointed out, was the rationale of dynamic ‘correction’ behaviour, given by A. W. Phillips in a theoretical study of the relationship between unemployment and wage rates (1958).6 The factor $(w_{t-1} - p_{t-1})$ in (6.16) could also be seen as embodying such a ‘correction’ mechanism. Thereafter, the equation form:

\[
\Delta y_t = \alpha_0 + \alpha_1 \Delta x_t + \alpha_2 ( y_{t-1} - k x_{t-1} ) + e_t ,
\tag{6.17}
\]
similar to (6.16), came to be known as ‘error correction’.
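In later usage the error-correction form is commonly presented as an exact reparameterization of an autoregressive distributed-lag equation; a sketch of that algebra, in notation not taken from Sargan's paper, is as follows. Starting from the ADL(1,1) equation
\[
y_t = \beta_0 + \beta_1 x_t + \beta_2 x_{t-1} + \beta_3 y_{t-1} + e_t ,
\]
subtracting $y_{t-1}$ from both sides and adding and subtracting $\beta_1 x_{t-1}$ gives
\[
\Delta y_t = \beta_0 + \beta_1 \Delta x_t - (1 - \beta_3)\left( y_{t-1} - k x_{t-1} \right) + e_t ,
\qquad k = \frac{\beta_1 + \beta_2}{1 - \beta_3} ,
\]
which matches (6.17) with $\alpha_1 = \beta_1$ and $\alpha_2 = -(1 - \beta_3)$. The ‘correction’ coefficient thus measures the speed of adjustment towards the long-run relation between $y$ and $x$ (with slope $k$), assuming $\beta_3 < 1$.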

It is discernible that Sargan's 1964 paper, as its subtitle ‘A Study in Econometric Methodology’ reflects, carried within it the seeds of an approach to model construction different from the stereotypical structural route. His demonstration of the generality and importance of the ‘transformed equation’ not only provided formal support for Liu's argument in favour of the reduced-form approach, but also suggested a constructive way to implement the hypothesis-testing principle for choosing among alternative hypotheses. However, the seeds were overshadowed by Sargan's dominant interest in methods of estimation. He stated this explicitly at the outset of the 1964 paper: ‘the primary intention of this study was to develop methods of estimation, and to compare different methods of estimation when estimating structural relationships from economic time-series when the errors in the relationships are auto-correlated.’

The applied studies described above implied that ‘an acceptable rationale’ with respect to economic theory and ‘a feasible estimation procedure applicable to a wide range of problems’ were essential for applied model construction (Griliches 1967). Through the common practice of constructing empirical models with heavy reference to estimation results in the 1960s and 1970s, these soon became widely accepted as the general criteria of good econometric model design.

(p.175) 6.4 Model Construction and the Probability Approach

It has been stated in Section 1.5 that Haavelmo's probability revolution was not complete, and that what got through was merely the adoption of probability theory as the pillar of those statistical methods found applicable in the structural modelling framework. This has been illustrated through Chapters 2 to 5, where we saw that the part backed by the success of the revolution developed faster and more smoothly (e.g. methods of estimation and identification) than the part without such backing (e.g. methods of model building and testing). The problem of the incomplete revolution is more evident in the arguments and debates described in the first three sections of this chapter. There we saw that even the relatively well-formulated part (identification and estimation) was in question, because it could not stand independent of the fragile part of model construction and testing in any applied circumstances. In particular, the fact, described in Section 4.3, that LS methods remained the most widely used in practice despite the theoretical beauty of the ML principle made it clear that most applied modellers saw little practical point in constantly remembering to start their modelling by thinking of all the variables as jointly distributed in probability terms, which had been such a crucial point in the partial victory of the probability approach.

In Section 1.5, the incomplete revolution has been attributed to the ‘deterministic’ attitude towards economic theory in the structural modelling procedure. We saw, especially from the early part of this chapter, that econometricians actually began to realize that there was a significant discrepancy between the very abstract and seemingly incontestable theory of the general equilibrium model, the cornerstone of the structural approach, and the rather restrictive and uncertain economic theories available as a basis for any applied studies. Therefore, mending the weak state of economic theory was the immediate response of many econometric theorists. For instance, the end of 1950 saw the Cowles Commission taking a ‘shift toward theoretical work to obtain better models preparatory to another phase of empirical work’ (Christ 1953: 47). Its subsequent theoretical work bearing the closest link with its SEM formulation (p.176) in econometrics was on issues concerning general equilibrium, and particularly the dynamic path of the equilibrium of an economic system. Here, a number of momentous contributions were produced from the late 1940s to the mid-1950s, associated with the names of Koopmans, Hurwicz, K. Arrow, G. Debreu, and E. Malinvaud (cf. Arrow and Intriligator 1982; Weintraub 1985; Debreu 1986).7 However, it was hard to discern, from their models, any serious concern with the issue of theory testability against data information, or the need to have ‘in mind some actual experiment, or some design of an experiment’ (Haavelmo 1944: 6). The demarcation of econometrics and mathematical economics apparently made theorists feel justified in dissociating their research from economic data altogether.

Since in the circumstances the discrepancy between actual theories and the ideal theory was neither eliminated, nor in any prospect of being eliminated, by newly developed theoretical models in economics, applied modellers often found themselves compelled to violate the structural modelling procedure, trying out and appending various ad hoc alterations and assumptions to the original structural model in order to make it data-permissible. The original ‘maintained hypothesis’ was either too uncertain to maintain, or too simple to allow an economically meaningful choice between the null and the alternative hypotheses, or both. However, once the maintained hypothesis was free to alter, as suggested by Theil (1961), the ‘deterministic’ view of treating the structural model as ‘maintained’ seemed no longer maintainable. The issue of (structural) model choice had to be faced and given explicit treatment. The debate on causality between Wold and the Cowles people, for instance, cast serious doubt on the maintainability of the SEM in the dynamic context. Interestingly, H. Wold (1965) later regarded the debate as a reflection of ‘the transition from deterministic to stochastic models’, and R. L. Basmann (1963) thought of it as one over specifications of the ‘jointly determined variables’. Their opinions brought the problems of model construction back to the (p.177) probability approach, and indicated that the approach had not yet been carried through.

Actually, the problems of how to view the probability approach with respect to the establishment of the structural modelling procedure were taken up by R. Vining in the well-known debate between the Cowles Commission and the (US) National Bureau of Economic Research (NBER) in the late 1940s. The debate started from Koopmans's review of Burns and Mitchell's 1946 book Measuring Business Cycles. Since this work represented the NBER's strongly data-orientated research methodology, Koopmans's review led to a methodological debate between the Cowles approach and the NBER approach (see also Epstein 1987).

Koopmans criticized the NBER's approach to business cycle analysis as one of thoroughgoing empiricism, ‘measurement without theory’ (Koopmans 1947b). Vining produced a sharp review of the work of the Cowles group in response. He observed:

Some of his [Koopmans] discussion suggests that we have already at hand a theoretical model. . . . Koopmans doesn't give his hypotheses specific economic content. He discusses the mathematical form that the model should (or must) take; and suggests the kind of content it should have in very general terms. . . . But apparently all he has to insist upon at present is the mathematical form, and from his discussion it appears not unfair to regard the formal economic theory underlying his approach as being in the main available from works not later than those of Walras. (Vining 1949)

Vining thus concluded that the ‘entire line of development’ of the Cowles group was to solve ‘the problem of statistical estimation that would be presented by the empirical counterpart of the Walrasian conception’. But he pointed out that ‘the adequacy of this model’ still awaited evidence, and expressed the view that ‘the Walrasian conception’ was ‘in fact a pretty skinny fellow of untested capacity’. With regard to the uncertainty of actual economic theory, Vining argued that ‘statistical economics is too narrow in scope if it includes just the estimation of postulated relations’. He related this further to the probability approach and wrote:

Probability theory is fundamental as a guide to an understanding of the nature of the phenomena to be studied and not merely as a basis (p.178) for a theory of the sampling behaviour of estimates of population parameters the characteristics of which have been postulated. In seeking for interesting hypotheses for our quantitative studies we might want to wander beyond the classic Walrasian fields and to poke around the equally classic fields once cultivated by such men as Lexis, Bortkievicz, Markov, and Kapteyn. (Vining 1949)

From this standpoint, he praised the studies of Burns and Mitchell as representing ‘an accumulation of knowledge’, for they aimed at ‘discovery and hypothesis‐seeking’ (Vining 1949).

In reply, Koopmans agreed with Vining's probability argument:

Probability, randomness, variability, enter not only into estimation and hypothesis‐testing concerning economic behaviour parameters. These concepts are an essential element in dynamic economic theory, in the model we form of the conditioning (rather than determination) of future economic quantities by past economic developments. (Koopmans 1949a)

However, Koopmans's real intention was to defend the Cowles position of strong reliance on economic theory. He observed pessimistically that hypothesis‐seeking touched on some ‘unsolved problems at the very foundations of statistical theory’. Although ‘a formal view’ purported that ‘hypothesis‐seeking and hypothesis‐testing differ only in how wide a set of alternatives is taken into consideration’, ‘there remains scope for doubt whether all hypothesis‐seeking activity can be described and formalized as a choice from a preassigned range of alternatives’ (Koopmans 1949a).

Plainly, the Koopmans–Vining debate brought up the issue of how to construct models from tentative and indefinite hypothetical theories. Both sides recognized that the structural modelling procedure in the Cowles formalization was incapable of handling the strong uncertainty of theories, for that uncertainty overruled the basic assumption that the theory underlying the structural model should be known to be true and general. Noticeably, both sides also admitted that the probability approach should not be narrowly confined to the specification of distribution functions of variables for estimation. But neither side was able to suggest constructive, systematic methods for handling this uncertainty in model construction by means of the probability approach.

(p.179) During the 1950s, more and more people came to the view that theoretical uncertainty was creating a bottleneck for econometric model construction. The issue was chosen as the central theme of the presidential address at the 1957 conference of the Econometric Society (Haavelmo 1958). Haavelmo acknowledged the phenomenon that ‘the concrete results of our efforts at quantitative measurements often seem to get worse the more refinement of tools and logical stringency we call into play’. His explanation was that the previous econometric work was essentially ‘repair work’ on the basis of an economic model being ‘in fact “correct” or “true” ’, whereas in reality ‘the “laws” of economics are not very accurate in the sense of a close fit’. Hence he appealed to econometricians to overcome their ‘passive attitude’ to economic axioms and cut down the amount of ‘repair work’ so as ‘to bring the requirements of an econometric “shape” of the models to bear upon the formulation of fundamental economic hypotheses from the very beginning’. He further emphasized:

We have a very important task of formulating and analysing alternative, feasible economic structures, in order to give people the best possible basis for choice of the kind of economy they want to live in. By formulating alternatives in the language of econometrics we may also be in a position to judge the amount of quantitative information concerning these alternatives that could conceivably be extracted from data of past and current facts of our economy. . . .

I believe the econometricians have a mission in fostering a somewhat bolder attitude in the choice of working hypotheses concerning economic goals and economic behaviour in a modern society. (Haavelmo 1958)

But, like Koopmans, Haavelmo could not offer a concrete way out of the ‘model choice’ problem. Indeed, with the strengthening of the newly established structural approach, it was almost impossible for Koopmans, Haavelmo, and their associates to turn and squarely challenge the very foothold upon which they had just set up the whole structural enterprise. However, as long as economic theory kept its general and ‘maintained’ position in the form of structural models, the principle of hypothesis-testing remained powerless to discriminate between models, and data information had no explicit channel through which to feed back into new model construction. Therefore, the probability revolution remained incomplete.

(p.180) Around the turn of 1960, Haavelmo's advocacy for the probability approach was effectively submerged in mainstream econometrics. Meanwhile, the approach revived partially in the emergence of a heterodox school of econometrics—the Bayesian school. The early 1960s saw the first introduction of Bayes's principle of inference into econometrics, e.g. through the pioneer works of W. D. Fisher (1962), J. Drèze (1962), and T. J. Rothenberg (1963). The pioneers reckoned that Bayes's principle offered a convenient way to handle the uncertainty in a priori formulated theories, because these theories could be expressed in the form of prior distribution densities in model specification in the Bayesian framework. The Bayesian method thus put forth a formal way to represent the uncertainty of theory in model construction in line with the probability approach. However, the early Bayesian econometricians did not pursue this route much farther. Instead of challenging the structural modelling procedure, they based themselves upon it, with the belief that their approach provided a better set of instruments for it, particularly with respect to the aspect of modelling various policy scenarios. Hence they devoted almost all their efforts in devising Bayesian tools of estimation and identification, equivalent to those of the standard econometrics within the same structural modelling paradigm (see Qin 1991). Nevertheless, their continuation of the probability approach was hardly noticed at the time, since the Bayesian method was strongly rejected from the outset by orthodox econometricians for its subjective image.

Econometricians would still have to try all the paths in the labyrinth of structural modelling for years before they would stumble upon some way out. To achieve that, they would have to recoordinate the steps of model estimation, identification, testing, specification, and construction in a systematic way, in addition to making refinements on each, such that these together would embody a progressive searching process for better econometric models. More fundamentally, they would have to push along the probability revolution. This required them to move away from ‘repair work’ and take a ‘bolder attitude’ to challenge ‘fundamental economic hypotheses’ (Haavelmo 1958), to overcome the narrow outlook of regarding probability theory ‘merely as a basis for a theory of the sampling behaviour of estimates of population parameters the characteristics of which have been (p.181) postulated’ (Vining 1949), to change ‘deductively formulated theories’ or postulates (Stone 1951) from the position of maintained hypotheses to that of any testable hypotheses, and to instil their stochastic, non‐experimental viewpoint into the largely deterministic, metaphysical camp of economics. Before then many problems awaited solution. Yet many more would still arise thereafter. (p.182)

Notes:

(1) This chapter moves slightly away from the style used in the previous chapters. Instead of making an extensive survey of the historical events, it focuses on the most representative and controversial events in order to provide readers with a full perspective over the whole period of history examined. The literature covered here is therefore not exhaustive.

(2) C. Christ gave an account of all the places in Klein and Goldberger (1955) where statistical selections among different equation forms were suggested, and commented that ‘their work would be even more useful to others had they presented all the equations with which they experimented and the estimates obtained for each’ (Christ 1956).

(3) Actually Wold had part of his demand studies published in Swedish as early as 1940; see the reference in Wold (1943).

(4) Notice that Wold only considered ‘stationary’ processes in his statistical discussions.

(5) These two papers together with Liu (1960) were components of a ‘symposium on simultaneous‐equation estimation’ originating from a panel discussion on the topic at the 1958 meeting of the Econometric Society in Chicago.

(6) Actually, Phillips (1954) first came up with the theory in an effort to formulate stabilization policy in a dynamic context. In the paper, Phillips made the hypothesis that whenever there occurred an error, i.e. the difference between the actual level and the desired level of a variable, a certain stabilizing factor ‘will be changing in a direction which tends to eliminate the difference and at a rate proportional to the difference’, and suggested the use of a ‘derivative correction’ mechanism, opposite to the acceleration mechanism, to depict such an adjustment movement towards the desired level.

(7) Their main contributions include Debreu (1951), Koopmans (1951), Hurwicz and Arrow (1952), Malinvaud (1953), and Arrow and Debreu (1954). Notice that Malinvaud did his work during a year's visit to the Cowles Commission in 1950–1.