In several accounts of what models are and how they function, a specific view dominates. This view has the following characteristics: first, there is a clear-cut distinction between theories, models and data; second, empirical assessment takes place after the model is built. This view, in which discovery and justification are disconnected, is not in accordance with several practices of mathematical business-cycle model building. What these practices show is that models have to meet implicit criteria of adequacy, such as satisfying theoretical, mathematical and statistical requirements, and being useful for policy. In order to be adequate, models have to integrate enough items to satisfy such criteria. Besides theoretical notions, policy views, mathematisations of the cycle and metaphors, these items also include empirical data and facts. So, the main thesis of this chapter is that the context of discovery is the successful integration of those items that satisfy the criteria of adequacy. Because certain items are empirical data and facts, justification can be built in.
Economics is dominated by model building; an understanding of how such models work is therefore vital to understanding the discipline. This book provides a critical analysis of the economist's favourite tool, and as such will be an enlightening read for some and an intriguing one for others.
Introduction: philosophy of science in practice. Editorial article, European Journal for Philosophy of Science, Volume 1, Number 3, pages 303-307. DOI 10.1007/s13194-011-0036-4. Authors: Rachel Ankeny (School of History & Politics, University of Adelaide), Hasok Chang (Department of History and Philosophy of Science, University of Cambridge), Marcel Boumans (Faculty of Economics and Business, University of Amsterdam), Mieke Boon (Department of Philosophy, University of Twente). Online ISSN 1879-4920, Print ISSN 1879-4912.
A long-standing tradition presents economic activity in terms of the flow of fluids. This metaphor lies behind a small but influential practice of hydraulic modelling in economics. Yet turning the metaphor into a three-dimensional hydraulic model of the economic system entails making numerous and detailed commitments about the analogy between hydraulics and the economy. The most famous 3-D model in economics is probably the Phillips machine, the central object of this paper.
The kinds of models discussed in this paper function as measuring instruments. We concentrate on two necessary steps for measurement: (1) the search for a mathematical representation of the phenomenon; (2) the requirement that this representation cover an invariant relationship between the properties of the phenomenon to be measured and observable associated attributes of a measuring instrument. The measuring instrument should therefore function as a nomological machine. However, invariant relationships are not necessarily ceteris paribus regularities; they can also occur when the influence of the environment is negligible. Then we are able to achieve accurate measurements outside the laboratory.
In this chapter, we discuss a specific kind of progress that occurs in most branches of economics today: progress involving the repeated use of mathematical models. We adopt a functional account of progress to argue that progress in economics occurs through the use of what we call “common recipes” and model templates for defining and solving problems of relevance for economists. We support our argument by discussing the case of 20th-century business cycle research. By presenting this case study in detail, we show how model templates are not only reapplied to different phenomena; we also show how scientists first develop them and how, once they are considered less useful, they are replaced with new ones. Finally, our case also illustrates that it is not only the mathematical structure that is reused: such reuse also requires a shared conceptual vision of the core properties of the phenomenon to be studied. If that vision is no longer shared among economists, a model template can become useless and has to be replaced, sometimes against resistance, with a different one.
Since the February 2020 publication of the article ‘Flattening the curve’ in The Economist, political leaders worldwide have used this expression to legitimize the introduction of social distancing measures in fighting Covid-19. In fact, this expression represents a complex combination of three components: the shape of the epidemic curve, the social distancing measures, and the reproduction number R. Each component has its own history, each with a different history of control. Presenting the control of the epidemic as flattening the curve is in fact flattening the underlying natural-social complexity. The curve that needs to be flattened is presented as a bell-shaped curve, implicitly suggesting that the pathogen’s spread is subject only to natural laws. The R value, however, is fundamentally a metric of how a pathogen behaves within a social context: its numerical value is affected by sociopolitical influences. The jagged and erratic empirical curve of Covid-19 illustrates this. Although the virus has most likely infected only a small portion of the total susceptible population, it is clear that the curve's shape has changed drastically. This changing shape is largely due to sociopolitical factors, including shifting formal laws and policies, shifting individual behaviors, and shifting various other social norms and practices. This makes the course of the Covid-19 curve both erratic and unpredictable.
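A minimal, hypothetical sketch may help picture why the reproduction number governs the curve's shape (a textbook SIR-style discretization, not the model discussed in the paper; all parameter values are made up):

```python
# Minimal SIR sketch: the curve of infections flattens when the effective
# reproduction number R = beta/gamma is pushed closer to 1, e.g. by social
# distancing measures that lower the contact rate beta.
# All parameter values below are illustrative only.

def sir_peak_infected(beta, gamma, days=300, dt=1.0, n=1_000_000, i0=10):
    """Euler-discretized SIR model; returns the peak number of infected."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# Uncontrolled spread (R = 0.5/0.2 = 2.5) versus distancing (R = 1.25):
print(sir_peak_infected(beta=0.5, gamma=0.2))   # high, sharp peak
print(sir_peak_infected(beta=0.25, gamma=0.2))  # much lower, flatter peak
```

Pushing beta/gamma towards 1, for instance by measures that reduce the contact rate, lowers and delays the peak, which is what "flattening the curve" refers to; the point of the paper is that beta itself is a sociopolitical, not a purely natural, quantity.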
In the social sciences we can hardly create laboratory conditions; we can only try to find out which kinds of experiments Nature has carried out. Knowledge about Nature's designs can be used to infer conditions for reliable predictions. This problem was explicitly dealt with in Haavelmo's (1944) discussion of autonomous relationships, Friedman's (1953) as-if methodology, and Simon's (1961) discussions of nearly-decomposable systems. All three accounts take Marshallian partitioning as their starting point, though not with a sharp ceteris paribus razor but with the blunt knife of negligibility assumptions. As will be shown, in each account reflection on which influences are negligible, for what phenomena and for how long, played a central role.
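Simon's notion of a nearly-decomposable system can be pictured schematically as follows (the notation is chosen here purely for illustration and is not taken from the texts discussed):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Near decomposability, schematically: strong interactions within the diagonal
% blocks A_{11}, A_{22}, weak couplings of order epsilon between them.
% Notation chosen here for illustration only.
\[
A =
\begin{pmatrix}
A_{11} & \varepsilon B_{12}\\
\varepsilon B_{21} & A_{22}
\end{pmatrix},
\qquad 0 < \varepsilon \ll 1 .
\]
% In the short run each subsystem can be analysed as if epsilon were zero
% (a negligibility assumption); over longer horizons the weak couplings matter.
\end{document}
```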
There are at least two elements of theory completion necessary for measurement: (1) a measurement formula and (2) standardization of that representation. Standardization is based on the search for stability: the more stable the correlation which the measurement formula represents, the less influence other circumstances have. The interconnection between testing, mathematical representation and standardization is then of a hierarchical order. By testing a model one tries to find out to what extent the model covers the data of the phenomenon, while to be a candidate for a measurement formula the model must represent the whole data range. And among the possible representations, the standard model represents the most stable correlation under different circumstances. Lucas's model of the Phillips curve has been used to investigate this interconnection between testing, representation and stability.
The Representational Theory of Measurement conceives measurement as establishing homomorphisms from empirical relational structures into numerical relational structures, called models. There are two different approaches to the justification of a model: an axiomatic and an empirical approach. The axiomatic approach verifies whether a given relational structure satisfies certain axioms that secure a homomorphic mapping. The empirical approach conceives models as functioning as measuring instruments by transferring observations of a phenomenon under investigation into quantitative facts about that phenomenon. These facts are evaluated by their accuracy and precision. Precision is generally achieved by least squares methods and accuracy by calibration. For calibration, standards are needed. Then two polar strategies can be distinguished: white-box modeling and black-box modeling. The first strategy aims at estimating the invariant equations of the phenomenon, thereby fulfilling Hertz's correctness requirement. The latter strategy is to use known stable facts about the phenomenon to adjust the model parameters, thereby fulfilling Hertz's appropriateness requirement. For this latter strategy, the requirement of models as homomorphic mappings has been dropped. While the axiomatic approach is more often used for measurement in the laboratory, the empirical approach is more appropriate for measurement outside the laboratory. The reason for this is that for measurement of phenomena outside the laboratory, one also needs to take account of the environment to achieve accurate results. Environments are generally too relation-rich for an axiomatic approach, which is only applicable to relation-poor systems. The white-box modeling strategy, reflecting the complexity of the environment due to its correctness requirement, will, however, lead to immensely large models. To avoid this problem, modular design is an appropriate strategy to reduce this complexity. Modular design is a grey-box modeling strategy. Grey-box models are assemblies of modules; these are black boxes with standard interfaces. It should be noted that the structure of the assemblage need not be homomorphic to the relations describing the interaction between phenomenon and environment. These three modeling strategies map out the possible designs for computer simulations as measuring instruments. Whether a simulation is based on a white-box, grey-box or black-box model is determined only by the relationship between the phenomenon and its environment and not by, e.g., its materiality or physicality.
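For readers unfamiliar with the representational idiom, the homomorphism requirement can be stated compactly; the following is a standard textbook formulation for a simple extensive structure, not a formulation taken from the paper above:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% The homomorphism requirement of the Representational Theory of Measurement,
% stated for a simple extensive structure (standard textbook form).
Let $\langle A, \succeq, \circ \rangle$ be an empirical relational structure and
$\langle \mathbb{R}, \geq, + \rangle$ a numerical one. A mapping
$\phi \colon A \to \mathbb{R}$ is a homomorphism (and thus a measurement scale) iff,
for all $a, b \in A$,
\[
a \succeq b \iff \phi(a) \geq \phi(b)
\qquad\text{and}\qquad
\phi(a \circ b) = \phi(a) + \phi(b).
\]
\end{document}
```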
Generally, simulations are carried out to answer specific questions. The assessment of the reliability of an answer depends on the kind of question investigated. The answer to a 'why' question is an explanation. The premises of an explanation have to include invariant relationships, and thus the reliability of such an answer depends on whether the domain of invariance of the relevant relationships covers the domain of the question. The answer to a 'how much' question is a measurement. A measurement is reliable when it is an output of a calibrated measuring instrument.
Generally, rational decision-making is conceived as arriving at a decision by a correct application of the rules of logic and statistics; if not, the conclusions are called biased. After an impressive series of experiments and tests carried out in the last few decades, the view arose that rationality is tough for all, skilled field experts not excluded. A new type of planner's counsellor is called for: the normative statistician, the expert in reasoning with uncertainty par excellence. To unravel this view, the paper explores a specific practice of clinical decision-making, namely Evidence-Based Medicine. This practice is chosen because it is very explicit about how to rationalize practice. The paper shows that whether a decision-making process is rational cannot be assessed without taking into account the environment in which the decisions have to be taken. To be more specific, the decision to call for new evidence should be rational too. This decision and the way in which this evidence is obtained are crucial to validate the base rates. Rationality should be model-based, which means that not only should the isolated decision-making process take a Bayesian updating process as its norm, but the acquisition of evidence (priors and test results) should also be modelled as a rational process.
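A small numerical sketch of the role of base rates in Bayesian updating may make the point concrete (the test characteristics and base rates below are hypothetical and serve only to illustrate the mechanism):

```python
# Bayesian updating for a diagnostic test: the posterior probability of disease
# depends as much on the base rate (the prior) as on the test's characteristics.
# All numbers are hypothetical.

def posterior(base_rate, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    p_pos_given_d = sensitivity
    p_pos_given_not_d = 1.0 - specificity
    p_pos = p_pos_given_d * base_rate + p_pos_given_not_d * (1.0 - base_rate)
    return p_pos_given_d * base_rate / p_pos

# Same test, different populations:
print(posterior(base_rate=0.01, sensitivity=0.9, specificity=0.9))  # ~0.083
print(posterior(base_rate=0.30, sensitivity=0.9, specificity=0.9))  # ~0.794
```

The same test result yields very different posteriors in the two populations, which is why the way the base rate is obtained must itself be part of the rational model.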
Scientific measurements are made objective through the use of reliable instruments. Instruments can have this function because they can, as material objects, be investigated independently of the specific measurements at hand. However, their materiality appears to be crucial for the assessment of their reliability. The usual strategies to investigate an instrument's reliability depend on and assume possibilities of control, and control is usually specified in terms of the materiality of the instrument and its environment. The aim of this paper is to investigate the problem of reliability for non-material instruments, the instruments applied in the social sciences. Any possible lack of reliability of the instrument prevents the measurements from ever becoming objective.
According to Suppes, measurement theory, like any scientific theory, should consist of two parts: a set-theoretically defined structure and the empirical interpretation of that structure. An empirical interpretation means the specification – 'coordinating definitions' – of a 'hierarchy of models' between the theory and the experimental results. But in the case of measurement theory, he defined the relationship between the numerical structure and the empirical structure specifically in terms of homomorphism. This is a rather restrictive relation between models, and therefore he never succeeded in giving his measurement theory empirical content. This paper discusses what an empirical measurement theory would look like if we used less restrictive 'coordinating definitions' to specify the relationships between the various models.
The Representational Theory of Measurement conceives measurement as establishing homomorphisms from empirical relational structures into numerical relational structures, called models. Models function as measuring instruments by transferring observations of an economic system into quantitative facts about that system. These facts are evaluated by their accuracy. Accuracy is achieved by calibration. For calibration, standards are needed. Then two strategies can be distinguished: one aims at estimating the invariant (structural) equations of the system; the other is to use known stable facts about the system to adjust the model parameters. For this latter strategy, the requirement that models be homomorphic mappings is dropped.
Glenn Harrison [Journal of Economic Methodology, 2013, 20, 103–117] discusses four related forms of methodological intolerance with respect to field experiments: field experiments should rely on some form of randomization, should be disconnected from theory, the concept of causality should only be defined in terms of observables, and the role of laboratory experiments is dismissed. As is often the case, the cause of intolerance is ignorance, and so it is here. To acquire knowledge about potential influences, which we need for the evaluation of both the internal and the external validity of experimental results, we cannot do without theory. A purely empiricist methodology will be unable to give us sufficient understanding of the validity of these results. An account of causality based only on directly observed things is an account based on factual influences only. This account will be too restricted, because it will not deal with the unobserved potential influences, which we need – again – for the evaluat...
A typical difference between social science and natural science is the degree to which control is possible. Strategies in the two sciences to obtain true facts are consequently different. Measurement errors are due to background noise. Laboratories are environments in which background conditions can be controlled. As a result, accurate observations (measurement results close to the true values of the measurands) can only be obtained in laboratories. Therefore, measuring instruments are built such that they function as mini-laboratories. However, observations in social science are usually passive, in the sense that control of background conditions is impossible. Models are built to solve this problem of (lack of) control. They function as non-material laboratories by aiming at precision, that is, reducing the spread of the measurement errors. The application of models as measuring instruments necessitates a shift from the requirement of accuracy to the requirement of precision, which is a feature of the instrument and not of the environment.
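The accuracy/precision distinction can be illustrated with a small simulation (the bias and noise levels are made up; this is a sketch of the distinction, not of the paper's argument):

```python
# Accuracy vs precision: precision concerns the spread of the measurement errors
# (a feature of the instrument/model); accuracy concerns closeness to the true
# value, which also depends on uncontrolled background conditions.
# The numbers below are made up for illustration.
import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0

# Precise but inaccurate: small spread, uncorrected background bias.
biased = [TRUE_VALUE + 0.8 + random.gauss(0, 0.05) for _ in range(1000)]
# Accurate but imprecise: no bias, large spread.
noisy = [TRUE_VALUE + random.gauss(0, 1.0) for _ in range(1000)]

for name, xs in [("precise/biased", biased), ("accurate/noisy", noisy)]:
    print(name, round(statistics.mean(xs), 3), round(statistics.stdev(xs), 3))
```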
Unlike basic sciences, scientific research in advanced technologies aims to explain, predict, and describe not phenomena in nature, but phenomena in technological artefacts, thereby producing knowledge that is utilized in technological design. This article first explains why the covering‐law view of applying science is inadequate for characterizing this research practice. Instead, the covering‐law approach and causal explanation are integrated in this practice. Ludwig Prandtl’s approach to concrete fluid flows is used as an example of scientific research in the engineering sciences. A methodology of distinguishing between regions in space and/or phases in time that show distinct physical behaviours is specific to this research practice. Accordingly, two types of models specific to the engineering sciences are introduced. The diagrammatic model represents the causal explanation of physical behaviour in distinct spatial regions or time phases; the nomo‐mathematical model represents the phenomenon in terms of a set of mathematically formulated laws.
In economic methodology, a complete turn to practice is hampered by a broadly shared normative stance towards practice. The root of this normativism is Platonism. Platonism presupposes in essence a...
When observing or measuring phenomena, errors are inevitable; one can only aspire to reduce these errors as much as possible. An obvious strategy to achieve this reduction is to use more precise instruments. Another strategy was to develop a theory of these errors that could indicate how to take them into account. One of the greatest achievements of statistics at the beginning of the 19th century was such a theory of error. This theory told practitioners that the best thing they could do was to take the arithmetical mean of their observations. This average would give them the most accurate estimate of the value they were searching for. Soon after its invention, this method made a triumphal march across various sciences. Not all sciences, however, simply fell in line. The method only worked well when the various observations were made under similar circumstances and when there were very many of them. This was not the case in, for example, meteorology and actuarial science, the two sciences discussed in this paper.
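The core result of this classical theory of errors can be reconstructed in a few lines (a standard least-squares argument, not a rendering of the historical texts):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Why the arithmetic mean: for n observations x_1,...,x_n of an unknown quantity mu,
% the least-squares estimate minimizes the sum of squared errors.
\[
\hat{\mu} = \arg\min_{\mu}\sum_{i=1}^{n}(x_i-\mu)^2,
\qquad
\frac{d}{d\mu}\sum_{i=1}^{n}(x_i-\mu)^2 = -2\sum_{i=1}^{n}(x_i-\mu)=0
\;\Longrightarrow\;
\hat{\mu}=\frac{1}{n}\sum_{i=1}^{n}x_i .
\]
% With independent errors of common variance sigma^2 (observations made under
% similar circumstances), the spread of this estimate shrinks as sigma/sqrt(n),
% which is why the method needs many comparable observations.
\end{document}
```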
In economics, models are built to answer specific questions. Each type of question requires its own type of model; in other words, it defines the requirements that a model should meet and thereby instructs how the models should be built. An explanation is an answer to a ‘why’-question. In economics, this answer is provided by a white-box model. To answer a ‘how much’-question, which asks for a measurement, economists can make use of black-box models. Economic phenomena are often so complex that white-box models that should function as explanations for them appear not to be intelligible. So, for questions concerning understanding, e.g. ‘Can you clarify this phenomenon?’ or ‘Can you tell me what would happen if...?’, it would appear that grey-box models are more adequate. This implies a twofold revision of de Regt and Dieks’s criteria for understanding phenomena. First, whenever the term ‘theory’ is used it should be substituted by ‘model’. Secondly, the Criterion for the Intelligibility of Models is now simplified as: ‘A model M is intelligible for economists if they have built M as a grey-box construction’. A grey-box construction implies that they ‘can recognize qualitatively characteristic consequences of M without performing exact calculations’.
Assessment of error and uncertainty is a vital component of both natural and social science. This edited volume presents case studies of research practices across a wide spectrum of scientific fields. It compares methodologies and presents the ingredients needed for an overarching framework applicable to all.
The assessment of models in an experiment depends on their material nature and their function in the experiment. Models that are used to make the phenomenon under investigation visible (sensors) are assessed by calibration. However, calibration strategies assume material intervention. The experiment discussed in this paper is an experiment in economics to measure the influence of technology shocks on business cycles. It uses immaterial, mathematical instruments. It appears that calibration did not work for these kinds of models: it did not provide reliable evidence for the facts of the business cycle.
The metrology literature neglects a strong empirical measurement tradition in economics, which is different from the traditions as accounted for by the formalist representational theory of measurement. This empirical tradition comes closest to Mari's characterization of measurement in which he describes measurement results as informationally adequate to given goals. In economics, one has to deal with soft systems, which induces problems of invariance and of self-awareness. It will be shown that in the empirical economic measurement tradition both problems have been on the agenda for a long while, and that the proposed solutions to these problems provide clues for the directions in which one could develop a measurement theory that takes account of soft systems.
Notwithstanding the fact that a lot, if not most, of science is done outside the laboratory, most literature in the history and philosophy of science, when discussing the experimental method, focuses only on experimentation “within the walls of a laboratory”. To fill this embarrassing gap, Astrid Schwarz has written an excellent book on field experimentation. The field, however, is a much messier site than a clean lab. In an introduction to a special issue of Osiris on field science, Kuklick and Kohler list a number of the problems related to science in the field: as scientific rigor is defined by the standards of the laboratory, the field is considered to be “a site of compromised work: field sciences have dealt with problems that resist tidy solutions, and they have not excluded amateur participants”. To discuss science in the field, we will have to take account of a methodological tension between laboratory and field standards of evidence and reasoning. ...
In economics, questions are answered with the help of models. For each type of question there is a corresponding class of models: the type of question determines the requirements that a model must meet and thereby indicates how the models are to be built. This article deals with the question that is asked in order to arrive at understanding: the 'how come' question ('hoezo-vraag'). To make clear which models how-come questions call for, this type of question will be compared with 'why' questions and 'how much' questions. The answer to a why question is an explanation. The answer to a how-much question is a measurement result.
The practice of economic science is dominated by model building. To evaluate economic policy, models are built and used to produce numbers that inform us about economic phenomena. Although phenomena are detected through the use of observed data, they are in general not directly observable. To 'see' them we need instruments. More particularly, to obtain numerical facts about the phenomena we need measuring instruments. This paper will argue that in economics models function as such instruments of observation, more specifically as measuring instruments. In measurement theory, measurement is a mapping of some class of aspects or characteristics of the empirical world into a set of numbers. The paper's view is that economic modelling is a specific kind of mapping to which the standard account of how models are obtained and assessed does not apply. Models are not easily or simply derived from theories and subsequently tested against empirical data. Instruments are constructed by integrating several theoretical and empirical ideas and requirements in such a way that their performance meets a standard chosen beforehand. The empirical requirement is that the model should take account of the phenomenological facts, so that the reliability of the model is not assessed by post-model testing but obtained by calibration.
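As a deliberately crude sketch of calibration in this sense, one can imagine tuning a model's free parameter until a simulated moment matches a known fact about the phenomenon, rather than testing the model against data afterwards (the model, the moment and the target value below are all hypothetical):

```python
# Calibration in caricature: choose the model's free parameter so that a
# simulated moment matches a known "phenomenological fact".
# The model, the moment, and the target value are all hypothetical.
import random

def model_moment(theta, periods=10_000, seed=1):
    """Std. dev. of a simulated AR(1) series y_t = theta*y_{t-1} + e_t, e_t ~ N(0, 1)."""
    rng = random.Random(seed)
    y, ys = 0.0, []
    for _ in range(periods):
        y = theta * y + rng.gauss(0, 1)
        ys.append(y)
    mean = sum(ys) / periods
    return (sum((v - mean) ** 2 for v in ys) / periods) ** 0.5

TARGET = 2.0  # the known stable fact the model must reproduce (hypothetical value)

# Grid search: pick the parameter whose implied moment is closest to the target.
best_theta = min((t / 100 for t in range(100)),
                 key=lambda t: abs(model_moment(t) - TARGET))
print(best_theta, round(model_moment(best_theta), 2))
```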
This paper discusses the role of experts' observations in different practices of decision making. In these practices it is never the case that the observations of one sole expert are used, so discussing the role of experts' observations implies a discussion of how these observations are combined.