This paper has three main objectives: (a) Discuss the formal analogy between some important symmetry-invariance arguments used in physics, probability and statistics. Specifically, we will focus on Noether's theorem in physics, the maximum entropy principle in probability theory, and de Finetti-type theorems in Bayesian statistics; (b) Discuss the epistemological and ontological implications of these theorems, as they are interpreted in physics and statistics. Specifically, we will focus on the positivist (in physics) or subjective (in statistics) interpretations vs. objective interpretations that are suggested by symmetry and invariance arguments; (c) Introduce the cognitive constructivism epistemological framework as a solution that overcomes the realism-subjectivism dilemma and its pitfalls. The work of the physicist and philosopher Max Born will be particularly important in our discussion.
The main goal of this article is to use the epistemological framework of a specific version of Cognitive Constructivism to address Piaget's central problem of knowledge construction, namely, the re-equilibration of cognitive structures. The distinctive objective character of this constructivist framework is supported by formal inference methods of Bayesian statistics, and is based on Heinz von Foerster's fundamental metaphor of objects as tokens for eigen-solutions. This epistemological perspective is illustrated using some episodes in the history of chemistry concerning the definition or identification of chemical elements. Some of von Foerster's epistemological imperatives provide general guidelines of development and argumentation.
Optimization and Stochastic Processes Applied to Economy and Finance. Textbook for the BM&F-USP (Brazilian Mercantile and Futures Exchange - University of Sao Paulo) Master's degree program in Finance.
In this paper epistemological, ontological and sociological questions concerning the statistical significance of sharp hypotheses in scientific research are investigated within the framework provided by Cognitive Constructivism and the FBST (Full Bayesian Significance Test). The constructivist framework is contrasted with the traditional epistemological settings for orthodox Bayesian and frequentist statistics provided by Decision Theory and Falsificationism.
Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and non-statisticians (confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy logical requirements. However, Izbicki and Esteves show that the only tests that are in accordance with three logical requirements (monotonicity, invertibility and consonance) are trivial tests based on point estimation, which generally lack statistical optimality. As a possible solution to this dilemma, this paper adapts the above logical requirements to agnostic tests, in which one can accept, reject or remain agnostic with respect to a given hypothesis. Each of the logical requirements is characterized in terms of a Bayesian decision theoretic perspective. Contrary to the results obtained for regular hypothesis tests, there exist agnostic tests that satisfy all logical requirements and also perform well statistically. In particular, agnostic tests that fulfill all logical requirements are characterized as region-estimator-based tests. Examples of such tests are provided.
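Since the abstract characterizes logically coherent agnostic tests as region-estimator-based tests, the following is a minimal sketch of that idea, assuming a posterior sample and an equal-tailed credible interval as the region estimator; the function names and the 95% level are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def credible_interval(posterior_draws, level=0.95):
    """Equal-tailed credible interval used as the region estimator (illustrative choice)."""
    lo, hi = np.quantile(posterior_draws, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

def agnostic_test(posterior_draws, h_lo, h_hi, level=0.95):
    """Region-estimator-based agnostic test for H: theta in [h_lo, h_hi].

    Accept if the credible region is contained in H, reject if it is
    disjoint from H, and remain agnostic otherwise.
    """
    r_lo, r_hi = credible_interval(posterior_draws, level)
    if h_lo <= r_lo and r_hi <= h_hi:
        return "accept"
    if r_hi < h_lo or r_lo > h_hi:
        return "reject"
    return "agnostic"

# Illustrative use: posterior draws for a normal mean, H: |theta| <= 0.1
rng = np.random.default_rng(0)
draws = rng.normal(loc=0.05, scale=0.02, size=10_000)
print(agnostic_test(draws, -0.1, 0.1))   # expected: "accept"
```

Because the same region drives all three verdicts, monotonicity-type relations (if A implies B, accepting A forces accepting B) hold by construction in this sketch.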
The Full Bayesian Significance Test (FBST) for precise hypotheses is presented, with some illustrative applications. In the FBST we compute the evidence against the precise hypothesis: this evidence is the probability of the highest relative surprise set, "tangential" to the sub-manifold (of the parameter space) that defines the null hypothesis. We discuss some of the theoretical properties of the FBST, and provide an invariant formulation for coordinate transformations, provided a reference density has been established.
In this article, we discuss some issues concerning magical thinking—forms of thought and association mechanisms characteristic of early stages of mental development. We also examine good reasons for having an ambivalent attitude concerning the later permanence in life of these archaic forms of association, and the coexistence of such intuitive but informal thinking with logical and rigorous reasoning. On the one hand, magical thinking seems to serve the creative mind, working as a natural vehicle for new ideas and innovative insights, and giving form to heuristic arguments. On the other hand, it is inherently difficult to control, lacking the effective mechanisms needed for rigorous manipulation. Our discussion is illustrated with many examples from the Hebrew Bible, and some final examples from modern science.
Although logical consistency is desirable in scientific research, standard statistical hypothesis tests are typically logically inconsistent. To address this issue, previous work introduced agnostic hypothesis tests and proved that they can be logically consistent while retaining statistical optimality properties. This article characterizes the credal modalities in agnostic hypothesis tests and uses the hexagon of oppositions to explain the logical relations between these modalities. Geometric solids that are composed of hexagons of oppositions illustrate the conditions for these modalities to be logically consistent. Prisms composed of hexagons of oppositions show how the credal modalities obtained from two agnostic tests vary according to their threshold values. Nested hexagons of oppositions summarize logical relations between the credal modalities in these tests and prove new relations.
Heinz von Foerster characterizes the objects "known" by an autopoietic system as eigen-solutions, that is, as discrete, separable, stable and composable states of the interaction of the system with its environment. Previous articles have presented the FBST, Full Bayesian Significance Test, as a mathematical formalism specifically designed to assess the support for sharp statistical hypotheses, and have shown that these hypotheses correspond, from a constructivist perspective, to systemic eigen-solutions in the practice of science. In this article several issues related to the role played by language in the emergence of eigen-solutions are analyzed. The last sections also explore possible connections with the semiotic theory of Charles Sanders Peirce.
This article explores some open questions related to the problem of verification of theories in the context of empirical sciences by contrasting three epistemological frameworks. Each of these epistemological frameworks is based on a corresponding central metaphor, namely: (a) Neo-empiricism and the gambling metaphor; (b) Popperian falsificationism and the scientific tribunal metaphor; (c) Cognitive constructivism and the object as eigen-solution metaphor. Each one of these epistemological frameworks has also historically co-evolved with a certain statistical theory and method for testing scientific hypotheses, respectively: (a) Decision theoretic Bayesian statistics and Bayes factors; (b) Frequentist statistics and p-values; (c) Constructive Bayesian statistics and e-values. This article examines with special care the Zero Probability Paradox (ZPP), related to the verification of sharp or precise hypotheses. Finally, this article makes some remarks on Lakatos' view of mathematics as a quasi-empirical science.
In this paper, the notion of degree of inconsistency is introduced as a tool to evaluate the sensitivity of the Full Bayesian Significance Test (FBST) value of evidence with respect to changes in the prior or reference density. For that, both the definition of the FBST, a possibilistic approach to hypothesis testing based on Bayesian probability procedures, and the use of bilattice structures, as introduced by Ginsberg and Fitting in paraconsistent logics, are reviewed. The computational and theoretical advantages of using the proposed degree-of-inconsistency-based sensitivity evaluation as an alternative to traditional statistical power analysis are also discussed.
Decoupling is a general principle that allows us to separate simple components in a complex system. In statistics, decoupling is often expressed as independence, no association, or zero covariance relations. These relations are sharp statistical hypotheses that can be tested using the FBST - Full Bayesian Significance Test. Decoupling relations can also be introduced by some techniques of Design of Statistical Experiments, DSEs, like randomization. This article discusses the concepts of decoupling, randomization and sparsely connected statistical models in the epistemological framework of cognitive constructivism.
A new Evidence Test is applied to the problem of testing whether two Poisson random variables are dependent. The dependence structure is that of Holgate's bivariate distribution. This bivariate distribution depends on three parameters, with 0 < theta_1, theta_2 < infinity and 0 < theta_3 < min(theta_1, theta_2). The Evidence Test was originally developed as a Bayesian test, but in the present paper it is compared to the best known test of the hypothesis of independence in a frequentist framework. It is shown that the Evidence Test is considerably more powerful when the correlation is not too close to zero, even for small samples.
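For reference, Holgate's bivariate Poisson can be simulated by the standard trivariate reduction X = A + C, Y = B + C with independent Poisson components; the parameter names below follow the abstract, but the construction itself is stated here as an assumption about the model being referenced, not taken from the paper.

```python
import numpy as np

def holgate_bivariate_poisson(theta1, theta2, theta3, size, rng=None):
    """Simulate Holgate's bivariate Poisson by trivariate reduction.

    Marginally X ~ Poisson(theta1) and Y ~ Poisson(theta2), with
    Cov(X, Y) = theta3 and 0 < theta3 < min(theta1, theta2);
    theta3 = 0 corresponds to the independence hypothesis.
    """
    rng = rng or np.random.default_rng()
    a = rng.poisson(theta1 - theta3, size)
    b = rng.poisson(theta2 - theta3, size)
    c = rng.poisson(theta3, size)
    return a + c, b + c

x, y = holgate_bivariate_poisson(3.0, 2.0, 0.8, size=50_000,
                                 rng=np.random.default_rng(1))
print(np.cov(x, y)[0, 1])   # should be close to theta3 = 0.8
```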
Sortition, i.e. random appointment for public duty, has been employed by societies throughout the years as a firewall designed to prevent illegitimate interference between parties in a legal case and agents of the legal system. In judicial systems of modern western countries, random procedures are mainly employed to select the jury, the court and/or the judge in charge of judging a legal case. Therefore, these random procedures play an important role in the course of a case, and should comply with some principles, such as transparency and complete auditability. Nevertheless, these principles are neglected by random procedures in some judicial systems, which are performed in secrecy and are not auditable by the involved parties. The assignment of cases in the Brazilian Supreme Court is an example of such a procedure, for it is performed using procedures unknown to the parties involved in the judicial cases. This article presents a review of how sortition has been historically employed by societies and discusses how Mathematical Statistics may be applied to random procedures of the judicial system, as it has been applied for almost a century to clinical trials, for example. A statistical model for assessing randomness in case assignment is proposed and applied to the Brazilian Supreme Court. As final remarks, guidelines for the development of good randomization procedures are outlined.
Randomization is an integral part of well-designed statistical trials, and is also a required procedure in legal systems. Implementation of honest, unbiased, understandable, secure, traceable, auditable and collusion-resistant randomization procedures is a matter of great legal, social and political importance. Given the juridical and social importance of randomization, it is important to develop procedures in full compliance with the following desiderata: (a) Statistical soundness and computational efficiency; (b) Procedural, cryptographical and computational security; (c) Complete auditability and traceability; (d) Any attempt by participating parties or coalitions to spuriously influence the procedure should be either unsuccessful or be detected; (e) Open-source programming; (f) Multiple hardware platform and operating system implementation; (g) User friendliness and transparency; (h) Flexibility and adaptability for the needs and requirements of multiple application areas (like, for example, clinical trials, selection of jury or judges in legal proceedings, and draft lotteries). This paper presents a simple and easy-to-implement randomization protocol that assures, in a formal mathematical setting, full compliance with the aforementioned desiderata for randomization procedures.
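One standard building block for auditable, collusion-resistant randomization is a hash-based commit-then-reveal scheme; the sketch below illustrates only that generic idea, with made-up parties and a 12-chamber example, and is not presented as the protocol defined in the paper.

```python
import hashlib
import secrets

def commit(seed_hex: str, nonce_hex: str) -> str:
    """Digest published before the drawing; seed and nonce stay secret until reveal."""
    return hashlib.sha256(bytes.fromhex(nonce_hex + seed_hex)).hexdigest()

def combine(revealed_seeds) -> int:
    """Derive the outcome from all revealed seeds.

    No single party controls the result as long as at least one seed is
    honest, and every step can be re-executed by an auditor.
    """
    digest = hashlib.sha256("".join(sorted(revealed_seeds)).encode()).hexdigest()
    return int(digest, 16)

# Illustrative use: three sources jointly draw one of 12 court chambers.
seeds = [secrets.token_hex(16) for _ in range(3)]
nonces = [secrets.token_hex(16) for _ in range(3)]
commitments = [commit(s, n) for s, n in zip(seeds, nonces)]   # published first
chamber = combine(seeds) % 12                                  # published after reveal
print(commitments[0], chamber)
```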
We review the definition of the Full Bayesian Significance Test (FBST), and summarize its main statistical and epistemological characteristics. We also review the Abstract Belief Calculus (ABC) of Darwiche and Ginsberg, and use it to analyze the FBST's value of evidence. This analysis helps us understand the FBST's properties and interpretation. The definition of the value of evidence against a sharp hypothesis, in the FBST setup, was motivated by applications of Bayesian statistical reasoning to legal matters where the sharp hypotheses were defendants' statements, to be judged according to the Onus Probandi juridical principle.
This article presents a simple derivation of optimization models for reaction networks leading to a generalized form of the mass-action law, and compares the formal structure of Minimum Information Divergence, Quadratic Programming and Kirchhoff type network models. These optimization models are used in related articles to develop and illustrate the operation of ontology alignment algorithms and to discuss closely connected issues concerning the epistemological and statistical significance of sharp or precise hypotheses in empirical science.
This paper introduces pragmatic hypotheses and relates this concept to the spiral of scientific evolution. Previous works determined a characterization of logically consistent statistical hypothesis tests and showed that the modal operators obtained from such tests can be represented in the hexagon of oppositions. However, despite the importance of precise hypotheses in science, they cannot be accepted by logically consistent tests. Here, we show that this dilemma can be overcome by the use of pragmatic versions of precise hypotheses. These pragmatic versions allow a level of imprecision in the hypothesis that is small relative to other experimental conditions. The introduction of pragmatic hypotheses allows the evolution of scientific theories based on statistical hypothesis testing to be interpreted using the narratological structure of hexagonal spirals, as defined by Pierre Gallais.
We present SASC, Self-Adaptive Semantic Crossover, a new class of crossover operators for genetic programming. SASC operators are designed to induce the emergence and then preserve good building-blocks, using metacontrol techniques based on semantic compatibility measures. SASC performance is tested in a case study concerning the replication of investment funds.
We formulate the problem of permuting a matrix to block angular form as the combinatorial minimization of an objective function. We motivate the use of simulated annealing (SA) as an optimization tool. We then introduce a heuristic temperature-dependent penalty function in the simulated annealing cost function, to be used instead of the real objective function being minimized. Finally we show that this temperature-dependent penalty function version of simulated annealing consistently outperforms the standard simulated annealing approach, producing better solutions with smaller running times. We believe that the use of a temperature-dependent penalty function may be useful in developing SA algorithms for other combinatorial problems.
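A minimal sketch of the general idea follows: the cost used inside the Metropolis acceptance step is the true objective plus a penalty whose weight depends on the current temperature. The specific schedule below (weight growing as the temperature cools) and the helper names are illustrative assumptions, not the schedule or code used in the paper.

```python
import math
import random

def simulated_annealing(neighbor, objective, penalty, x0,
                        t0=1.0, t_min=1e-3, alpha=0.95, iters_per_t=100):
    """Simulated annealing with a temperature-dependent penalized cost.

    Acceptance is based on objective(x) + w(T) * penalty(x), with the
    penalty weight w(T) increasing as the temperature T decreases
    (illustrative schedule w(T) = 1 / T).
    """
    x, t = x0, t0
    best, best_cost = x0, objective(x0)
    while t > t_min:
        w = 1.0 / t                       # heavier penalty as the system cools
        for _ in range(iters_per_t):
            y = neighbor(x)
            delta = (objective(y) + w * penalty(y)) - (objective(x) + w * penalty(x))
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y
            # track the best feasible (zero-penalty) solution seen so far
            if penalty(x) == 0 and objective(x) < best_cost:
                best, best_cost = x, objective(x)
        t *= alpha
    return best, best_cost
```

Early on, the small penalty weight lets the chain wander through infeasible configurations; as the temperature drops, the growing weight progressively forces the search back onto feasible block-angular permutations.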
The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
Much forensic inference based upon DNA evidence is made assuming Hardy-Weinberg Equilibrium (HWE) for the genetic loci being used. Several statistical tests to detect and measure deviation from HWE have been devised, and their limitations become more obvious when testing for deviation within multiallelic DNA loci. The most popular methods, the Chi-square and Likelihood-ratio tests, are based on asymptotic results and cannot guarantee a good performance in the presence of low frequency genotypes. Since the parameter space dimension increases at a quadratic rate with the number of alleles, some authors suggest applying sequential methods, where the multiallelic case is reformulated as a sequence of "biallelic" tests. However, in this approach it is not obvious how to assess the general evidence of the original hypothesis; nor is it clear how to establish the significance level for its acceptance/rejection. In this work, we introduce a straightforward method for the multiallelic HWE test, which overcomes the aforementioned issues of sequential methods. The core theory for the proposed method is given by the Full Bayesian Significance Test (FBST), an intuitive Bayesian approach which does not assign positive probabilities to zero measure sets when testing sharp hypotheses. We compare FBST performance to the Chi-square, Likelihood-ratio and Markov chain tests in three numerical experiments. The results suggest that the FBST is a robust and high performance method for the HWE test, even in the presence of several alleles and small sample sizes.
Conditional independence tests have received special attention lately in the machine learning and computational intelligence literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypotheses, offered as an alternative to frequentist significance tests (characterized by the calculation of the p-value).
The concept of non-arbitrage plays an essential role in finance theory. Under certain regularity conditions, the Fundamental Theorem of Asset Pricing states that, in non-arbitrage markets, prices of financial instruments are martingale processes. In this theoretical framework, the analysis of the statistical distributions of financial assets can assist in understanding how participants behave in the markets, and may or may not engender arbitrage conditions. Assuming an underlying Variance Gamma statistical model, this study aims to test, using the FBST - Full Bayesian Significance Test, whether there is a relevant price difference between essentially the same financial asset traded at two distinct locations. Specifically, we investigate and compare the behavior of call options on the BOVESPA Index traded at (a) the Equities Segment and (b) the Derivatives Segment of BM&FBovespa. Our results seem to point to significant statistical differences. To what extent this evidence is actually the expression of perennial arbitrage opportunities is still an open question.
The Generalized Poisson Distribution (GPD) adds an extra parameter to the usual Poisson distribution. This parameter induces a loss of homogeneity in the stochastic processes modeled by the distribution. Thus, the generalized distribution becomes a useful model for counting processes where the occurrence of events is not homogeneous. This model creates the need for an inferential procedure to test for the value of this extra parameter. The FBST (Full Bayesian Significance Test) is a Bayesian hypothesis test procedure, capable of providing an evidence measure on sharp hypotheses (where the dimension of the parametric space under the null hypothesis is smaller than that of the full parametric space). The goal of this work is to study the empirical properties of the FBST for testing the nullity of the extra parameter of the generalized Poisson distribution. Numerical experiments show a better performance of the FBST with respect to the classical likelihood ratio test, and suggest that the FBST is an efficient and robust tool for this application.
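For concreteness, in Consul's usual parameterization (assumed here; the paper may use a different one) the GPD probability mass function and the sharp null hypothesis of nullity of the extra parameter look as follows.

```python
import math

def gpd_pmf(k, theta, lam):
    """Generalized Poisson pmf in Consul's parameterization (assumed here):
    P(X = k) = theta * (theta + k*lam)**(k - 1) * exp(-(theta + k*lam)) / k!
    With lam = 0 this reduces to the ordinary Poisson(theta), which is the
    sharp null hypothesis of homogeneity discussed in the abstract."""
    if theta + k * lam <= 0:
        return 0.0
    return (theta * (theta + k * lam) ** (k - 1)
            * math.exp(-(theta + k * lam)) / math.factorial(k))

# Sanity check: lam = 0 recovers the Poisson(2.5) pmf at k = 3.
print(gpd_pmf(3, 2.5, 0.0))
print(2.5 ** 3 * math.exp(-2.5) / math.factorial(3))
```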
A Bayesian measure of evidence for precise hypotheses is presented. The intention is to give a Bayesian alternative to significance tests or, equivalently, to p-values. A set in the parameter space is defined and its posterior probability, its credibility, is evaluated. This set is the "Highest Posterior Density Region" that is "tangent" to the set that defines the null hypothesis. Our measure of evidence is the complement of the credibility of the "tangent" region.
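Operationally, this measure can be approximated from a posterior sample: take the supremum of the posterior density over the null set, form the "tangent" set where the density exceeds that supremum, and report the complement of its posterior probability. The sketch below is a minimal Monte Carlo version for a one-dimensional parameter with a flat reference density; the example posterior and the function names are illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy import optimize, stats

def fbst_evalue(posterior_pdf, posterior_draws, null_bounds):
    """Monte Carlo e-value for H: theta in [lo, hi] (possibly degenerate, lo == hi).

    1. f_star = sup of the posterior density over the null set.
    2. Tangent set T = {theta : posterior_pdf(theta) > f_star}.
    3. ev(H) = 1 - Pr(T | data), estimated from posterior draws.
    (Flat reference density assumed, so surprise = posterior density.)
    """
    lo, hi = null_bounds
    if lo == hi:
        f_star = posterior_pdf(lo)
    else:
        res = optimize.minimize_scalar(lambda t: -posterior_pdf(t),
                                       bounds=(lo, hi), method="bounded")
        f_star = -res.fun
    tangent_prob = np.mean(posterior_pdf(posterior_draws) > f_star)
    return 1.0 - tangent_prob

# Illustrative use: posterior N(0.3, 0.1^2); sharp hypothesis H: theta = 0.
post = stats.norm(loc=0.3, scale=0.1)
draws = post.rvs(size=100_000, random_state=0)
print(fbst_evalue(post.pdf, draws, (0.0, 0.0)))   # small e-value: data disfavor H
```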
The Full Bayesian Significance Test (FBST) for precise hypotheses is applied to a Multivariate Normal Structure (MNS) model. In the FBST we compute the evidence against the precise hypothesis. This evidence is the probability of the Highest Relative Surprise Set (HRSS) tangent to the sub-manifold (of the parameter space) that defines the null hypothesis. The MNS model we present appears when testing equivalence conditions for genetic expression measurements, using micro-array technology.
This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
We study Compositional Models based on Dirichlet Regression where, given a (vector) covariate x, one considers the response variable, y, to be a positive vector following a conditional Dirichlet distribution given x. We introduce a new method for estimating the parameters of the Dirichlet Covariate Model given a linear model on x, and also propose a Bayesian model selection approach. We present some numerical results which suggest that our proposals are more stable and robust than traditional approaches.
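A common way to set up such a model, assumed here purely for illustration and not necessarily the paper's link function or estimation method, maps the linear predictor to the Dirichlet concentration parameters through an exponential link and fits by maximum likelihood.

```python
import numpy as np
from scipy import optimize
from scipy.special import gammaln

def dirichlet_loglik(beta_flat, X, Y):
    """Log-likelihood of a Dirichlet regression with log link:
    alpha_ij = exp(x_i . beta_j),  Y_i | x_i ~ Dirichlet(alpha_i).
    (Exponential link and MLE fitting are illustrative assumptions.)"""
    n, p = X.shape
    k = Y.shape[1]
    beta = beta_flat.reshape(p, k)
    alpha = np.exp(X @ beta)                        # n x k concentration matrix
    ll = (gammaln(alpha.sum(axis=1)) - gammaln(alpha).sum(axis=1)
          + ((alpha - 1.0) * np.log(Y)).sum(axis=1))
    return ll.sum()

def fit_dirichlet_regression(X, Y):
    p, k = X.shape[1], Y.shape[1]
    res = optimize.minimize(lambda b: -dirichlet_loglik(b, X, Y),
                            x0=np.zeros(p * k), method="BFGS")
    return res.x.reshape(p, k)

# Illustrative use with synthetic compositional data (3 parts, one covariate).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
Y = rng.dirichlet([2.0, 3.0, 5.0], size=200)
print(fit_dirichlet_regression(X, Y).round(2))
```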
This article explores the metaphor of Science as a provider of sharp images of our environment, using the epistemological framework of Objective Cognitive Constructivism. These sharp images are conveyed by precise scientific hypotheses that, in turn, are encoded by mathematical equations. Furthermore, this article describes how such knowledge is produced by a cyclic and recursive development, perfection and reinforcement process, leading to the emergence of eigen-solutions characterized by the four essential properties of precision, stability, separability and composability. Finally, this article discusses the role played by ontology and metaphysics in the scientific production process, and in which sense the resulting knowledge can be considered objective.
Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. Intentional sampling methods are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has been the subject of long-standing arguments and debates. Nevertheless, the intentional sampling techniques developed in this paper explore pragmatic strategies for overcoming some of the real or perceived shortcomings and limitations of intentional sampling in practical applications.
The Oct-14-1998 ordinance INDESP-104 established the federal software certification and verification requirements for gaming machines in Brazil. The authors present the rationale behind these criteria, whose basic principles can find use in several other software authentication applications.
The Full Bayesian Significance Test (FBST) for precise hypotheses is presented, with some applications relevant to reliability theory. The FBST is an alternative to significance tests or, equivalently, to p-values. In the FBST we compute the evidence of the precise hypothesis. This evidence is the probability of the complement of a credible set "tangent" to the sub-manifold (of the parameter space) that defines the null hypothesis. We use the FBST in an application requiring quality control of used components, based on remaining life statistics.
Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real-time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.
Intentional sampling methods are non-randomized procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. In this paper we extend previous works related to intentional sampling, and address the problem of sequential allocation for clinical trials with few patients. Roughly speaking, patients are enrolled sequentially, according to the order in which they start the treatment at the clinic or hospital. The allocation problem consists in assigning each new patient to one, and only one, of the alternative treatment arms. The main requisite is that the profiles in the alternative arms remain similar with respect to some relevant patients' attributes (age, gender, disease, symptom severity and others). We perform numerical experiments based on a real case study and discuss how to conveniently set up perturbation parameters, in order to yield a suitable balance between optimality – the similarity among the relative frequencies of patients in the several categories for both arms – and decoupling – the absence of a tendency to allocate each pair of patients consistently to the same arm.
This paper shows how an efficient and parallel algorithm for inference in Bayesian Networks (BNs) can be built and implemented combining sparse matrix factorization methods with variable elimination algorithms for BNs. This entails a complete separation between a first symbolic phase, and a second numerical phase.
The Gompertz distribution is commonly used in biology for modeling fatigue and mortality. This paper studies a class of models proposed by Adham and Walker, featuring a Gompertz type distribution where the dependence structure is modeled by a lognormal distribution, and develops a new multivariate formulation that facilitates several numerical and computational aspects. This paper also implements the FBST, the Full Bayesian Significance Test, for pertinent sharp (precise) hypotheses on the lognormal covariance structure. The FBST's e-value, ev(H), gives the epistemic value of the hypothesis H, or the value of the evidence in the observed data in support of H.
The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article for the unit root hypothesis is based solely on the posterior density function, without the need of imposing positive probabilities on sets of zero Lebesgue measure. Furthermore, it is conducted under strict observance of the likelihood principle. It was designed mainly for testing sharp null hypotheses and it is called FBST, for Full Bayesian Significance Test.
This article presents a two level hierarchical forecasting model developed in a consulting project for a Brazilian magazine publishing company. The first level uses a VARMA model and considers econometric variables. The second level takes into account qualitative aspects of each publication issue, and is based on polynomial networks generated by Genetic Programming (GP).
Gene clustering is a useful exploratory technique to group together genes with similar expression levels under distinct cell cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests if in fact it should be m - 1. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the full Bayesian significance test, an intuitive Bayesian approach, which needs no model complexity penalization nor positive probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
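The sequential logic can be sketched as the loop below. The evidence routine is left as a placeholder (the hypothetical `evidence_for_m_minus_1` argument), since the paper's test is the full Bayesian significance test rather than a penalized criterion, and the acceptance threshold is likewise a caller-supplied illustration.

```python
from sklearn.mixture import GaussianMixture

def choose_number_of_clusters(data, evidence_for_m_minus_1, threshold, m_max=10):
    """Sequential test for the number of mixture components.

    At each step, fit a multivariate normal mixture with m components
    (starting at m = 2) and test the sharp hypothesis that m - 1 components
    suffice.  If the hypothesis is rejected, increase m and test again;
    stop, returning m - 1, as soon as the hypothesis is accepted.
    """
    m = 2
    while m <= m_max:
        model = GaussianMixture(n_components=m, random_state=0).fit(data)
        ev = evidence_for_m_minus_1(model, data)   # placeholder for the e-value computation
        if ev >= threshold:                        # hypothesis "m - 1 components suffice" accepted
            return m - 1
        m += 1
    return m_max
```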
We study active set methods for optimization problems in Block Angular Form (BAF). We begin by reviewing some standard basis factorizations, including Saunders' orthogonal factorization and updates for the simplex method that do not impose any restriction on the pivot sequence and maintain the basis factorization structured in BAF throughout the algorithm. We then suggest orthogonal factorization and updating procedures that allow coarse grain parallelization, pivot updates local to the affected blocks, and independent block reinversion. A simple parallel environment appropriate to the description and complexity analysis of test procedures is defined in Section 5. The factorization and updating procedures are presented in Sections 6 and 7. Our update procedure outperforms conventional updating procedures even in a purely sequential environment.
The data analyzed in this paper are part of the results described in Bueno et al. (2000). Three cytogenetics endpoints were analyzed in three populations of a species of wild rodent – Akodon montensis – living in an industrial, an agricultural, and a preservation area at the Itajaí Valley, State of Santa Catarina, Brazil. The polychromatic/normochromatic ratio, the mitotic index, and the frequency of micronucleated polychromatic erythrocytes were used in an attempt to establish a genotoxic profile of each area. It was assumed that the three populations were in the same conditions with respect to the influence of confounding factors such as animal age, health, nutrition status, presence of pathogens, and intra- and inter-populational genetic variability. Therefore, any differences found in the endpoints analyzed could be attributed to the external agents present in each area. The statistical models used in this paper are mixtures of negative-binomials and Poisson variables. The Poisson variables are used as approximations of binomials for rare events. The mixing distributions are beta densities. The statistical analyses are carried out under the Bayesian perspective, as opposed to the frequentist analyses often considered in the literature, as for instance in Bueno et al. (2000).
We present a module based criterion, i.e. a sufficient condition based on the absolute values of the matrix coefficients, for the convergence of the Gauss–Seidel method (GSM) for a square system of linear algebraic equations: the Generalized Line Criterion (GLC). We prove GLC to be the "most general" module based criterion and derive, as GLC corollaries, some previously known and also some new criteria for GSM convergence. Although far more general than the previously known results, the proof of GLC is simpler. The results used here are related to recent research in stability of dynamical systems and control of manufacturing systems.
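To illustrate the kind of module-based condition involved, the sketch below checks a weighted row-dominance criterion (with unit weights this is the classical line criterion; the weighted form is shown only as an illustration, not as a reproduction of the paper's GLC) and runs the plain Gauss–Seidel iteration.

```python
import numpy as np

def line_criterion(A, g=None):
    """Weighted row-dominance check: with positive weights g_j, require
    sum_{j != i} |a_ij| * g_j < |a_ii| * g_i for every row i.
    With g = 1 this is the classical line criterion (strict diagonal
    dominance by rows); weights are an illustrative generalization."""
    A = np.asarray(A, dtype=float)
    g = np.ones(A.shape[0]) if g is None else np.asarray(g, dtype=float)
    diag = np.abs(np.diag(A)) * g
    off = np.abs(A) @ g - diag          # weighted off-diagonal row sums
    return bool(np.all(off < diag))

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Plain Gauss-Seidel iteration for A x = b."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])
print(line_criterion(A), gauss_seidel(A, b))
```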
In the financial markets, there is a well established portfolio optimization model called the generalized mean-variance model (or generalized Markowitz model). This model considers that a typical investor, while expecting returns to be high, also expects returns to be as certain as possible. In this paper we introduce a new media optimization system based on the mean-variance model, a novel approach in media planning. After presenting the model in its full generality, we discuss possible advantages of the mean-variance paradigm, such as its flexibility in modeling the optimization problem, its ability to deal with many media performance indices – satisfying most of the media plan needs – and, most importantly, the property of diversifying the media portfolios in a natural way, without the need to set up ad hoc constraints to enforce diversification.
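The mean-variance trade-off at the heart of this paradigm can be written as a small quadratic program: maximize expected performance minus a risk-aversion penalty on variance, subject to a budget and non-negativity constraints. The sketch below solves a long-only toy allocation with scipy; it illustrates the paradigm only, not the media optimization system itself, and the numbers are made up.

```python
import numpy as np
from scipy.optimize import minimize

def mean_variance_weights(mu, Sigma, risk_aversion=3.0):
    """Long-only generalized mean-variance allocation:
    maximize mu.w - (risk_aversion / 2) * w.Sigma.w,
    subject to sum(w) = 1 and w >= 0 (budget, no short positions)."""
    n = len(mu)
    objective = lambda w: -(mu @ w) + 0.5 * risk_aversion * w @ Sigma @ w
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, x0=np.full(n, 1.0 / n),
                   bounds=bounds, constraints=constraints)
    return res.x

mu = np.array([0.08, 0.12, 0.10])                 # expected performance indices (illustrative)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])            # covariance of outcomes (illustrative)
print(mean_variance_weights(mu, Sigma).round(3))  # diversified weights, no ad hoc caps needed
```

Note how the variance penalty alone spreads the weights across the three channels, which is the "natural diversification" property mentioned above.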
We describe a software system for the analysis of defined benefit actuarial plans. The system uses a recursive formulation of the actuarial stochastic processes to implement precise and efficient computations of individual and group cash flows.
To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or with multivariate models: VAR (VEC or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC is quite advanced, Bauwens et al. (1999) highlighted that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results". The present article applies the Full Bayesian Significance Test (FBST), especially designed to deal with sharp hypotheses, to cointegration rank selection tests in VECM time series models. It shows the FBST implementation using both simulated data sets and data sets available in the literature. As an illustration, standard non-informative priors are used.
Randomization procedures are used in legal and statistical applications, aiming to shield important decisions from spurious influences. This article gives an intuitive introduction to randomization and examines some intended consequences of its use related to truthful statistical inference and fair legal judgment. This article also presents an open-code Java implementation for a cryptographically secure, statistically reliable, transparent, traceable, and fully auditable randomization tool.