This volume includes a target paper, taking up the challenge to revive, within a modern (formal) framework, a medieval solution to the Liar Paradox which did ...
I. Levi has advocated a decision-theoretic account of belief revision. We argue that the game-theoretic framework of Interrogative Inquiry Games, proposed by J. Hintikka, can extend and clarify this account. We show that some strategic uses of the game rules generate Expansions, Contractions and Revisions, and we give representation results. We then extend the framework to represent sources of answers explicitly, and apply it to discuss the Recovery Postulate. We conclude with some remarks about potential extensions of interrogative games with respect to some issues in the theory of belief change.
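The three belief-change operations the abstract names can be illustrated concretely. The following is a minimal toy sketch (not the paper's game-theoretic construction): belief sets are modeled as sets of propositional literals, and revision is obtained from contraction and expansion via the Levi identity; all function names are ours.

```python
# Toy illustration of the three AGM belief-change operations on a belief
# set of propositional literals. Revision is defined from contraction and
# expansion via the Levi identity: K * p = (K - ~p) + p.

def negate(lit):
    """Negation of a literal, e.g. 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def expand(K, p):
    """Expansion K + p: add p to the belief set."""
    return K | {p}

def contract(K, p):
    """Contraction K - p: give up p (minimal change on literal sets)."""
    return K - {p}

def revise(K, p):
    """Revision K * p via the Levi identity: (K - ~p) + p."""
    return expand(contract(K, negate(p)), p)

K = {"p", "q"}
assert revise(K, "~q") == {"p", "~q"}  # give up q, then accept ~q
```

On full logically closed belief sets contraction is far subtler than set difference; the literal-set setting is chosen only to make the three operations and the Levi identity visible in a few lines.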
This paper examines critically the reconstruction of the ‘Sherlock Holmes sense of deduction’ proposed jointly by M.B. Hintikka and J. Hintikka in the 1980s, and its successor, the interrogative model of inquiry (IMI) developed by J. Hintikka and his collaborators in the 1990s. The Hintikkas’ model explicitly used game theory in order to formalize a naturalistic approach to inquiry, but the IMI abandoned both the game-theoretic formalism and the naturalistic approach. It is argued that the latter better supports the claim that the IMI provides a ‘logic of discovery’, and safeguards its empirical adequacy. Technical changes necessary to this interpretation are presented, and examples are discussed, both formal and informal, that are better analyzed when these changes are in place. The informal examples are borrowed from Conan Doyle’s The Case of Silver Blaze, a favorite of M.B. and J. Hintikka.
We examine a special case of inquiry games and give an account of the informational import of asking questions. We focus on yes-or-no questions, which always carry information about the questioner's strategy, but never about the state of Nature, and show how strategic information reduces uncertainty through inferences about other players' goals and strategies. This uncertainty cannot always be captured by information structures of classical game theory. We conclude by discussing the connection with Gricean pragmatics and contextual constraints on interpretation.
M. B. Hintikka and J. Hintikka claimed that their reconstruction of the ‘Sherlock Holmes sense of deduction’ can “serve as an explication for the link between intelligence and memory”. The claim is vindicated, first for the single-agent case, where the reconstruction captures strategies for accessing the content of a distributed and associative memory; then, for the multi-agent case, where the reconstruction captures strategies for accessing knowledge distributed in a community. Moreover, the reconstruction of the ‘Sherlock Holmes sense of deduction’ makes it possible to conceptualize those strategies as belonging to a continuum of behavioral strategies.
Olsson and his collaborators have proposed an extension of Belief Revision Theory where an epistemic state is modeled as a triple S = ⟨K, E, A⟩, where A is a research agenda, i.e. a set of research questions. Contraction and expansion apply to states, and affect the agenda. We propose an alternative characterization of the problem of agenda updating, where research questions are viewed as blueprints for research strategies. We offer a unified solution to this problem, and prove it equivalent to Olsson’s own. We conclude by arguing that: (i) our solution makes the idea of ‘minimal change’ in questions and agendas clearer; (ii) it can be extended in ways the original theory was not, and may help better realize the aims this theory was proposed for; (iii) it unveils some limitations of the initial approach, while opening a way to overcome them.
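The triple ⟨K, E, A⟩ can be given a concrete shape. The following is a hypothetical sketch, with all names and encodings our own rather than Olsson's formalism: an agenda question is represented as a set of potential answers, and expansion both grows the belief set and closes any agenda question the new belief answers.

```python
# Hypothetical sketch of an epistemic state <K, E, A>: K is a belief set,
# E an entrenchment ordering, A a research agenda whose questions are
# encoded as sets of their potential answers. Encoding is ours, not Olsson's.

from dataclasses import dataclass, field

@dataclass
class State:
    K: set                                  # belief set (sentence labels)
    E: list                                 # entrenchment, least entrenched first
    A: list = field(default_factory=list)   # agenda: questions as answer sets

def expand_state(state, phi):
    """Add phi to K and drop every agenda question that phi answers."""
    new_K = state.K | {phi}
    # a question counts as answered when phi is one of its potential answers
    new_A = [q for q in state.A if phi not in q]
    return State(new_K, state.E, new_A)

s = State(K={"p"}, E=["p"], A=[{"q", "~q"}, {"r", "~r"}])
s2 = expand_state(s, "q")
assert s2.K == {"p", "q"}
assert s2.A == [{"r", "~r"}]   # the question "q or ~q?" is now closed
```

The sketch only illustrates how expansion can affect the agenda; the corpus-relative preconditions on questions, and contraction's effect on A, are where the substantive theory lies.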
HOW CAN QUESTIONS BE INFORMATIVE BEFORE THEY ARE ANSWERED? STRATEGIC INFORMATION IN INTERROGATIVE GAMES. Volume 9, Issue 2. Emmanuel J. Genot and Justine Jacot. DOI: https://doi.org/10.1017/epi.2012.8
The Barth–Krabbe–Hintikka–Hintikka Problem, independently raised by Barth and Krabbe and by Hintikka and Hintikka (Sherlock Holmes Confronts Modern Logic: Toward a Theory of Information-Seeking Through Questioning, Indiana University Press, Bloomington, 1983), is the problem of characterizing the strategic reasoning of the players of dialogical logic and game-theoretic semantics games from rational preferences rather than rules. We solve the problem by providing a set of preferences for players with bounded rationality, and by specifying strategic inferences from those preferences, for a variant of logical dialogues. This solution is generalized to both game-theoretic semantics and orthodox dialogical logic.
Fake news can originate from an ordinary person carelessly posting what turns out to be false information or from the intentional actions of fake news factory workers, but broadly speaking it can also originate from scientific fraud. In the latter case, the article can be retracted upon discovery of the fraud. A case study shows, however, that such fake science can be visible in Google even after the article was retracted, in fact more visible than the retraction notice. We hypothesize that the reason for this lies in the popularity-based logic governing Google, in particular its foundational PageRank algorithm, in conjunction with a psychological law which we refer to as the “law of retraction”: a retraction notice is typically taken to be less interesting, and therefore less popular with internet users, than the original content retracted. We conduct an empirical study drawing on records of articles retracted due to fraud in the Retraction Watch public database. The study tests the extent to which such retracted scientific articles are still highly ranked in Google, and more so than information about the retraction. We find, among other things, that both Google Search and Google Scholar more often than not ranked a link to the original article higher than a link indicating that the article has been retracted. Surprisingly, Google Scholar did not perform better in this regard than Google Search. We also found cases in which Google did not track the retraction of an article on the first result page at all. We conclude that both Google Search and Google Scholar run the risk of disseminating fake science through their ranking algorithms.
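The mechanism hypothesized above can be made concrete with a small sketch of PageRank by power iteration on a toy link graph (our own illustration, not the study's method, and a simplification of Google's actual algorithm): when the retracted article attracts more inbound links than its retraction notice, popularity-based ranking favors the article.

```python
# Minimal PageRank sketch (power iteration on a toy link graph) illustrating
# the hypothesized "law of retraction": a retracted article with more inbound
# links outranks its less-linked retraction notice.

def pagerank(links, d=0.85, iters=100):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = d * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:                      # dangling node: spread rank uniformly
                for w in nodes:
                    new[w] += d * rank[v] / n
        rank = new
    return rank

# Hypothetical web: five pages cite the original article, one the notice.
web = {
    "article": [], "notice": [],
    "c1": ["article"], "c2": ["article"], "c3": ["article"],
    "c4": ["article"], "c5": ["notice"],
}
r = pagerank(web)
assert r["article"] > r["notice"]
```

The point of the sketch is structural: nothing in the ranking consults the truth or retraction status of a page, only its link popularity, which is exactly the gap the study probes.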
This article demonstrates that typical restrictions which are imposed in dialogical logic in order to recover first-order logical consequence from a fragment of natural language argumentation are also forthcoming from preference profiles of boundedly rational players, provided that these players instantiate a specific player type and compute partial strategies. We present two structural rules, formulated similarly to the closure rules of tableaux proofs, that restrict players' strategies to a mapping between games in extensive form and proof trees. Both rules are motivated from players' preferences and limitations; they can therefore be viewed as being player-self-imposable. First-order logical consequence is thus shown to result from playing a specific type of argumentation game. The alignment of such games with the normative model of the Pragma-dialectical theory of argumentation is positively evaluated. But explicit rules to guarantee that the argumentation game instantiates first-order logical consequence have now become gratuitous, since their normative content arises directly from players' preferences and limitations. A similar naturalization for non-classical logics is discussed.
A popular belief is that the process whereby search engines tailor their search results to individual users, so-called personalization, leads to filter bubbles in the sense of ideologically segregated search results that would tend to reinforce the user’s prior view. Since filter bubbles are thought to be detrimental to society, there have been calls for further legal regulation of search engines beyond the so-called Right to be Forgotten Act. However, the scientific evidence for the filter bubble hypothesis is surprisingly limited. Previous studies of personalization have focused on the extent to which different users get different results lists, without taking the content on the webpages into account. Such methods are unsuitable for detecting filter bubbles as such. In this paper, we propose a methodology that takes content differences between webpages into account. In particular, the method involves studying the extent to which users with strong opposing views on an issue receive search results that are correlated content-wise with their personal view. Will users with a strong prior opinion that X is true on average have a larger share of search results that are in favor of X than users with a strong prior opinion that X is false? We illustrate our methodology at work, but also the non-trivial challenges it faces, by a small-scale study of the extent to which Google Search leads to ideological segregation on the issue of man-made climate change.
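The proposed content-aware measure can be sketched in a few lines. This is our own illustrative encoding, not the study's instrument: each result is hand-coded as favoring, opposing, or neutral on claim X, and segregation shows up as a gap in the pro-X share between opposing user groups.

```python
# Illustrative sketch of a content-aware segregation measure: the share of
# search results favoring claim X, compared across user groups with opposite
# priors. Labels "pro"/"con"/"neutral" and the sample data are hypothetical.

def pro_share(results):
    """Fraction of coded results whose content favors claim X."""
    return sum(r == "pro" for r in results) / len(results)

believers = ["pro", "pro", "neutral", "con"]   # results served to X-believers
skeptics  = ["con", "pro", "con", "neutral"]   # results served to X-skeptics

# Filter-bubble signature: believers see a larger pro-X share than skeptics.
segregated = pro_share(believers) > pro_share(skeptics)
assert segregated
```

Note the contrast with list-difference methods: two groups could receive entirely different URLs with identical pro-X shares (no bubble), or near-identical URLs reordered so that the shares diverge, which is why the content coding is essential.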
Erik J. Olsson and David Westlund have recently argued that the standard belief revision representation of an epistemic state is defective. In order to adequately model an epistemic state one needs, in addition to a belief set K and an entrenchment relation E, a research agenda A, i.e. a set of questions satisfying certain corpus-relative preconditions the agent would like to have answers to. Informally, the preconditions guarantee that the set of potential answers represents a partition of possible expansions of K, and hence is equivalent to a well-behaved set of alternative hypotheses.
We describe a class of semantic extensive entailment games with algorithmic players, related to game-theoretic semantics, and generalized to classical first-order semantic entailment. Players have preferences for parsimonious spending of computational resources, and compute partial strategies, under qualitative uncertainty about future histories. We prove the existence of local preferences for moves, and of strategic fixpoints, that allow us to map EEG game-trees to the building rules and closure rules of Smullyan's semantic tableaux. We also exhibit a strategy profile that solves the fixpoint selection problem, and can be mapped to systematic constructions of semantic trees, yielding a completeness result by translation. We conclude on possible generalizations of our games.
This paper examines whether Sherlock Holmes’ “Science of Deduction and Analysis,” as reconstructed by Hintikka and Hintikka (The Sign of Three: Peirce, Dupin, Holmes, Indiana University Press, Bloomington, 1983), exemplifies a logic of discovery. While the Hintikkas claimed it does, their approach remained largely programmatic, and ultimately unsuccessful. Their reconstruction must thus be expanded, in particular to account for the role of memory in inquiry. Once this expansion is in place, the Hintikkas’ claim is vindicated. However, a tension between the naturalistic aspirations of their model and the formal apparatus they built it on is identified. The paper concludes on suggestions for easing this tension without losing the normative component of the Hintikkas’ epistemological model.
Instructions in Wason’s Selection Task underdetermine empirical subjects’ representation of the underlying problem, and its admissible solutions. We model the Selection Task as an interrogative learning problem, and model reasoning to solutions as (i) selection of a representation of the problem, and (ii) strategic planning from that representation. We argue that recovering Wason’s ‘normative’ selection is possible only if both stages are constrained further than they are by Wason’s formulation. We conclude by comparing our model with other explanatory models, with respect to empirical adequacy and the modeling of bounded rationality.
If semantic consequence is analyzed with extensive games, logical reasoning can be accounted for by looking at how players solve entailment games. However, earlier approaches to game semantics cannot achieve this reduction, for want of explicitly defined preferences for players. Moreover, although entailment games can naturally translate the idea of argumentation about a common ground, a cognitive interpretation is undermined by the complexity of strategic reasoning. We thus describe a class of semantic extensive entailment games with algorithmic players, who have preferences for parsimonious spending of computational resources and thus compute partial strategies under qualitative uncertainty about future histories. We prove the existence of local preferences for moves and of strategic fixpoints that allow us to map game-trees to tableaux proofs, and exhibit a strategy profile that solves the fixpoint selection problem, and can be mapped to systematic constructions of semantic trees, yielding a completeness result by translation. We then discuss the correspondence between proof heuristics and strategies in our games, the relations of our games to GTS, and possible extensions to other entailment relations. We conclude that the main interest of our result lies in the possibility to bridge argumentative and cognitive models of logical reasoning, rather than in new meta-theoretic results. All proofs are given in the appendix.
We apply an algorithmic learning model of inquiry to model the reasoning carried out by experimental subjects in Wason's Selection Task, which represents reasoning in the task as the computation of a decision tree that supervenes on semantic representations. We argue that the resulting model improves on previous probabilistic and pragmatic models of the task. In particular, it suggests that subjects' selection could in fact be guided by sophisticated patterns of argumentative reasoning.
In Wason’s Selection Task, subjects (i) process information from the instructions and build a mental representation of the problem, then (ii) select a course of action to solve the problem, under the constraints imposed by the instructions. We analyze both aspects as part of a constraint satisfaction problem, without assuming Wason’s ‘logical’ solution to be the correct one. We show that the outcome of step (i) may induce mutually inconsistent constraints, causing subjects to select at step (ii) solutions that violate some of them. Our analysis explains why inconsistent constraints are less likely to disrupt non-abstract versions of the task, but unlike Bayesian models it does not posit different mechanisms in abstract and thematic variants. We then assess the logicality of the task, and conclude on cognitive tasks as coordination problems.
Reichenbach’s constraint is the methodological imperative formulated by Reichenbach in the following passage: “If we want to construct a philosophy of science, we have to distinguish carefully between two kinds of context in which scientific theories may be considered. The context of discovery is to be separated from the context of justification; the former belongs to the psychology of scientific discovery, the latter alone is to be the object of the logic of science.” Reichenbach’s constraint is usually understood as barring epistemological models from attempting rational reconstructions of discovery processes. This paper shows that Reichenbach’s constraint also bars epistemological models from capturing inquiry processes as genuine learning processes.