Published: 22nd June 2021
DOI: 10.4204/EPTCS.335
ISSN: 2075-2180

EPTCS 335

Proceedings Eighteenth Conference on
Theoretical Aspects of Rationality and Knowledge
Beijing, China, June 25-27, 2021

Edited by: Joseph Halpern and Andrés Perea

Preface
Andrés Perea

A Recursive Measure of Voting Power that Satisfies Reasonable Postulates
Arash Abizadeh and Adrian Vetta

Well-Founded Extensive Games with Perfect Information
Krzysztof R. Apt and Sunil Simon

Uncertainty-Based Semantics for Multi-Agent Knowing How Logics
Carlos Areces, Raul Fervari, Andrés R. Saravia and Fernando R. Velázquez-Quesada

Revisiting Epistemic Logic with Names
Marta Bílková, Zoé Christoff and Olivier Roy

Language-based Decisions
Adam Bjorndahl and Joseph Y. Halpern

An Awareness Epistemic Framework for Belief, Argumentation and Their Dynamics
Alfredo Burrieza and Antonio Yuste-Ginel

Local Dominance
Emiliano Catonini and Jingyi Xue

Collective Argumentation: The Case of Aggregating Support-Relations of Bipolar Argumentation Frameworks
Weiwei Chen

De Re Updates
Michael Cohen, Wen Tang and Yanjing Wang

Dynamically Rational Judgment Aggregation: A Summary
Franz Dietrich and Christian List

Deliberation and Epistemic Democracy
Huihui Ding and Marcus Pivato

No Finite Model Property for Logics of Quantified Announcements
Hans van Ditmarsch, Tim French and Rustam Galimullin

Fire!
Krisztina Fruzsa, Roman Kuznets and Ulrich Schmid

Are the Players in an Interactive Belief Model Meta-certain of the Model Itself?
Satoshi Fukuda

Knowledge from Probability
Jeremy Goodman and Bernhard Salow

Belief Inducibility and Informativeness
P. Jean-Jacques Herings, Dominik Karos and Toygar Kerman

Measuring Violations of Positive Involvement in Voting
Wesley H. Holliday and Eric Pacuit

Algorithmic Randomness, Bayesian Convergence and Merging
Simon Huttegger, Sean Walsh and Francesca Zaffora Blando

Game-Theoretic Models of Moral and Other-Regarding Agents (extended abstract)
Gabriel Istrate

Understanding Transfinite Elimination of Non-Best Replies
Stephan Jagau

Persuading Communicating Voters
Toygar Kerman and Anastas P. Tenev

Knowing How to Plan
Yanjun Li and Yanjing Wang

Probabilistic Stability and Statistical Learning
Krzysztof Mierzewski

Attainable Knowledge and Omniscience
Pavel Naumov and Jia Tao

Failures of Contingent Thinking
Evan Piermont and Peio Zuazo-Garin

Reasoning about Emergence of Collective Memory
R. Ramanujam

A Deontic Stit Logic Based on Beliefs and Expected Utility
Aldo Iván Ramírez Abarca and Jan Broersen

Epistemic Modality and Coordination under Uncertainty
Giorgio Sbardolini

Communication Pattern Models: An Extension of Action Models for Dynamic-Network Distributed Systems
Diego A. Velázquez, Armando Castañeda and David A. Rosenblueth

Preface

These proceedings contain the papers that have been accepted for presentation at the Eighteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK XVIII). The conference was hosted by Tsinghua University, Beijing, China, from June 25 to June 27, 2021; due to the COVID-19 pandemic, it was held entirely online.

As is to be expected from TARK, these proceedings offer a highly interdisciplinary collection of papers, including areas such as logic, computer science, philosophy, economics, game theory, decision theory and social welfare. The topics covered by the papers include semantic models for knowledge and belief, epistemic logic, computational social choice, rationality in games and decision problems, and foundations of multi-agent systems.

I wish to thank the team of local organizers, chaired by Fenrong Liu, for making this conference possible under these extraordinary circumstances. Another word of gratitude goes to the members of the program committee, not only for reviewing the submissions, but also for their valuable input concerning other aspects of the conference, such as the invited speakers and the precise format of the conference. The members of the program committee are: Christian Bach, Adam Bjorndahl, Giacomo Bonanno, Emiliano Catonini, Franz Dietrich, Davide Grossi, Joseph Halpern (conference chair), Jérôme Lang, Fenrong Liu (local organizing chair), Silvia Milano, Yoram Moses, Eric Pacuit, Andrés Perea (program committee chair), Olivier Roy, Elias Tsakas, Paolo Turrini, Rineke Verbrugge and Kevin Zollman.

I also wish to thank the invited speakers at this conference: Ariel Procaccia, Burkhard Schipper, Sonja Smets and Katie Steele.

On the practical side, the conference and the proceedings have benefited greatly from the EasyChair platform and the EPTCS system. I thank Rob van Glabbeek, editor of EPTCS, for his help during the process of setting up these proceedings.

Last but not least, I am very grateful to Joseph Halpern (conference chair) and Fenrong Liu (local organizing chair), who have done so much for the organization of TARK XVIII. It was an absolute pleasure to work with you, and I am sorry for the many e-mails you had to digest from me.

I sincerely hope that these proceedings will be a source of inspiration for your research, and that you will enjoy reading the papers.

Andrés Perea
Program Committee Chair TARK XVIII
Maastricht, June 2021

A Recursive Measure of Voting Power that Satisfies Reasonable Postulates

Arash Abizadeh (Department of Political Science, McGill University, Montreal, Canada)
Adrian Vetta (Department of Mathematics and Statistics, and School of Computer Science, McGill University, Montreal, Canada)

We design a recursive measure of voting power based upon partial voting efficacy as well as full voting efficacy. In contrast, classical indices and measures of voting power incorporate only full voting efficacy. We motivate our design by representing voting games using a division lattice and via the notion of random walks in stochastic processes, and show the viability of our recursive measure by proving it satisfies a plethora of postulates that any reasonable voting measure should satisfy.

There have been two approaches to justifying measures of voting power. The first is the axiomatic approach, which seeks to identify a set of reasonable axioms that uniquely pick out a single measure of voting power. To date, this justificatory approach has proved a failure: while many have succeeded in providing axiomatic characterizations of various measures, no one has succeeded in doing so for a set of axioms all of which are independently justified, i.e., in showing why it would be reasonable to expect a measure of voting power to satisfy the entire set of axioms that uniquely pick out a proposed measure. For example, Dubey (1975) and Dubey and Shapley (1979) have characterized the classic Shapley-Shubik index ($SS$) and Penrose-Banzhaf measure ($PB$) as uniquely satisfying a distinct set of axioms, respectively, but several of the axioms lack proper justification (Straffin 1982: 292-296; Felsenthal and Machover 1998: 194-195; Laruelle and Valenciano 2001). The second, two-pronged approach is more modest and involves combining two prongs of justification. The first prong is to motivate a proposed measure on conceptual grounds, showing the sense in which it captures the intuitive meaning of what voting power is. With this conceptual justification in place, the second prong of justification then requires showing that the measure satisfies a set of reasonable postulates. For the more modest approach, both prongs of justification are necessary, and the satisfaction of reasonable postulates serves, not to pick out a uniquely reasonable measure, but to rule out unreasonable measures.

The first prong of justification has been typically carried out in probabilistic terms. For example, the a priori Penrose-Banzhaf measure equates a player's voting power, in a given voting structure, with the proportion of logically possible divisions or complete vote configurations in which the player is (fully) decisive for the division outcome, i.e., in which the player has an alternative voting strategy such that, if it were to choose that alternative instead, the outcome would be different (holding all other players' votes constant). The standard interpretation is that the a priori $PB$ measure represents the probability a player will be decisive under the assumptions of equiprobable voting (the probability a player votes for an alternative is equal to the probability it votes for any other) and voting independence (votes are not correlated), which together imply equiprobable divisions (the probability of each division is equal) (Felsenthal and Machover 1998: 37-38).
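
To make this probabilistic reading concrete, the a priori $PB$ measure can be computed by brute-force enumeration of divisions. The following sketch is purely illustrative (the game, function names, and parameters are ours, not the authors'); the recursive measure $RM$ itself is defined via the division lattice in the full paper.

```python
from itertools import product

def pb_power(n, wins, voter):
    """A priori Penrose-Banzhaf measure of `voter`: the fraction of the
    2**(n-1) yes/no configurations of the other voters in which `voter`
    is decisive, i.e., flipping its own vote flips the outcome.
    `wins` maps a tuple of n booleans (True = yes) to True iff the
    division outcome is yes."""
    others = [i for i in range(n) if i != voter]
    decisive = 0
    for votes in product([True, False], repeat=n - 1):
        division = [False] * n
        for i, v in zip(others, votes):
            division[i] = v
        division[voter] = True
        outcome_yes = wins(tuple(division))
        division[voter] = False
        outcome_no = wins(tuple(division))
        decisive += outcome_yes != outcome_no
    return decisive / 2 ** (n - 1)

# Simple majority among three voters: each voter is decisive exactly
# when the other two split, i.e., in 2 of the 4 configurations.
wins = lambda division: sum(division) >= 2
print([pb_power(3, wins, i) for i in range(3)])  # [0.5, 0.5, 0.5]
```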

However, measures of voting power based exclusively on the ex ante probability of decisiveness suffer from a crucial conceptual flaw. The motivation for basing a measure of voting power on this notion is that decisiveness is supposed to formalize the idea of a player making a difference to the outcome. To equate a player's voting power with the player's ex ante probability of being decisive is to assume that if any particular division were hypothetically to occur, then the player would have efficaciously exercised power to help produce the outcome ex post if and only if that player would have been decisive or necessary for the outcome. Yet this assumption is false: sometimes, as in causally overdetermined outcomes, an actor has efficaciously exercised its power to effect an outcome ex post, and, through the exercise of that power, made a causal contribution to the outcome, even though the actor's contribution was not decisive to it.

More specifically, reducing voting power to the ex ante probability of being decisive fails to take into account players' partial causal efficacy in producing outcomes ex post. In this paper, we design a Recursive Measure ($RM$) of voting power that remedies this shortcoming, by taking into account partial efficacy or degrees of causal efficacy. A full conceptual justification for $RM$ -- i.e., the first prong of justification on the more modest approach -- is given in Abizadeh (working paper). $RM$ represents, not the probability a player will be decisive for the division outcome (the probability the player will be fully causally efficacious in bringing it about) but, rather, the player's expected efficacy, that is, the probability the player will make a causal contribution to the outcome weighted by the degree of causal efficacy. Whereas decisiveness measures such as $PB$ solely track full efficacy, $RM$ tracks partial efficacy as well.

Our task in this paper is to furnish the second prong of justification. In particular, we take it that any reasonable measure of a priori voting power $\pi$ should satisfy, for simple voting games $\mathcal{G}$ with equiprobable divisions, where $[n]$ is the set of all voters and a dummy is a voter not decisive in any division, the following postulates:

Iso-invariance postulate: For iso-invariant voting games $\mathcal{G}$ and $\hat{\mathcal{G}}$: $\pi_i=\hat{\pi}_i$ for any player $i$.

Dummy postulates: For any game $\hat{\mathcal{G}}$ formed by the addition of a dummy voter to $\mathcal{G}$: if $i$ is a dummy voter, then $\pi_i=0$; $\pi_i=0$ only if $i$ is a dummy voter; and if $i$ is a non-dummy voter, then $\pi_i=\hat{\pi}_i$.

Dominance postulate: For any subset $S\subseteq [n]$ with $i,j\notin S$: $\pi_j\ge \pi_i$ whenever $j$ weakly dominates $i$, and $\pi_j> \pi_i$ whenever $j$ strictly dominates $i$ (where $j$ weakly dominates $i$ if whenever $S\cup i$ vote yes and the outcome is yes, then if $S\cup j$ vote yes the outcome is yes; and $j$ strictly dominates $i$ if $j$ weakly dominates $i$ but not vice versa).

Donation postulate: For any game $\hat{\mathcal{G}}$ formed from $\mathcal{G}$ by player $j$ transferring its vote to player $i$: $\hat{\pi}_i \ge \max (\pi_i, \pi_j )$.

Bloc postulate: For any game $\hat{\mathcal{G}}$ formed from $\mathcal{G}$ by player $i$ annexing $j$'s vote to form a bloc $I=\{i,j\}$: $\hat{\pi}_I \ge \max (\pi_i, \pi_j )$.

Quarrel postulate: For any game $\hat{\mathcal{G}}$ formed from $\mathcal{G}$ by inducing a symmetric, weak, monotonic quarrel between $i$ and $j$: $\hat{\pi}_i\le \pi_i$ and $\hat{\pi}_j\le \pi_j$.

Added blocker postulate: For any game $\mathcal{G}^Y$ formed from $\mathcal{G}$ by adding a yes-blocker, and any game $\mathcal{G}^N$ formed by adding a no-blocker: $\frac{\pi^+_i(\mathcal{G})}{\pi^+_j(\mathcal{G})} = \frac{\pi^+_i(\mathcal{G}^Y)}{\pi^+_j(\mathcal{G}^Y)}$, and $\frac{\pi^-_i(\mathcal{G})}{\pi^-_j(\mathcal{G})} = \frac{\pi^-_i(\mathcal{G}^N)}{\pi^-_j(\mathcal{G}^N)}$ (where $\pi^+$ is a player's yes-voting power, based solely on divisions in which it votes yes, and $\pi^-$ is a player's no-voting power, based solely on divisions in which it votes no).
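
As a minimal illustration of how such postulates can be checked numerically, the sketch below verifies the dummy postulates for the classical $PB$ measure on an invented weighted game (the analogous checks for $RM$ require the recursive definition from the full paper):

```python
from itertools import product

def pb_powers(weights, quota):
    """A priori Penrose-Banzhaf measure of each voter in the weighted
    game (weights, quota): the fraction of vote configurations of the
    other voters in which the voter is decisive for the outcome."""
    n = len(weights)
    counts = [0] * n
    for division in product([0, 1], repeat=n):
        total = sum(w * v for w, v in zip(weights, division))
        for i, w in enumerate(weights):
            flipped = total + w if division[i] == 0 else total - w
            if (total >= quota) != (flipped >= quota):
                counts[i] += 1
    # Each decisive configuration of the others is counted twice above
    # (once for each of the voter's own votes), hence the 2**n divisor.
    return [c / 2 ** n for c in counts]

# Dummy postulates: a dummy (here, a weight-0 voter) has zero power,
# and adding it leaves every other player's power unchanged.
g = pb_powers((3, 2, 1), quota=4)         # [0.75, 0.25, 0.25]
g_hat = pb_powers((3, 2, 1, 0), quota=4)  # [0.75, 0.25, 0.25, 0.0]
assert g_hat[3] == 0.0 and g_hat[:3] == g
```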

In the full paper, we explain the intuitive justification for, and fully specify, each of these voting-power postulates, and then prove that $RM$ satisfies them for a priori power in simple voting games. The proofs rely on a new way of representing voting games using a division lattice, and we show that previous formulations of some of these postulates require revision.

A full version of the paper can be found at: http://arxiv.org/abs/2105.03006

References

  1. Abizadeh, A. (Working paper). A Recursive Measure of Voting Power with Partial Decisiveness or Efficacy.
  2. Dubey, P. (1975). On the Uniqueness of the Shapley Value. International Journal of Game Theory, 4(3), 131-139. doi:10.1007/BF01780630
  3. Dubey, P., and Shapley, L.S. (1979). Mathematical Properties of the Banzhaf Power Index. Mathematics of Operations Research, 4(2), 99-131. doi:10.1287/moor.4.2.99
  4. Felsenthal, D., and Machover, M. (1998) The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes, Edward Elgar. doi:10.4337/9781840647761
  5. Laruelle, A., and Valenciano, F. (2001). Shapley-Shubik and Banzhaf Indices Revisited. Mathematics of Operations Research, 26(1), 89-104. doi:10.1287/moor.26.1.89.10589
  6. Straffin, P.D. (1982). Power Indices in Politics. In S.J. Brams, W.F. Lucas, and P.D. Straffin (Eds.), Political and Related Models, 256-321. New York: Springer. doi:10.1007/978-1-4612-5430-0_11

Local Dominance

Emiliano Catonini (HSE University Moscow)
Jingyi Xue (Singapore Management University)

We present a local notion of dominance that speaks to the true choice problems among actions in a game tree and does not rely on global planning. When we do not restrict the ability of the players to do contingent reasoning, a reduced strategy is weakly dominant if and only if it prescribes a locally dominant action at every decision node; therefore, any dynamic decomposition of a direct mechanism that preserves strategy-proofness is robust to the lack of global planning. Under a form of wishful thinking, we also show that strategy-proofness is robust to the lack of forward-planning. Moreover, from our local perspective, we can identify rough forms of contingent reasoning that are particularly natural. We construct a dynamic game that implements the Top Trading Cycles allocation under a minimal form of contingent reasoning, related to independence of irrelevant alternatives.


Dynamically Rational Judgment Aggregation: A Summary

Franz Dietrich (Paris School of Economics & CNRS)
Christian List (LMU Munich)

Abstract

Judgment aggregation theory traditionally aims for collective judgments that are rational. So far, rationality has been understood in purely static terms: as coherence of judgments at a given time, where ‘coherence’ could for instance mean consistency, or completeness, or deductive closure, or combinations thereof. By contrast, this paper, which summarises results from Dietrich and List (2021), asks the novel question of whether collective judgments can be dynamically rational: whether they can respond well to new information, i.e., change rationally when information is learnt by everyone. Formally, we call a judgment aggregation rule dynamically rational with respect to a given revision operator if, whenever all individuals revise their judgments in light of some information (a proposition), then the new aggregate judgments are the old ones revised in light of this information. In short, aggregation and revision commute. A general impossibility theorem holds: as long as the propositions on the agenda are sufficiently interconnected, no judgment aggregation rule with standard properties is dynamically rational with respect to any revision operator satisfying mild conditions (familiar from belief revision theory). The theorem is a counterpart for dynamic rationality of known impossibility theorems for static rationality. Relaxation of the theorem’s conditions opens the door to interesting aggregation rules generating dynamically rational judgments, including certain premise-based rules, as we briefly discuss (see Dietrich and List 2021 for details).

Introduction

Suppose a group of individuals – say, a committee, expert panel, multi-member court, or other decision-making body – makes collective judgments on some propositions by aggregating its members’ individual judgments on those propositions. And now suppose some new information – in the form of the truth of some proposition – is learnt. All individuals rationally revise their judgments. Aggregating the new individual judgments yields new collective judgments. If the group is to be a rational agent, then it should incorporate new information rationally, and so the new aggregate judgments should coincide with the old ones revised in light of the information. Technically, this means that the operations of aggregation and revision commute: aggregating judgments and then revising the result yields the same as revising individual judgments and then aggregating.

In this paper, we investigate whether we can find reasonable aggregation rules that enable a group to achieve such dynamic rationality: aggregation rules which commute with reasonable revision methods. Surprisingly, this question has not been studied in the judgment-aggregation framework, where judgments are binary verdicts on some propositions: “yes”/“no”, “true”/“false”, “accept” /“reject”. (On judgment-aggregation theory, see List and Pettit 2002, Dietrich and List 2007, Nehring and Puppe 2010, Dokow and Holzman 2010, List and Puppe 2009.) The focus in judgment-aggregation theory has generally been on static rationality, namely on whether properties such as consistency, completeness, or deductive closure are preserved when individual judgments are aggregated into collective ones at a given point in time.1

By contrast, the question of dynamic rationality has received much attention in the distinct setting of probability aggregation, where judgments aren’t binary but take the form of subjective probability assignments to the elements of some algebra. In that context, a mix of possibility and impossibility results has been obtained (e.g., Madansky 1964, Genest 1984, Genest et al. 1986, Dietrich 2010, 2019, Russell et al. 2015). These show that some familiar methods of aggregation – notably, the arithmetic averaging of probabilities – fail to commute with belief revision, while other methods – particularly geometric averaging – do commute with revision. An investigation of the parallel question in the case of binary judgments is therefore overdue.

We present a negative result: for a large class of familiar judgment aggregation rules, dynamic rationality is unachievable relative to a large class of reasonable judgment revision methods. However, if we relax some of our main theorem’s conditions on the aggregation rule, dynamically rational aggregation becomes possible. In particular, “premise-based” aggregation can be dynamically rational relative to certain “premise-based”  revision methods. This extended abstract focuses on the impossibility finding, for reasons of space. Possibilities are discussed in Dietrich and List (2021), which also contains all proofs.

The formal setup

We begin with the basic setup from judgment-aggregation theory (following List and Pettit 2002 and Dietrich 2007). We assume that there is a set of individuals who hold judgments on some set of propositions, and we are looking for a method of aggregating these judgments into resulting collective judgments. The key elements of this setup are the following:

Individuals. These are represented by a finite and non-empty set N. Its members are labelled 1, 2, ..., n. We assume n ≥ 2.

Propositions. These are represented in formal logic. For our purposes, a thin notion of “logic” will suffice. Specifically, a logic, L, is a non-empty set of formal objects called “propositions”, which is endowed with two things: a negation operator, denoted ¬, so that, for every proposition p in L there is also its negation ¬p in L; and a well-behaved notion of consistency, which specifies, for each set of propositions S ⊆ L, whether S is consistent or inconsistent.2 Standard propositional, predicate, modal, and conditional logics all fall under this definition, as do Boolean algebras.3 A proposition p is contradictory if {p} is inconsistent, tautological if {¬p} is inconsistent, and contingent if p is non-contradictory and non-tautological.

Agenda. The agenda is the set of those propositions from L on which judgments are to be made. Formally, this is a finite non-empty subset X of L which can be partitioned into proposition-negation pairs {p, ¬p}, abbreviated { ± p}. Sometimes it is useful to make this partition explicit. We write Z to denote the set of these proposition-negation pairs of X. The elements of Z can be interpreted as the binary issues under consideration. Then the agenda X is their disjoint union, formally X = ∪_{Z ∈ Z} Z. Throughout this paper, we assume that double-negations cancel out in agenda propositions.4

Our focus will be on agendas satisfying a non-triviality condition. To define it, call a set of propositions minimal inconsistent if it is inconsistent but all its proper subsets are consistent. Proposition-negation pairs of the form {p, ¬p} (with p contingent) are minimal inconsistent, and so are sets of the form {p, q, ¬(p ∧ q)} (with p and q contingent), where “∧” stands for logical conjunction (“and”). We call an agenda non-simple if it has at least one minimal inconsistent subset of size greater than two. An example of a non-simple agenda is the set X = { ± p,  ± (p → q),  ± q}, where p might be the proposition “Current atmospheric CO2 is above 407 ppm”, p → q might be the proposition “If current atmospheric CO2 is above 407 ppm, then the Arctic iceshield will melt by 2050”, and q might be the proposition “The Arctic iceshield will melt by 2050”. The conditional p → q can be formalized in standard propositional logic or in a suitable logic for conditionals. A three-member minimal inconsistent subset of this agenda is {p, p → q, ¬q}.

Judgments. Each individual’s (and subsequently the group’s) judgments on the given propositions are represented by a judgment set, which is a subset J ⊆ X, consisting of all those propositions from X that its bearer “accepts” (e.g., affirms or judges to be true). A judgment set J is complete if it contains a member of each proposition-negation pair {p, ¬p} ⊆ X, and consistent if it is a consistent set of propositions; a judgment set with both properties is called classically rational.

We write J to denote the set of all classically rational judgment sets on the agenda X. A list of judgment sets (J1, ..., Jn) across the individuals in N is called a profile (of individual judgment sets).

Aggregation rule. A (judgment) aggregation rule is a function, F, which maps each profile (J1, ..., Jn) in some domain D of admissible profiles (often D = J^n) to a collective judgment set J = F(J1, ..., Jn). A standard example is majority rule, which is defined as follows: for each (J1, ..., Jn) ∈ J^n,
F(J1, ..., Jn) = {p ∈ X : |{i ∈ N : p ∈ Ji}| > n/2}.
A typical research question in judgment aggregation theory is whether we can find aggregation rules that satisfy certain requirements of democratic responsiveness to the individual judgments and collective rationality.
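
As a minimal sketch of this definition (our own encoding, with propositions represented simply as labels):

```python
def majority(profile):
    """Propositionwise majority rule: accept p iff strictly more than
    half of the individual judgment sets contain p."""
    n = len(profile)
    return {p for J in profile for p in J if sum(p in K for K in profile) > n / 2}

# Three individuals judging the agenda {p, ¬p, q, ¬q}:
profile = [{"p", "q"}, {"p", "¬q"}, {"¬p", "q"}]
print(majority(profile))  # {'p', 'q'}
```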

Judgment revision

The idea we wish to capture is that whenever any individual (or subsequently the group) learns some new information, in the form of the truth of some proposition, this individual (or the group) must incorporate the learnt information in the judgments held – an idea familiar from belief revision theory in the tradition of Alchourrón, Gärdenfors and Makinson (1985) (see also Rott 2001 and Peppas 2008). Our central concept is that of a judgment revision operator. This is a function which assigns to any pair (J, p) of an initial judgment set J ⊆ X and a learnt proposition p ∈ X a new judgment set J|p, the revised judgment set, given p. Formally, the revision operator is any function from 2^X × X to 2^X. We call it regular if it satisfies the following two minimal conditions: successfulness, i.e., p ∈ J|p (the learnt proposition is accepted after revision), and conservativeness, i.e., if p ∈ J, then J|p = J (learning a proposition one already accepts prompts no change).

We further call a revision operator rationality-preserving if whenever J ∈ J, we have J|p ∈ J for all non-contradictory propositions p ∈ X. These definitions are well-illustrated by the class of distance-based revision operators, familiar from belief revision theory. Such operators require that when a judgment set is revised in light of some new information, the post-revision judgments remain as “close” as possible to the pre-revision judgments, subject to the constraint that the learnt information be incorporated and no inconsistencies be introduced. Different distance-based operators spell out the notion of “closeness” in different ways (different metrics have been introduced in the area of judgment aggregation by Konieczny and Pino-Pérez 2002 and Pigozzi 2006).

Can aggregation and revision commute?

We are now ready to turn to this paper’s question. As noted, we would ideally want any decision-making group to employ a judgment aggregation rule and a revision operator that generate the same collective judgments irrespective of whether revision takes place before or after aggregation. This requirement (an analogue of the classic “external Bayesianity” condition in probability aggregation theory, as in Madansky 1964, Genest 1984, and Genest et al. 1986) is captured by the following condition on the aggregation rule F and the revision operator |:

Dynamic rationality. For any profile (J1, ..., Jn) in the domain of F and any learnt proposition p ∈ X where the revised profile (J1|p, ..., Jn|p) is also in the domain of F, F(J1|p, ..., Jn|p) = F(J1, ..., Jn)|p.

To see that this condition is surprisingly hard to satisfy, consider an example. Suppose a three-member group is making judgments on the agenda X = { ± p,  ± (p → q),  ± q}, where p → q is understood as a subjunctive conditional. That is, apart from the subsets of X that include a proposition-negation pair, the only inconsistent subset of X is {p, p → q, ¬q}.5 Suppose, further, members’ initial judgments and the resulting majority judgments are as follows:

Individual 1: { ¬p, ¬(p → q), q}
Individual 2: { ¬p, p → q, ¬q}
Individual 3: { ¬p, ¬(p → q), ¬q}
Majority: { ¬p, ¬(p → q), ¬q}

Assume the revision operator is based on the Hamming distance, with some tie-breaking provision such that, in the case of a tie, one is more ready to change one’s judgment on q (which represents a “conclusion”) than on p or p → q (which represent “premises”). If the individuals learn the truth of p and revise their judgments, they arrive at the following post-revision judgments:

Individual 1: { p, ¬(p → q), q}
Individual 2: { p, p → q, q}
Individual 3: { p, ¬(p → q), ¬q}
Majority: { p,  ¬(p → q), q}

Crucially, the post-information group judgment set, {p, ¬(p → q), q}, differs from the revision in light of p of the pre-information group judgment set, because {¬p, ¬(p → q), ¬q}|p = {p, ¬(p → q), ¬q}. That is, the group replaces ¬q with q in its judgment set, although learning p did not force the group to revise its position on q (recall that {p, ¬(p → q), ¬q} is perfectly consistent, given that p → q is a subjunctive conditional). Thus the group’s (majority) judgment set does not evolve rationally.
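
This failure of commutation can be verified mechanically. The following sketch is our own illustrative encoding of the example: it hard-codes the three-issue agenda, the subjunctive-reading consistency constraint, and the tie-breaking provision described above.

```python
from itertools import product

# Complete judgment sets on {±p, ±(p→q), ±q}, encoded as triples
# (accept p, accept p→q, accept q); False means accepting the negation.
# Under the subjunctive reading, the only inconsistency is {p, p→q, ¬q}.
RATIONAL = [J for J in product([True, False], repeat=3)
            if not (J[0] and J[1] and not J[2])]

def majority(profile):
    n = len(profile)
    return tuple(sum(J[k] for J in profile) > n / 2 for k in range(3))

def revise_by_p(J):
    """Hamming-distance revision by p: move to the closest rational set
    accepting p; ties are broken by preserving the premise p→q and
    changing the conclusion q instead."""
    def key(K):
        return (sum(J[k] != K[k] for k in range(3)), J[1] != K[1])
    return min((K for K in RATIONAL if K[0]), key=key)

profile = [(False, False, True),   # individual 1: {¬p, ¬(p→q), q}
           (False, True, False),   # individual 2: {¬p, p→q, ¬q}
           (False, False, False)]  # individual 3: {¬p, ¬(p→q), ¬q}

revise_then_aggregate = majority([revise_by_p(J) for J in profile])
aggregate_then_revise = revise_by_p(majority(profile))
print(revise_then_aggregate)  # (True, False, True):  {p, ¬(p→q), q}
print(aggregate_then_revise)  # (True, False, False): {p, ¬(p→q), ¬q}
assert revise_then_aggregate != aggregate_then_revise  # commutation fails
```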

At first sight, one might think that this problem is just an artifact of majority rule or our specific distance-based revision operator, or that it is somehow unique to our example. However, the following formal result – a simplified (‘anonymous’) version of our impossibility theorem – shows that the problem is more general. Define a uniform quota rule, with acceptance threshold m ∈ {1, ..., n}, as the aggregation rule with domain J^n such that, for each (J1, ..., Jn) ∈ J^n,
F(J1, ..., Jn) = {p ∈ X : |{i : p ∈ Ji}| ≥ m}.
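
In code, a uniform quota rule is a one-parameter generalization of the majority-rule sketch given earlier (again an illustrative encoding with propositions as labels):

```python
def quota_rule(profile, m):
    """Uniform quota rule: accept p iff at least m of the n individual
    judgment sets contain p. With n = len(profile), majority rule is
    the special case m = n // 2 + 1."""
    return {p for J in profile for p in J if sum(p in K for K in profile) >= m}

profile = [{"p", "q"}, {"p", "¬q"}, {"¬p", "q"}]
print(quota_rule(profile, m=3))  # unanimity threshold: set() here
print(quota_rule(profile, m=2))  # majority threshold: {'p', 'q'}
```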
Majority rule is a special case of a uniform quota rule, namely the one where m is the smallest integer greater than n/2. We have:

Theorem 1: If the agenda X is non-simple, then no uniform quota rule whose threshold is not the unanimity threshold n is dynamically rational with respect to any regular rationality-preserving revision operator.

In short, replacing majority rule with some other uniform quota rule with threshold less than n wouldn’t solve our problem of dynamic irrationality, and neither would replacing our distance-based revision operator with some other regular rationality-preserving revision operator. In fact, the problem generalizes further, as shown in the next section.

A general impossibility theorem

We will now abstract away from the details of any particular aggregation rule, and suppose instead we are looking for an aggregation rule F that satisfies the following general conditions:

Universal domain: The domain of admissible inputs to the aggregation rule F is the set of all classically rational profiles, i.e., D = J^n.

Non-imposition: F does not always deliver the same antecedently fixed output judgment set J, irrespective of the individual inputs, i.e., F is not a constant function.

Monotonicity: Additional individual support for an accepted proposition does not overturn the proposition’s acceptance, i.e., for any profile (J1, ..., Jn) ∈ D and any proposition p ∈ F(J1, ..., Jn), if any Ji not containing p is replaced by some Ji′ containing p and the modified profile (J1, ..., Ji′, ..., Jn) remains in D, then p ∈ F(J1, ..., Ji′, ..., Jn).

Non-oligarchy: There is no non-empty set of individuals M ⊆ N (a set of “oligarchs”) such that, for every profile (J1, ..., Jn) ∈ D, F(J1, ..., Jn) = ∩_{i ∈ M} Ji.

Systematicity: The collective judgment on each proposition is determined fully and neutrally by individual judgments on that proposition. Formally, for any propositions p, p′ ∈ X and any profiles (J1, ..., Jn), (J1′, ..., Jn′) ∈ D, if, for all i ∈ N, p ∈ Ji ⇔ p′ ∈ Ji′, then p ∈ F(J1, ..., Jn) ⇔ p′ ∈ F(J1′, ..., Jn′).

Why are these conditions initially plausible? The reason is that, for each of them, a violation would entail a cost. Violating universal domain would mean that the aggregation rule is not fully robust to pluralism in its inputs; it would be undefined for some classically rational judgment profiles. Violating non-imposition would mean that the collective judgments are totally unresponsive to the individual judgments, which is completely undemocratic. Violating monotonicity could make the aggregation rule erratic in some respect: an individual could come to accept a particular collectively accepted proposition and thereby overturn its acceptance. Violating non-oligarchy would mean two things. First, the collective judgments would depend only on the judgments of the “oligarchs”, which is undemocratic (unless M = N); and second, the collective judgments would be incomplete with respect to any binary issue on which there is the slightest disagreement among the oligarchs, which would lead to widespread indecision (except when M is a singleton, so that the rule is dictatorial). Violating systematicity, finally, would mean that the collective judgment on each proposition is no longer determined as a proposition-independent function of individual judgments on that proposition. It may then either depend on individual judgments on other propositions too (a lack of propositionwise independence), or the pattern of dependence may vary from proposition to proposition (a lack of neutrality). Systematicity – the conjunction of propositionwise independence and neutrality – is the most controversial condition among the five. But it is worth noting that it is satisfied by majority rule and all uniform quota rules. Indeed, majority rule and uniform quota rules (except the unanimity rule) satisfy all five conditions.

Our main theorem shows that, for non-simple agendas, the present five conditions are incompatible with dynamic rationality:

Theorem 2: If the agenda X is non-simple, then no aggregation rule satisfying universal domain, non-imposition, monotonicity, non-oligarchy, and systematicity is dynamically rational with respect to any regular rationality-preserving revision operator.

Interestingly, Theorem 2 does not impose any condition of static rationality. The theorem does not require that collective judgment sets be consistent or complete or deductively closed. The impossibility of dynamic rationality is thus independent of classic impossibilities of static rationality. In fact, Theorem 2 would continue to hold if its condition of dynamic rationality were replaced by static rationality in the form of consistency and completeness of collective judgment sets.

By Theorem 2, the problem identified by Theorem 1 is not restricted to uniform quota rules, but extends to all aggregation rules satisfying our conditions. Moreover, since practically all non-trivial agendas are non-simple, the impossibility applies very widely.

The natural follow-up question is whether any of the conditions in the theorem is redundant, i.e., could be dropped, and, if not, what sort of dynamically rational aggregation rules become possible after dropping any of these conditions. This question goes beyond the scope of this summary and is treated in Dietrich and List (2021). Four remarks should, however, be made:

Firstly, none of the theorem’s conditions on the aggregation rule, the revision operator, or the agenda is redundant. That is, whenever we drop the agenda condition (non-simplicity) or any one of the aggregation conditions (universal domain, non-imposition, monotonicity, non-oligarchy, and systematicity) or any of the revision conditions (successfulness, conservativeness, and rationality preservation), there exist dynamically rational aggregation rules such that the remaining conditions hold.

Secondly, abandoning exactly one condition on the aggregation rule leads to rather degenerate dynamically rational possibilities, in the form of ‘peculiar’ aggregation rules and/or revision operators (with the exception of universal domain, whose relaxation allows for interesting dynamically rational possibilities). One of the conditions on aggregation seems very strong: systematicity. An important difference between static and dynamic rationality is that dropping systematicity or even independence makes it easy (indeed, too easy) to satisfy static rationality – for instance by using distance-based rules or prioritarian rules or scoring rules – whereas dynamic rationality remains hard to achieve without systematicity, as illustrated by the degenerate nature of the non-systematic escape route constructed in Dietrich and List (2021). It thus seems inappropriate to blame systematicity as the main culprit for the impossibility of dynamic rationality.

Thirdly, dynamically rational aggregation rules also become possible if we give up any one of the three conditions on the revision operator while preserving all other conditions on revision and aggregation; examples of such rules are given in Dietrich and List (2021).

Finally, on a more positive note, in Dietrich and List (2021) we explore an interesting class of dynamically rational aggregation rules, which simultaneously relax several of Theorem 2’s conditions on aggregation and revision, notably systematicity. In a nutshell, premise-based aggregation rules are dynamically rational with respect to premise-based revision operators. Presenting these rules goes beyond the scope of this summary.

Proofs of theorems and other technical details are given in Dietrich and List (2021).

References

Alchourrón, C. E., Gärdenfors, P., and Makinson, D. (1985): On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic 50(2), pp. 510–530. DOI: 10.2307/2274239

Dietrich, F. (2007): A generalised model of judgment aggregation. Social Choice and Welfare 28(4), pp. 529–565. DOI: 10.1007/s00355-006-0187-y

Dietrich, F. (2010): Bayesian group belief. Social Choice and Welfare 35(4), pp. 595–626. DOI: 10.1007/s00355-010-0453-x

Dietrich, F. (2019): A theory of Bayesian groups. Noûs 53(3), pp. 708–736. DOI: 10.1111/nous.12233

Dietrich, F., and List, C. (2007): Arrow’s theorem in judgment aggregation. Social Choice and Welfare 29(1), pp. 19–33. DOI: 10.1007/s00355-006-0196-x

Dietrich, F., and List, C. (2021): Dynamically Rational Judgment Aggregation. Working paper, see https://philpapers.org/rec/DIEDRJ

Dokow, E., and Holzman, R. (2010): Aggregation of binary evaluations. Journal of Economic Theory 145(2), pp. 495–511. DOI: 10.1016/j.jet.2007.10.004

Genest, C. (1984): A characterization theorem for externally Bayesian groups. Annals of Statistics 12(3), pp. 1100–1105. DOI: 10.1214/aos/1176346726

Genest, C., McConway, K. J., and Schervish, M. J. (1986): Characterization of externally Bayesian pooling operators. Annals of Statistics 14(2), pp. 487–501.

Konieczny, S., and Pino-Pérez, R. (2002): Merging information under constraints: A logical framework. Journal of Logic and Computation 12(5), pp. 773–808. DOI: 10.1093/logcom/12.5.773

List, C. (2011): Group Communication and the Transformation of Judgments: An Impossibility Result. Journal of Political Philosophy 19(1), pp. 1–27. DOI: 10.1111/j.1467-9760.2010.00369.x

List, C., and Pettit, P. (2002): Aggregating sets of judgments: An impossibility result. Economics and Philosophy 18(1), pp. 89–110.

List, C., and Pettit, P. (2011): Group Agency: The Design, Possibility, and Status of Corporate Agents. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199591565.001.0001

List, C., and Puppe, C. (2009): Judgment aggregation: A survey. In P. Anand, C. Puppe, and P. Pattanaik, Oxford Handbook of Rational and Social Choice. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199290420.001.0001

Madansky, A. (1964): Externally Bayesian Groups. Technical Report RM-4141-PR, RAND Corporation.

Nehring, K., and Puppe, C. (2010): Abstract Arrovian aggregation. Journal of Economic Theory 145(2), pp. 467–494. DOI: 10.1016/j.jet.2010.01.010

Peppas, P. (2008): Belief Revision. In F. van Harmelen, V. Lifschitz and B. Porter, Handbook of Knowledge Representation, Elsevier, pp. 317–359.

Pettit, P. (2006): When to defer to majority testimony – and when not. Analysis 66(3), pp. 179–187.

Pigozzi, G. (2006): Belief merging and the discursive dilemma: An argument-based account to paradoxes of judgment aggregation. Synthese 152(2), pp. 285–298. DOI: 10.1007/s11229-006-9063-7

Rott, H. (2001): Change, Choice and Inference: A Study of Belief Revision and Non-monotonic Reasoning. Oxford: Oxford University Press.

Russell, J. S., Hawthorne, J., and Buchak, L. (2015): Groupthink. Philosophical Studies 172(5), pp. 1287–1309. DOI: 10.1007/s11098-014-0350-8


  1. The revision of judgments has been investigated only in a different sense in judgment aggregation theory, namely in peer-disagreement contexts, where individuals do not learn a proposition but learn the judgments of others (Pettit 2006, List 2011).

  2. Well-behavedness is a three-part requirement: (i) any proposition-negation pair {p, ¬p} is inconsistent; (ii) any subset of any consistent set is still consistent; and (iii) the empty set is consistent, and any consistent set S has a consistent superset S′ which contains a member of every proposition-negation pair {p, ¬p}.

  3. Readers familiar with probability theory could take L to be a Boolean algebra on a non-empty set Ω of possible worlds (e.g., the power set L = 2^Ω), with negation defined as set-theoretic complementation and consistency of a set defined as non-empty intersection. The Boolean algebra could also be an abstract rather than set-theoretic Boolean algebra.

  4. To be precise, henceforth, by the negation of any proposition q ∈ X we shall mean the agenda-internal negation of q, i.e., the opposite proposition in the binary issue {p, ¬p} to which q belongs. This is logically equivalent to the ordinary negation of q and will again be denoted ¬q, for simplicity. This convention ensures that ¬¬q = q.

  5. This subjunctive understanding of p → q contrasts with the material one, where p → q is understood less realistically as ¬p ∨ q. On the material understanding, the subsets {p, ¬(p → q), q}, {¬p, ¬(p → q), q}, and {¬p, ¬(p → q), ¬q} would also be deemed inconsistent.


Deliberation and Epistemic Democracy

Huihui Ding (CY Cergy Paris University)
Marcus Pivato (CY Cergy Paris University)

We study the effects of deliberation on epistemic social choice, in two settings. In the first setting, the group faces a binary epistemic decision analogous to the Condorcet Jury Theorem. In the second setting, group members have probabilistic beliefs arising from their private information, and the group wants to aggregate these beliefs in a way that makes optimal use of this information. During deliberation, each agent discloses private information to persuade the other agents of her current views. But her views may also evolve over time, as she learns from other agents. This process will improve the performance of the group, but only under certain conditions; these involve the nature of the social decision rule, the group size, and also the presence of neutral agents whom the other agents try to persuade.


Belief Inducibility and Informativeness

P. Jean-Jacques Herings (Maastricht University)
Dominik Karos (Bielefeld University)
Toygar Kerman (Maastricht University)

We consider a group of receivers who share a common prior on a finite state space and who observe private correlated signals that are contingent on the true state of the world. We show that, while necessary, Bayes plausibility is not sufficient for a distribution over posterior belief vectors to be inducible, and we provide a characterization of inducible distributions. We classify communication strategies as minimal, direct, and language independent, and we show that any inducible distribution can be induced by a language independent communication strategy (LICS). We investigate the role of the different classes of communication strategies for the amount of higher order information that is revealed to receivers. We show that the least informative communication strategy which induces a fixed distribution over posterior belief vectors lies in the relative interior of the set of all language independent communication strategies which induce that distribution.
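
For context, Bayes plausibility is the requirement that the induced posteriors average back to the common prior. The following single-receiver, binary-state sketch (with invented numbers) checks this necessary condition; the paper's characterization of inducible distributions over posterior belief vectors goes strictly beyond it.

```python
import math

prior = 0.3              # common prior probability of the "good" state
posteriors = [0.0, 0.6]  # posterior beliefs the communication strategy induces
weights = [0.5, 0.5]     # probability with which each posterior is induced

# Bayes plausibility: the expected posterior must equal the prior.
expected_posterior = sum(w * q for w, q in zip(weights, posteriors))
print(math.isclose(expected_posterior, prior))  # True: Bayes-plausible
```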


Algorithmic Randomness, Bayesian Convergence and Merging

Simon Huttegger (University of California, Irvine)
Sean Walsh (University of California, Los Angeles)
Francesca Zaffora Blando (Carnegie Mellon University)

Convergence-to-the-truth results and merging-of-opinions results are part of the basic toolkit of Bayesian epistemologists. In a nutshell, the former establish that Bayesian agents expect their beliefs to almost surely converge to the truth as the evidence accumulates. The latter, on the other hand, establish that, as they make more and more observations, two Bayesian agents with different subjective priors are guaranteed to almost surely reach inter-subjective agreement, provided that their priors are sufficiently compatible. While in and of themselves significant, convergence to the truth with probability one and merging of opinions with probability one remain somewhat elusive notions. In their classical form, these results do not specify which data streams belong to the probability-one set of sequences on which convergence to the truth or merging of opinions occurs. In particular, they do not reveal whether the data streams that ensure eventual convergence or merging share any property that might explain their conduciveness to successful learning. Thus, a natural question raised by these classical results is whether the kind of data streams that are conducive to convergence and merging for Bayesian agents are uniformly characterizable in an informative way.

The results presented in this paper provide an answer to this question. The driving idea behind this work is to approach the phenomena of convergence to the truth and merging of opinions from the perspective of computability theory and, in particular, the theory of algorithmic randomness--a branch of computability theory concerned with characterizing the notion of a sequence displaying no effectively detectable patterns. We restrict attention to Bayesian agents whose subjective priors are computable probability measures and whose goal, in the context of convergence to the truth, is estimating quantities that can be effectively approximated. These are natural restrictions to impose when studying the inductive performance of more realistic, computationally limited learners. Crucially, they also make it possible to provide a more fine-grained analysis of both convergence to the truth and merging of opinions. Our results establish that, in this setting, the collections of data streams along which convergence and merging occur are indeed uniformly characterizable in an informative way: they are exactly the algorithmically random data streams.

Understanding Transfinite Elimination of Non-Best Replies

Stephan Jagau (IMBS, University of California, Irvine)

In auction theory, industrial organization, and other fields of game theory, it is often convenient to let infinite strategy sets stand in for large finite strategy sets. A tacit assumption is that results from infinite games generally translate back to their finite counterparts. Transfinite eliminations of non-best replies pose a radical challenge here, suggesting that common belief in rationality in infinite games strictly refines up to k-fold belief in rationality for all finite k. I provide a general characterization of common belief in rationality for finite and infinite games that fully restores the equivalence to up to k-fold belief in rationality for all finite k. By means of eliminating non-best replies and supporting beliefs, my characterization entirely avoids transfinite eliminations. Hence, rather than revealing new depths of reasoning, transfinite eliminations signal an inadequacy of eliminating non-best replies as a general description of strategic rationality.


Persuading Communicating Voters

Toygar Kerman (Department of Microeconomics and Public Economics (MPE), Maastricht University)
Anastas P. Tenev (Department of Microeconomics and Public Economics (MPE), Maastricht University)

This paper studies a multiple-receiver Bayesian persuasion model, where a sender communicates with receivers who have homogeneous beliefs and aligned preferences. The sender wants to implement a proposal and commits to a communication strategy which sends private (possibly) correlated messages to the receivers, who are in an exogenous and commonly known network. Receivers can observe their neighbors' private messages and after updating their beliefs, vote sincerely on the proposal. We examine how networks of shared information affect the sender's gain from persuasion and find that in many cases it is not restricted by the additional information provided by the receivers' neighborhoods. Perhaps surprisingly, the sender's gain from persuasion is not monotonically decreasing with the density of the network.


Probabilistic Stability and Statistical Learning

Krzysztof Mierzewski (Carnegie Mellon University)

A canonical way to bridge the probabilistic, gradational notion of belief studied by Bayesian probability theory with the more mundane, all-or-nothing concept of qualitative belief is in terms of acceptance rules [Kelly and Lin, 2012]: maps that specify which propositions a rational agent accepts in light of their numerical credences (given by a probability model). Among the various acceptance rules proposed in the literature, an especially prominent one is Leitgeb's stability rule [Leitgeb, 2013, 2014, 2017; Rott, 2017], based on the notion of probabilistically stable hypotheses: that is, hypotheses that maintain sufficiently high probability under conditioning on new information.
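
In a discrete probability space, stability is directly checkable. The sketch below is a minimal illustration of the stability rule in its common threshold-1/2 form (P(H | E) > 1/2 for every evidence event E that is compatible with H); the worlds and probabilities are invented.

```python
from itertools import chain, combinations

P = {"w1": 0.6, "w2": 0.3, "w3": 0.1}   # a toy probability space

def prob(event):
    return sum(P[w] for w in event)

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_stable(H):
    """H is probabilistically stable iff P(H | E) > 1/2 for every event E
    with P(E) > 0 that is compatible with H (E ∩ H nonempty)."""
    H = set(H)
    if prob(H) == 0:
        return False
    for E in map(set, subsets(P)):
        if prob(E) > 0 and E & H and prob(E & H) / prob(E) <= 1 / 2:
            return False
    return True

print([set(H) for H in subsets(P) if is_stable(H)])
# [{'w1'}, {'w1', 'w2'}, {'w1', 'w2', 'w3'}] -- the stable sets are nested
```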

When applied to discrete probability spaces, the stability rule for acceptance guarantees logically closed and consistent belief sets, and it suggests a promising account of the relationship between subjective probabilities and qualitative belief. Yet, most natural inductive problems - particularly those commonly occurring in statistical inference - are best modelled with continuous probability distributions and statistical models with a richer internal structure. This paper explores the possibility of extending Leitgeb's stability rule to more realistic learning scenarios and general probability spaces. This is done by considering a generalised notion of probabilistic stability, in which acceptance depends not only on the underlying probability space, but also on a learning problem - namely, a probability space equipped with a distinguished family of events capturing the relevant evidence (e.g., the observable data) in the given learning scenario. This view of acceptance as being relative to an evidence context is congenial to (topological approaches to) formal learning theory and hypothesis testing in statistics (where one typically distinguishes the hypotheses being considered from observable sample data), as well as logics of evidence-relative belief [van Benthem and Pacuit, 2011].

Here we consider the case of statistical learning. We show that, in the context of standard (parametric) Bayesian learning models, the stability rule yields a notion of acceptance that is either trivial (only hypotheses with probability 1 are accepted) or fails to be conjunctive (accepted hypotheses are not closed under conjunctions). The first problem chiefly affects statistical hypotheses, while the second one chiefly affects predictive hypotheses about future outcomes. The failure of conjunctivity for the stability rule is particularly salient, as it affects a wide class of consistent Bayesian priors and learning models with exchangeable random variables. In particular, the results presented here apply to many distributions commonly used in statistical inference, as well as to every method in Carnap's continuum of inductive logics [Carnap, 1980; Skyrms, 1996]. These results highlight a serious tension between (1) being responsive to evidence and (2) having conjunctive beliefs induced by the stability rule. In the statistical context, certain properties of priors that are conducive to inductive learning - open-mindedness, as well as certain symmetries in the agent's probability assignments - act against conjunctive belief. Thus, the main selling points of the stability account of belief - its good logical behaviour and its close connection to the Lockean thesis - do not survive the passage to richer probability models, such as canonical statistical models for i.i.d. learning. We conclude by discussing the consequences these results have for Leitgeb's Humean Thesis on belief [Leitgeb, 2017].

References

J. van Benthem and E. Pacuit. Dynamic Logics of Evidence-Based Beliefs. Studia Logica, 99(1): 61 - 92, 2011. doi: 10.1007/s11225-011-9347-x.

R. Carnap. A Basic System of Inductive Logic. In R.C. Jeffrey (ed.), Studies in Inductive Logic and Probability, vol. 2, Berkeley: University of California Press, 1980.

K. T. Kelly and H. Lin. A geo-logical solution to the lottery-paradox, with applications to conditional logic. Synthese, 186(2): 531 - 575, 2012. doi: 10.1007/s11229-011-9998-1.

H. Leitgeb. Reducing belief simpliciter to degrees of belief. Annals of Pure and Applied Logic, 164: 1338 - 1389, 2013. doi: 10.1016/j.apal.2013.06.015.

H. Leitgeb. The Stability Theory of Belief. Philosophical Review, 123(2): 131 - 171, 2014. doi: 10.1215/00318108-2400575.

H. Leitgeb. The Stability of Belief. Oxford University Press, Oxford, 2017.

H. Rott. Stability and Scepticism in the Modelling of Doxastic States: Probabilities and Plain Beliefs. Minds and Machines, 27(1): 167 - 197, 2017. doi: 10.1007/s11023-016-9415-0.

B. Skyrms. Carnapian inductive logic and Bayesian statistics. In Ferguson, T. S., Shapley, L. S. and MacQueen, J. B., editors, Statistics, probability and game theory: Papers in honor of David Blackwell, Hayward, CA, Institute of Mathematical Statistics, pages 321 - 336, 1996. doi: 10.1214/lnms/1215453580.

Failures of Contingent Thinking

Evan Piermont (Royal Holloway, University of London, Department of Economics)
Peio Zuazo-Garin (Higher School of Economics, International College of Economics and Finance)

In this paper, we provide a theoretical framework to analyze an agent who misinterprets or misperceives the true decision problem she faces. Within this framework, we show that a wide range of behavior observed in experimental settings manifests as failures to perceive implications - in other words, failures to properly account for the logical relationships between various payoff-relevant contingencies. We present behavioral characterizations corresponding to several benchmarks of logical sophistication and show how it is possible to identify which implications the agent fails to perceive. Thus, our framework delivers both a methodology for assessing an agent's level of contingent thinking and a strategy for identifying her beliefs in the absence of full rationality.