Euclidean Properties of Bayesian Updating
Kyle Chauvin*
Last modified: 2022-04-17
Abstract
This paper introduces a novel framework for analyzing Bayesian updating and non-Bayesian heuristics. A learning rule consists of an arbitrary set of belief states together with a set of transition functions, called arguments, from the belief set to itself. Bayesian learning rules, with beliefs in the form of probability distributions over a state space and arguments in the form of Bayes’ rule, are a special case. The paper’s first main result is an axiomatic characterization of Virtual Bayesian learning rules, which can be turned into Bayesian learning rules via a relabeling of the set of beliefs. There are three substantive axioms – that arguments are injective functions, that arguments commute, and that repeated application of an argument never produces cycles of beliefs – as well as three regularity assumptions. The axioms both identify the algebraic properties common to all Bayesian learning rules and distinguish which familiar updating heuristics are Virtual Bayesian. The second main result establishes that any Virtual Bayesian learning rule can be embedded into Euclidean space – and thereby equipped with geometric notions of magnitude, direction, and so on – by defining the ‘agreement’ between pairs of arguments in a suitably additive manner. Under such an embedding, an argument’s direction corresponds to the limit of the support of posterior beliefs under repeated application of the argument, and its magnitude measures the extent to which a single application pushes prior beliefs towards that limit. The paper concludes by discussing how the framework of learning rules could be applied in additional contexts, including the elicitation of beliefs from laboratory subjects.
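The axioms can be illustrated in the canonical Bayesian case. The sketch below is a hypothetical illustration, not code from the paper: beliefs are probability vectors over a finite state space, and each argument is a Bayes’-rule update with a fixed likelihood vector. It checks the commutativity axiom (the order of two arguments does not matter) and shows the acyclicity behavior, under which repeated application of one argument concentrates belief on the states that argument favors.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Apply Bayes' rule: posterior is proportional to prior * likelihood."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# A belief state: a probability distribution over three states of the world.
prior = np.array([0.5, 0.3, 0.2])

# Two 'arguments': likelihood vectors induced by two hypothetical signals.
signal_a = np.array([0.9, 0.5, 0.1])
signal_b = np.array([0.2, 0.6, 0.7])

# Commutativity axiom: applying the arguments in either order
# yields the same posterior belief.
ab = bayes_update(bayes_update(prior, signal_a), signal_b)
ba = bayes_update(bayes_update(prior, signal_b), signal_a)
assert np.allclose(ab, ba)

# Repeated application of one argument never cycles; belief mass drifts
# monotonically toward the state(s) with the highest likelihood,
# illustrating the 'direction' of the argument as a limit belief.
belief = prior.copy()
for _ in range(50):
    belief = bayes_update(belief, signal_a)
print(belief.round(3))
```

The final print shows essentially all mass on the first state, the state `signal_a` favors most, matching the abstract’s description of an argument’s direction as the limit of posterior supports under repeated application.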