Seminars by Term:

January 25, 2021

Stephen Stigler, University of Chicago, Department of Statistics

How to Beat the Odds: The Law of the Maturity of Chances

Abstract

This Law has been known to, and has influenced, gamblers for thousands of years: when betting on dice or roulette, bet on an outcome that has not occurred for a long time; its chance will have "matured," and the longer since its last appearance, the more likely it is to appear. In the academic world this is well known as a ridiculous fallacy, a self-deception, an impossible and vain hope for which there is no rational support, and for which there never will be: how could the die or the wheel have a memory? The more honest academics admit that the disproof is tautological; it assumes the conclusion by supposing independence, or lack of memory. This paradoxical conflict between theory and practice will be addressed in terms of a 250-year-old lottery, highlighting some little-known and unappreciated work by major figures of the time.
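As a purely editorial aside (not part of the talk), the independence point behind the academic verdict is easy to check by simulation:

```python
import random

# A quick Monte Carlo check of the "law of the maturity of chances": on a fair
# European roulette wheel, is red any more likely after a long run of non-red
# spins? (Editorial sketch; the wheel and streak length are illustrative.)
random.seed(1)

P_RED = 18 / 37   # 18 red pockets out of 37
STREAK = 5        # length of the preceding non-red run we condition on

hits = cases = 0
run_without_red = 0
for _ in range(2_000_000):
    red = random.random() < P_RED
    if run_without_red >= STREAK:       # the "matured" situation
        cases += 1
        hits += red
    run_without_red = 0 if red else run_without_red + 1

print(f"P(red), unconditional:                   {P_RED:.4f}")
print(f"P(red | {STREAK} preceding non-red spins): {hits / cases:.4f}   ({cases} cases)")
# The two numbers agree up to simulation noise: independent spins have no memory.
```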

February 1, 2021

Haosui Duanmu, University of California, Berkeley

On extended admissible decision procedures and their nonstandard Bayes risk

Abstract

Nonstandard analysis, a powerful machinery derived from mathematical logic, has had many applications in probability theory as well as stochastic processes. Nonstandard analysis allows the construction of a single object—a hyperfinite probability space—which satisfies all the first-order logical properties of a finite probability space, but which can simultaneously be viewed as a measure-theoretic probability space via the Loeb construction. As a consequence, the hyperfinite/measure duality has proven particularly useful in porting discrete results into their continuous settings.

The connection between frequentist and Bayesian optimality in statistical decision theory is a longstanding open problem. For statistical decision problems with a finite parameter space, it is well known that a decision procedure is extended admissible (frequentist optimal) if and only if it is Bayes. This connection becomes fragile for decision problems with an infinite parameter space, and one must relax the notion of Bayes optimality to regain the equivalence between extended admissibility and Bayes optimality. Various attempts have been made in the literature, but they are subject to technical conditions which often rule out semiparametric and nonparametric problems. By using nonstandard analysis, we develop a novel notion of nonstandard Bayes optimality (Bayes with infinitesimal excess risk). We show that, without any technical condition, a decision procedure is extended admissible if and only if it is nonstandard Bayes. We conclude by showing that several existing standard results in the literature can be derived from our main result.
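For orientation, here is a rough editorial paraphrase of the two optimality notions involved (standard formulations assumed, not quoted from the talk):

```latex
% A rough paraphrase of the two optimality notions, with r(\theta,\delta)
% the frequentist risk of procedure \delta at parameter \theta.
\begin{align*}
\text{extended admissible:}\quad &\text{for every } \varepsilon > 0,
  \text{ no } \delta' \text{ satisfies } r(\theta,\delta') \le r(\theta,\delta) - \varepsilon
  \text{ for all } \theta \in \Theta;\\
\text{nonstandard Bayes:}\quad &\text{for some (nonstandard) prior } \pi,\quad
  \int r(\theta,\delta)\,\pi(\mathrm{d}\theta)
  \;\le\; \inf_{\delta'} \int r(\theta,\delta')\,\pi(\mathrm{d}\theta) \;+\; \epsilon,
  \quad \epsilon \text{ infinitesimal}.
\end{align*}
```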

February 8, 2021

Distinguished Panel Discussion: Dana Francisco Miranda, Nell Irvin Painter, Felix Waldmann, Prakash Gorroochurn and John Aldrich

Worthy of study, worthy of honor?

Abstract

We sometimes memorialize scholars by attaching their names to lecture series, conferences, scholarships, prizes, societies, institutes, rooms, or even buildings. Today we are de-memorializing some. Why memorialize? Why de-memorialize? What are the purposes and results, intended or unintended, of these actions?

These questions have recently touched two academic disciplines that our seminar brings together: philosophy and statistics. In 2020, the University of Edinburgh removed David Hume’s name from a building on its campus, Gonville & Caius College at Cambridge removed a stained-glass window depicting a Latin square that commemorated R. A. Fisher, the Committee of Presidents of Statistical Societies retired the R.A. Fisher Award and Lecture, and an official inquiry into the history of eugenics at University College London recommended that Karl Pearson’s and Francis Galton’s names be removed from that institution’s Pearson Building, Galton Chair, and Galton Lecture Theatre.

This discussion will focus not on whether these institutions should or should not have taken these particular actions, but on what the initial naming and the current de-naming signify for all the actors and all the audiences in these two disciplines.

Glenn Shafer, one of the organizers of the seminar, will chair a distinguished panel:

1. Dana Francisco Miranda, University of Massachusetts – Boston philosopher who has written thoughtfully about the complexity of memorialization.

2. Nell Irvin Painter, Professor Emerita of History at Princeton and Chairman of the Board of MacDowell, the USA’s leading artist retreat, who has written on the need for a Vergangenheitsbewältigung in the United States.

3. Felix Waldmann, Cambridge University historian whose extensive study of David Hume included the discovery of Hume’s investment in a slave plantation.

4. Prakash Gorroochurn, Columbia University biostatistician and historian, leading scholar of R.A. Fisher’s work in statistics and population genetics, including his dissent from the UNESCO statement on race published in 1952.

5. John Aldrich, University of Southampton historian of the English statistical school whose wide-ranging work also includes a study of eugenics in British economics.

After a virtual tea beginning at 4:15 pm, the panel discussion will begin at 4:30 pm. Each panelist will speak for 10 to 15 minutes, as formally or informally as they choose. Then the chair will invite the audience to pose questions, orally or via chat. We will close the session formally at 6:00 but stay on-line until 6:30 if anyone wants to continue.

February 15, 2021

Rescheduled to May 3, 2021

Peter Wakker, Erasmus University

Belief Hedges: correcting for subjective beliefs when measuring ambiguity attitudes even when those beliefs are unknown


February 22, 2021

Prakash Shenoy, University of Kansas School of Business

An interval-valued utility theory for decision making with Dempster-Shafer belief functions

Abstract

The main goal of this presentation is to describe an axiomatic utility theory for Dempster-Shafer belief function lotteries. The axiomatic framework used is analogous to von Neumann-Morgenstern’s utility theory for probabilistic lotteries as described by Luce and Raiffa. Unlike the probabilistic case, our axiomatic framework leads to interval-valued utilities, and therefore, to a partial (incomplete) preference order on the set of all belief function lotteries. If the belief function reference lotteries we use are Bayesian belief functions, then our representation theorem coincides with Jaffray’s representation theorem for his linear utility theory for belief functions. We illustrate our representation theorem using some examples discussed in the literature.

March 1, 2021

Simon Huttegger, UC Irvine

Reconciling Evidential and Causal Decision Theory

Abstract

I consider dynamical models of deliberation for Newcomb's problem. These models can be used to show that a deliberating evidential decision theorist who continuously updates on the information generated by deliberation converges to choosing two boxes. This happens under certain assumptions about the deliberative process which may not always be in place. Thus, the resulting reconciliation between evidential and causal decision theory is not perfect, but the approach sharpens our understanding as to when the two theories come apart.

March 8, 2021

Jason Konek, Bristol

Accuracy for Sets of Almost Desirable Gambles

Abstract

This talk will introduce a new class of “IP scoring rules” for sets of almost desirable gambles. A set of almost desirable gambles D is evaluable for both type 1 and type 2 error. Type 1 error is roughly a matter of the extent to which D encodes false judgments of desirability. Type 2 error is roughly a matter of the extent to which D fails to encode true judgments of desirability. Our IP scoring rules assign a single “alethic penalty” to D on the basis of both type 1 and type 2 error. We will show that these scoring rules are flexible enough to return all additive strictly proper scoring rules as a special case. They also return non-additive strictly proper scoring rules as a special case. Then we will explore preliminary results that suggest that Walley’s axioms of coherence for sets of almost desirable gambles can be justified by an accuracy dominance argument.

March 22, 2021

Catrin Campbell-Moore, Bristol

Probability Filters as a Model for Belief

Abstract

I will propose a model of belief where one's attitudes are captured by a filter on the space of probabilities. That is, it is given by endorsements of various sets of probabilities. For example, if you think the train is likely to be on time, we will say you endorse { p | p (OnTime) > 0.5 }. And if you think that a gamble g is desirable, we will say you endorse { p | Exp_p [g] > 0 }. Your endorsements should be closed under finite intersection and supersets; that is, they should form a filter. This is a very expressively powerful framework, allowing for non-Archimedean behaviour as well as imprecision. It can capture the models of belief available in the imprecise probability literature such as representation via sets of desirable gambles or sets-of-probabilities. It provides a natural and powerful model of belief.
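Written out (editorial notation, following the abstract), the closure conditions are:

```latex
% Endorsements form a filter \mathcal{F} on the space \Delta of probability
% functions, per the two closure conditions stated in the abstract.
\begin{align*}
A, B \in \mathcal{F} \;&\Longrightarrow\; A \cap B \in \mathcal{F}
  &&\text{(closed under finite intersections)}\\
A \in \mathcal{F},\ A \subseteq B \subseteq \Delta \;&\Longrightarrow\; B \in \mathcal{F}
  &&\text{(closed under supersets)}
\end{align*}
% Endorsed sets from the abstract's examples:
% \{ p \mid p(\mathrm{OnTime}) > 0.5 \} and \{ p \mid \mathbb{E}_p[g] > 0 \}.
```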

March 29, 2021

Note: Seminar will begin at 1:00pm EST

Gert de Cooman, Ghent University

Randomness and imprecision

Abstract

I will focus on joint work with my close colleague Jasper De Bock on using the martingale-theoretic approach of game-theoretic probability to incorporate imprecision into the study of randomness. We associate (weak Martin-Löf, computable, Schnorr) randomness with interval, rather than precise, forecasting systems. The richer mathematical structure this uncovers allows us, amongst other things, to better understand and situate existing results for the precise limit. When we focus on constant—stationary—interval forecasts, we find that every sequence of binary outcomes has an associated filter of intervals it is random for. It may happen that none of these intervals is precise—a single real number—and that is just one of our reasons for stating that ‘randomness is inherently imprecise’.

I will illustrate this by indicating that randomness associated with non-stationary precise forecasting systems can be captured by a constant interval forecast, which must then be less precise: a gain in model simplicity is thus paid for by a loss in precision. But I will also argue that imprecise randomness cannot always be explained away as a result of oversimplification: there are sequences that are random for a constant computable interval forecast, but never random for any computable precise forecasting system.

Finally, I will discuss why random sequences for interval forecasts are as rare—and therefore arguably as interesting—as their precise counterparts: both constitute meagre sets.

April 5, 2021

Sandy Zabell, Northwestern

Fisher, Bayes, and predictive Bayesian inference

Abstract

R. A. Fisher is usually perceived to have been a staunch critic of the Bayesian approach to statistics, yet his last book (_Statistical Methods and Scientific Inference_, 1956) is much closer in spirit to the Bayesian approach than the frequentist theories of Neyman and Pearson. This mismatch between perception and reality is best understood as an evolution in Fisher's views over the course of his life. In my talk I will discuss Fisher's initial and harsh criticism of "inverse probability", his subsequent advocacy of fiducial inference starting in 1930, and his admiration for Bayes expressed in his 1956 book. Several of the examples Fisher discusses there are best understood when viewed against the backdrop of earlier controversies and antagonisms.

April 12, 2021

Katie Elliot, UCLA

Where are the chances?

Abstract

Not all probability ascriptions that appear in scientific theories describe chances. There is a question about whether probability ascriptions in non-fundamental sciences, such as those found in evolutionary biology and statistical mechanics, describe chances in deterministic worlds, and about whether there could be any chances in deterministic worlds at all. Recent debate over whether chance is compatible with determinism has unearthed two strategies for arguing about whether a probability ascription describes chance. That is, to speak metaphorically, two different strategies for figuring out where the chances are: find the chances by focusing on chance’s explanatory role, or find the chances by focusing on chance’s predictive role. These two strategies tend to yield conflicting results about where the chances are, and debate over which strategy is appropriate tends to end in stalemate. After discussing these two strategies, I consider a new view of chance’s explanatory role. I argue that one theoretical advantage of this new view is that it allows us to make progress on the question of where the chances are by providing a principled way of determining which probability ascriptions describe chances. From the vantage of this new view, the correct application of both strategies involves figuring out where the chances are by figuring out where the probabilistic scientific explanations are and what those explanations are like.

April 19, 2021

William Ziemba, University of British Columbia

Research in investment management and its applications in various markets, including speculative markets in sports and cash and futures equity markets

Abstract

William Ziemba discusses topics from his long career of research in finance, investing, gambling and probability theory. The discussion will be divided into 3 or 4 sections on the topics of gambling, investing, finance, and practical insights from his career of working with leaders in all of these fields.

April 26, 2021

David Builes, NYU

Abstract

May 3, 2021

Peter Wakker, Erasmus University

Belief Hedges: correcting for subjective beliefs when measuring ambiguity attitudes even when those beliefs are unknown

Abstract

Since Keynes (1921) and Knight (1921) we have known that uncertainties usually do not come with probabilities (“ambiguity”). The first half of this lecture presents the history, explaining why ambiguity has always been important even though its popularity has risen only since the 1990s. The second half solves a long-standing open problem: to measure or apply ambiguity aversion, we must control for subjective beliefs. Until now, this could only be done for artificial events: Ellsberg urns whose compositions are kept secret, or experimenter-specified probabilities. It was unknown how to handle application-relevant events. We introduce belief hedges to solve this problem. That is, we combine uncertain beliefs so that they neutralize each other, whatever they are, in the same way that hedging in finance protects against uncertain returns. Now we can measure and apply ambiguity aversion to all events, greatly increasing the applicability of ambiguous beliefs.

This is joint work with Aurélien Baillon, Han Bleichrodt, & Chen Li.

Note: This seminar was re-scheduled from February 15, 2021.

September 14, 2020

Dmitri Gallow, University of Pittsburgh, Department of Philosophy

Two-Dimensional Chance Deference

Abstract

Principles of chance deference say that you should align your credences with the objective chances. Roughly: so long as you don't have information about what happens after t, your credence in A, given that the time t chance of A is n%, should be n%. Principles like this face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which you've lost track of the time.

For an illustration of the first problem (due to John Hawthorne and Maria Lasonen-Aarnio): suppose that there are 100 tickets in a lottery, and the winning ticket will be drawn tomorrow. Before it is drawn, we introduce the name "Lucky" to refer to whoever it is that actually holds the winning ticket. Then you know for sure that today's chance of Lucky winning is 1%. (It's a fair lottery, so Lucky has the same chance of winning as everyone else.) But you also know for sure that Lucky will win---given the way the name "Lucky" was introduced, it is a priori knowable that Lucky wins. So this appears to be a case in which your credences should depart from the known chances.

For an illustration of the second problem: suppose that, although today is Tuesday, you don't know whether today is Tuesday or Wednesday. But suppose you know for sure that today's chance of A is 75%, and yesterday's chance of A was 25%. Then, for all you know, the Tuesday chance of A is 25% (which is so iff today is Wednesday), and for all you know, the Tuesday chance of A is 75% (which is so iff today is Tuesday). Then, the principle of chance deference says that, conditional on today being Tuesday, your credence in A should be 75%, whereas, conditional on today being Wednesday, your credence in A should be 25%. Since you're not sure whether today is Tuesday or Wednesday, the principle says that your credence in A should be somewhere between 25% and 75%. But you know for sure that the current chance of A is 75%, so it seems that your credence in A should be 75%.

In response to these two troubles, I propose an amendment of the principle of chance deference. This amended principle has a surprising consequence for debates about Elga's Sleeping Beauty puzzle. According to this new principle of chance deference, the 'Halfer' does not defer to the chances, whereas the 'Thirder' does.
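In schematic form, the simple deference principle at issue (editorial notation, not the speaker's) is:

```latex
% ch_t is the time-t objective chance function, and E is background evidence
% carrying no information about what happens after t.
C\bigl(A \,\big|\, \mathrm{ch}_t(A) = x \wedge E \bigr) \;=\; x.
```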

September 21, 2020

Brian Hedden, University of Sydney, Department of Philosophy

On Statistical Criteria of Algorithmic Fairness

Abstract

Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm's predictions and the actual outcomes, for instance, that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this cannot be done, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except an expectational calibration criterion, is a necessary condition for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness.
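The impossibility results mentioned above can be seen in a toy example (editorial numbers, not from the talk): a predictor that is calibrated within each group and satisfies predictive parity can still have unequal false-positive rates once base rates differ.

```python
# Two groups with hand-picked risk-score bins (score, n_people, n_actual_positive).
# Both groups are perfectly calibrated within each bin and have equal PPV at the
# chosen threshold, yet their false-positive rates differ because base rates differ.
groups = {
    "A": [(0.2, 50, 10), (0.8, 50, 40)],   # base rate 0.50
    "B": [(0.2, 80, 16), (0.8, 20, 16)],   # base rate 0.32
}
THRESHOLD = 0.5  # classify "positive" when score > 0.5

for g, bins in groups.items():
    n = sum(b[1] for b in bins)
    pos = sum(b[2] for b in bins)
    pred_pos = sum(b[1] for b in bins if b[0] > THRESHOLD)
    true_pos = sum(b[2] for b in bins if b[0] > THRESHOLD)
    false_pos = pred_pos - true_pos
    ppv = true_pos / pred_pos              # predictive parity compares this
    fpr = false_pos / (n - pos)            # classification parity compares this
    calibrated = all(abs(k / m - s) < 1e-9 for s, m, k in bins)
    print(f"group {g}: base rate {pos/n:.2f}, calibrated={calibrated}, "
          f"PPV={ppv:.2f}, FPR={fpr:.3f}")
# Output: equal PPV (0.80 in both groups) but FPR 0.200 vs ~0.059.
```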

September 28, 2020

Aaditya Ramdas, Carnegie Mellon University, Department of Statistics and Data Science

Betting scores, e-values and martingales

Abstract

This talk will discuss, at a relatively high level, several advantages of dealing with betting scores as the underlying instrument for statistical inference (or their measure-theoretic cousins: e-values and nonnegative supermartingales).

Topics include: causal inference, nonparametric estimation, universal inference and multiple testing.

Some relevant papers:

- False discovery rate control with e-values (R. Wang, A. Ramdas)
- Admissible anytime-valid sequential inference must rely on nonnegative martingales (A. Ramdas, J. Ruf, M. Larsson, W. Koolen)
- Universal inference (L. Wasserman, A. Ramdas, S. Balakrishnan), PNAS, 2020
- Time-uniform, nonparametric, nonasymptotic confidence sequences (S. Howard, A. Ramdas, J. Sekhon, J. McAuliffe), Annals of Statistics, 2021
- Time-uniform Chernoff bounds via nonnegative supermartingales (S. Howard, A. Ramdas, J. Sekhon, J. McAuliffe), Probability Surveys, 2020
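As a toy illustration of the betting-score idea described in this abstract (editorial sketch; the bet size and sample size are arbitrary): under the null that a coin is fair, the running product of factors 1 + λ(X − 1/2) is a nonnegative martingale started at 1, so its value is an e-value and Ville's inequality gives a level-α sequential test.

```python
import random

# Test H0: "the coin is fair" by repeatedly betting a fixed fraction on heads.
random.seed(0)

TRUE_P = 0.7      # the data are actually biased towards heads
LAMBDA = 0.5      # fixed bet size; any value in [-2, 2] keeps wealth nonnegative
ALPHA = 0.05

wealth = 1.0      # nonnegative martingale under H0, initial value 1
for n in range(1, 201):
    x = 1 if random.random() < TRUE_P else 0
    wealth *= 1 + LAMBDA * (x - 0.5)
    if wealth >= 1 / ALPHA:   # Ville: P_H0(ever reaching 1/alpha) <= alpha
        print(f"rejected H0 at level {ALPHA} after {n} tosses; wealth = {wealth:.1f}")
        break
else:
    print(f"no rejection in 200 tosses; final wealth (e-value) = {wealth:.2f}")
```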

October 5, 2020

Yonatan Berman, London Mathematical Laboratory

Risk Preferences in Time Lotteries

Abstract

An important question in economics is how people choose when facing uncertainty in the timing of rewards. In this paper we study preferences over time lotteries, in which the payment amount is certain but the payment time is uncertain. In expected discounted utility (EDU) theory decision makers must be risk-seeking over time lotteries. Here we explore growth-optimality, a normative model consistent with standard axioms of choice, in which decision makers maximise the growth rate of their wealth. Growth-optimality is consistent with both risk-seeking and risk-neutral behaviour in time lotteries, depending on how growth rates are computed. We discuss two approaches to compute a growth rate: the ensemble approach and the time approach. Revisiting existing experimental evidence on risk preferences in time lotteries, we find that the time approach accords better with the evidence than the ensemble approach. Surprisingly, in contrast to the EDU prediction, the higher the ensemble-average growth rate of a time lottery is, the less attractive it becomes compared to a sure alternative. Decision makers thus may not consider the ensemble-average growth rate as a relevant criterion for their choices. Instead, the time-average growth rate may be a better criterion for decision-making.

October 12, 2020

Stewart Ethier, University of Utah, Department of Mathematics

Baccarat and Game Theory

Abstract

There are three principal versions of the card game baccarat.

1. Baccarat punto banco is a nonstrategic game that is (or was, before COVID) a 40 billion dollar industry in Macau.

2. Baccarat chemin de fer is a two-person zero-sum game that played a role in the development of game theory, and evolved, over a period of about 100 years, into baccarat punto banco.

3. Baccarat banque is a three-person zero-sum game depending on a continuous parameter for which the most natural solution concept is not the Nash equilibrium. It was a predecessor of baccarat chemin de fer.

In this talk we discuss all three games, their history, and their game-theoretic aspects.

October 19, 2020

Alexander Pruss, Baylor University, Department of Philosophy

Regular Hyperreal and Qualitative Probabilities Invariant Under Symmetries

Abstract

Suppose an analog spinner is spun. That the spinner should stop on a particular exact angle x has probability zero according to classical probability theory. But at the same time, intuitively, it seems more likely that the spinner should stop at angle x than that the spinner should turn into a square circle, which also has probability zero. Three kinds of attempts have been made to depart from classical probability theory in ways that allow one to make subtler comparisons than classical probability allows: fundamental conditional probabilities (or Popper functions), hyperreal probabilities, and qualitative probabilities. Unfortunately, it is known that in a number of prominent cases, some of these approaches are guaranteed to violate the symmetries of the situation. For instance, there is no rotationally invariant hyperreal probability on the circle that assigns non-zero probability to every possible outcome. This leads to the mathematically interesting question of exactly when these subtler probabilities can be assigned in ways that respect symmetries. On a strong (and I argue correct) understanding of "respecting symmetries", the literature is close to a complete answer for the conditional probability case. I fill out the remaining details of that case, and give complete characterizations in the hyperreal and qualitative probability cases. This leads to a philosophical question: is there anything philosophically special about the cases where it is possible to respect symmetries? Unfortunately, I am at this point unable to find a positive answer.

October 26, 2020

Matthias Clery

Against axiomatization of probability calculus? Borel, Fréchet and Lévy on probability calculus, its foundation and its mathematical apparatus

Abstract

During the first decades of the 20th century, several mathematicians throughout Europe tackled the problem of defining axioms for probability calculus. However, Borel, Fréchet and Lévy, the three most prominent probabilists in France in the 1920s and 1930s, adopted mixed attitudes towards those attempts: indifferent or hostile, but always attentive. Although the three did not agree among themselves, we argue that their attitudes were the result of the unfavorable context for the mathematical development of probability calculus in France at the beginning of the 20th century, and of a common outlook on both the foundation of probability and the role played by mathematics in scientific knowledge.

November 2, 2020

Konstantin Genin, University of Toronto, Department of Philosophy

Simplicity and Scientific Progress

Abstract

A major goal of twentieth-century philosophy of science was to show how science could make progress toward the truth even if, at any moment, our best theories are false. To that end, Popper and others tried to develop a theory of truthlikeness, hoping to prove that theories get closer to the truth over time. That program encountered several notable setbacks. I propose the following: a method for answering an empirical question is progressive if the chance of outputting the true answer is strictly increasing with sample size. Surprisingly, many standard statistical methods are not even approximately progressive. What's worse, many problems do not admit strictly progressive solutions. However, I prove that it is often possible to approximate progressiveness arbitrarily well. Furthermore, every approximately progressive method must obey a version of Ockham’s razor. So it turns out that addressing the problem of progress uncovers a solution to another perennial problem: how can we give a non-circular argument for preferring simple theories when the truth may well be complex?

November 9, 2020

Mark Colyvan, University of Sydney

The Role of Toy Statistical Models in Legal Reasoning

Abstract

A great deal of theorising about the proper place of statistical reasoning in the courtroom revolves around several canonical thought experiments that invoke toy statistical models of the situation in question. I will argue that all of these canonical thought experiments are flawed in various (albeit interesting) ways. In some cases the flaws involve subtle underspecification that leads to ambiguity about the intuitive judgement; in other cases the flaw is that the thought experiment stipulates that we forgo freely-available and relevant evidence. The upshot is that these thought experiments do not succeed in undermining the use of statistical evidence in the courtroom.

November 16, 2020

Hanti Lin, University of California, San Diego

What is an Epistemology of Induction?

Abstract

An epistemology of induction should have all of the following features:

1. (The Evidentialist Theme) This epistemology should allow us to assess competing hypotheses in terms of evidential support---in terms of how those hypotheses are each supported by the available evidence.

2. (The Reliabilist Theme) This epistemology should allow us to evaluate competing inductive methods in terms of reliability---in terms of how reliable those methods are for finding the true hypothesis.

3. (Continuity with Data Science) This epistemology should not only accommodate some intuitively justified inferences in science, but also join data scientists in solving some inference problems that exist in science.

I will sketch how it is possible to develop an epistemology of induction with all three of those features. I will do that by walking you through a case study on inferring causation from statistical data.

November 23, 2020

Julia Staffel and Glauber De Bona, University of Colorado, Boulder and University of São Paulo

Updating Incoherent Credences - Extending the Dutch Strategy Argument for Conditionalization

Abstract

In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that can apply to them regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This happens to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
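For readers unfamiliar with Dutch-book sure loss, here is a minimal synchronic illustration (editorial; the paper's Dutch strategies are diachronic and more involved, but the flavour is the same):

```python
# An agent with incoherent credences over A and not-A is exposed to a sure loss.
c_A, c_notA = 0.7, 0.5          # incoherent: they sum to 1.2

# The agent regards a $1 bet on X as worth c(X), so a bookie sells her a $1
# bet on A for $0.70 and a $1 bet on not-A for $0.50.
price_paid = c_A + c_notA
for outcome in ("A true", "A false"):
    payout = 1.0                 # exactly one of the two bets pays $1
    print(f"{outcome}: paid {price_paid:.2f}, received {payout:.2f}, "
          f"net {payout - price_paid:+.2f}")
# Net is -0.20 either way: the guaranteed loss equals c(A) + c(not-A) - 1.
```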

November 30, 2020

Francesca Zaffora Blando, Carnegie Mellon University

Abstract

December 7, 2020

Harry Crane, Rutgers University, Department of Statistics

Models vs. Markets: A case study of Prediction Markets and Forecasters for the 2020 U.S. election

Abstract

Two common ways to forecast political outcomes are (i) prediction markets (Markets) and (ii) statistical models (Models). Which is more reliable?

Over the past several U.S. elections, the Models and Markets have disagreed substantially in their assessments of the most likely outcome. In 2020, for example, the leading forecasters and poll aggregators forecasted a Joe Biden win with probability 90-95%, while almost all betting markets assessed Biden's chances in the 60-70% range.

An argument in favor of betting markets is that they are better able to aggregate information than forecasters, who predominantly rely on polling data. In addition, the fact that bettors risk their own money gives a concrete disincentive against betting for a candidate based purely on sentiment. Despite this, many insist that the discrepancy between the Models and Markets, in both 2020 and in previous election cycles, reflects an irrationality among market participants.

Proponents of models, such as FiveThirtyEight and the Economist, insist that their forecasts are consistently more accurate than the betting markets. Is this true?

I'll provide an overview of the claims on both sides and show the results of my analysis of the 2020 election based on a proposed market-based scoring metric, which evaluates the performance of probabilistic forecasts based on their would-be performance in the betting markets.

In addition to the analysis presented here, a running tally of results from the above analysis was updated and reported throughout the 2020 campaign at https://pivs538.herokuapp.com/.
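For the flavor of a market-based score, here is a deliberately simplified editorial sketch; the rule and the numbers below are illustrative assumptions, not Crane's actual metric.

```python
# Treat the model's probability p as a bettor facing a market priced at q:
# buy the contract when p > q, sell it when p < q, and tally realized profit.
def market_score(p: float, q: float, outcome: int) -> float:
    """Profit per $1 contract of the model 'betting' against the market."""
    if p > q:          # model thinks the event is underpriced: buy at q
        return outcome - q
    elif p < q:        # model thinks it is overpriced: sell (short) at q
        return q - outcome
    return 0.0

# Hypothetical numbers for illustration only.
model_p, market_q, outcome = 0.90, 0.65, 1
print(f"score = {market_score(model_p, market_q, outcome):+.2f} per $1 contract")
```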

January 27, 2020

Adam Elga, Princeton University, Department of Philosophy

Causal decision theory does not exist

Abstract

The box is either empty or contains $1 million (you can't see which). You will either receive (1) just the contents of the box, or (2) the contents of the box plus an extra $1,000. You get to choose (1) or (2). The catch is that a reliable predictor put the $1 million in the box if and only if he predicted you would refuse the extra $1,000. Should you take the $1,000?

This is Newcomb's Problem, which has divided philosophers since it was described by Nozick (1969). Those who favor taking the $1,000 are called "two-boxers" and are typically moved by the sort of "causal dominance reasoning" exemplified by the following speech: "I have no present control over the contents of the box. Whether the box is empty or not, I prefer having an extra $1,000. So I should take the $1,000."

It is often thought that various so-called "causal decision theories" (such as theories described by Stalnaker, Lewis, Gibbard & Harper, Joyce, and others) vindicate causal dominance reasoning, and so are attractive theories for two-boxers. Adapting observations due to Ahmed, Dorr, and Joyce, I argue that those theories do not vindicate causal dominance reasoning in general. Indeed, for each such theory there are Newcomb problems in which the theory recommends *one-boxing*. So none of those theories deserve the name "causal decision theory". I conclude that a true causal decision theory does not exist---or at least, does not exist yet.

Attendees interested in a playful introduction to disputes about Newcomb's problem may optionally wish to look at "Newcomb University: A Play in One Act". The talk will be self-contained and no advance reading will be presupposed.

February 3, 2020

Anthony Aguirre, University of California, Santa Cruz, Department of Physics

From Mentaculus to Metaculus: probabilities, coarse-graining, and prediction from particles to politics.

Abstract

I’ll discuss a framework for organizing the quantum or classical (micro)states that describe a closed system into macrostates (aka "properties") associated with probabilities of measuring those properties. The time-evolution of these property probabilities had been dubbed the “mentaculus.” This framework is general enough to apply to everyday macroscopic phenomena, and I will use it to discuss determinism and predictability in the macro-world. I’ll then describe Metaculus, an online platform aiming to generate probabilistic predictions for real-world events, and tie the two together in a (somewhat vague but potentially meaningful) conjecture regarding fundamental unpredictability and probabilistic prediction.

February 17, 2020

Barry Loewer, Rutgers, Department of Philosophy

The Consequence Argument meets the Mentaculus

Abstract

The much discussed (by philosophers) Consequence Argument is claimed to establish that free will and determinism are incompatible. The argument claims that since we have no influence over the past and no influence over the laws, then if determinism is true we have no influence over the future, and so no free will. Influence involves counterfactual relations between decisions and what they influence; understanding these counterfactuals involves probabilities, and these probabilities derive from “the Mentaculus”. I show that on a proper account of counterfactuals and probabilities the Consequence Argument is unsound, and so does not establish a conflict between determinism and free will.

February 24, 2020

Cian Dorr, NYU, Department of Philosophy

The proportion of observers as a guide to credence

Abstract

According to an attractive principle of ideal Bayesian rationality that I call ‘Proportion’, conditional on a hypothesis about the eternal qualitative nature of the world as a whole which entails that the proportion of all observers who have a certain qualitative property is x, one should adopt x as one’s prior credence that one has that property oneself. In this talk I will explain the case for Proportion as I see it, and consider one of the central challenges it raises, namely the question of what should count (for the purposes of applying the principle) as an “observer”.

March 2, 2020

Daniel Hoek, Princeton University, Department of Philosophy

Coin flips, Spinning Tops and the Continuum Hypothesis

Abstract

By using a roulette wheel or by flipping a countable infinity of fair coins, we can randomly pick out a point on a continuum. In this talk I will show how to combine this simple observation with general facts about chance to investigate the cardinality of the continuum. In particular, I will argue on this basis that the continuum hypothesis is false. More specifically, I argue that the probabilistic inductive methods standardly used in science presuppose that every proposition about the outcome of a chancy process has a certain chance between 0 and 1. I also argue in favour of the standard view that chances are countably additive. A classic theorem of Banach and Kuratowski (1929) tells us that it follows, given the axioms of ZFC, that there are cardinalities between countable infinity and the cardinality of the continuum.

March 9, 2020


Abstract

March 23, 2020

Yonatan Berman, London Mathematical Laboratory

Abstract

March 30, 2020

Sara Aranowitz, Princeton University, University Center for Human Values

Abstract

April 6, 2020

No Seminar

Abstract

April 13, 2020

Barry Loewer, Rutgers, Department of Philosophy

Abstract

April 20, 2020

Alex Meehan, Princeton University, Department of Philosophy

Abstract

April 27, 2020

Dmitri Gallow, University of Pittsburgh, Department of Philosophy

Abstract

September 16, 2019

Richard Bradley, London School of Economics and Political Science, Department of Philosophy, Logic and Scientific Method

Chances and Credences

Abstract

In this talk I will defend two theses about the relation between chances and credences. The first is that chances should be identified with the judgements or partial beliefs of a (hypothetical) unbounded and fully informed perfect inductive reasoner, and not with any of the possible grounds for her judgements, such as frequencies or propensities. The second is a conditional version of the Principal Principle, which says that rational conditional belief given the evidence should go by the expectation of conditional chance given this evidence. The two theses are mutually supporting and make better sense (I shall claim) of the role that chance plays in our reasoning about what to believe and do than rival accounts.
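In schematic form (editorial notation, not the speaker's), the conditional version of the Principal Principle described above is:

```latex
% Rational credence in A given evidence E goes by the expected conditional
% chance of A given E.
C(A \mid E) \;=\; \mathbb{E}\bigl[\mathrm{ch}(A \mid E) \,\big|\, E\bigr]
           \;=\; \sum_x x \cdot C\bigl(\mathrm{ch}(A \mid E) = x \,\big|\, E\bigr).
```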

Location: Miller Hall



September 23, 2019

Mike Otsuka, London School of Economics and Political Science, Department of Philosophy, Logic and Scientific Method

Determinism and the Value and Fairness of Equal Chances

Abstract

It follows from plausible claims about the laws of physics and the narrowness of the most relevant reference class that the positive chances between 0.0 and 1.0 that lotteries yield are almost certainly merely epistemic rather than objective. It is, for example, merely a matter of our ignorance that a given fair coin toss will confer a 0.5 chance of landing heads. In actual objective fact, the chances that it will land heads are almost certainly either 0.0 or 1.0. I argue that, even if all chances between 0.0 and 1.0 are merely epistemic rather than objective, the provision by lottery of such merely epistemically equal positive chances of an indivisible, life-saving resource to those with equal claims renders things fairer by providing the equal distribution of something that it is rational to value equally, which makes a material difference to the goods people end up receiving.

Location: Miller Hall



October 7, 2019

Glenn Shafer, Rutgers Business School

The 19th century origins of confidence intervals, significance tests, and p-hacking

Abstract

In March, over 800 statisticians endorsed an editorial in Nature entitled “Retire statistical significance”. Can we really blame current abuses on the word “significance”? The existence of the same abuses in the 19th century suggests not. See Working Paper 55.

Location: Hill Center

October 14, 2019

Aris Spanos, Virginia Tech, Department of Economics

The Replication Crises and the Untrustworthiness of Empirical Evidence: the Error Statistical Perspective

Abstract

The paper argues that the current discussions on replicability and the abuse of significance testing do not do justice to the real problem: the untrustworthiness of published empirical evidence. When viewed from the error-statistical perspective, p-hacking, multiple testing, cherry-picking and low-power studies are only symptoms of a much broader problem relating to the recipe-like implementation of statistical methods that contributes in many different ways to the untrustworthy-evidence problem, including: (i) statistical misspecification, (ii) inadequate understanding and poor implementation of inference procedures, and (iii) unwarranted evidential interpretations of frequentist inferential results. Indeed, a case is made that the same recipe-like implementation of statistical methods could easily render untrustworthy evidence replicable, when equally uninformed practitioners follow the same questionable implementation. It is also argued that alternative methods proposed to replace significance testing, including observed confidence intervals and estimation-based effect sizes, as well as lowering rejection thresholds, do not address the untrustworthy-evidence problem since they are equally vulnerable to (i)-(iii). The paper also discusses the question of what replication could mean for observational data, as opposed to experimental data.



Location: Hill Center

November 4, 2019

Alex von Stein, University of Arizona, Department of Philosophy

Frequentist problems for Humean theories of chance

Abstract

Frequentism has a long and troubled history as an interpretation of probability. Recently, many have pursued the metaphysical project of accounting for single-case objective chance within Humean strictures. While these subjects might seem related only in their connection to probability broadly construed, I argue that the biggest problems for Humean chance are direct analogs of serious problems for frequentism and vice versa. I then attempt to diagnose the common origins of these difficulties with a view toward the prospects for Humean theories of chance in particular.

November 11, 2019

Eliot Jacobson, Ohio University (Mathematics) and University of California, Santa Barbara (Computer Science)

Advanced Advantage Play: The Art and Science of Legally Beating Every Casino Game and Promotion

Abstract

Advantage play is the act of legally exploiting procedural or structural weaknesses in casino games, marketing or operations in a way that generates an edge over the casino. The most well-known advantage play is blackjack card counting, as first expounded by Edward Thorp in 1962. Over fifty-five years later, the range of opportunities has expanded.

Modern advantage players legally beat table games, side bets, slot machines, video poker, sports betting, casino promotions and anything else they find that is sufficiently profitable. They analyze shuffling algorithms used by automatic shufflers. They exploit bias in devices such as roulette wheels. They pounce on poorly conceived casino marketing campaigns. Today’s top players are limited only by their creativity and diligence.

In this talk we will give an overview of some of the most profitable methods used by today’s top players. These methods include hole-carding, ace tracking, information sharing, edge sorting, loss rebates and more. We will present case studies that cover some of the mathematical and strategic ideas behind these methods, including Phil Ivey’s edge sorting at Crockford’s Casino in London and Don Johnson’s exploitation of loss rebates in Atlantic City. In particular, we will present three “Loss Rebate Theorems,” based on an application of a well-known theorem on Brownian motion with drift, and show their applicability to a wide variety of highly profitable opportunities.
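As a toy companion to the loss-rebate discussion, here is a Monte Carlo sketch (editorial; all parameters are illustrative guesses, and this is not the Loss Rebate Theorems themselves) showing how a rebate plus quit points can turn a negative-edge game into a winner:

```python
import random

# A session of a negative-edge game with a win target, a loss limit, and a
# rebate paid on any session loss. Per-hand outcomes are approximated as
# Gaussian in betting units.
random.seed(42)

EDGE, SD = -0.0026, 1.15      # per-hand mean and std dev (illustrative)
WIN_TARGET, LOSS_LIMIT = 20.0, 20.0
REBATE = 0.20                 # 20% of a session loss is refunded
MAX_HANDS = 2000

def session() -> float:
    bankroll = 0.0
    for _ in range(MAX_HANDS):
        bankroll += random.gauss(EDGE, SD)
        if bankroll >= WIN_TARGET or bankroll <= -LOSS_LIMIT:
            break
    return bankroll if bankroll > 0 else bankroll * (1 - REBATE)

results = [session() for _ in range(20_000)]
print(f"mean net per session (units): {sum(results) / len(results):+.2f}")
# With these toy numbers the rebate flips the game to a modestly positive EV.
```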

Speaker Bio:

Eliot Jacobson received his Ph.D. in Mathematics from the University of Arizona in 1983. Eliot was Associate Professor of Mathematics at Ohio University and Visiting Associate Professor and Lecturer of Computer Science at U.C. Santa Barbara. Eliot retired from academia in 2009.

After a decade as an advantage player, Eliot founded Jacobson Gaming, LLC in 2006. His company specialized in casino table game design, advantage play analysis, game development, and mathematical certification. Eliot's most recent book, "Advanced Advantage Play," (2015, Blue Point Books) is an industry best-seller on the topic of legally beating casino table games, side bets and promotions.

Eliot fully retired in 2017.

November 18, 2019

Marcello DiBello, CUNY

Is Algorithmic Fairness Possible?

Abstract

Algorithms are increasingly used by public and private sector entities to streamline decisions about health care, welfare benefits, child abuse, public housing, policing, bail and sentencing. Although algorithms can render decisions more efficient, they can also exacerbate existing inequities and structural biases in society. This paper focuses on the debate about the fairness of algorithms in criminal justice. In this context, fairness is understood in at least two different ways, as predictive parity and classification parity. The received view in the literature, following a number of impossibility results, is that since no algorithm can concurrently satisfy both conceptions of fairness, there are tradeoffs in pursuing one or the other conception. I believe that this framing of the debate is erroneous. Both classification parity and predictive parity can be concurrently achieved provided the algorithm is sufficiently accurate. I demonstrate this claim by simulating a decision algorithm for binary classification, showing that departures from classification and predictive parity become smaller as algorithmic accuracy increases.
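The direction of this claim can be seen in a small analytic sketch (an editorial toy model, not the simulation in the paper): with sensitivity = specificity = a, the false-positive rates are equal across groups by construction, and the remaining predictive-parity gap shrinks as a approaches 1.

```python
# Positive predictive value for a classifier with sens = spec = a, as a
# function of the group's base rate.
def ppv(base_rate: float, a: float) -> float:
    return a * base_rate / (a * base_rate + (1 - a) * (1 - base_rate))

BASE_A, BASE_B = 0.5, 0.3     # different base rates in the two groups
for a in (0.7, 0.8, 0.9, 0.95, 0.99):
    gap = abs(ppv(BASE_A, a) - ppv(BASE_B, a))
    print(f"accuracy {a:.2f}: PPV gap = {gap:.3f}")
# The PPV gap falls towards 0 as accuracy approaches 1.
```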

November 25, 2019

Branden Fitelson, Northeastern University, Department of Philosophy

How to Model the Epistemic Probabilities of Conditionals

Abstract

David Lewis (and others) have famously argued against Adams's Thesis (that the probability of a conditional is the conditional probability of its consequent, given its antecedent) by proving various "triviality results." In this paper, I argue for two theses -- one negative and one positive. The negative thesis is that the "triviality results" do not support the rejection of Adams's Thesis, because Lewisian "triviality based" arguments against Adams's Thesis rest on an implausibly strong understanding of what it takes for some credal constraint to be a rational requirement (an understanding which Lewis himself later abandoned in other contexts). The positive thesis is that there is a simple (and plausible) way of modeling the epistemic probabilities of conditionals, which (a) obeys Adams's Thesis, and (b) avoids all of the existing triviality results.
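For reference, Adams's Thesis as glossed above, in the usual notation:

```latex
% The probability of an indicative conditional equals the conditional
% probability of its consequent given its antecedent.
P(A \rightarrow C) \;=\; P(C \mid A), \qquad \text{provided } P(A) > 0.
```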

December 2, 2019

Adam Elga, Princeton University, Department of Philosophy

Causal decision theory does not exist

Postponed to January 27, 2020


September 10, 2018

Branden Fitelson, Northeastern University, Department of Philosophy

Two Approaches to Belief Update

Abstract

There are two dominant paradigms in the theory of qualitative belief change. While belief revision theory attempts to describe the way in which rational agents revise their beliefs upon gaining new evidence about an essentially static world, the theory of belief update is concerned with describing how such agents change their qualitative beliefs upon learning that the world is changing in some way.

A similar distinction can be made when it comes to describing the way in which a rational agent changes their subjective probability assignments, or `credences', over time. On the one hand, we need a way to describe how these credences evolve when the agent learns something new about a static environment. On the other hand, we need a way to describe how they evolve when the agent learns that the world has changed.

According to orthodoxy, the correct answers to the questions of how an agent should revise their qualitative beliefs and numerical credences upon obtaining new information about a static world are given by the axiomatic AGM theory of belief revision and Bayesian conditionalisation, respectively. Now, under the influential Lockean theory of belief, an agent believes a proposition p if and only if their credence in p is sufficiently high (where what counts as 'sufficiently high' is determined by some threshold value t ∈ (1/2, 1]). Thus, assuming a Lockean theory of belief, Bayesian conditionalisation defines an alternative theory of qualitative belief revision, where p is in the revised belief set if and only if the agent's posterior credence in p is above the relevant threshold after conditionalising on the new evidence. Call this theory of belief revision 'Lockean revision'. The relationship between Lockean revision and the AGM theory of belief revision was systematically described by Shear and Fitelson (forthcoming).

With regard to belief updating, the most widely accepted answers to the questions of how an agent should revise their qualitative beliefs and numerical credences upon obtaining new information about how the world is changing over time are given by Katsuno and Mendelzon's axiomatic theory of belief update (KM-update) and Lewis's technique of probabilistic imaging, respectively. In this sequel to our study of the relationship between Bayesian revision (viz., Lockean revision) and AGM revision, we investigate the relationship between Bayesian updating (viz., Lockean imaging) and KM updating.

The latest draft can be downloaded: http://fitelson.org/tatbr.pdf
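A minimal sketch of the Lockean-revision recipe described above (editorial; the four-world space and the threshold t = 0.75 are illustrative choices):

```python
# Conditionalize a credence function on the evidence, then believe exactly the
# propositions whose posterior credence clears the Lockean threshold t.
from itertools import chain, combinations

WORLDS = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}   # prior credences
T = 0.75                                                 # Lockean threshold in (1/2, 1]

def conditionalize(prior, evidence):
    z = sum(prior[w] for w in evidence)
    return {w: (prior[w] / z if w in evidence else 0.0) for w in prior}

def lockean_beliefs(credence, threshold):
    """All propositions (sets of worlds) whose credence exceeds the threshold."""
    worlds = list(credence)
    props = chain.from_iterable(combinations(worlds, r) for r in range(len(worlds) + 1))
    return [set(p) for p in props if sum(credence[w] for w in p) > threshold]

posterior = conditionalize(WORLDS, evidence={"w1", "w2", "w3"})
for prop in lockean_beliefs(posterior, T):
    print(sorted(prop))
```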

September 17, 2018

Bill Benter, Avenue Four Analytics

Horse Sense

Abstract

Wagering on horse racing is an unforgiving arena in which the bettor’s probability estimates are tested against betting odds set by public consensus via the pari-mutuel system.

The underlying event, the horse race, is a complex real world phenomenon whose outcome is determined principally by the laws of physics and biology combined with a certain amount of random variation. Overlaid on this physical process is a complex milieu of jockeys, trainers and owners whose changing proclivities can also affect race outcomes. A race can be represented with a state-space model in which the race outcome is a stochastic function of the state parameters. Complicating the situation further, most of the parameters of interest (e.g. the fitness of a particular horse or the skill of a jockey) cannot be observed directly but must be inferred from past race results.

The large takeout (~17%) levied by the racetrack means that a would-be successful bettor needs to identify situations wherein the true probability of a bet winning is at least ~17% higher than that implied by the betting odds. Racetrack betting markets are largely efficient in the sense used to describe financial markets in that the betting odds are highly informative and largely unbiased estimators of the horses’ probabilities of winning. However in the speaker’s experience, probability lies in the eye of the beholder. A practical methodology will be described whereby superior probability estimates can be produced which result in long term betting profits.
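To make the takeout arithmetic concrete, here is an editorial toy calculation (hypothetical numbers) of when a pari-mutuel win bet has positive expected value and what Kelly staking would suggest:

```python
# In a pari-mutuel pool with takeout t, a horse attracting a fraction q of the
# win pool pays roughly (1 - t) / q per $1, so a bet is +EV only when the true
# win probability p satisfies p * (1 - t) / q > 1, i.e. p/q > 1/(1 - t) ~ 1.20.
TAKEOUT = 0.17

def expected_value(p_model: float, q_public: float) -> float:
    """Expected profit per $1 staked, ignoring our bet's effect on the pool."""
    payout = (1 - TAKEOUT) / q_public      # gross return per $1 if the horse wins
    return p_model * payout - 1.0

def kelly_fraction(p_model: float, q_public: float) -> float:
    """Kelly stake as a fraction of bankroll (0 if the bet has no edge)."""
    b = (1 - TAKEOUT) / q_public - 1.0     # net odds received on the bet
    return max((p_model * (b + 1) - 1) / b, 0.0)

# Hypothetical horse: the public has bet 20% of the pool, our model says 28%.
p, q = 0.28, 0.20
print(f"EV per $1: {expected_value(p, q):+.3f};  Kelly fraction: {kelly_fraction(p, q):.3f}")
```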

September 24, 2018

Mario Hubert, Columbia University, Philosophy Department

How Statistical Explanations Rely on Typicality

Abstract

Typicality has been widely used in statistical mechanics to help explain the approach to equilibrium of thermodynamic systems. I aim to show that typicality reasoning applies to many situations beyond statistical mechanics. In particular, I show how regression to the mean relies on typicality and how probabilities arise from typicality.

October 1, 2018

Rohit Parikh, Brooklyn College of CUNY, and CUNY Graduate Center.

The logic of the personal world

Abstract

In his work on the inner world, Jakob von Uexkull describes how a creature like a tick, or a dog, or a child sees the world and how it moves inside its personal world. That world is called by him the umwelt of that creature. We adults also have our umwelts, but unlike a tick or a dog we enrich our umwelt by consulting the umwelts of other people and using science. A tick has its own, meager logic. A dog or a baby has a richer logic, and we adults have a richer logic still. How do these logics work, especially since the first two logics are outside language? Can we still speak of the inferences made by a dog? An insect? Can we use our language to describe their inferences? Uexkull anticipated many of the ideas current in Artificial Intelligence. The example of the Wumpus, popular in the AI literature, is anticipated by Uexkull's example of a tick.

October 8, 2018

Nozer Singpurwalla, City University of Hong Kong, Department of Systems Engineering and Engineering Management, and Department of Management Science

Entropy, Information, and Extropy in the Courtroom and a Hacker's Bedroom

Abstract

We start by motivating how the notion of information arose and how it evolved, via the idealistic scenario of a courtroom and that of a hacker trying to break a computer's password. We then introduce the notion of Shannon entropy as a natural consequence of the basic Fisher-Hartley idea of self-information, and subsequently make the charge that Shannon took a giant leap of faith when he proposed his famous, and well lubricated, formula. A consequence is that Shannon's formula overestimates the inherent uncertainty in a random variable. We also question Shannon's strategy of taking expectations and suggest alternatives to it based on the Kolmogorov-Nagumo functions for the mean of a sequence of numbers. In the sequel, we put forward the case that the only way to justify Shannon's formula is to look at self-information as a utility in a decision-theoretic context. This in turn enables an interpretation of the recently proposed notion of "extropy". We conclude our presentation with the assertion that a complete way to evaluate the efficacy of a predictive distribution (or a mathematical model) is by the tandem use of both entropy and extropy.
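For reference, a minimal sketch of the two quantities compared at the end of the abstract, assuming (an editorial assumption) that the extropy intended is that of Lad, Sanfilippo and Agrò (2015):

```python
import math

def entropy(p):
    """Shannon entropy: -sum p_i log p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def extropy(p):
    """Extropy in the sense of Lad et al.: -sum (1 - p_i) log(1 - p_i)."""
    return -sum((1 - pi) * math.log(1 - pi) for pi in p if pi < 1)

for dist in ([0.5, 0.5], [0.9, 0.1], [0.25, 0.25, 0.25, 0.25]):
    print(f"p = {dist}: entropy = {entropy(dist):.3f}, extropy = {extropy(dist):.3f}")
# For binary distributions entropy and extropy coincide; they diverge for n > 2.
```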

October 15, 2018

Robin Hanson, George Mason University, Department of Economics

Uncommon Priors Require Origin Disputes

Abstract

In a standard Bayesian belief model, the prior is always common knowledge. This prevents such a model from representing agents’ probabilistic beliefs about the origins of their priors. By embedding such a standard model in a larger standard model, however, we can describe such beliefs using "pre-priors". When an agent’s prior and pre-prior are mutually consistent in a particular way, she must believe that her prior would only have been different in situations where relevant event chances were different, but that variations in other agents’ priors are otherwise completely unrelated to which events are how likely. Thus Bayesians who agree enough about the origins of their priors must have the same priors.

The paper can be found here: http://hanson.gmu.edu/prior.pdf

October 22, 2018

Harry Crane, Rutgers University, Department of Statistics

Replication Crisis, Prediction Markets, and the Fundamental Principle of Probability

Abstract

I will discuss how ideas from the foundations of probability can be applied to resolve the current scientific replication crisis. I focus on two specific approaches:

1. Prediction Markets to incentivize more accurate assessments of the scientific community’s confidence that a result will replicate.
2. The Fundamental Principle of Probability to incentivize more accurate assessments of the author’s confidence that their own results will replicate.

I compare and contrast the merits and drawbacks of these two approaches.

The article associated with this talk can be found at https://researchers.one/article/2018-08-16.

Lecture Handout

Robin Hanson's response on his blog

Slides

October 29, 2018

Robin Gong, Rutgers University, Department of Statistics

Modeling uncertainty with sets of probabilities

Abstract

Uncertainty in real life takes many forms. An analyst can hesitate to specify a prior for a Bayesian model, or be ignorant of the mechanism that gave rise to the missing data. Such kinds of uncertainty cannot be faithfully captured by a single probability, but can be by a set of probabilities, and in special cases by a capacity function or a belief function.

In this talk, I motivate sets of probabilities as an attractive modeling strategy, which encodes low-resolution information in both the data and the model with little need to concoct unwarranted assumptions. I present a prior-free belief function model for multinomial data, which delivers posterior inference as a class of random convex polytopes. I discuss challenges that arise with the employment of belief and capacity functions, specifically how the choice of conditioning rule reconciles among a trio of unsettling posterior phenomena: dilation, contraction, and sure loss. These findings underscore the invaluable role of judicious judgment in handling low-resolution probabilistic information.

November 5, 2018

David Bellhouse, University of Western Ontario

The Emergence of Actuarial Science in the Eighteenth Century

Abstract

In 1987, historian of science Lorraine Daston wrote about the impact of mathematics on the nascent insurance industry in England.

"Despite the efforts of mathematicians to apply probability theory and mortality statistics to problems in insurance and annuities in the late seventeenth and early eighteenth centuries, the influence of this mathematical literature on the voluminous trade in annuities and insurance was negligible until the end of the eighteenth century."

This view is a standard one in the history of insurance today. There is, however, a small conundrum attached to this view. Throughout the eighteenth century, several mathematicians were writing books about life annuities, often long ones containing tables requiring many hours of calculation. The natural question is: who were they writing for and why? Themselves? For the pure academic joy of the exercise? The answer that I will put forward is that mathematicians were not writing for the insurance industry, but about something else – life contingent contracts related to property. These included valuing leases whose terms were based on the lives of three people, marriage settlements and reversions on estates. As in many other situations, it took a crisis to change the insurance industry. This happened in the 1770s when many newly-founded companies offered pension-type products that were grossly underfunded. This was pointed out at length in layman’s terms by Richard Price, the first Bayesian. The crisis reached the floor of the House of Commons before the mathematicians won the day.

November 12, 2018

Eddy Keming Chen, Rutgers University, Philosophy Department

On the Fundamental Probabilities in Physics

Abstract

There are two sources of randomness in our fundamental physical theories: quantum mechanical probabilities and statistical mechanical probabilities. The former are crucial for understanding quantum effects such as the interference patterns. The latter are important for understanding thermodynamic phenomena such as the arrow of time. It is standard to postulate the two kinds of probabilities independently and to take both of them seriously. In this talk, I will introduce a new framework for thinking about quantum mechanics in a time-asymmetric universe, for which the two kinds of probabilities can be reduced to just one. We will then consider what that means for the Mentaculus Vision (Loewer 2016) and the phenomena that I call “nomic vagueness.” Time permitting, we will also briefly compare and contrast my theory with some other proposals in the literature by Albert (2000), Wallace (2012), and Wallace (2016).

We will not assume detailed knowledge of physics, and we will introduce the necessary concepts in the first half of the talk. For optional reading, please see: https://arxiv.org/abs/1712.01666

November 19, 2018

Doron Zeilberger, Rutgers University, Mathematics Department

An Ultra-Finitistic Foundation of Probability

Abstract

Probability theory started on the right foot with Cardano, Fermat and Pascal when it restricted itself to finite sample spaces, and was reduced to counting finite sets. Then it got ruined by attempts to come to grips with that fictional (and completely superfluous) 'notion' they called 'infinity'.

A lot of probability theory can be done by keeping everything finite, and whatever can't be done that way, is not worth doing. We live in a finite world, and any talk of 'infinite' sample spaces is not even wrong, it is utterly meaningless. The only change needed, when talking about an 'infinite' sequence of sample spaces, say of n coin tosses, {H,T}^n, for 'any' n, tacitly implying that you have an 'infinite' supply of such n, is to replace it by the phrase 'symbolic n'.

This new approach is inspired by the philosophy and ideology behind symbolic computation. Symbolic computation can also redo, ab initio, without any human help, large parts of classical probability theory.

November 26, 2018

Arthur Van Camp, Ghent University, Department of Electronics and Information Systems

Choice functions as a tool to model uncertainty

Abstract

Choice functions constitute a very general and simple mathematical framework for modelling choice under uncertainty. In particular, they are able to represent the set-valued choices that typically arise from applying decision rules to imprecise-probabilistic uncertainty models. Choice functions can be given a clear behavioural interpretation in terms of attitudes towards gambling.

I will introduce choice functions as a tool to model uncertainty, and connect them with sets of desirable gambles, a very popular but less general imprecise-probabilistic uncertainty model. Once this connection is in place, I will focus on two important devices for both models. First, I will discuss performing conservative inferences with both models. Second, I will discuss how both models can cope with assessments of symmetry and indifference.

https://www.youtube.com/watch?v=k0h_1qV2DXw

December 3, 2018

Deborah G. Mayo, Virginia Tech University, Philosophy Department

Statistical Inference as Severe Testing (How it Gets You Beyond the Statistics Wars)

Abstract

High-profile failures of replication in the social and biological sciences underwrite a minimal requirement of evidence: if little or nothing has been done to rule out flaws in inferring a claim, then it has not passed a severe test. A claim is severely tested to the extent it has been subjected to and passes a test that probably would have found flaws, were they present. This minimal severe-testing requirement leads to reformulating significance tests (and related methods) to avoid familiar criticisms and abuses. Viewing statistical inference as severe testing, whether or not you accept it, offers a key to understanding and getting beyond the statistics wars.

Bio: Deborah G. Mayo is Professor Emerita in the Department of Philosophy at Virginia Tech and is a visiting professor at the London School of Economics and Political Science, Centre for the Philosophy of Natural and Social Science. She is the author of Error and the Growth of Experimental Knowledge (Chicago, 1996), which won the 1998 Lakatos Prize awarded to the most outstanding contribution to the philosophy of science during the previous six years. She co-edited Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science (CUP, 2010) with Aris Spanos, and has published widely in the philosophy of science, statistics, and experimental inference. She will co-direct a summer seminar on Philosophy of Statistics, intended for philosophy and social science faculty and post docs, July 28-August, 2019.

Link to the proofs of the first Tour of Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (Mayo 2018, CUP)

December 10, 2018

No Seminar

Abstract

There will be no seminar today.

January 28, 2019

E. Glen Weyl, Microsoft Research

Political Economy for Increasing Returns

Abstract

The very existence of civilization implies that typically many people must together be able to produce more value than the sum of what they could each produce independently. Yet neoliberal capitalism is wildly inefficient in the presence of such “increasing returns” and standard democracy is far too rigid to address this failure. Drawing on novel, approximately optimal market mechanisms for increasing returns, I will sketch what a political economy adapted to increasing returns would look like. Nearly all consumption would be in the form of public goods and most private property would belong to a range of communities. A rich plurality of emergent public good-providing communities would replace democratic states and monopolistic corporations. The distinction between economics and political science, and between methodological individualism and communitarianism, would dissolve. This vision suggests many intellectual directions and a social ideology I call Liberal Radicalism.

February 11, 2019

Ole Peters, London Mathematical Laboratory and Santa Fe Institute

The ergodicity problem in economics

Abstract

The ergodicity problem queries the equality or inequality of time averages and expectation values. I will trace its curious history, beginning with the origins of formal probability theory in the context of gambling and economic problems in the 17th century. This is long before ergodicity was a word or a known concept, which led to an implicit assumption of ergodicity in the foundations of economic theory. 200 years later, when randomness entered physics, the ergodicity question was made explicit. Over the past decade I have asked what happens to foundational problems in economic theory if we export what is known about the ergodicity problem in physics and mathematics back to economics. Many problems can be resolved. Following an overview of our theoretical and conceptual progress, I will report on a recent experiment that strongly supports our view that human economic behavior is better described as optimizing time-average growth rates of wealth than as optimizing expectation values of wealth or utility of wealth.
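
A standard toy calculation behind the ergodicity problem (my own sketch, not Peters's experiment; the payoff numbers are the usual textbook choice): wealth is multiplied by 1.5 on heads and 0.6 on tails, so the expected value grows by 5% per round while the time-average growth rate, 0.5 ln 1.5 + 0.5 ln 0.6, is negative and a typical single trajectory decays.

    import math, random

    random.seed(1)
    T, N = 100, 10_000            # rounds per trajectory, number of trajectories
    final = []
    for _ in range(N):
        w = 1.0
        for _ in range(T):
            w *= 1.5 if random.random() < 0.5 else 0.6
        final.append(w)

    print("expected final wealth (theory):", 1.05 ** T)               # about 131.5
    print("median final wealth (simulated):", sorted(final)[N // 2])  # close to zero
    print("fraction of trajectories that lost money:", sum(w < 1.0 for w in final) / N)
    print("time-average growth rate per round:", 0.5 * math.log(1.5) + 0.5 * math.log(0.6))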

Speaker Bio:

Ole Peters is a Fellow at the London Mathematical Laboratory and External Professor at the Santa Fe Institute. He works on different conceptualizations of randomness in the context of economics. His thesis is that the mathematical techniques adopted by economics in the 17th and 18th centuries are at the heart of many problems besetting the modern theory. Using a view of randomness developed largely in the 20th century he has proposed an alternative solution to the discipline-defining problem of evaluating risky propositions. This implies solutions to the 300-year-old St. Petersburg paradox, the leverage optimization problem, the equity premium puzzle, and the insurance puzzle. It leads to deep insights into the origin of cooperation and the dynamics of economic inequality. He maintains a popular blog at https://ergodicityeconomics.com/ that also hosts the ergodicity economics lecture notes.

February 18, 2019

Jennifer Carr, University of California, San Diego

A Modesty Proposal

Abstract

Accuracy-first epistemology aims to show that the norms of epistemic rationality, including probabilism and conditionalization, can be derived from the effective pursuit of accuracy. This paper explores the prospects within accuracy-first epistemology for vindicating “modesty”: the thesis that ideal rationality permits uncertainty about one’s own rationality. I give prima facie arguments against accuracy-first epistemology’s ability to accommodate three forms of modesty: uncertainty about what priors are rational, uncertainty about whether one’s update policy is rational, and uncertainty about what one’s evidence is. I argue that the problem stems from the representation of epistemic decision problems. The appropriate representation of decision problems, and corresponding decision rules, for (diachronic) update policies should be a generalization of decision problems and decision rules used in the assessment of (synchronic) coherence.

March 4, 2019

Glenn Shafer, Rutgers, Business School

The Language of Betting as a Strategy for Communicating Statistical Results

Abstract

Our vocabulary for statistical testing is too complicated. Even statistics teachers and scientists who use statistics answer questions about p-values incorrectly. We can communicate statistical results better using the language of betting.

Betting provides
• a simple frequentist interpretation of likelihood ratios, significance levels, and p-values,
• a framework for multiple testing and meta-analysis.

Complex problems require carefully defined betting games. See Game-Theoretic Foundations for Probability and Finance (Wiley, May 2019).

The betting language also helps us avoid the fantasy of multiple unseen worlds when interpreting probabilistic models in science.

March 11, 2019

Dan Bouk, Colgate University, History Department

Making Statistical Individuals at the Turn of the Twentieth Century: How Insurance Corporations Numbered Americans' Days and Valued Their Lives

Abstract

Statistics describe big groups. Probability works best for large numbers. For much of the history of both fields, statistics and probability had little to say about particular individuals. Today, in contrast, Big Data enthusiasts thrill at the promise that more data and more sophisticated methods can lead to better predictions of individuals' futures. This talk looks at one important time and place where the concept of the statistical individual developed: the life insurance industry in the late-nineteenth and early-twentieth century. This talk focuses on the work of actuaries, doctors, statisticians, and others in and around the U.S. life insurance industry who developed techniques to value lives, describe bodies as "risks," and justify existing forms of racism, inequality, and discrimination.

March 25, 2019

Harry Crane, Rutgers University, Statistics Department

A Formal Model for Intuitive Probabilistic Reasoning

Abstract

I propose a formal framework for intuitive probabilistic reasoning (IPR). The proposed system aims to capture the informal, albeit logical, process by which individuals justify beliefs about uncertain claims in legal argument, mathematical conjecture, scientific theorizing, and common sense reasoning. The philosophical grounding and mathematical formalism of IPR take root in Brouwer's mathematical intuitionism, as formalized in intuitionistic Martin-Lof type theory (MLTT) and homotopy type theory (HoTT). Formally, IPR is distinct from more conventional treatments of subjective belief, such as Bayesianism and Dempster-Shafer theory. Conceptually, the approaches share some common motivations, and can be viewed as complementary.

Assuming no prior knowledge of intuitionistic logic, MLTT, or HoTT, I discuss the conceptual motivations for this new system, explain what it captures that the Bayesian approach does not, and outline some intuitive consequences that arise as theorems. Time permitting, I also discuss a formal connection between IPR and more traditional theories of decision under uncertainty, in particular Bayesian decision theory, Dempster-Shafer theory, and imprecise probability.

References:
H. Crane. (2018). The Logic of Probability and Conjecture. Researchers.One. https://www.researchers.one/article/2018-08-5.

H. Crane. (2019). Imprecise probabilities as a semantics for intuitive probabilistic reasoning. Researchers.One. https://www.researchers.one/article/2018-08-8.

H. Crane and I. Wilhelm. (2019). The Logic of Typicality. In Statistical Mechanics and Scientific Explanation: Determinism, Indeterminism and Laws of Nature (V. Allori, ed.). Available at Researchers.One.

April 8, 2019

Simon DeDeo, Carnegie Mellon University and the Santa Fe Institute

Predictive Brains, Dark Rooms, and the Origins of Life

Abstract

A coincidence between thermodynamic quantities such as entropy and free energy, and epistemic quantities such as compression and coding efficiency, has led a number of physicists to wonder if apparently material features of our world are simply states of knowledge. In 2005, Karl Friston, a neuroscientist with physicist sympathies, turned this ontological claim up to eleven with the introduction of the predictive brain theory. This is a radical approach to cognitive science that understands organisms, in a perception-action loop with the environment, as devoted to minimizing prediction error. This error is quantified as the free energy between an organism's sensory states and its environment. Because free energy is both an epistemic and a physical quantity, it may be possible to derive not just cognition, but life itself, from purely epistemic considerations and without the introduction of an additional fitness or utility function. A central difficulty for this theory has been the Dark Room Problem: such minimizers, it seems, would prefer actions that fuzz out sensory data and avoid opportunities to revise their theories. This leads to a paradox, because organisms cannot become better predictors if they do not make the mistakes that help them to learn. I present recent results showing that, by contrast, predictive brains turn out to be curious configurations that naturally explore the world and leave dark rooms, because the physical drive to minimize free energy makes them take decisive, theory-simplifying actions. I argue that many fields that rely on a utility function for their predictive power may be able to do without it: free energy minimizers might not just be successful organisms or good reasoners, for example, but also good at running a business.

April 15, 2019

Dustin Lazarovici, Université de Lausanne, Section de Philosophie

Arrows of Time without a Past Hypothesis

Abstract

The talk will discuss recent attempts by Sean Carroll and Julian Barbour to account for the thermodynamic arrow of time in our universe without a Past Hypothesis, i.e., the assumption of a special (low-entropy) initial state. In this context, I will also propose the definition of a Boltzmann entropy for a classical gravitating system and argue that it may provide a relevant example of a "Carroll universe".

Preprint.

April 22, 2019

Anya Farennikova, CUNY

Probabilistic Perception

Abstract

Scientists and philosophers have been debating whether perception is probabilistic. However, the debate is conflating different senses in which perception is probabilistic. In this talk, I'm going to sort those senses out and review new (and unexpected) evidence for probabilistic perception from split-vision experiments and anomalous conscious states. This evidence sheds new light on the concept of probabilistic perception and helps answer the challenge of how perceptual experience can have probabilistic phenomenology.

April 29, 2019

John Wu, Rutgers University, Physics Department

Deep learning in astrophysics: galaxy scaling relations

Abstract

I will discuss some applications of deep convolutional neural networks (convnets) and present a high-level overview of how to select, optimize, and interpret convnet models. We have trained a convnet to recognize a galaxy's chemical abundance using only an image of the galaxy; the traditional approach, using spectroscopy, requires at least an order of magnitude more telescope time and achieves a level of accuracy comparable to our method. We discover that the convnet can recover, with zero additional scatter, an empirically known scaling relation that connects galaxies' chemical enrichment and star formation histories, implying that there exists (and that the convnet has learned) a novel representation of the chemical abundance that is strongly linked to a galaxy's optical-wavelength morphology.
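
For readers unfamiliar with the kind of model being described, here is a schematic convnet regression in PyTorch (a minimal sketch of my own, not the speaker's architecture; the class name, layer sizes, and dummy data are illustrative): it maps an image tensor to a single scalar standing in for a chemical-abundance estimate.

    import torch
    import torch.nn as nn

    class AbundanceNet(nn.Module):
        """A small convnet mapping a 3-channel galaxy image to one scalar."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

        def forward(self, x):
            return self.head(self.features(x))

    model = AbundanceNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    images = torch.randn(8, 3, 64, 64)   # dummy batch of galaxy cutouts
    abundances = torch.randn(8, 1)       # dummy "true" abundances

    optimizer.zero_grad()
    loss = loss_fn(model(images), abundances)   # one training step on the dummy batch
    loss.backward()
    optimizer.step()
    print(float(loss))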

May 6, 2019

Kevin Dorst, MIT, Philosophy Department

Evidence of Evidence: A Higher-Order Approach

Abstract

"Evidence of evidence is evidence" (EEE) is a slogan that has stirred much recent debate in epistemology. The intuitive idea seems straightforward: if you have reason to think that there is evidence supporting p, then---since what's supported by evidence is likely to be true---you thereby have (some) reason to think that p. However, formulating precise, nontrivial versions of this thesis has proven difficult. In this paper we propose to do so using a higher-order approach---a framework that lets us model (higher-order) opinions about what opinions you should have, i.e. opinions about what opinions your evidence warrants. This framework allows us to formulate propositions about your evidence as objects of uncertainty, and therefore to formulate principles connecting evidence about evidence for p to evidence about p. Drawing on a general theory of rational higher-order uncertainty developed elsewhere, we examine which versions of EEE principles are tenable---showing that although many are not, several strong ones are. If these details are correct, then it has (broadly conciliationist) implications for the peer disagreement debate that started the EEE discussion. And regardless of the details, we hope to show that a higher-order approach is fruitful for formulating and testing precise versions of the "evidence of evidence is evidence" slogan.

September 11, 2017

Nassim Nicholas Taleb, NYU, Tandon School of Engineering

Central problems with probability

Abstract

1) Confusion at the level of the payoff functions (convexity matters), 2) Confusion concerning the Law of Large Numbers, 3) Misuse of the notion of probability. http://www.fooledbyrandomness.com/FatTails.html for more details and papers.

September 18, 2017

Eddy Chen, Rutgers, Department of Philosophy

Our Knowledge of the Past: Some Puzzles about Time’s Arrow and Self-Locating Probabilities

Abstract

Why is there an apparent arrow of time? The standard answer, due to Ludwig Boltzmann and developed by the contemporary Boltzmannians, attributes its origin to a special boundary condition on the physical space-time, now known as the “Past Hypothesis.” In essence, it says that the “initial” state of the universe was in a very orderly (low-entropy) state. In this talk, I would like to consider an alternative theory, motivated by the (in)famous Principle of Indifference. I will argue that the two theories, at least in some cosmological models, are in fact empirically on a par when we consider their de se (self-locating) content about where we are in time. As we shall see, our comparative study leads to an unexpected skeptical conclusion about our knowledge of the past. We will then think about what this means for the general issue in philosophy of science about theory choice and pragmatic considerations.

September 25, 2017

Nina Emery, Mount Holyoke College, Department of Philosophy

The Explanatory Role Argument for Deterministic Chance

Abstract

One common reason given for thinking that there are non-trivial objective probabilities—or chances—in worlds where the fundamental laws are deterministic is that such probabilities play an important explanatory role. I examine this argument in detail and show that insofar as it is successful it places significant constraints on the further metaphysical theory that we give of deterministic chance.

October 2, 2017

Jacob Feldman, Rutgers University, Department of Psychology and Center for Cognitive Science

Subjective probability in Bayesian cognitive science

Abstract

The last twenty years have seen an enormous rise in Bayesian models of cognitive phenomena, which posit that human mental function is approximately rational or optimal in nature. Contemporary theorizing has begun to settle on a “Common Wisdom”, in which human perception and cognition are seen as approximately optimal relative to the objective probabilities in the real world (“the statistics of the environment,” as it is often put). However, traditional philosophy of probability in Bayesian theory generally assumes an epistemic or subjectivist conception of probability, which holds that probabilities are characteristics of observers’ states of knowledge and do not have objective values. This implies, contrary to the Common Wisdom, that there is actually no such thing as an objective, observer-independent “statistics of the environment.” In this talk I will discuss why exactly Bayesians have historically favored the subjectivist attitude towards probability, and why cognitive science should as well, highlighting some of the inconsistencies in current theoretical debate in cognitive science. My aim is partly to criticize the current state of the field, but mostly to point to what I see as a more productive way in which a subjective conception of probability can inform models of cognition.

October 9, 2017

Alison Fernandes, University of Warwick

Do Humean Reductions of Chance Justify the Principal Principle?

Abstract

Objective chances are used to guide credences and in scientific explanations. Knowing there’s a high chance that the smoke in the room disperses, you can both infer that it will, and explain why it does. Defenders of ‘Best Systems’ and other Humean accounts (Lewis, Loewer, Hoefer) claim to be uniquely well placed to account for both features. These accounts reduce chance to non-modal features of reality. Chances are therefore objective and suitable for use in scientific explanations. Because Humean accounts reduce chance to patterns in actual events, they limit the possible divergence between relative frequencies and chances. Agents who align their credences with known chances are then guaranteed to do reasonably well when predicting events. So it seems Humean accounts can justify principles linking chance to credence such as Lewis’ Principal Principle. But there’s a problem. When used in scientific explanations, Humean chances and relative frequencies must be allowed to diverge to arbitrarily high degrees. So if we consider the scientific question of whether agents who align their credences to the (actual) Humean chances will do well, it is merely probable they will. The scientific use of chance undercuts the advantage Humeans claim over their rivals in showing how chance and credence principles are justified. By seeing how, we clarify the role of chance−credence principles in accounts of chance.

October 16, 2017

Christopher Phillips, Carnegie Mellon, History Department

Number the Stars: Baseball Statistics, Scouts, and the History of Data Analysis

Abstract

Baseball has seemingly become a showcase for the triumph of statistical and probabilistic analysis over subjective, biased, traditional knowledge--the expertise of scorers replacing that of scouts. Little is known, however, about the way scorers and scouts actually make assessments of value. Over the twentieth century, scouts, no less than scorers, had to express their judgments numerically--the practices of scorers and scouts are far more similar than different. Through the history of judgments of value in baseball, we can come to a deeper understanding about the nature of expertise and numerical objectivity, as well as the rise of data analysis more broadly.

October 23, 2017

Herbert Weisberg, Causalytics, LLC and Correlation Research, Inc.

Probability, Paradox, Protocol, and Personalized Medicine

Abstract

In Willful Ignorance: The Mismeasure of Probability (Wiley, 2014) I traced the evolution of (additive) probability from its humble origins in games of chance to its current dominance in scientific and business activity. My main thesis was that mathematical probability is nothing more nor less than a way to quantify uncertainty by drawing an analogy with a “metaphorical lottery.” In some situations, this hypothetical lottery can be more complex than simply drawing a ball from an urn. In that case, the resulting probabilities are based on a protocol, essentially a set of procedures that define precisely how such a lottery is being performed. Absent an explicit protocol, there may be considerable ambiguity and confusion about what, if anything, the probability statement really means. I believe that many philosophical debates about foundational issues in statistics could be illuminated by thoughtful elucidation of implicit protocols. Attention to such protocols is increasingly important in the context of Big Data problems. I will conclude with a rather surprising application of these ideas to the analysis of individualized causal effects.

October 30, 2017

Daniel Kahneman, Princeton University, Psychology Department

A Conversation with Daniel Kahneman

Abstract

Glenn Shafer interviews Daniel Kahneman.

November 6, 2017

Jessica John Collins, Columbia University, Philosophy Department

Imaging and Instability

Abstract

Causal decision theory (CDT) has serious difficulty handling asymmetric instability problems (Richter 1984, Weirich 1985, Egan 2007). I explore the idea that the key to solving these problems is Isaac Levi’s thesis that "deliberation crowds out prediction" i.e. that agents cannot assign determinate credences to their currently available options. I defend the view against recent arguments of Alan Hájek’s and sketch an imaging-based version of CDT with indeterminate credences and expected values. I suggest that imaging might be thought of as the hypothetical revision method appropriate to making true rather than learning true and argue that CDT should be seen not as a rival to orthodox decision theory, but simply as a more permissive account of the norms of rationality.

November 13, 2017

Glenn Shafer, Rutgers Business School

How speculation can explain the equity premium

Abstract

When measured over decades in countries that have been relatively stable, returns from stocks have been substantially better than returns from bonds. This is often attributed to investors' risk aversion.

In the theory of finance based on game-theoretic probability, in contrast, time-rescaled Brownian motion and the equity premium both emerge from speculation. This explanation accounts for the magnitude of the premium better than risk aversion.

See Working Paper 47 at www.probabilityandfinance.com. Direct link is http://www.probabilityandfinance.com/articles/47.pdf.

November 20, 2017

Jenann Ismael, University of Arizona, Philosophy Department

On Chance (or, Why I am only a half-Humean)

Abstract

The main divide in the philosophical discussion of chances is between Humean and anti-Humean views. Humeans think that statements about chance can be reduced to statements about patterns in the manifold of actual fact (the ‘Humean Mosaic’). Non-Humeans deny that such a reduction is possible. If one goes back and looks at Lewis’ early papers on chance, there are actually two separable threads in the discussion: one that treats chances as recommended credences and one that identifies chances with patterns in the manifold of categorical fact. I will defend a half-Humean view that retains the first thread and rejects the second.

The suggestion will be that the Humean view can be thought of as presenting the patterns in the Humean mosaic as the basis for inductive judgments built into the content of probabilistic belief. This could be offered as a template for accounts of laws, capacities, dispositions, and causes, i.e., all of the modal outputs of Best System style theorizing. In each case, the suggestion will be that these are derivative quantities that encode inductive judgments based on patterns in the manifold of fact. They extract projectible regularities from the pattern of fact and give us belief-forming and decision-making policies that have a general, pragmatic justification.

November 27, 2017

Harry Crane, Rutgers, Department of Statistics and Biostatistics

Why "redefining statistical significance" will make the reproducibility crisis worse

Abstract

A recent proposal to "redefine statistical significance" (Benjamin et al., Nature Human Behaviour, 2017) claims that false positive rates "would immediately improve" by factors greater than two and fall as low as 5%, and that replication rates would double, simply by changing the conventional cutoff for 'statistical significance' from P<0.05 to P<0.005. I will survey several criticisms of this proposal and also analyze the veracity of these major claims, focusing especially on how Benjamin et al. neglect the effects of P-hacking in assessing the impact of their proposal. My analysis shows that once P-hacking is accounted for, the perceived benefits of the lower threshold all but disappear, prompting two main conclusions:

(i) The claimed improvements to false positive rate and replication rate in Benjamin et al. (2017) are exaggerated and misleading.

(ii) There are plausible scenarios under which the lower cutoff will make the replication crisis worse.

My full analysis can be downloaded here.
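
As a generic illustration of the P-hacking mechanism at issue (not a reproduction of Crane's analysis; the number of hidden analyses k is an arbitrary choice), the following simulation models a researcher who, under a true null, quietly tries several analyses and reports only the smallest p-value: the realized false positive rate then sits far above the nominal level at either threshold.

    import random

    random.seed(2)

    def hacked_p(k):
        # Under a true null each analysis gives a p-value uniform on (0, 1);
        # reporting the minimum of k of them is a simple model of P-hacking.
        return min(random.random() for _ in range(k))

    trials, k = 100_000, 10
    ps = [hacked_p(k) for _ in range(trials)]
    for alpha in (0.05, 0.005):
        rate = sum(p < alpha for p in ps) / trials
        print("false positive rate at", alpha, "with", k, "hidden analyses:", round(rate, 3))
    # Theory: 1 - (1 - alpha)**k, roughly 0.40 and 0.05, far above the nominal levels.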

December 4, 2017

Elke Weber, Princeton University, Psychology and Public Affairs

Query Theory: A Process Account of Preference Construction

Abstract

Psychologists and behavioral economists agree that many of our preferences are constructed, rather than innate or pre-computed and stored. Little research, however, has explored the implications that established facts about human attention and memory have when people marshal evidence for their decisions. This talk provides an introduction to Query Theory, a psychological process model of preference construction that explains a broad range of phenomena in individual choice with important personal and social consequences, including our reluctance to change, also known as the status quo bias, and our excessive impatience when asked to delay consumption.

January 22, 2018

Charles Randy Gallistel, Rutgers University, Department of Psychology

The Perception of Probability

Abstract

Human and non-human animals estimate the probabilities of events spread out in time. They do so on the basis of a record in memory of the sequence of events, not by the event-by-event updating of the estimate. The current estimate of the probability is the byproduct of the construction of a hierarchical stochastic model for the event sequence. The model enables efficient encoding of the sequence (minimizing memory demands) and it enables nearly optimal prediction (The Minimum Description Length Principle). The estimates are generally close to those of an ideal observer over the full range of probabilities. Changes are quickly detected. Human subjects, at least, have second thoughts about their most recently detected change, revising their opinion in the light of subsequent data, thereby retroactively correcting for the effects of garden path sequences on their model. Their detection of changes is affected by their estimate of the probability of such changes, as it should be. Thus, a sophisticated mechanism for the perception of probability joins the mechanisms for the perception of other abstractions, such as duration, distance, direction, and numerosity, as a foundational and evolutionarily ancient brain mechanism.

January 29, 2018

Andrew Gelman, Columbia University, Department of Statistics and Political Science

Bayes, statistics, and reproducibility

Abstract

The two central ideas in the foundations of statistics--Bayesian inference and frequentist evaluation--both are defined in terms of replications. For a Bayesian, the replication comes in the prior distribution, which represents possible parameter values under the set of problems to which a given model might be applied; for a frequentist, the replication comes in the reference set or sampling distribution of possible data that could be seen if the data collection process were repeated. Many serious problems with statistics in practice arise from Bayesian inference that is not Bayesian enough, or frequentist evaluation that is not frequentist enough, in both cases using replication distributions that do not make scientific sense or do not reflect the actual procedures being performed on the data. We consider the implications for the replication crisis in science and discuss how scientists can do better, both in data collection and in learning from the data they have.

February 5, 2018

Volodya Vovk, University of London

Game-theoretic probability for mathematical finance

Abstract

My plan is to give an overview of recent work in continuous-time game-theoretic probability and related areas of mainstream mathematical finance, including stochastic portfolio theory and the capital asset pricing model. Game-theoretic probability does not postulate a stochastic model, but various properties of stochasticity emerge naturally in various games, including the game of trading in financial markets. I will argue that game-theoretic probability provides an answer to the question “where do probabilities come from?” in the context of idealized financial markets with continuous price paths. This talk is obviously related to the talk given by Glenn Shafer in the fall about the equity premium, but it will be completely self-contained and will concentrate on different topics.

February 12, 2018

Ioannis Karatzas, Columbia University

Mathematical Aspects of Arbitrage

Abstract

We introduce models for financial markets and, in their context, the notions of "portfolio rules" and of "arbitrage". The normative assumption of "absence of arbitrage" is central in the modern theories of mathematical economics and finance. We relate it to probabilistic concepts such as "fair game", "martingale", "coherence" in the sense of deFinetti, and "equivalent martingale measure".

We also survey recent work in the context of the Stochastic Portfolio Theory pioneered by E.R. Fernholz. This theory provides descriptive conditions under which arbitrage, or "outperformance", opportunities do exist, then constructs simple portfolios that implement them. We also explain how, even in the presence of such arbitrage, most of the standard mathematical theory of finance still functions, though in somewhat modified form.

February 19, 2018

Jean Baccelli, Munich Center for Mathematical Philosophy

Act-State Dependence, Moral Hazard, and State-Dependent Utility

Abstract

I will present ongoing work on the behavioral identification of beliefs in Savage-style decision theory. I start by distinguishing between two kinds of so-called act-state dependence. One has to do with moral hazard, i.e., the fact that the decision-maker can influence the resolution of the uncertainty to which she is exposed. The other has to do with non-expected utility, i.e., the fact that the decision-maker does not, in the face of uncertainty, behave like an expected utility maximizer. Second, I introduce the problem of state-dependent utility, i.e., the challenges posed by state-dependent utility to the behavioral identification of beliefs. I illustrate this problem in the traditional case of expected utility, and I distinguish between two aspects of the problem—the problem of total and partial unidentification, respectively. Third, equipped with the previous two distinctions, I examine two views that are well established in the literature. The first view is that expected utility and non-expected utility are equally exposed to the problem of state-dependent utility. The second view is that any choice-based solution to this problem must involve moral hazard. I show that these two views must be rejected at once. Non-expected utility is less exposed than expected utility to the problem of state-dependent utility, and (as I explain: relatedly) there are choice-based solutions to this problem that do not involve moral hazard. Building on this conclusion, I re-assess the philosophical and methodological significance of the problem of state-dependent utility.

February 26, 2018

Isaac Wilhelm, Rutgers University, Department of Philosophy

Typical: A Theory of Typicality and Typicality Explanation

Abstract

Typicality is routinely invoked in everyday contexts: bobcats are typically four-legged; birds can typically fly; people are typically less than seven feet tall. Typicality is invoked in scientific contexts as well: typical gases expand; typical quantum systems exhibit probabilistic behavior. And typicality facts like these---about bobcats, birds, and gases---back many explanations, both quotidian and scientific. But what is it for something to be typical? And how do typicality facts explain? In this talk, I propose a general theory of typicality. I analyze the notions of typical sets, typical properties, and typical objects. I provide a formalism for typicality explanations, drawing on analogies with probabilistic explanations. Along the way, I put the analyses and the formalism to work: I show how typicality can be used to explain a variety of phenomena, from everyday phenomena to the statistical mechanical behavior of gases.

March 5, 2018

David Papineau, Kings College London and City University of New York

Correlations, Causes, and Actions

Abstract

I shall examine the currently popular ‘interventionist’ approach to causation, and show that, contrary to its billing, it does not explain causation in terms of the possibility of action, but solely in terms of objective population correlations.

March 19, 2018

Mike Titelbaum, University of Wisconsin, Philosophy Department

Ranged Credence, Dilation, and Three Features of Evidence

Abstract

The philosophical literature has recently developed a fondness for working not just with numerical credences, but with numerical ranges assigned to propositions. I will discuss why a ranged model offers useful flexibility in representing agents' attitudes and appropriate responses to evidence. Then I will discuss why complaints based on "dilation" effects—especially recent puzzle cases from White (2010) and Sturgeon (2010)—do not present serious problems for the ranged credence approach.

March 26, 2018

Nick DiBella, Stanford University, Philosophy Department

Qualitative Probability and Infinitesimal Probability

Abstract

Infinitesimal probability has long occupied a prominent niche in the philosophy of probability. It has been employed for such purposes as defending the principle of regularity, making sense of rational belief update upon learning evidence of classical probability 0, modeling fair infinite lotteries, and applying decision theory in infinitary contexts. In this talk, I argue that many of the philosophical purposes infinitesimal probability has been enlisted to serve can be served more simply and perspicuously by appealing instead to qualitative probability, that is, the binary relation of one event's being at least as probable as another event. I also discuss results showing that qualitative probability has representational power comparable to (if not greater than) that of infinitesimal probability.

April 2, 2018

Ryan Martin, North Carolina State University

Probability dilution, false confidence, and non-additive beliefs

Abstract

In the context of statistical inference, data is used to construct degrees of belief about the quantity of interest. If the beliefs assigned to certain hypotheses tend to be large, not because the data provides supporting evidence, but because of some other structural deficiency, then inferences drawn would be questionable. Motivated by the paradoxical probability dilution phenomenon arising in satellite collision analysis, I will introduce a notion of false confidence and show that all additive belief functions have the aforementioned structural deficiency. Therefore, in order to avoid false confidence, a certain class of non-additive belief functions are required, and I will describe these functions and how to construct them.
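
A toy reconstruction of the satellite-conjunction flavor of probability dilution (my own sketch, not Martin's analysis; the collision radius and noise levels are arbitrary): with the estimated miss distance held fixed at zero, the computed collision probability shrinks as the measurement noise grows, so worse data appears to give more confidence that there is no collision.

    from scipy.stats import norm

    R = 1.0          # collision radius (arbitrary units)
    estimate = 0.0   # estimated miss distance: dead centre
    for sigma in (0.5, 1.0, 5.0, 50.0):
        p_collision = (norm.cdf(R, loc=estimate, scale=sigma)
                       - norm.cdf(-R, loc=estimate, scale=sigma))
        print("measurement sd", sigma, "-> collision probability", round(p_collision, 4))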

April 9, 2018

Alex Meehan and Snow Zhang, Princeton University

Chance and independence: do we need a revolution?

Abstract

What does it mean for two chancy events A and B to be independent? According to the standard analysis, A and B are independent just in case Ch(A and B)=Ch(A)Ch(B). However, this analysis runs into a problem: it implies that a chance-zero event is independent of itself. To get around this issue, Fitelson and Hajek (2017) have recently proposed an alternative analysis: Ch(A)=Ch(A|B). Going one step further, they argue that Kolmogorov's formal framework, as a whole, can't do justice to this new analysis. In fact, they call for a "revolution" in which we "bring to an end the hegemony of Kolmogorov's axiomatization".
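
To spell the worry out in one line: if Ch(A) = 0, the product criterion is satisfied trivially, since

    Ch(A and A) = Ch(A) = 0 = 0 · 0 = Ch(A) · Ch(A),

so the standard analysis counts A as independent of itself, even though whether A obtains completely settles whether A obtains.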

We begin by motivating Fitelson and Hajek's initial worry about independence via examples from scientific practice, in which independence judgments are made concerning chance-zero events. Then, we turn to defend Kolmogorov from Fitelson and Hajek's stronger claim. We argue that, at least for chances, there is a motivated extension of Kolmogorov's framework which can accommodate their analysis, and which also does a decent job of systematizing what the scientists are doing. Thus, the call for a "revolution" may be premature.

April 16, 2018

Sean Carroll, California Institute of Technology, Physics Department

Locating Yourself in a Large Universe

Abstract

Modern physics frequently envisions scenarios in which the universe is very large indeed: large enough that any allowed local situation is likely to exist more than once, perhaps an infinite number of times. Multiple copies of you might exist elsewhere in space, in time, or on other branches of the wave function. I will argue for a unified strategy for dealing with self-locating uncertainty that recovers the Born Rule of quantum mechanics in ordinary situations, and suggests a cosmological measure in a multiverse. The approach is fundamentally Bayesian, treating probability talk as arising from credences in conditions of uncertainty. Such an approach doesn't work in cosmologies dominated by random fluctuations (Boltzmann Brains), so I will argue in favor of excluding such models on the basis of cognitive instability.

April 23, 2018

Kenny Easwaran, Texas A & M

Countable additivity - and beyond?

Abstract

While countable additivity is a requirement of the orthodox mathematical theory of probability, some theorists (notably Bruno de Finetti and followers) have argued that only finite additivity ought to be required. I point out that using merely finitely-additive functions actually brings in *more* infinitary complexity rather than less. If we must go beyond finite additivity to avoid this infinitary complexity, there is a question of why to stop at countable additivity. I give two arguments for countable additivity that don't generalize to go further.

April 30, 2018

Ted Porter, UCLA, Department of History

How Human Genetics Was Shaped by Data on Madness

Abstract

September 12, 2016

Glenn Shafer, Rutgers University, Business School

Calibrate p-values by taking the square root

Abstract

For nearly 100 years, researchers have persisted in using p-values in spite of fierce criticism. Both Bayesians and Neyman-Pearson purists contend that use of a p-value is cheating even in the simplest case, where the hypothesis to be tested and a test statistic are specified in advance. Bayesians point out that a small p-value often does not translate into a strong Bayes factor against the hypothesis. Neyman-Pearson purists insist that you should state a significance level in advance and stick with it, even if the p-value turns out to be much smaller than this significance level. But many applied statisticians persist in feeling that a p-value much smaller than the significance level is meaningful evidence. In the game-theoretic approach to probability (see my 2001 book with Vladimir Vovk, described at www.probabilityandfinance.com), you test a statistical hypothesis by using its probabilities to bet. You reject at a significance level of 0.01, say, if you succeed in multiplying the capital you risk by 100. In this picture, we can calibrate small p-values so as to measure their meaningfulness while absolving them of cheating. There are various ways to implement this calibration, but one of them leads to a very simple rule of thumb: take the square root of the p-value. Thus rejection at a significance level of 0.01 requires a p-value of one in 10,000.
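
A minimal sketch of the rule of thumb stated above (the function name is mine): square-root calibration sends a p-value of one in 10,000 to the conventional 0.01 level.

    import math

    def calibrated(p):
        # Square-root calibration of a p-value, per the rule of thumb above.
        return math.sqrt(p)

    for p in (0.05, 0.01, 0.001, 1e-4):
        print("p-value", p, "-> calibrated value", calibrated(p))
    # Only a p-value of one in 10,000 survives calibration at the conventional 0.01 level.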

September 19, 2016

Isaac Wilhelm, Rutgers University, Philosophy Department

A statistical analysis of luck

Abstract

According to Pritchard's analysis of luck (PAL), an event is lucky just in case it fails to obtain in a sufficiently large class of sufficiently close possible worlds. Though there are several reasons to like the PAL, it faces at least two counterexamples. After reviewing those counterexamples, I introduce a new, statistical analysis of luck (SAL). The reasons to like the PAL are also reasons to like the SAL, but the SAL is not susceptible to the counterexamples.

September 26, 2016

Barry Loewer, Rutgers University, Philosophy Department

What probabilities there are and what probabilities are

Abstract

The sciences, especially fundamental physics, contain theories that posit objective probabilities. But what are objective probabilities?

Are they fundamental features of reality, as mass or charge might be? Or do more fundamental facts, for example frequencies, ground probabilities?

In my talk I will survey some views about what probabilities there are and what grounds them.

October 3, 2016

Michael Strevens, New York University, Philosophy Department

Dynamic Probabilities and Initial Conditions

Abstract

Dynamic approaches to understanding the foundations of physical probability in the non-fundamental sciences (from statistical physics through evolutionary biology and beyond) turn on special properties of physical processes that are apt to produce "probabilistically patterned" outcomes. I will introduce one particular dynamic approach of especially wide scope.

Then a problem: dynamic properties on their own are never quite sufficient to produce the observed patterns; in addition, some sort of probabilistic assumption about initial conditions must be made. What grounds the initial condition assumption? I discuss some possible answers.

October 10, 2016

Prakash Gorroochurn, Columbia University, Biostatistics Department

Fisher’s fiducial probability – a historical perspective

Abstract

Of R.A. Fisher's countless statistical innovations, fiducial probability is one of the very few that has found little favor among probabilists and statisticians. Fiducial probability is still misunderstood today and rarely mentioned in current textbooks. This presentation will attempt to offer a historical perspective on the topic, explaining Fisher's motivations and the subsequent opposition from his contemporaries. The talk is based on my newly released book "Classic Topics on the History of Modern Mathematical Statistics: From Laplace to More Recent Times."

October 17, 2016

Teddy Seidenfeld, Carnegie Mellon University, Philosophy Department

A modest proposal to use rates of incoherence as a guide for personal uncertainties about logic and mathematics

Abstract

It is an old and familiar challenge to normative theories of personal probability that they do not make room for non-trivial uncertainties about (the non-controversial parts of) logic and mathematics. Savage (1967) gives a frank presentation of the problem, noting that his own (1954) classic theory of rational preference serves as a poster-child for the challenge.

Here is the outline of this presentation:
     First is a review of the challenge.
     Second, I comment on two approaches that try to solve the challenge by making surgical adjustments to the canonical theory of coherent personal probability. One approach relaxes the Total Evidence Condition: see Good (1971). The other relaxes the closure conditions on a measure space: see Gaifman (2004). Hacking (1967) incorporates both of these approaches.
     Third, I summarize an account of rates of incoherence, explain how to model uncertainties about logical and mathematical questions with rates of incoherence, and outline how to use this approach to guide the uncertain agent in using, e.g., familiar numerical Monte Carlo methods to improve her or his credal state about such questions (2012).
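
As a generic illustration of that last step (not an example from the cited work; the question and numbers are my own), here is the kind of familiar Monte Carlo calculation that could inform an agent's credence about a purely mathematical question, namely the probability that two random integers are coprime, whose exact value is 6/pi^2.

    import math, random

    random.seed(3)
    trials = 200_000
    hits = sum(math.gcd(random.randint(1, 10**6), random.randint(1, 10**6)) == 1
               for _ in range(trials))
    print("Monte Carlo estimate:", hits / trials)
    print("exact value, 6/pi^2: ", 6 / math.pi ** 2)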

Based on joint work with J.B.Kadane and M.J.Schervish

References:
Gaifman, H. (2004) Reasoning with Limited Resources and Assigning Probabilities to Arithmetic Statements. Synthese 140: 97-119.
Good, I.J. (1971) Twenty-seven Principles of Rationality. In Good Thinking, Minn. U. Press (1983): 15-19.
Hacking, I. (1967) Slightly More Realistic Personal Probability. Phil. Sci. 34: 311-325.
Savage, L.J. (1967) Difficulties in the Theory of Personal Probability. Phil. Sci. 34: 305-310.
Seidenfeld, T., Schervish, M.J., and Kadane, J.B. (2012) What kind of uncertainty is that? J.Phil. 109: 516-533.

October 24, 2016

Alan Hajek, Australian National University, School of Philosophy

Staying Regular?

Abstract

'Regularity' conditions provide bridges between possibility and probability. They have the form:

If X is possible, then the probability of X is positive (or equivalents).

Especially interesting are the conditions we get when we understand 'possible' doxastically, and 'probability' subjectively. I characterize these senses of 'regularity' in terms of a certain internal harmony of an agent's probability space (omega, F, P). I distinguish three grades of probabilistic involvement. A set of possibilities may be recognized by such a probability space by being a subset of omega; by being an element of F; and by receiving positive probability from P. An agent's space is regular if these three grades collapse into one.

I review several arguments for regularity as a rationality norm. An agent could violate this norm in two ways: by assigning probability zero to some doxastic possibility, and by failing to assign probability altogether to some doxastic possibility. I argue for the rationality of each kind of violation.
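
A toy probability space (my own illustration, not Hajek's) that separates the three grades and exhibits both kinds of violation:

    omega = frozenset({1, 2, 3, 4})
    F = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), omega]   # a coarse sigma-algebra
    P = {frozenset(): 0.0, frozenset({1, 2}): 1.0, frozenset({3, 4}): 0.0, omega: 1.0}

    A = frozenset({1})     # grade 1 only: a subset of omega that is not in F,
                           # so it is not assigned any probability at all
    B = frozenset({3, 4})  # grades 1 and 2, but it receives probability zero

    print(A <= omega, A in F)   # True False: a possibility with no probability assigned
    print(B in F, P[B])         # True 0.0: a possibility assigned probability zero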

Both kinds of violations of regularity have serious consequences for traditional Bayesian epistemology. I consider their ramifications for:

- conditional probability

- conditionalization

- probabilistic independence

- decision theory

October 31, 2016

Vladimir Vapnik, Facebook AI and Columbia University

Brute force and intelligent models of learning

Abstract

This talk is devoted to a new paradigm of machine learning, in which an Intelligent Teacher is involved. During the training stage, the Intelligent Teacher provides the Student with information that contains, along with the classification of each example, additional privileged information (for example, an explanation) about that example. The talk describes two mechanisms that can be used to significantly accelerate the Student's learning using privileged information: (1) correction of the Student's concepts of similarity between examples, and (2) direct Teacher-Student knowledge transfer.

In this talk I will also discuss general ideas in the philosophical foundations of induction and generalization, related to Huber's concept of falsifiability and to holistic methods of inference.

November 7, 2016

Adam Elga, Princeton University, Philosophy Department

Fragmented decision theory

Abstract

Bayesian decision theory assumes that its subjects are perfectly coherent: logically omniscient and able to perfectly access their information. Since imperfect coherence is both rationally permissible and widespread, it is desirable to extend decision theory to accommodate incoherent subjects. New 'no-go' proofs show that the rational dispositions of an incoherent subject cannot in general be represented by a single assignment of numerical magnitudes to sentences (whether or not those magnitudes satisfy the probability axioms). Instead, we should attribute to each incoherent subject a whole family of probability functions, indexed to choice conditions. If, in addition, we impose a "local coherence" condition, we can make good on the thought that rationality requires respecting easy logical entailments but not hard ones. The result is an extension of decision theory that applies to incoherent or fragmented subjects, assimilates into decision theory the distinction between knowledge-that and knowledge-how, and applies to cases of "in-between belief".

This is joint work with Agustin Rayo (MIT).
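One minimal data-structure reading of the proposal (an editorial toy, not Elga and Rayo's formal framework) is to carry a family of credence functions keyed by choice conditions rather than a single function:

    # A fragmented agent: credences are indexed by choice conditions, so queries posed
    # under different conditions can activate mutually incoherent fragments.
    FRAGMENTS = {
        "asked in the logic exam":  {"p": 0.9, "q": 0.9, "p and q": 0.85},
        "asked while distracted":   {"p": 0.9, "q": 0.9, "p and q": 0.3},
    }

    def credence(sentence, condition):
        """Betting-relevant credence depends on the condition under which it is elicited."""
        return FRAGMENTS[condition][sentence]

    for condition in FRAGMENTS:
        print(f"{condition}: cr('p and q') = {credence('p and q', condition)}")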

November 14, 2016

Jamie Pietruska, Rutgers University, Department of History

"Old Probabilities" and "Cotton Guesses": Weather Forecasts, Agricultural Statistics, and Uncertainty in the Late-Nineteenth and Early-Twentieth-Century United States

Abstract

This talk, which is drawn from Looking Forward: Prediction and Uncertainty in Modern America (forthcoming, University of Chicago Press), will examine weather forecasting and cotton forecasting as forms of knowledge production that initially sought to conquer unpredictability but ultimately accepted uncertainty in modern economic life. It will focus on contests between government and commercial forecasters over who had the authority to predict the future and the ensuing epistemological debates over the value and meaning of forecasting itself. Intellectual historians and historians of science have conceptualized the late nineteenth century in terms of “the taming of chance” in the shift from positivism to probabilism, but, as this talk will demonstrate, Americans also grappled with predictive uncertainties in daily life during a time when they increasingly came to believe in but also question the predictability of the weather, the harvest, and the future.

November 21, 2016

Glenn Shafer, Rutgers University, Business School

Defensive forecasting

Abstract

In game-theoretic probability, Forecaster gives probabilities (or upper expectations) on each round of the game, and Skeptic tests these probabilities by betting, while Reality decides the outcomes. Can Forecaster pass Skeptic's tests?

As it turns out, Forecaster can defeat any particular strategy for Skeptic, provided only that each move prescribed by the strategy varies continuously with respect to Forecaster's previous move. Forecaster wants to defeat more than a single strategy for Skeptic; he wants to defeat simultaneously all the strategies Skeptic might use. But as we will see, Forecaster can often amalgamate the strategies he needs to defeat by averaging them, and then he can play against the average. This is called defensive forecasting. Defeating the average may be good enough, because when any one of the strategies rejects the validity of Forecaster's probabilities, the average will reject it as well, albeit less strongly.

This result has implications for the meaning of probability. It reveals that the crucial step in placing an evidential question in a probabilistic framework is its placement in a sequence of questions. Once we have chosen the sequence, good sequential probabilities can be given, and the validation of these probabilities by experience signifies less than commonly thought.

References:
(1) Defensive forecasting, by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer (Working Paper #8 at http://www.probabilityandfinance.com/articles/08.pdf).
(2) Game-theoretic probability and its uses, especially defensive forecasting, by Glenn Shafer (Working Paper #22 at http://www.probabilityandfinance.com/articles/22.pdf).
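A toy version of the continuity argument sketched above, for binary outcomes (an editorial illustration with a made-up Skeptic strategy, not the algorithm of the cited papers): Skeptic's gain on a round is M(p)(y - p), so if the stake M varies continuously with the forecast p and changes sign, Forecaster can choose p with M(p) = 0 and concede nothing.

    # Defensive forecasting, toy binary version: find a forecast at which a continuous
    # Skeptic stake function is zero (or play an endpoint if it never changes sign).
    def defend(skeptic_stake, tol=1e-9):
        lo, hi = 0.0, 1.0
        m_lo, m_hi = skeptic_stake(lo), skeptic_stake(hi)
        if m_lo * m_hi > 0:
            # Stake keeps one sign: an extreme forecast makes Skeptic's gain non-positive.
            return 1.0 if m_lo > 0 else 0.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            m_mid = skeptic_stake(mid)
            if m_lo * m_mid <= 0:
                hi, m_hi = mid, m_mid
            else:
                lo, m_lo = mid, m_mid
        return 0.5 * (lo + hi)

    # Hypothetical Skeptic: bets on the outcome when the forecast is below 0.7,
    # against it when the forecast is above 0.7.
    stake = lambda p: 0.7 - p
    p = defend(stake)
    for y in (0, 1):
        print(f"y={y}: Skeptic's gain = {stake(p) * (y - p):+.2e}")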

November 28, 2016

Elie Ayache, Ito 33

Writing the future

Abstract

Derivative valuation theory is based on the formalism of abstract probability theory and random variables. However, when it is made part of the pricing tool that the 'quant' (quantitative analyst) develops and that the option trader uses, it becomes a pricing technology. The latter exceeds the theory and the formalism. Indeed, the contingent payoff (defining the derivative) is no longer the unproblematic random variable that we used to synthesize by dynamic replication, or whose mathematical expectation we used merely to evaluate, but it becomes a contingent claim. By this distinction we mean that the contingent claim crucially becomes traded independently of its underlying asset, and that its price is no longer identified with the result of a valuation. On the contrary, it becomes a market given and will now be used as an input to the pricing models, inverting them (implied volatility and calibration). One must recognize a necessity, not an accident, in this breach of the formal framework, even read in it the definition of the market now including the derivative instrument. Indeed, the trading of derivatives is the primary purpose of their pricing technology, and not a subsidiary usage. The question then poses itself of a possible formalization of this augmented market, or more simply, of the market. To that purpose we introduce the key notion of writing.
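As a concrete instance of the inversion mentioned above, here is a minimal sketch of backing an implied volatility out of a quoted option price under textbook Black-Scholes assumptions (an editorial illustration with made-up numbers, not the pricing technology discussed in the talk):

    # Implied volatility: run the pricing model backwards so that the market quote
    # becomes an input.  Textbook Black-Scholes with bisection on volatility.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call."""
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
        """Find sigma such that bs_call(S, K, T, r, sigma) matches the quoted price."""
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if bs_call(S, K, T, r, mid) > price:
                hi = mid
            else:
                lo = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)

    quote = 10.45   # hypothetical market price of the call
    sigma_impl = implied_vol(quote, S=100.0, K=100.0, T=1.0, r=0.05)
    print(f"implied volatility ~ {sigma_impl:.4f}")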

December 5, 2016

Ben Levinstein, Rutgers University, Philosophy Department

Higher-order evidence, Accuracy, and Information Loss

Abstract

Higher-order evidence (HOE) is evidence that you're handling information out of accord with epistemic norms. For instance, you may gain evidence that you're possibly drugged and can't think straight. A natural thought is that you respond by lowering your confidence that you got a complex calculation right. If so, HOE has a number of peculiar features. For instance, if you should take it into account, it leads to violations of Good's theorem and of the norm to update by conditionalization. This motivates a number of philosophers to embrace the steadfast position: you shouldn't lower your confidence even though you have evidence that you're drugged. I disagree. I argue that HOE is a kind of information loss. This both explains its peculiar features and shows what's wrong with some recent steadfast arguments. Telling agents not to respond to HOE is like telling them never to forget anything.

December 12, 2016

Vladimir Vovk, University of London

Treatment of uncertainty in the foundations of probability

Abstract

Kolmogorov's measure-theoretic axioms of probability formalize the Knightian notion of risk. Classical statistics adds a degree of Knightian uncertainty, since there is no probability distribution on the parameters, but uncertainty and risk are clearly separated. Game-theoretic probability formalizes the picture in which both risk and uncertainty interfere at every moment. The fruitfulness of this picture will be demonstrated by open theories in science and the emergence of stochasticity and probability in finance.

January 23, 2017

Shelly Goldstein, Rutgers University, Mathematics Department

Probability in Quantum Mechanics (and Bohmian Mechanics)

Abstract

No abstract.

January 30, 2017

Alexander Stein, Brooklyn Law School

Behavioral Probability

Abstract

Throughout their long history, humans have worked hard to tame chance. They adapted to their uncertain physical and social environments by using the method of trial and error. This evolutionary process made humans reason about uncertain facts the way they do. Behavioral economists argue that humans’ natural selection of their prevalent mode of reasoning wasn’t wise. They censure this mode of reasoning for violating the canons of mathematical probability that a rational person must obey.

Based on the insights from probability theory and the philosophy of induction, I argue that a rational person need not apply mathematical probability in making decisions about individual causes and effects. Instead, she should be free to use common sense reasoning that generally aligns with causative probability. I also show that behavioral experiments uniformly miss their target when they ask reasoners to extract probability from information that combines causal evidence with statistical data. Because it is perfectly rational for a person focusing on a specific event to prefer causal evidence to general statistics, those experiments establish no deviations from rational reasoning. Those experiments are also flawed in that they do not separate the reasoners’ unreflective beliefs from rule-driven acceptances. The behavioral economists’ claim that people are probabilistically challenged consequently remains unproven.

Paper can be downloaded here.

February 6, 2017

Branden Fitelson, Northeastern University, Philosophy Department

Two Approaches to Belief Revision

Abstract

In this paper, we compare and contrast two methods for the qualitative revision of (viz., “full”) beliefs. The first (“Bayesian”) method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second (“Logical”) method is the orthodox AGM approach to belief revision. Our primary aim will be to characterize the ways in which these two approaches can disagree with each other -- especially in the special case where the agent’s belief sets are deductively cogent.

The latest draft can be downloaded: http://fitelson.org/tatbr.pdf
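For the first, "Bayesian" method, a toy illustration of the diachronic Lockean recipe (editorial, with made-up worlds, credences, and threshold): conditionalize the credences on the evidence, then believe exactly the propositions whose posterior credence clears the Lockean threshold.

    # Toy diachronic Lockean belief revision over a four-world space (made-up numbers).
    from itertools import chain, combinations

    PRIOR = {"w1": 0.3, "w2": 0.3, "w3": 0.2, "w4": 0.2}   # prior credences over worlds
    THRESHOLD = 0.75                                        # Lockean threshold

    def credence(prop, cr):
        return sum(cr[w] for w in prop)

    def conditionalize(cr, evidence):
        z = sum(cr[w] for w in evidence)
        return {w: (cr[w] / z if w in evidence else 0.0) for w in cr}

    def lockean_beliefs(cr):
        worlds = list(cr)
        props = chain.from_iterable(combinations(worlds, k) for k in range(1, len(worlds) + 1))
        # small tolerance so borderline sums are not lost to floating-point error
        return [set(p) for p in props if credence(p, cr) >= THRESHOLD - 1e-12]

    posterior = conditionalize(PRIOR, evidence={"w1", "w2", "w3"})
    print("believed before revision:", lockean_beliefs(PRIOR))
    print("believed after revision: ", lockean_beliefs(posterior))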

February 13, 2017

Gretchen Chapman, Rutgers University, Psychology Department

Empirical Experiments on the Gambler's Fallacy

Abstract

The gambler’s fallacy (GF) is a classic judgment bias where, when predicting events from an i.i.d. sequence, decision makers inflate the perceived likelihood of one outcome (e.g. red outcome from a roulette wheel spin) after a run of the opposing outcome (e.g., a streak of black outcomes). This phenomenon suggests that decision makers act as if the sampling is performed without replacement rather than with replacement. A series of empirical experiments support the idea that lay decision makers indeed have this type of underlying mental model. In an online experiment, MTurk participants drew marbles from an urn after receiving instructions that made clear that the marble draws were performed with vs. without replacement. The GF pattern appeared only under the without-replacement instructions. In two in-lab experiments, student participants predicted a series of roulette spins that were either grouped into blocks or ungrouped as one session. The GF pattern was manifest on most trials, but it was eliminated on the first trial of each block in the blocked condition. This bracketing result suggests that the sampling frame is reset when a new block is initiated. Both studies had a number of methodological strengths: they used actual random draws with no deception of participants, and participants made real-outcome bets on their predictions, such that exhibiting the GF was costly to subjects (yet they still showed it). Finally, the GF was operationalized as predicting or betting on an outcome as a function of run length of the opposing outcome, which revealed a nonlinear form of the GF. These results illuminate the nature of the GF and the decision processes underlying it as well as illustrate a method to eliminate this classic judgment bias.
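A small simulation of the with- versus without-replacement contrast that the mental-model account turns on (editorial, with made-up urn sizes, not the experimental materials): after a run of blacks, the chance of red really does rise when sampling is without replacement, and stays flat when it is with replacement.

    # Probability of red on the next draw, conditional on a run of three blacks,
    # with versus without replacement from a small urn (5 red, 5 black).
    import random

    def p_red_after_black_run(n_red, n_black, run_len, replace, trials=200_000):
        hits = runs = 0
        for _ in range(trials):
            urn = ["R"] * n_red + ["B"] * n_black
            draws = []
            for _ in range(run_len + 1):
                i = random.randrange(len(urn))
                draws.append(urn[i] if replace else urn.pop(i))
            if all(d == "B" for d in draws[:run_len]):   # keep only trials with a black run
                runs += 1
                hits += draws[run_len] == "R"
        return hits / runs

    for replace in (True, False):
        p = p_red_after_black_run(5, 5, run_len=3, replace=replace)
        print(f"with replacement={replace}:  P(red | 3 blacks) ~ {p:.3f}")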

February 20, 2017

Michał Godziszewski, University of Warsaw, Institute of Philosophy

Dutch Books and nonclassical probability spaces

Abstract

We investigate how Dutch Book considerations can be conducted in the context of two classes of nonclassical probability spaces used in philosophy of physics. In particular, we show that a recent proposal by B. Feintzeig to find so-called “generalized probability spaces” which would not be susceptible to a Dutch Book and would not possess a classical extension is doomed to fail. Noting that the particular notion of a nonclassical probability space used by Feintzeig is not the most common one employed in philosophy of physics, and that his usage of the “classical” Dutch Book concept is not appropriate in “nonclassical” contexts, we then argue that if we switch to the more frequently used formalism and use the correct notion of a Dutch Book, then none of the probability spaces in question is susceptible to a Dutch Book. We also settle a hypothesis regarding the existence of classical extensions of a class of generalized probability spaces.

This is a joint work with Leszek Wroński (Jagiellonian University).
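For readers who want the classical notion on the table, here is a minimal numerical Dutch Book against credences that violate additivity (a generic textbook illustration, not the construction analysed in the paper):

    # Classical Dutch Book sketch: if cr(A) + cr(not-A) differs from 1, a bookie trading
    # $1 bets at the agent's announced prices guarantees the agent a loss in every state.
    def dutch_book(cr_A, cr_notA):
        # sign = +1: the agent buys both bets (prices too high); -1: she sells both.
        sign = +1 if cr_A + cr_notA > 1 else -1
        payoffs = {}
        for state, A_pays in (("A", 1), ("not-A", 0)):
            from_bet_on_A = sign * (A_pays - cr_A)
            from_bet_on_notA = sign * ((1 - A_pays) - cr_notA)
            payoffs[state] = round(from_bet_on_A + from_bet_on_notA, 10)
        return payoffs

    print(dutch_book(0.4, 0.4))   # sub-additive credences: a sure loss in both states
    print(dutch_book(0.7, 0.5))   # super-additive credences: likewise a sure loss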

February 27, 2017

Hans Halvorson, Princeton University, Philosophy Department

Probability Ex Nihilo

Abstract

In many mathematical settings, there is a sense in which we get probability "for free." I’ll consider some ways in which this notion "for free" can be made precise - and its connection (or lack thereof) to rational credences. As one specific application, I’ll consider the meaning of cosmological probabilities, i.e. probabilities over the space of possible universes.

March 6, 2017

Tamar Lando, Columbia University, Philosophy Department

Runaway Credences and the Principle of Indifference

Abstract

The principle of indifference is a rule for rationally assigning precise degrees of confidence to possibilities among which we have no reason to discriminate. I argue that this principle, in combination with standard Bayesian conditionalization, has untenable consequences. In particular, it allows agents to leverage their ignorance toward a position of very strong confidence vis-à-vis propositions about which they know very little. I study the consequences for our response to puzzles about self-locating belief, where a restricted principle of indifference (together with Bayesian conditionalization) is widely endorsed.

March 20, 2017

Sandy Zabell, Northwestern University, Mathematics Department

Alan Turing and the Applications of Probability to Cryptography

Abstract

In the years before World War II Bayesian statistics went into eclipse, a casualty of the combined attacks of statisticians such as R. A. Fisher and Jerzy Neyman. During the war itself, however, the brilliant but statistically naive Alan Turing developed de novo a Bayesian approach to cryptanalysis, which he then applied to good effect against a number of German encryption systems. The year 2012 was the centenary of the birth of Alan Turing, and as part of the celebrations the British authorities released materials casting light on Turing's Bayesian approach. In this talk I discuss how Turing's Bayesian view of inductive inference was reflected in his approach to cryptanalysis, and give an example where his Bayesian methods proved more effective than the orthodox ones more commonly used. I will conclude by discussing the curious career of I. J. Good, initially one of Turing's assistants at Bletchley Park. Good became one of the most influential advocates for Bayesian statistics after the war, although for many decades he concealed the reasons for his belief in its efficacy, owing to their classified origins.
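A tiny numerical illustration of the sequential Bayes-factor bookkeeping at issue, scored in the "deciban" units Turing and Good used (editorial, with made-up likelihood ratios; not a reconstruction of any wartime procedure):

    # Weight of evidence in decibans: 10 * log10 of a Bayes factor, accumulated clue by clue.
    import math

    def decibans(odds_or_factor):
        return 10 * math.log10(odds_or_factor)

    prior_odds = 1 / 100                            # hypothesis starts at 100-to-1 against
    likelihood_ratios = [3.0, 0.8, 5.0, 2.5, 1.2]   # made-up evidence from successive clues

    score = decibans(prior_odds)
    for lr in likelihood_ratios:
        score += decibans(lr)
        print(f"after evidence with likelihood ratio {lr:>3}: {score:+6.1f} db")

    print(f"posterior odds ~ {10 ** (score / 10):.2f} : 1")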

March 27, 2017

Brad Weslake, New York University-Shanghai, Philosophy Department

Fitness and Variance

Abstract

This paper is about the role of probability in evolutionary theory. I present some models of natural selection in populations with variance in reproductive success. The models have been taken by many to entail that the propensity theory of fitness is false. I argue that the models do not entail that fitness is not a propensity. Instead, I argue that the lesson of the models is that the fitness of a type is not grounded in the fitness of individuals of that type.
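A standard toy model from this literature may help fix ideas (an editorial illustration, not necessarily one of the models discussed in the paper): two types with the same expected number of offspring but different variance, where long-run growth tracks the geometric rather than the arithmetic mean.

    # Equal arithmetic-mean fitness, unequal variance: the low-variance type grows faster
    # in the long run because multiplicative growth tracks the geometric mean.
    import math, random

    random.seed(1)
    GENERATIONS = 2_000

    log_growth_safe = 0.0    # type A: always exactly 2 offspring per capita
    log_growth_risky = 0.0   # type B: 1 or 3 offspring per capita with equal chance
    for _ in range(GENERATIONS):
        log_growth_safe += math.log(2)
        log_growth_risky += math.log(random.choice([1, 3]))

    print("per-generation geometric growth rates:")
    print("  safe  type:", round(math.exp(log_growth_safe / GENERATIONS), 3))   # = 2.0
    print("  risky type:", round(math.exp(log_growth_risky / GENERATIONS), 3))  # ~ sqrt(3) ~ 1.73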

April 3, 2017

Peter Achinstein, Johns Hopkins University, Philosophy Department

Epistemic Simplicity: The Last Refuge of a Scoundrel

Abstract

Some of the greatest scientists, including Newton and Einstein, invoke simplicity in defense of a theory they promote. Newton does so in defense of his law of gravity, Einstein in defense of his general theory of relativity. Both claim that nature is simple, and that, because of this, simplicity is an epistemic virtue. I propose to ask what these claims mean and whether, and if so how, they can be supported. The title of the talk should tell you where I am headed.

April 10, 2017

Harry Crane, Rutgers University, Department of Statistics

Probabilities as Shapes

Abstract

In mathematics, statistics, and perhaps even in our intuition, it is conventional to regard probabilities as numbers, but I prefer instead to think of them as shapes. I'll explain how and why I prefer to think of probabilities as shapes instead of numbers, and will discuss how these probability shapes can be formalized in terms of infinity groupoids (or homotopy types) from homotopy type theory (HoTT).

April 17, 2017

Dimitris Tsementzis, Rutgers University, Department of Statistics

Sample Structures

Abstract

I will outline some difficult cases for the classical formalization of a sample space as a *set* of outcomes, and argue that some of these cases are better served by a formalization of a sample space as an appropriate *structure* of outcomes.

April 24, 2017

Miriam Schoenfield, University of Texas, Department of Philosophy

Beliefs Formed Arbitrarily

Abstract

This paper addresses the worry raised by beliefs formed arbitrarily: for example, religious, political and moral beliefs that we realize we possess because of the social environments we grew up in. The paper motivates a set of criteria for determining when the fact that our beliefs were arbitrarily formed should motivate a revision. What matters, I will argue, is how precise or imprecise your probabilities are with respect to the matter in question.

May 1, 2017

Nicholas Teh, Notre Dame University, Philosophy Department

Probability, Inconsistency, and the Quantum

Abstract

Various images of the inconsistency between (the empirical probabilities of) quantum theory and classical probability have been handed down to us by tradition. Of these, two of the most compelling are the "geometric" image of inconsistency implicit in Kochen-Specker arguments, and the "Dutch Book violation" image of inconsistency which is familiar to us from epistemology and the philosophy of rationality. In this talk, I will argue that there is a systematic and highly general relationship between the two images.