Despite remarkable advances in statistical theory, methods, and computing over the last 50+ years, fundamental questions about probability and its role in statistical inference remain unanswered. There is no shortage of ways to construct data-dependent probabilities for the purpose of inference, Bayes being the most common, but none are fully satisfactory. One concern is the recent discovery that, for any data-dependent probability, there are false hypotheses about the unknown quantity of interest that tend to be assigned high probability, a phenomenon we call false confidence, which creates a risk of systematically misleading inferences. Here I argue that these challenges can be overcome by broadening our perspective to allow uncertainty quantification via imprecise probabilities. In particular, I will demonstrate that valid inference, free of false confidence and the associated risk of systematic errors, can be achieved by working with a special class of imprecise probabilities driven by random sets. Examples will illustrate the key concepts and results, and connections will be drawn between this new framework and familiar notions from classical statistics.
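To make the false-confidence phenomenon concrete, here is a minimal simulation of my own (an illustrative sketch, not taken from the talk): in a normal model with a flat prior, the fixed hypothesis that the parameter lies outside a tiny neighbourhood of the true value is false, yet the posterior assigns it probability near one for essentially every data set. The model, the width `delta`, and all function names are assumptions made for illustration.

```python
import math
import random

# Sketch of false confidence in a simple normal model (illustrative only).
# Data X ~ N(theta0, 1); with a flat prior the posterior for theta is N(x, 1).
# Fix the hypothesis
#     A = { theta : |theta - theta0| > delta },
# which is FALSE, since the true value theta0 is not in A.  For tiny delta
# the posterior nevertheless assigns A probability near 1 for virtually
# every data set -- a concrete instance of false confidence.

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def posterior_prob_A(x, theta0=0.0, delta=0.01):
    """Posterior probability of the false hypothesis A under N(x, 1)."""
    mass_inside = Phi(theta0 + delta - x) - Phi(theta0 - delta - x)
    return 1.0 - mass_inside

random.seed(1)
theta0, delta, reps = 0.0, 0.01, 10_000
probs = [posterior_prob_A(random.gauss(theta0, 1.0), theta0, delta)
         for _ in range(reps)]
frac_confident = sum(p >= 0.95 for p in probs) / reps
print(f"fraction of data sets assigning A probability >= 0.95: {frac_confident:.3f}")
# -> 1.000 (the interval around theta0 can carry at most ~2*delta*0.4 posterior
#    mass, so the false hypothesis A always gets probability above 0.99)
```

The point of the sketch is that the hypothesis A is fixed in advance and false, yet the posterior is systematically confident in it; the valid, random-set-based framework discussed in the talk is designed to rule out exactly this behaviour.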
Find out more at https://riskinstitute.uk/riskinstituteonline