Master's students get credit for attendance. Please make sure you indicate your name on the sheet that will be passed around during the talk.
If you have difficulty accessing the eLearning site (https://elearning.uni-bayreuth.de/course/view.php?id=35326), please let us know (Patricia Rich / Paolo Galeazzi).
February 7th 2023 | Gen Eickers (Bayreuth) | Emotional Marginalization
Abstract: In recent years, an array of critical emotion theorists have emerged who call for change with respect to how emotion theory is done, how emotions are understood, and how we do emotion. In this talk, I build on the work of some of these authors to analyze what emotional marginalization is (or what it could be), drawing on experiences of emotional marginalization. I identify three different stages at which emotional marginalization may take place: emotion experience, emotional display, and emotion recognition. My talk will be largely based on a forthcoming paper I have written called “Pathologizing Disabled and Trans Identities: How Emotions Become Marginalized”. In this paper, I argue that the pathologization of trans and disabled identities is ultimately connected to emotional marginalization. This framing becomes clear when we look at how disability, transness, and emotions have repeatedly been constructed as natural phenomena (or as “unnatural deviations”) rather than as social phenomena. I argue that this failure to investigate the social constitution of emotions ultimately reinforces and reproduces the marginalization of emotions that members of subordinated social groups experience. In my talk, I will try to extend this preliminary understanding of emotional marginalization in order to see whether it can (and should) be applied to social groups that are not subordinated.
January 24th 2023 | SM Amadae (Helsinki) | Hawk Dove Binary and the Evolution of Domination
Abstract: This paper introduces the Hawk Dove Binary model (see Amadae and Watts 2022) and analyzes its implications for understanding the evolution of domination and patterns of systemic discrimination. The aims of this paper are the following: (1) The paper introduces the Hawk Dove Binary agent-based model and its implications. (2) The paper compares the standard conclusions drawn from repeating multi-agent Prisoner’s Dilemma (PD) games with what we can learn from repeating multi-agent Hawk Dove Binary (HDB) games, which involve two populations. (3) The paper concludes by discussing the potential implications of HDB for our understanding of the evolution of domination, in comparison to the standard conclusions drawn about the evolution of cooperation from PD. The overall point is that, in an otherwise homogeneous population, the introduction of binary tags combined with individuals’ readiness to resort to threats of violence will lead to a systemic (and not easily reversed) pattern of domination. Given the evidence of systemic hierarchies in human societies along the lines of ethnicity and gender, and the propensity for threats of violence, I explore the possibility that the Hawk Dove Binary model may be as important as the Prisoner’s Dilemma for modeling repeating social interactions in 2-group multi-agent populations.
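The kind of tag-based dominance pattern the abstract describes can be illustrated with the basic Hawk-Dove payoff structure. The following is a minimal sketch, not Amadae and Watts' actual agent-based model; the payoff values (resource value v = 2, fight cost c = 4) and the specific discriminatory convention shown are illustrative assumptions.

```python
def hawk_dove_payoff(s1, s2, v=2.0, c=4.0):
    """Row player's payoff in a one-shot Hawk-Dove game.
    v = value of the contested resource, c = cost of a fight (c > v)."""
    if s1 == "hawk" and s2 == "hawk":
        return (v - c) / 2  # both fight: split value, pay fight cost
    if s1 == "hawk" and s2 == "dove":
        return v            # hawk takes the whole resource
    if s1 == "dove" and s2 == "hawk":
        return 0.0          # dove yields and gets nothing
    return v / 2            # dove vs dove: share peacefully

# A discriminatory convention conditioned on a binary tag:
# members of group A always play hawk against group B,
# members of group B always play dove against group A.
payoff_A = hawk_dove_payoff("hawk", "dove")  # group A's payoff: 2.0
payoff_B = hawk_dove_payoff("dove", "hawk")  # group B's payoff: 0.0
```

Once such a tag-conditioned convention is in place, neither side gains by deviating unilaterally (a lone B-hawk meets an A-hawk and gets (v - c)/2 = -1.0, worse than 0), which is one way to see why the resulting pattern of domination is stable and not easily reversed.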
January 17th 2023 | Rasmus Rendsvig (Copenhagen) | Reiter and the Underdog: A Foundational Discussion in Non-monotonic Reasoning
Abstract: In 1980, Raymond Reiter published his seminal paper *A Logic for Default Reasoning*. The paper presents a formal system for non-monotonic reasoning with default rules, i.e., reasoning based on defeasible “rules of thumb” believed to lead from true premises to true conclusions by default, but which may no longer be warranted in contexts with more information. For example, we may find the rule “If X is a bird, then X can fly” to be a reasonable default rule, and feel warranted in concluding from “Tux is a bird” that “Tux can fly.” Learning that Tux is a penguin, however, defeats the rule, making the conclusion unwarranted. The reasoning is called non-monotonic because more information may lead to fewer conclusions, contrary to classical logic. Reiter's logic has since spawned a vast literature of default reasoning frameworks that extend his original system, with one important focus being how to reasonably introduce priorities between default rules, used to specify a choice among an agent's default-based belief sets should more than one “solution” exist. As a sanity check, these extending frameworks typically show that their belief set solutions are a subset of the Reiter solutions. That belief set solutions should be Reiter solutions is taken for granted. In this talk, I will critique this assumption. I will argue that the underlying approach taken by Reiter's default logic and the frameworks extending it is unfit for inducing the belief sets of ideal reasoners. As a teaser, the argument involves the deduction theorem from classical logic, Savage's sure-thing principle, and a fundamental distinction concerning the logical nature of default rules, about which I will argue that a little-discussed paper from the default logic literature got it right.
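The non-monotonicity the abstract illustrates with Tux can be shown in a few lines. This is a toy consistency-check sketch of a single default rule, not Reiter's full fixed-point (extension) construction; the function name and fact encoding are my own illustrative choices.

```python
def default_conclusions(facts):
    """Apply the default rule 'birds fly by default': add the
    conclusion 'flies' only when it is consistent with what is
    already known (here: when nothing marks the bird a penguin)."""
    conclusions = set(facts)
    if "bird" in conclusions and "penguin" not in conclusions:
        conclusions.add("flies")  # the default fires
    return conclusions

# Non-monotonicity: learning more retracts a conclusion.
warranted = default_conclusions({"bird"})             # contains 'flies'
defeated  = default_conclusions({"bird", "penguin"})  # 'flies' withdrawn
```

With only "Tux is a bird" the default licenses "Tux can fly"; adding "Tux is a penguin" defeats the rule, so the enlarged premise set yields fewer default conclusions, contrary to classical consequence.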
December 20th 2022 | Atoosa Kasirzadeh (Edinburgh) | Two Approaches for Applying Mathematics
Abstract: I offer a novel distinction between two concepts of applied mathematics. The first, reflective applied mathematics, requires that the target of representation or explanation be prior to, and unchanged by, the application of mathematics. The second, performative applied mathematics, generates or modifies its target of application. I argue that contemporary philosophical accounts of applied mathematics are primarily successful in formulating the reflective concept. By presenting two examples of performative applied mathematics from economics and computer science, I show the limits of the reflective concept in capturing these instances. I then argue that this distinction carries significant normative implications for central debates in the philosophy of science and the ethics of artificial intelligence.
December 6th 2022 | Moritz Schulz (Dresden) | Deciding to Blame
Abstract: There is evidence that blaming somebody is only appropriate when one knows (or justifiably believes) that the object of one's blame actually did the deed. For example, Lara Buchak (2014) suggests that we cannot blame somebody on statistical grounds alone for having stolen one's phone. For Buchak, this indicates that belief and credence may be attitudes designed for different kinds of domains. We may have to look at what we believe when deciding whether to blame somebody. But we may have to look at our credences when deciding whether to invest in a start-up, say. In my talk, I am going to criticize this picture. I argue that norms about reasons for blame have two readings depending on whether these reasons are assumed to be possessed or not. On the reading I favor, there won't be a need for dividing the realm of our choices into two classes – decisions about blame can be made in the same way as decisions about where to invest one's money.
+++ postponed until next semester +++ November 15th 2022 | Emanuele Ratti (Linz) | Automated Science, Machine Learning, and Values +++ postponed until next semester +++
Abstract: Dreams of automated science have accompanied the development of the modern natural sciences. However, it is difficult to pinpoint a specific conception of automated science. Recent developments in artificial intelligence (AI), and in machine learning (ML) in particular, have been associated with the idea of automated science, suggesting that contemporary ML-driven automated science is just another step towards what science has always promised to fulfil. In this talk, I investigate the relation between ML and automated science. First, I reconstruct and identify two views of automated science. The first, called traditional automated science (TAS), sees science as mechanical, fostering intersubjective agreement and suppressing scientists' subjectivity. The second is Paul Humphreys' conception, which differs from the first in that it downplays the suppression of subjectivity and attributes to automated science a non-human epistemic horizon. I compare these two views to ML to see which kind of automated science contemporary AI can possibly foster. By analyzing the practice of building algorithmic systems, however, I claim that neither conception captures the nature of automated science in light of AI. Unlike TAS and Humphreys' view, ML automated science is enveloped in human subjectivity, as ML training is shaped by cognitive and non-cognitive values. Contemporary AI-driven automated science is just a tool, and like any other designed artifact, it displays the mark of its designers.
November 8th 2022 | Frederik van de Putte (Rotterdam) | Original Position Arguments and Multidimensional Ignorance
Abstract: Original position arguments start from the assumption of rational, self-interested individuals making choices behind a veil of ignorance, and thus not knowing their position in society. On Rawls’ well-known account, this leads them to choose in line with the difference principle, evaluating options in terms of the welfare of the worst off. Other scholars such as Harsanyi and Parfit have argued for different conceptions of justice, based on different notions of individual rationality behind the veil of ignorance. Even if such arguments may not lead to a unique policy or social choice rule, they still serve as a useful tool in sorting out our intuitions regarding procedural fairness and its relation to social choice.
The aim of this paper is to develop a formal model of original position arguments for social choice rules; a model that is rich enough to account for a range of philosophical positions, and yet in line with standard decision-theoretic accounts of individual rationality. We will focus on cases of choice under ignorance, i.e., cases where the distribution of welfare that results from a given choice depends in part on the (unknown) state of the world. We will show that, while the arguments by Rawls and Harsanyi can be generalized relatively easily to this setting, problems arise for other, more fine-grained conceptions of justice in the face of ignorance. The upshot is that we have to enrich the standard model of decisions in this setting if we want to obtain a good model of original position arguments.