Research projects

Current research projects

Project “The epistemic role of statistical significance testing for scientific explanations” (since 2017)

Null hypothesis significance testing (NHST) plays an important role in several disciplines, such as biology, medicine, and psychology, but also in experimental philosophy. It is employed to assess the plausibility of the negation of a statistical hypothesis (the null hypothesis) by determining the probability of obtaining the collected data, or more extreme data, given the null hypothesis. This probability is dubbed the p-value or significance value. For several decades, statisticians and philosophers of science have deemed NHST methodologically inadequate, and they have made the case for alternatives. Nonetheless, NHST remains in frequent use to this day. Since the early 2010s, and due to a growing awareness of the so-called replicability crisis (i.e., the non-replicability of many results), NHST has finally received more critical appraisal within science. However, the dominant view among scientists is not to dispense with NHST, but to improve upon its use. Apart from appeals to end so-called questionable research practices, such as optional stopping of data collection, p-value rounding, and manipulation of outliers, the suggestion is to modify the dominant application of NHST. For instance, in July 2017, a large group of scientists from renowned institutions signed a demand for a lower standard p-value threshold: 0.005 instead of 0.05. Another proposal is to require a larger standard sample size than the current one.
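The arithmetic behind the threshold debate can be made concrete with a toy example (my own illustration, not part of the project): an exact sign test for the null hypothesis that a coin is fair. The resulting p-value happens to fall between the traditional 0.05 standard and the proposed 0.005 standard.

```python
from math import comb

def sign_test_p_value(heads: int, flips: int) -> float:
    """Two-sided exact p-value under the null hypothesis of a fair coin.
    Under H0, each outcome has probability comb(n, k) / 2**n; the p-value is
    the probability of a result at least as extreme as the one observed,
    doubled (by symmetry) for a two-sided test. Assumes heads >= flips / 2."""
    upper_tail = sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips
    return min(1.0, 2 * upper_tail)

# 16 heads in 20 flips:
p = sign_test_p_value(16, 20)
print(round(p, 4))  # 0.0118
```

On the old 0.05 standard this result counts as significant; on the proposed 0.005 standard it does not, which is exactly the kind of difference the July 2017 proposal turns on.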

Yet, from a theoretical point of view, reactions like the latter miss the point. It is neither the threshold standard nor the sample size standard that renders NHST methodologically flawed; the method is intrinsically flawed. If so, what follows for hypotheses, explanations, and theories that are (in part) taken to be supported by applications of NHST? My research project is concerned with questions along these lines. It tackles epistemic dimensions of NHST in scientific practice on the basis of case studies, and in particular the epistemic role and significance of NHST with regard to explanatory claims.

From October 2017 until the end of March 2018, I am working on this project at the University of Salzburg (host: Charlotte Werndl; funded by an Ernst Mach visiting research fellowship from the OeAD).

Project “Reductionism about understanding why” (since 2017)

Is understanding why p simply knowing why p? Reductionists claim that it is. Anti-reductionists argue that understanding why and knowing why come apart in several respects. I employ my notion of shallow knowing why to shed light on the relationship between knowing why and understanding why, and I explore the plausibility of analyzing understanding why in terms of shallow knowing why. I dub this form of reductionism about understanding why modest and parsimonious reductionism.


Past projects

PhD thesis “Knowing why” (2014−2018)

I wrote my PhD thesis as part of the Volkswagen Foundation project “A study in explanatory power”. It provides an in-depth analysis of knowing why (or: knowledge-why), paying special attention to the case of model-based knowledge-why in science.

The major contribution of my dissertation is to provide, irrespective of any particular account of explanation, a thorough basic analysis of knowledge-why, as well as an analysis of features which knowledge-why possesses or could possess despite being of the same kind as the commonly discussed knowing that p. The dissertation comprises seven chapters. In the first chapter, I explain why a detailed analysis of knowledge-why is still lacking, and I argue for the relevance of closing this gap. In the second chapter, I provide a basic analysis of the content of knowing why. Drawing on standard analyses of so-called knowledge-wh, I analyze the content of knowledge-why in terms of direct answers to the embedded why-question. I propose that an answer to such a question should be construed as an explanatory proposition: at a minimum, a proposition q such that an explanatorily relevant dependency obtains between the p-phenomenon and the q-phenomenon. I then show that explanatory propositions which figure in the content of knowledge-why must relate the p-phenomenon and the q-phenomenon. A paradigmatic example of such a proposition is a (p because q) proposition; however, I argue that not all answers to why-questions need to fit this form. In the third chapter, I make the case for the claim that (p because q) knowledge can come apart from knowledge of the facts or principles that establish the explanatory connection between the p-phenomenon and the q-phenomenon. On this basis, I propose and defend a distinction between shallow knowledge-why and non-shallow knowledge-why.

In the fourth chapter, I rebut the claim that knowledge-why is inherently contrastive by rebutting the claim that why-questions are inherently contrastive. In the fifth chapter, I address the claim that knowledge-why is gradable even if we take for granted that knowledge-why qua knowledge is not gradable. I first reject the claim that ‘know why’ is a gradable term. I then argue that the kind of gradability which knowledge-why possesses is akin to degrees of abilities: the quality of knowledge-why is gradable, since the quality of its content can be graded with regard to both quantitative and qualitative differences. In the sixth chapter, I address a challenge to an account of knowledge-why concerning factivity, namely the challenge raised by purportedly correct explanations based on idealized models that cannot be de-idealized. The result of my analysis is that explanations based on such models do not include the models’ idealizations in their propositional content. In the seventh and last chapter, I summarize the results of my thesis, and I point to a utility of an account of knowledge-why, namely that it allows us to capture more precisely whether, and if so how, understanding why p and knowing why p differ.

From October 2015 until the end of March 2016, I was a visiting researcher at NYU in the research center “Varieties of Understanding”, under the supervision of Michael Strevens and Catherine Elgin, in order to deepen my studies. In April and May 2017, I did research at the University of Edinburgh under the supervision of Duncan Pritchard.

Research on the formal semantics of multi-modal utterances together with Hannes Rieser and Florian Hahn (2011−2017)

From 2011 until 2015, I worked as a researcher in the interdisciplinary project “Speech-gesture alignment” of the CRC 673 “Alignment in Communication” (based at Bielefeld University, funded by the DFG), which was led by Hannes Rieser (linguistics) and Stefan Kopp (computer science). Together with Hannes Rieser and Florian Hahn, I explored speech-accompanying gestures, and we continued this research until the end of 2017. Our research is based on empirical data from our corpus, one of the largest multi-modal corpora worldwide. Inter alia, we analyze multi-modal utterances from a formal-semantics point of view. Intuitively, such utterances convey a joint content which can exceed the verbal one. In order to obtain multi-modal propositions, we developed a formal semantics for co-speech gestures and formal constructions for interfacing speech semantics and gesture semantics, employing the ψ-calculus, a recent extension of Milner’s π-calculus. We also examined in detail how speech and gestures (and their respective semantics) are aligned with each other both intra- and inter-personally in dialogues and trialogues. Part of our work was conducted in collaboration with Stefan Kopp, Kirsten Bergmann, Thies Pfeiffer, and Udo Klein.

Master thesis “A shotgun wedding? Non-declarative sentences and intensional semantics” (2013−2014)

My thesis was concerned with the question whether intensional semantics can provide a satisfactory semantics for non-declarative sentences. Advocates of intensional semantics claim that the meaning of a sentence consists in its truth conditions. While this claim seems plausible for declarative sentences, it seems odd for non-declarative sentences (such as interrogatives and imperatives), insofar as these are intuitively not evaluated as true or false and thus seem to lack truth conditions. Nonetheless, there are good reasons to strive for a unified semantics. A common semantic framework for all kinds of sentences would accommodate the thought that sub-sentential expressions have the same meaning in all kinds of sentences, and it could deal with so-called mixed-mood sentences, in which different kinds of sentences are combined (e.g., “If you put me on the mat, then where do you put the cat?”). I examined two proposals for obtaining such a framework with tools from intensional semantics: the first stems from David Lewis (“General Semantics”, 1970), the second from Roland Hausser (esp. “Surface Compositional Grammar”, 1980).

Lewis’ basic idea is that non-declaratives are syntactic variants of certain explicit performatives (“Do you love me?” vs. “I ask you whether you love me”). Since the latter arguably have truth conditions, non-declaratives have them, too: their syntactic surfaces differ, but not their semantics. This idea is plausible in light of the fact that such variants are used to achieve the same communicative goal (e.g., to ask the addressee whether she loves the language user). Nonetheless, Lewis’ proposal has been widely rejected and has never been worked out in detail. My evaluation of the objections shows that they are not conclusive, though: some of them arise from too simplified an interpretation of his proposal, some rely on disputable premises, and some can be rebutted. My conclusion is that Lewis’ proposal deserves to be fully worked out.

Hausser’s basic idea is that non-declarative sentences do not have truth conditions but the same kind of semantic values as certain sub-sentential expressions. For instance, an imperative sentence has a semantics similar to that of a one-place predicate: it specifies the property the addressee of the utterance is directed to realize. While this analysis does justice to our intuitions insofar as it does not ascribe truth conditions to non-declarative sentences, Hausser’s semantics suffers from technical problems and some conceptual inconsistencies, and in its current form it cannot treat complex non-declarative sentences and mixed-mood sentences. My conclusion is that the idea requires a different implementation in order to yield a convincing proposal.
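The type difference at the heart of this idea can be rendered in a toy model (my own illustrative sketch with made-up names, not Hausser’s formalism): a declarative denotes a truth value relative to a world, while an imperative denotes a property of individuals, which can be evaluated of an addressee but is itself neither true nor false.

```python
from typing import Callable, Dict

World = Dict[str, bool]                   # toy worlds: which facts obtain
Property = Callable[[str, World], bool]   # a one-place predicate of individuals

def decl_the_door_is_shut(w: World) -> bool:
    """A declarative denotes a truth value relative to a world."""
    return w.get("door_shut", False)

def imp_shut_the_door() -> Property:
    """An imperative denotes not a truth value but the property
    the addressee is directed to realize."""
    return lambda addressee, w: w.get(f"{addressee}_shut_the_door", False)

w: World = {"door_shut": False, "bob_shut_the_door": True}
print(decl_the_door_is_shut(w))   # False: the declarative is false at w
prop = imp_shut_the_door()
print(prop("bob", w))             # True: Bob has realized the property
# The imperative itself is neither true nor false at w; only the property
# it denotes can be evaluated of an individual.
```

The sketch also makes the limitation visible: since the two sentence types have different semantic types, composing them into mixed-mood sentences requires extra machinery that this simple picture does not supply.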

So, even though neither proposal is fully satisfactory, it seems promising to integrate non-declarative sentences into an intensional semantics. The alleged shotgun wedding might turn out to be a happy one.

Project on critical thinking instruction together with Daniel Milne-Plückebaum (2012−2016)

Our project featured two sub-projects: (i) We worked on how to teach the basics of critical thinking and philosophical logic to beginners and non-specialists in an effective manner. Our goal was to provide students with tools they can use for analyzing, understanding, criticizing, and constructing arguments beyond the classroom. From 2012 to 2014, we developed and applied a course concept for teaching how to analyze natural-language argumentation using both formal and informal methods. This part of our project was supported by the project “Handwerk Philosophie” (sponsored by the program “Richtig Einsteigen” of the Federal Ministry of Education and Research).
(ii) We rethought classical argument taxonomies in order to gain a unified and detailed classification of arguments which treats deductively valid and deductively non-valid arguments on a par. Since concepts of deductive logic do not yield the desired outcome, insofar as they analyze deductively non-valid arguments too coarse-grainedly, we worked on employing concepts from inductive logic to obtain a uniform treatment of both argument types and their respective subtypes.
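As a rough illustration of how inductive-logic concepts can put both argument types on one scale (my own toy sketch, assuming a uniform measure over propositional valuations, not our actual classification): grade an argument by the fraction of premise-satisfying valuations in which the conclusion also holds. Deductive validity then comes out as the limiting degree 1, while deductively non-valid arguments receive finer-grained intermediate degrees.

```python
from itertools import product

def strength(premises, conclusion, varnames):
    """Fraction of premise-satisfying valuations in which the conclusion holds.
    Degree 1.0 corresponds to deductive validity; lower degrees grade
    deductively non-valid arguments on the same scale."""
    valuations = (dict(zip(varnames, bits))
                  for bits in product([True, False], repeat=len(varnames)))
    sat = [v for v in valuations if all(prem(v) for prem in premises)]
    if not sat:
        return 1.0  # vacuous case: no valuation satisfies the premises
    return sum(conclusion(v) for v in sat) / len(sat)

# Modus ponens (valid): p -> q, p, therefore q
mp = strength([lambda v: (not v["p"]) or v["q"], lambda v: v["p"]],
              lambda v: v["q"], ["p", "q"])
print(mp)  # 1.0

# Affirming the consequent (non-valid): p -> q, q, therefore p
ac = strength([lambda v: (not v["p"]) or v["q"], lambda v: v["q"]],
              lambda v: v["p"], ["p", "q"])
print(ac)  # 0.5
```

The uniform measure is of course a crude choice; more serious inductive logics replace it with better-motivated probability assignments, but the unified scale for valid and non-valid arguments is the same.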