PhD thesis offer
LLMs and Their Potential for Explanatory Value
Application deadline
15-04-2026
Contract start date
01-10-2026
Thesis supervisor
IMBERT Cyrille
Supervision
The doctoral student will be supervised by Cyrille Imbert (Archives Poincaré, Nancy). Cyrille Imbert is a senior researcher at CNRS, affiliated with Archives Poincaré (CNRS, Université de Lorraine). His research interests lie in the general philosophy of science (in particular, explanation, complexity, and models), the philosophy of computational science, and the social epistemology of science and epistemic activities. Supervision will involve regular oral and written exchanges. If relevant, a co-supervision with colleagues in science or philosophy, from another field, university, or country, may be organized.
Type of contract
Doctoral school
Team
Context
Archives Poincaré and ENACT cluster

The selection process involves two stages.

- Preselection stage: Candidates are preselected by the supervisors of the institutions (such as Archives Poincaré) involved in the ENACT program. Pre-applications should agree with the target topic descriptions deposited on ADUM.
- Application stage: Preselected candidates apply to the ENACT program with the support of the potential supervisors (deadline: end of April).

More details about the three targeted PhD topics may be found by following the links below (select the English version on each page).

For this preselection stage, candidates are requested to apply online by April 15 at the latest (early applications are welcome, given the constrained timeline). We apologize for the very tight deadlines due to the late publication of the call. The applications will be reviewed, and interviews with short-listed candidates will be conducted after April 15. Preselected candidates will then apply to the ENACT program and, if selected, will be interviewed in May.

To apply, please submit the following documents online:

- Candidate's CV
- Candidate's cover letter/letter of motivation
- Candidate's Bachelor's and Master's transcripts/certificates, with grades
- Letter(s) of recommendation (at most two), typically a letter from the supervisor of the M2 internship (if completed or started more than 3 months ago) or of a previous project
- Official certificate or evidence of English proficiency
- A research proposal (that agrees with the target PhD subject) describing the specific questions you plan to tackle and including a concrete work plan or first sketch. Maximum length: 1200 words (plus references)
- Writing sample (max. 20 pages, e.g., term paper, essay, part of a Master's thesis, dissertation chapter, etc.)

Please also send your application to Cyrille.Imbert@univ-lorraine.fr (but note that the official version is the one online).

Specialty
Philosophy
Laboratory
AHP-PReST - Archives Henri Poincaré - Philosophie et Recherches sur les Sciences et les Technologies
Keywords
Explanatory Value, Understanding, Explanatory Narratives, Pragmatics, LLM, Formats of Explanation
Subject details
LLMs and Their Potential for Explanatory Value
This research proposal aims to analyze the extent to which (a) the development of large language models (LLMs) renews existing discussions and questions concerning the nature of explainability and understanding, and (b) the emergence of LLMs raises novel questions and problems related to explainability in both scientific and everyday contexts.
Over the past decades, many philosophers of science have moved away from the view that explanations should primarily be understood as linguistic entities or arguments. Nevertheless, several central aspects of explainability remain closely connected to linguistic dimensions, both at the epistemological and pragmatic levels (that is, with respect to how explanations are discovered, presented, and understood, as well as how agents reason about them). Explanatory knowledge is typically communicated to both lay and expert audiences through textual means, narratives, or argumentative structures. In the formal sciences (notably mathematics), the objects to be explained and understood—such as proofs, programs, and algorithms—also possess an essential linguistic or symbolic dimension.
In this context, the development and application of LLMs can be expected to enhance the methods by which humans acquire, articulate, and reason about explanations, while also generating new challenges concerning these notions.
The proposed PhD project aims to connect scientific research on LLMs and their applications, philosophical work on AI-based explanations and understanding, and broader philosophical discussions of explanation and understanding that conceive of explanations as symbolic or linguistic entities.
A non-exhaustive set of research questions to be addressed includes:
- How can LLMs contribute to the production and analysis of explanations, for instance, by generating appropriate explanatory narratives?
- What specific epistemological problems does the understanding of LLMs as objects of inquiry raise?
- Do LLMs prompt a reconceptualization of explanation as primarily pragmatic and/or linguistic, rather than causal and truth-tracking?
- Can LLMs contribute to the understanding of scientific practices or results that are often regarded as resistant to understanding, such as simulations or other AI-based methods?
- Do LLMs challenge traditional epistemological accounts that associate understanding with mental representation or the grasp of reasons?
- What role do LLMs play within the broader framework of explainable artificial intelligence (XAI)?
- What specific epistemic roles might LLM-based methods play in the development of explanatory understanding?
- Are there particular epistemic norms associated with the use of LLMs when they are intended to convey understanding?
- When LLMs are employed to foster understanding, are their uses and outputs associated with specific explanatory values?
- Can LLM-based methods contribute to all forms of explanation and understanding or just to specific non-causal ones?
- Are the potential explanatory gains associated with LLM methods domain-specific?
- What forms of spurious explanation or illusory understanding are specifically associated with LLM-based methods, and which risks warrant particular attention?
- Is “understanding with an LLM” a distinct epistemic category, different from individual or collective understanding?
The PhD project will address a selected subset of these questions. The anticipated stages of the research are as follows:
- A survey of cases in which LLMs appear to contribute to explanatory understanding
- An analysis of the relevant philosophical literature on explanation and understanding
- Cross-comparative analysis of the selected cases
- A synthesis of insights derived from the case studies and the philosophical literature
- Conclusions concerning how LLM-based methods may contribute to explanatory understanding
- How do LLM-based methods differ from other AI-based methods with respect to the above issues?
Candidate profile
- The candidate is expected to have a Master's degree in philosophy of science, philosophy, or epistemology, or to be about to complete this degree. Ideally, their curriculum should provide strong evidence of their ability to engage in a philosophical analysis of formal and scientific methods and, in particular, of the scientific uses of AI. Strong evidence of excellent writing skills is particularly expected.
- The candidate should have a sufficient understanding of AI methods, or a demonstrated ability to quickly acquire relevant and sufficient knowledge about these methods and their applications in some scientific fields. Typically, the candidate may have completed training or relevant courses in AI methods, hold an advanced scientific degree (e.g., a Bachelor of Science in a relevant field), or provide academic evidence of their ability to develop philosophical arguments relying on information about AI methods.
- The candidate should have strong teamwork skills and be prepared to develop regular interactions across several research sites, in order to make the most of the joint philosophical research environment in Nancy (France), in other research institutes (if applicable), and with potential research partners within the ENACT cluster.
- The candidate is expected to have strong interpersonal and organizational skills in order to engage in exchanges with relevant scientists, with philosophers in the larger international philosophy of AI community (especially in Europe), and with philosophers who have relevant expertise but are not specialized in AI.
References
Baumberger, Christoph, Claus Beisbart, and Georg Brun. "What Is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science." In Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science, edited by Stephen Grimm, Christoph Baumberger, and Sabine Ammon, 1–34. Routledge, 2017.
Buijsman, Stefan. “Machine Learning Models as Mathematics: Interpreting Explainable AI in Non-Causal Terms.” In Philosophy of Science for Machine Learning: Core Issues and New Perspectives, edited by Juan M. Durán and Giorgia Pozzi. Springer Nature Switzerland, 2026.
van Fraassen, Bas C. "The Pragmatics of Explanation." American Philosophical Quarterly 14 (1977): 143–50.
Hempel, Carl. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. Free Press, 1965.
Morgan, Mary S., Kim M. Hajek, and Dominic J. Berry, eds. Narrative Science: Reasoning, Representing and Knowing since 1800. Cambridge University Press, 2022.
Morgan, Mary S., and M. Norton Wise. "Narrative Science and Narrative Knowing: Introduction to Special Issue on Narrative Science." Studies in History and Philosophy of Science Part A 62 (2017): 1–5.
Páez, Andrés. “Axe the X in XAI: A Plea for Understandable AI.” In Philosophy of Science for Machine Learning: Core Issues and New Perspectives, edited by Juan M. Durán and Giorgia Pozzi. Springer Nature Switzerland, 2026. https://doi.org/10.1007/978-3-032-03083-2_7.
Siegel, Gabriel. “Scientific Understanding as Narrative Intelligibility.” Philosophical Studies 181, no. 10 (2024): 2843–66.
Sullivan, Emily. “Understanding from Machine Learning Models.” The British Journal for the Philosophy of Science 73, no. 1 (March 2022): 109–33.
Zhao, Haiyan, Hanjie Chen, Fan Yang, et al. “Explainability for Large Language Models: A Survey.” arXiv:2309.01029. Preprint, arXiv, November 28, 2023. https://doi.org/10.48550/arXiv.2309.01029.

