PhD offer
ENACT. AI-based scientific methods and explanatory understanding
Application deadline
15-04-2026
Contract start date
01-10-2026
Thesis supervisor
IMBERT Cyrille
Supervision
This PhD offer is provided by the ENACT AI Cluster and its partners. Find all ENACT PhD offers and actions on https://cluster-ia-enact.ai/. The doctoral student will be supervised by Cyrille Imbert (Archives Poincaré, Nancy, France). Cyrille Imbert is a senior researcher at CNRS, affiliated with Archives Poincaré (CNRS, Université de Lorraine). His main research interests lie in the general philosophy of science (in particular, explanation, complexity, and models), the philosophical analysis of computational science, and the social epistemology of science and epistemic activities (in particular with formal models).
Context
Scientific context: Archives Poincaré and the ENACT cluster

The selection process involves two stages.
- Preselection stage: candidates are preselected by the supervisors of the institutions involved in the ENACT program (such as Archives Poincaré). Pre-applications should agree with the target topic descriptions deposited on ADUM.
- Application stage: preselected candidates apply to the ENACT program with the support of the potential supervisors (deadline: end of April).

More details about the three targeted PhD topics may be found by following the links below (select the English version on each page).

For this preselection stage, candidates are requested to apply online by April 15 at the latest (early applications are welcome, given the constrained timeline). We apologize for the very short deadlines due to the late publication of the call. Applications will be reviewed, and interviews with short-listed candidates will be conducted after April 15. Preselected candidates will then apply to the ENACT program and, if selected, will be interviewed in May.

To apply, please submit the following documents online:
- Candidate's CV
- Candidate's cover letter/letter of motivation
- Candidate's Bachelor's and Master's transcripts/certificates, with grades
- Letter(s) of recommendation (at most two), typically from the supervisor of the M2 internship (if completed or started more than 3 months ago) or of a previous project
- Official certificate or other evidence of English proficiency
- A research proposal (consistent with the target PhD subject) describing the specific questions you plan to tackle and including a concrete work plan or first sketch. Maximum length: 1200 words (plus references)
- Writing sample (max. 20 pages, e.g., term paper, essay, part of a Master's thesis, dissertation chapter, etc.)

Please also send your application to Cyrille.Imbert@univ-lorraine.fr (but note that the official version is the one submitted online).

Specialty
Philosophy
Laboratory
AHP-PReST - Archives Henri Poincaré - Philosophie et Recherches sur les Sciences et les Technologies
Keywords
AI-based methods and practices, explanation, understanding, computational science, artificial neural networks and AI, scientific models
Offer details
Producing scientific explanations and reaching understanding are among the ultimate aims of science (see Baumberger et al. 2017). Since AI-based methods are increasingly used across science, it is important to analyze their impact on reaching these goals. Interestingly, AI methods are often claimed to fail when it comes to understanding. However, AI methods and their scientific uses are still in their infancy, and both scientists and philosophers lack insight into them. Similar skeptical claims were made in the past about other computational methods. Further, case studies (e.g., Knüsel & Baumberger 2020) suggest that AI may yield, if not enhance, scientific understanding, and a burgeoning philosophical literature has developed on these issues (e.g., Meskhidze 2023), over and above the many scientific publications that tackle these issues while ignoring the philosophical insights inherited from decades of discussion.
The proposed PhD project addresses the question of how AI-based research may lead to new explanations and boost scientific understanding. It will connect the philosophical literature on AI-rooted explanation and understanding, the broader literature on scientific explanation and understanding, recent cases, and relevant scientific discourses.
A sample of the questions that may be addressed includes:
- What are the relations, if any, between traditional discussions about explanation and explanatory AI?
- How can ANNs specifically aid the discovery of hitherto unknown explanations and the development of understanding? What specific obstacles arise in this context?
- What specific features should AI-based inquiries exemplify if they are to provide sound scientific explanations and understanding?
- What specific epistemic roles do AI methods and ANNs play in the development of explanatory understanding?
- Are AI-based methods associated with novel intuitions, if not norms, concerning scientific explanation and understanding?
- Which standards of good scientific explanation and understanding are easily fulfilled by AI applications, and which standards are more difficult to fulfill?
- How do AI-based explanations impact the appreciation of explanatory values?
- Are AI-based methods related to specific accounts of explanation and understanding (e.g., statistical, counterfactual, mathematical, structural, etc.), beyond the literature about causal AI, and why? Or can they contribute to all forms of explanation and understanding?
- Is AI's power to generate understanding roughly equal across disciplines? If not, why?
- What do the features of AI-based methods (typically their opacity and their mathematical forms) imply for the understanding they may provide? Are these features common to other computational methods?
- What is the role of humans in the development of AI-based explanatory understanding? To what extent is AI-rooted understanding accessible to and usable by cognitive creatures like humans?
- Can AI-based methods provide spurious explanations and understanding? If so, when and why? What risks should be given specific attention?
- Are the psychology and pragmatics of explanation crucial when analyzing the above issues?
- Is there something general to be said about the link between AI-based methods and explanatory understanding, or are AI tools neutral in this respect, everything depending on context?
The PhD project will answer a selection of these questions. We anticipate the following stages:
· Survey of cases in which AI seems to provide explanatory understanding
· Analysis of the relevant philosophical literature about explanation and understanding
· Critical analysis of relevant scientific discourses about these issues
· Cross-comparisons between AI-based cases, and between AI-based methods and other computational or analytical methods
· Combination of the findings from the case studies and the philosophical literature
· Conclusions for the analysis of how AI methods may bring explanatory understanding
Candidate profile
- The candidate is expected to have a Master's degree in philosophy of science, philosophy, or epistemology, or to be about to complete this degree. Ideally, his/her curriculum should provide strong evidence of his/her ability to engage in the philosophical analysis of formal and scientific methods and, in particular, of the scientific uses of AI. Strong evidence of excellent writing skills is particularly expected.
- The candidate should have a sufficient understanding of AI methods or a demonstrated ability to quickly acquire relevant and sufficient knowledge about these methods and their applications in some scientific fields. Typically, the candidate may have received training or taken relevant courses in AI methods, have an advanced scientific education (e.g., a Bachelor of Science in a relevant field), or provide academic evidence of an ability to develop philosophical arguments relying on information about AI methods.
- The candidate should have advanced teamwork skills and be prepared to interact regularly across several research sites in order to make the most of a joint philosophical research environment comprising Nancy (France), other research institutes (if applicable), and potential research partners within the ENACT cluster.
- The candidate is expected to have strong interaction and organizational skills in order to engage in exchanges with relevant scientists, with philosophers in the larger international philosophy of AI community, especially in Europe, and with philosophers who have relevant expertise but are not specialized in AI.
Bibliography
Baumberger, Christoph, Claus Beisbart, and Georg Brun. “What Is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science.” In Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science, edited by Stephen R. Grimm, Christoph Baumberger, and Sabine Ammon. New York: Routledge, 2017.
Beisbart, Claus, and Tim Räz. “Philosophy of Science at Sea: Clarifying the Interpretability of Machine Learning.” Philosophy Compass 17, no. 6 (2022): e12830.
Hamami, Yacin, and Rebecca Lea Morris. “Understanding in Mathematics: The Case of Mathematical Proofs.” Noûs 58, no. 4 (2024): 1073–106. https://doi.org/10.1111/nous.12489.
Imbert, Cyrille. “Computer Simulations and Computational Models in Science.” In Springer Handbook of Model-Based Science, edited by Lorenzo Magnani and Tommaso Bertolotti, 735–81. Cham: Springer International Publishing, 2017. See esp. §4.4, “Computer Simulations, Explanation and Understanding.”
Jebeile, Julie, Vincent Lam, and Tim Räz. “Understanding Climate Change with Statistical Downscaling and Machine Learning.” Synthese 199, no. 1 (December 1, 2021): 1877–97.
Knüsel, Benedikt, and Christoph Baumberger. “Understanding Climate Phenomena with Data-Driven Models.” Studies in History and Philosophy of Science Part A 84 (2020): 46–56.
Meskhidze, Helen. “Can Machine Learning Provide Understanding? How Cosmologists Use Machine Learning to Understand Observations of the Universe.” Erkenntnis 88, no. 5 (2023): 1895–1909.
Sullivan, Emily. “Understanding from Machine Learning Models.” The British Journal for the Philosophy of Science 73, no. 1 (March 2022): 109–33.
Woodward, James. “Scientific Explanation.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2014 edition. http://plato.stanford.edu/archives/win2014/entries/scientific-explanation/.

