
ENACT. AI-Based Scientific Methods and Explanatory Understanding

PhD offer


Application deadline

27-04-2025

Contract start date

01-10-2025

Thesis supervisor

Cyrille Imbert

Supervision

Co-supervision: Cyrille Imbert (AHP, CNRS, Université de Lorraine) and Claus Beisbart (Institute of Philosophy at the University of Bern)

Type of contract

Competitive selection for a doctoral contract

Doctoral school

SLTC - SOCIETES, LANGAGES, TEMPS, CONNAISSANCES

Team

Context

This PhD offer is provided by the ENACT AI Cluster and its partners. All ENACT PhD offers and actions can be found at https://cluster-ia-enact.ai/.

The doctoral student will be co-supervised by Cyrille Imbert (Archives Poincaré, Nancy, France) and Claus Beisbart (University of Bern, Switzerland). Cyrille Imbert is a senior researcher at CNRS, affiliated with the Archives Poincaré (CNRS, Université de Lorraine). His main research interests lie in the general philosophy of science (in particular explanation, complexity, and models), the philosophical analysis of computational science, and the social epistemology of science and epistemic activities (in particular with formal models). Claus Beisbart is a professor of philosophy of science at the University of Bern, where he is affiliated with the Institute of Philosophy and the Center for Artificial Intelligence in Medicine. His main focus is the investigation of computer-based methods in the sciences. On April 1st, he will start a new research project on the epistemology of machine learning. He also co-directs the Embedded Ethics Lab at the Center for Artificial Intelligence in Medicine. Several people in his research group focus on AI, and the group collaborates with a group working on neuromorphic computing. Claus Beisbart's group will thus offer a stimulating environment for the PhD candidate and allow for fruitful interaction with other researchers.

Archives Poincaré (Université de Lorraine, CNRS, Nancy-Strasbourg, France)

The Archives Poincaré is a research institute (UMR 7117) affiliated with the Université de Lorraine and CNRS (the French National Centre for Scientific Research), and it benefits from their joint support in terms of hiring academics, support staff, and funding. The AI PhD fellowships are in continuity with the research carried out at the Archives Poincaré for decades and with how it will be extended in the coming years. Information about the institute, its members, and its salient activities may be found at https://poincare.univ-lorraine.fr/fr/axes.

The Archives Poincaré is recognized in France and internationally for its philosophical analysis of mathematical and scientific practices in direct dialogue with science, its practices, and the new schemes of scientific reasoning. The following orientations are of particular relevance in the context of AI research:
- Analysis of theoretical and applied mathematics and their mutations
- Epistemological analysis anchored in practices, particularly mathematics and computational science
- Studies related to computer science (codes, concepts, history, digital humanities)
- Study of scientific-technological mutations and their social impact (One Health medicine, big data, human-machine interactions)
- Ethical and political analysis of the transformations of human action: epistemic and digital democracy, social epistemology, climate ethics, decision support

Research in the philosophy of science and mathematics, particularly computational science and AI, benefits from strong and regular interactions with a network of other major French and international institutions working on these questions. These interactions typically comprise collaborative work, joint seminars, co-organized workshops and conferences, and co-supervised doctoral students. The Archives Poincaré also has a tradition of strong interactions with the sciences themselves.
The Université de Lorraine and the local institutes belonging to national research institutions (such as CNRS, INRIA, INRA, INSERM, GeorgiaTech, or AgroParisTech) provide a rich context for this purpose in fields related to physics, energy science, agriculture, computer science, and health, with various funding opportunities for interdisciplinary research, in particular within the framework of Lorraine Université d'Excellence (https://www.univ-lorraine.fr/lue/). PhD candidates are expected to make the most of this research environment and to contribute to developing interactions with relevant partners in relation to the philosophy of AI.

Institute of Philosophy at the University of Bern

The Institute of Philosophy at the University of Bern is one of the largest philosophy departments in Switzerland, comprising up to 35 researchers and employees. In addition, the Institute welcomes many world-renowned philosophers each semester as guests of the considerable number of international workshops, lecture series, and courses it organizes. The Institute is subdivided into three departments, for logic and theoretical philosophy, for practical philosophy, and for the history of philosophy, which are supported by a shared administrative office. The Logic and Theoretical Philosophy division deals in teaching and research, systematically and in part historically, with problems in the areas of logic, metaphysics, epistemology, ontology, philosophy of mind, theory of action, philosophy of language, semantics, and philosophy of mathematics. In Bern, there is also a focus on the theory and philosophy of the natural sciences and on the history of their theory. Researchers working on subjects related to the philosophy and epistemology of AI include Claus Beisbart, Tim Räz, Julie Jebeile, and Vincent Lam.

Recent or ongoing projects in theoretical philosophy include:
- Extending the Scope of Causal Realism
- Climate Change Adaptation through the Feminist Kaleidoscope
- Ethical Considerations of the Relationships and Interactions between Science, Policy and the Media during the COVID-19 Pandemic (ESPRIM), MCID ECRG_03
- Ethik der Infektionskrankheiten (Ethics of Infectious Diseases)
- Improving Interpretability: Philosophy of Science Perspectives on Machine Learning
- Explaining Human Nature
- The Epistemology of Climate Change
- The Rationales and Risks of Systematization: A Pragmatic Account of When to Systematize Thought

More details are available at https://www.philosophie.unibe.ch/index_eng.html.

Specialty

Philosophy

Laboratory

AHP-PReST - Archives Henri Poincaré - Philosophie et Recherches sur les Sciences et les Technologies

Keywords

AI-based methods, explanation, understanding, computational science, artificial neural networks, scientific models

Offer details

Producing scientific explanations and reaching understanding constitute some of the ultimate aims of science (see Baumberger et al. 2017). Since AI-based methods are increasingly used across science, it is important to analyze their impact on reaching these goals. Interestingly, AI methods are often claimed to fail when it comes to understanding. However, AI methods and their scientific uses are still in their infancy, and both scientists and philosophers lack insight into them. Similar skeptical claims were made in the past about other computational methods. Further, case studies (e.g., Knüsel & Baumberger 2020) suggest that AI may yield, if not enhance, scientific understanding, and a burgeoning philosophical literature has developed on these issues (e.g., Meskhidze 2023), over and above the many scientific publications that tackle them while ignoring philosophical insights inherited from decades of discussion.

The proposed PhD project addresses the question of how AI-based research may lead to new explanations and boost scientific understanding. It will connect the philosophical literature on AI-rooted explanation and understanding, the broader literature on scientific explanation and understanding, recent cases, and relevant scientific discourses.

A sample of the questions that may be addressed includes:
- How can ANNs specifically aid the discovery of hitherto unknown explanations and the development of understanding? What specific obstacles arise in this context?
- What specific features should AI-based inquiries exemplify if they are to provide sound scientific explanations and understanding?
- What specific epistemic roles do AI methods and ANNs play in the development of explanatory understanding?
- Are AI-based methods associated with novel intuitions, claims, or even norms concerning what constitutes scientific explanation and understanding?
- Which standards of good scientific explanation and understanding are easily fulfilled by AI applications, and which standards are more difficult to fulfill?
- How do AI-based explanations impact the appreciation of explanatory values?
- Are AI-based methods related to specific accounts of explanation and understanding (e.g., statistical, counterfactual, mathematical, structural, etc.), beyond the literature about causal AI, and why? Or can they contribute to all forms of explanation and understanding?
- Is AI's power to generate understanding roughly equal across disciplines? If not, why?
- What do the features of AI-based methods (typically their opacity, their mathematical forms) imply for the understanding they may provide? Are these features common to other computational methods?
- What is the role of humans in the development of AI-based explanatory understanding? To what extent is AI-rooted understanding accessible and usable by cognitive creatures like humans?
- Can AI-based methods provide spurious explanations and understanding? If so, when and why? What risks should be given specific attention?
- Is the psychology and pragmatics of explanation crucial when analyzing the above issues?
- Is there something general to be said about the link between AI-based methods and explanatory understanding, or are AI tools neutral in this respect, everything depending on context?

The PhD project will answer a selection of these questions. We anticipate the following stages:
· Survey of cases in which AI seems to provide explanatory understanding
· Analysis of the relevant philosophical literature on explanation and understanding
· Critical analysis of relevant scientific discourses about these issues
· Cross-comparisons between AI-based cases, and between AI-based methods and other computational or analytical methods
· Combination of the findings from the case studies and the philosophical literature
· Conclusions about how AI methods may provide explanatory understanding



Candidate profile

- The candidate is expected to have a Master's degree in philosophy of science, philosophy, or epistemology, or to be about to complete such a degree. In any case, their curriculum should provide strong evidence of their ability to engage in the philosophical analysis of formal and scientific methods and, in particular, of the scientific uses of AI. Strong evidence of excellent writing skills is particularly expected.

- The candidate should have a sufficient understanding of AI methods, or a demonstrated ability to quickly acquire relevant and sufficient knowledge about these methods and their application in particular scientific fields. Typically, the candidate may have some training or relevant coursework in AI methods, an advanced scientific education (such as a Bachelor of Science in a relevant field), or academic evidence of an ability to develop philosophical arguments relying on information about AI methods.

- The candidate should have strong teamwork skills and be prepared to interact regularly at both research sites, in order to make the most of the joint philosophical research environment in Nancy (France) and Bern (Switzerland).

- The candidate is expected to have strong interpersonal and organizational skills in order to engage in exchanges with relevant scientists, with philosophers in the wider international philosophy-of-AI community (especially in Europe), and with philosophers who have relevant expertise but are not specialized in AI.


References

Beisbart, Claus, and Tim Räz. “Philosophy of Science at Sea: Clarifying the Interpretability of Machine Learning.” Philosophy Compass 17, no. 6 (2022): e12830.

Baumberger, Christoph, Claus Beisbart, and Georg Brun. “What Is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science.” In Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science, edited by Stephen R. Grimm, Christoph Baumberger, and Sabine Ammon. New York: Routledge, 2017.

Imbert, Cyrille. “Computer Simulations and Computational Models in Science.” In Springer Handbook of Model-Based Science, edited by Lorenzo Magnani and Tommaso Bertolotti, 735–81. Cham: Springer International Publishing, 2017. See esp. the section “Computer Simulations, Explanation and Understanding.”

Jebeile, Julie, Vincent Lam, and Tim Räz. “Understanding Climate Change with Statistical Downscaling and Machine Learning.” Synthese 199, no. 1 (2021): 1877–97.

Knüsel, Benedikt, and Christoph Baumberger. “Understanding Climate Phenomena with Data-Driven Models.” Studies in History and Philosophy of Science Part A 84 (2020): 46–56.

Meskhidze, Helen. “Can Machine Learning Provide Understanding? How Cosmologists Use Machine Learning to Understand Observations of the Universe.” Erkenntnis 88, no. 5 (2023): 1895–1909.

Sullivan, Emily. “Understanding from Machine Learning Models.” The British Journal for the Philosophy of Science 73, no. 1 (2022): 109–33.

Woodward, James. “Scientific Explanation.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2014 edition. http://plato.stanford.edu/archives/win2014/entries/scientific-explanation/.