Stata (32) D678
olawale [at] mit [dot] edu
Incoming AI Institute Postdoctoral Fellow
Schmidt Sciences
Postdoctoral Associate
Massachusetts Institute of Technology
Postdoctoral Scholar
Eric and Wendy Schmidt Center
The Broad Institute of MIT and Harvard
Please see CV for more info.
I am broadly interested in reliable and trustworthy AI. I primarily study questions related to the robustness of artificial intelligence (AI) in real-world decision-making. I develop methods that enable AI systems to generalize and adapt to new environments that differ from their training data (distribution shifts). I also work on the principles and practices of reliable AI evaluation, including the external validity of key benchmarks in deep learning, the reliability of benchmarks for out-of-distribution generalization, and frameworks for valid evaluation of AI capabilities. Application areas of my work include biological imaging, algorithmic fairness, healthcare, and AI policy.
See Publications for more. * denotes equal contribution. α-β denotes alphabetical order.
Domain Generalization + Causality
Causally Inspired Regularization Enables Domain General Representations
Olawale Salaudeen, Oluwasanmi Koyejo
In AISTATS 2024
[arXiv] [code] [webpage]
Are Domain Generalization Benchmarks with Accuracy on the Line Misspecified?
Olawale Salaudeen, Nicole Chiou, Shiny Weng, Oluwasanmi Koyejo
In Review
A version (On Domain Generalization Datasets as Proxy Benchmarks for Causal Representation Learning) was presented as an Oral Presentation at the NeurIPS 2024 Causal Representation Learning Workshop
[arXiv] [code] [webpage] [news]
Domain Adaptation + Causality
Adapting to Latent Subgroup Shifts via Concepts and Proxies
α–β. Ibrahim Alabdulmohsin*, Nicole Chiou*, Alexander D’Amour*, Arthur Gretton*, Sanmi Koyejo*, Matt J. Kusner*, Stephen R. Pfohl*, Olawale Salaudeen*, Jessica Schrouff*, Katherine Tsai*.
In AISTATS 2023
[arXiv] [code] [webpage]
Proxy Methods for Domain Generalization
Katherine Tsai, Stephen R. Pfohl, Olawale Salaudeen, Nicole Chiou, Matt J. Kusner, Alexander D’Amour, Sanmi Koyejo, Arthur Gretton
In AISTATS 2024
[arXiv] [code]
AI Evaluation
ImageNot: A contrast with ImageNet preserves model rankings
Olawale Salaudeen, Moritz Hardt
In Review
[arXiv] [code] [webpage]
Measurement to Meaning: A Validity-Centered Framework for AI Evaluation
Olawale Salaudeen, Anka Reuel, Ahmed Ahmed, Suhana Bedi, Zachary Robertson, Sudharsan Sundar, Ben Domingue, Angelina Wang, Sanmi Koyejo
Working Paper
[arXiv] [webpage]
Toward an Evaluation Science for Generative AI Systems
Laura Weidinger*, Inioluwa Deborah Raji*, Hanna Wallach, Margaret Mitchell, Angelina Wang, Olawale Salaudeen, Rishi Bommasani, Deep Ganguli, Sanmi Koyejo, William Isaac
In The Bridge 2025, National Academy of Engineering
[arXiv]
Summer 2025. [service]. I am serving as a program chair for the Machine Learning for Health (ML4H) conference in San Diego, CA, in December. Please reach out if you are interested in sponsoring this great conference!
Summer 2025. [honors/appointment]. I will spend the next year at Schmidt Sciences in NYC as an AI Institute Fellow and Visiting Scientist starting this summer! Please reach out if you are in NYC!
Spring 2025. Our [preprint] on AI evaluation and validity – Measurement to Meaning: A Validity-Centered Framework for AI Evaluation – is now available on arXiv!
Spring 2025. [appointment]. I joined the Eric and Wendy Schmidt Center, led by Prof. Caroline Uhler at the Broad Institute of MIT and Harvard, as a postdoctoral scholar.
Spring 2025. Our [preprint] on domain generalization benchmarks – Are Domain Generalization Benchmarks with Accuracy on the Line Misspecified? – is now available on arXiv!
Spring 2025. Our [paper] Toward an Evaluation Science for Generative AI Systems appeared in the latest issue of the National Academy of Engineering's The Bridge on "AI Promises & Risks."
Spring 2025. I gave a [talk] on addressing distribution shifts with varying levels of deployment distribution information at the MIT LIDS Postdoc NEXUS meeting!
Winter 2025. [service]. I am co-organizing the new AI for Society seminar at MIT.
Winter 2025. Our [paper] titled "What’s in a Query: Examining Distribution-based Amortized Fair Ranking" will appear at the International World Wide Web Conference (WWW), 2025.
Winter 2025. [honors/talk]. I was selected as an NYU Tandon Faculty First-Look Fellow; I look forward to visiting and giving a talk on our work on distribution shifts at NYU in February; [news]!
Winter 2025. [service]. I am co-organizing the 30th Annual Sanjoy K. Mitter LIDS Student Conference at MIT.
Winter 2025. [honors/talk]. I was selected as a Georgia Tech FOCUS Fellow; I look forward to visiting and giving a talk on our work on distribution shifts at Georgia Tech in January!
Fall 2024. Our [paper] titled “On Domain Generalization Datasets as Proxy Benchmarks for Causal Representation Learning” will appear at the NeurIPS 2024 Workshop on Causal Representation Learning as an Oral Presentation.
Fall 2024. [appointment]. I joined the Healthy ML Lab, led by Prof. Marzyeh Ghassemi, at MIT as a postdoctoral associate!
Summer 2024. I gave a talk on our work on distribution shift at Texas State's Computer Science seminar.
Summer 2024. I gave a [talk] on our work on distribution shift at UT Austin's Institute for Foundations of Machine Learning (IFML).
Summer 2024. I successfully defended my PhD dissertation titled “Towards Externally Valid Machine Learning: A Spurious Correlations Perspective”!
Spring 2024. I gave a [talk] on AI for critical systems at the MobiliT.AI forum (May 28-29)!
Spring 2024. I gave a [talk] at UIUC Machine Learning Seminar on our work on the external validity of ImageNet; artifacts here!
Spring 2024. Our [preprint] demonstrating the external validity of ImageNet model/architecture rankings – ImageNot: A contrast with ImageNet preserves model rankings – is now available on arXiv!
Winter 2024. Two [papers] on machine learning under distribution shift will appear at AISTATS 2024 (see Publications)!
Winter 2024. I have returned to Stanford from MPI!
Fall 2023. I will join the Social Foundations of Computation department at the Max Planck Institute for Intelligent Systems in Tübingen, Germany this fall as a Research Intern working with Dr. Moritz Hardt!
Spring 2023. I passed my PhD Preliminary Exam!
Spring 2023. I will join Cruise LLC's Autonomous Vehicles Behaviors team in San Francisco, CA this summer as a Machine Learning Intern!
Fall 2022. I have moved to Stanford University as a "student of new faculty (SNF)" with Professor Sanmi Koyejo!
Summer 2022. I was honored to be selected as a top reviewer (top 10%) for ICML 2022!
Summer 2022. I will join Google Brain (now Google DeepMind) in Cambridge, MA this summer as a Research Intern!
Fall 2021. Our [paper] titled "Exploiting Causal Chains for Domain Generalization" was accepted at the 2021 NeurIPS Workshop on Distribution Shift!
Fall 2021. I was selected as a Miniature Brain Machinery (MBM) NSF Research Trainee!
Summer 2021. I was selected to receive an Illinois GEM Associate Fellowship!
Spring 2021. I gave a [talk] on leveraging causal discovery for fMRI denoising at the Beckman Institute Graduate Student Seminar!
Spring 2021. I passed my Ph.D. qualifying exam!
Spring 2020. I was selected to receive a 2020 Beckman Institute Graduate Fellowship!
I am happy to mentor students with overlapping research interests. Particularly for undergrads at MIT, programs like UROP are a great mechanism for mentorship.
More generally, I am happy to give advice and feedback on applying to and navigating both undergraduate and graduate programs in computer science and related disciplines – especially for those to whom this type of feedback and guidance would otherwise be unavailable.