Julius Adebayo


I am a co-founder of Guide Labs, where we are building interpretable foundation models and AI systems that are easy to audit, steer, and understand.

I received my PhD in Computer Science from MIT, where I was advised by Hal Abelson and supported by Open Philanthropy. In 2022-2023, I was a postdoctoral researcher at Prescient Design, where I worked with Prof. Kyunghyun Cho and Dr. Stephen Ra. Before that, I was a Google Brain resident and a research engineer at Fast Forward Labs.

Previously, I received a master's degree in Computer Science and Technology Policy from MIT. I did my undergrad in Mechanical Engineering at BYU.

Selected papers
(* equal contribution)


Concept Bottleneck Language Models for Protein Design
Aya Abdelsalam Ismail, Tuomas Oikarinen, Amy Wang, Julius Adebayo, Samuel Stanton, Taylor Joren, Joseph Kleinhenz, Allen Goodman, Hector Corrada Bravo, Kyunghyun Cho, Nathan C. Frey
Under Review

Concept Bottleneck Generative Models
Aya Abdelsalam Ismail*, Julius Adebayo*, Hector Corrada Bravo, Stephen Ra, Kyunghyun Cho
ICLR 2024

Error Discovery by Clustering Influence Embeddings
Fulton Wang*, Julius Adebayo*, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan
NeurIPS 2023
Code

Quantifying and Mitigating the Impact of Label Errors on Model Disparity Metrics
Julius Adebayo, Melissa Hall, Bowen Yu, Bobbie Chern
ICLR 2023

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Signals
Julius Adebayo, Michael Muelly, Hal Abelson, Been Kim
ICLR 2022

Assessing the Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
Nishanth Arun, Nathan Gaw, Praveer Singh, Ken Chang, Mehak Aggarwal, Bryan Chen, Katharina Hoebel, Sharut Gupta, Mishka Gidwani, Julius Adebayo, Matthew Li, Jayashree Kalpathy-Cramer
Radiology: Artificial Intelligence, 3(6), 2021

Debugging Tests for Model Explanations
Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim
NeurIPS 2020

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Michael Muelly, Moritz Hardt, Been Kim
NeurIPS 2018

The (Un)reliability of Saliency Methods
Pieter-Jan Kindermans*, Sara Hooker*, Julius Adebayo, Maximilian Alber, Kristof Schütt, Sven Dähne, Dumitru Erhan, Been Kim
Explainable AI: Interpreting, Explaining, and Visualizing Deep Learning, 2018

Credit Scoring in the Era of Big Data
Mikella Hurley and Julius Adebayo
Yale Journal of Law and Technology, 2017

Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-box Models
Julius Adebayo and Lalana Kagal
Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) Workshop, 2016