Mathias Lécuyer

Assistant Professor, The University of British Columbia, Vancouver

I am an assistant professor at the University of British Columbia in Vancouver, where I am part of Systopia, UBC S&P, TrustML, and CAIDA. Prior to this, I was a postdoctoral researcher at Microsoft Research in New York. I completed my PhD at Columbia University. I work on trustworthy Artificial Intelligence (AI) systems, with a focus on auditing AI models and developing techniques to enforce provable guarantees in models and their data ecosystems.

Office: ICICS 317
Email: mathias.lecuyer@ubc.ca

Teaching

I teach the following classes:

Students

Postdoctoral researchers

Graduate students

Alumni

Research

I work on trustworthy Artificial Intelligence (AI) systems, with a focus on enforcing provable guarantees in models and their data ecosystems. My recent contributions tackle these challenges in four broad directions.

All Publications & Preprints

Also available on my resume or Google Scholar.

Privacy-preserving data systems

AI systems rely on large-scale data collection and aggregation, which exposes sensitive information. I develop new theory, algorithms, and system mechanisms for Differential Privacy (DP) to make end-to-end privacy-preserving systems more practical. Specifically, I have worked on new DP composition theory, system mechanisms for efficient privacy accounting and resource allocation, and new learning algorithms to train DP models. Cookie Monster serves as the DP blueprint for the Privacy-Preserving Attribution (advertising measurement) API under standardization with the W3C.
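For intuition only, here is a minimal sketch of the textbook Laplace mechanism with basic sequential privacy accounting; it is not the composition theory or the Cookie Monster system described above, and all names and parameters are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the standard
    calibration for a query with the given L1 sensitivity.
    """
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
data = np.array([1, 0, 1, 1, 0, 1])  # toy dataset of binary records

# Basic sequential composition: k queries at epsilon each spend k * epsilon
# of the overall budget (tighter composition theorems improve on this).
budget_eps = 1.0
per_query_eps = 0.1
max_queries = round(budget_eps / per_query_eps)

for i in range(max_queries):
    noisy_count = laplace_mechanism(data.sum(), sensitivity=1.0,
                                    epsilon=per_query_eps, rng=rng)
    print(f"query {i + 1}: noisy count = {noisy_count:.2f}, "
          f"budget spent = {(i + 1) * per_query_eps:.1f}")
```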

Explainability and transparency for AI systems

I develop techniques to better understand, explain, and audit AI systems under different threat models. For instance, PANORAMIA aims to quantify privacy leakage from AI models without access to, or control of, the training pipeline. I have also worked on measuring the impact of training data on the test-time behaviour of AI models. By framing this as a causal inference question, I aim to provide explanations that are faithful to model behaviour in manipulated settings.
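As a toy illustration of what a privacy audit measures, here is a generic membership-inference-style check; this is a stand-in for intuition, not the PANORAMIA procedure, and the confidence scores and threshold are hypothetical.

```python
import numpy as np

def membership_advantage(conf_members, conf_nonmembers, threshold):
    """Toy membership-inference audit: guess 'member' whenever the model's
    confidence exceeds a threshold, then measure how much better than
    chance the guesses are. A large advantage suggests privacy leakage.
    """
    tpr = (conf_members > threshold).mean()     # members correctly flagged
    fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
    return tpr - fpr                            # advantage in [-1, 1]

# Hypothetical confidence scores of a target model on known members / non-members.
rng = np.random.default_rng(0)
members = np.clip(rng.normal(0.9, 0.05, 1000), 0, 1)
nonmembers = np.clip(rng.normal(0.7, 0.15, 1000), 0, 1)
print(f"membership advantage: {membership_advantage(members, nonmembers, 0.85):.2f}")
```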

Adversarial robustness

AI models show a worrying susceptibility to adversarial attacks, in which an attacker applies imperceptible changes to the input to arbitrarily influence a target model. My work laid the foundations for Randomized Smoothing (RS), the only technique to provably ensure robustness that scales to the largest AI models. Recently, I have extended RS to input-adaptive multi-step defences. RS is still an active area of research (my foundational paper has more than a thousand citations), and serves as a building block for other AI safety tools, such as enforcing fairness guarantees or creating robust watermarks and unlearnable examples.
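For intuition, here is a minimal sketch of the randomized-smoothing prediction rule (classify many Gaussian-perturbed copies of the input and take a majority vote); the base classifier and parameters are toy stand-ins, and the certification step that turns the vote margin into a provable robustness radius is omitted.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Randomized-smoothing prediction: classify Gaussian-perturbed copies
    of the input and return the majority class along with the vote counts.

    The margin between the top two vote counts is what certification
    procedures convert into a provable robustness radius.
    """
    rng = rng or np.random.default_rng()
    noisy = x[None, :] + rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    votes = np.bincount([base_classifier(z) for z in noisy], minlength=2)
    return int(votes.argmax()), votes

# Toy base classifier: a linear decision rule standing in for a neural network.
w = np.array([1.0, -2.0, 0.5])
base = lambda z: int(z @ w > 0)

x = np.array([0.3, -0.1, 0.8])
label, votes = smoothed_predict(base, x)
print(f"smoothed prediction: {label}, votes: {votes}")
```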

Causal machine learning

AI models often inform downstream decisions, using repeated forecasts on different values of a controllable feature (e.g., the price of a good) to select the best outcome (e.g., the demand yielding the highest revenue). This decision procedure implicitly assumes that models will generalize to actions outside of the training distribution. This is often not the case, leading to poor performance. I develop and evaluate causal AI models that generalize better when forecasting the effect of actions outside of the training distribution. This project is already deployed at a company.
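To illustrate why this matters, here is a toy simulation (not the deployed system) in which an unobserved demand shock confounds price and demand, so a naive predictive regression learns a price coefficient with the wrong sign and its "what if we set a new price" forecasts fail.

```python
import numpy as np

# Toy price/demand example with confounding: demand truly falls with price
# (true causal coefficient -1.5), but an unobserved demand shock also drives
# historical prices up, so a naive regression learns the wrong relationship.
rng = np.random.default_rng(0)
n = 10_000
demand_shock = rng.normal(0, 1, n)                         # unobserved confounder
price = 10 + 2.0 * demand_shock + rng.normal(0, 0.5, n)
demand = 100 - 1.5 * price + 5.0 * demand_shock + rng.normal(0, 1, n)

# Naive predictive fit of demand on price (ordinary least squares).
X = np.column_stack([np.ones(n), price])
coef = np.linalg.lstsq(X, demand, rcond=None)[0]
print(f"naive price coefficient: {coef[1]:.2f} (true causal effect: -1.50)")
# The estimated coefficient comes out positive, the opposite of the true causal
# effect, so forecasts of counterfactual prices outside the historical pattern fail.
```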

Recent Invited Talks

Service

For outreach activities, see my resume.