
I am an assistant professor at the University of British Columbia in Vancouver, where I am part of Systopia, UBC S&P, TrustML, and CAIDA. Prior to this, I was a postdoctoral researcher at Microsoft Research in NY. I completed my PhD at Columbia University. I work on trustworthy Artificial Intelligence (AI) systems, with a focus on auditing AI models and developing techniques to enforce provable guarantees in models and their data ecosystems.
Office: ICICS 317
Email: mathias.lecuyer@ubc.ca
Teaching
I teach the following classes:
- CPSC 532Y: Causal Machine Learning (Fall 2024, Fall 2023, Fall 2022)
- CPSC 330: Applied Machine Learning (Spring 2024, Spring 2023)
- CPSC 538L: Differential Privacy - Theory and Practice (Spring 2022)
Students
Postdoctoral researchers
- Bingshan Hu (with Danica Sutherland)
Graduate students
- Qiaoyue Tang (PhD)
- Saiyue Lyu (PhD)
- Frederick Shpilevskiy (PhD)
Alumni
- Mishaal Kazmi (MSc, with Ivan Beschastnikh) → PhD student at Northeastern University
- Shadab Shaikh (MSc)
- Alain Zhiyanov (BSc research)
- Jessica Bator (BSc research)
- Amir Sabzi (MSc, with Aastha Mehta) → PhD student at Princeton
- Helen Chen (BSc research) → Amazon
- Ryan Shar (BSc Honors thesis) → MSc CMU
- Mauricio Soroco (BSc research) → PhD student at SFU
- Joel Hempel (BSc research)
- Eric Xiong (BSc Honors thesis) → MSc (research) University of Alberta
- Frederick Shpilevskiy (BSc Honors thesis, with Margo Seltzer) → PhD student at UBC
- Shiqi He (MSc, with Ivan Beschastnikh) → PhD student at the University of Michigan
Research
I work on trustworthy Artificial Intelligence (AI) systems, with a focus on enforcing provable guarantees in models and their data ecosystems. My recent contributions tackle this challenge in four broad directions.
All Publications & Preprints
Also available on my resume or Google Scholar.
- Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret. ICML 2025. [arxiv]
- FedFetch: Faster Federated Learning with Adaptive Downstream Prefetching. INFOCOM 2025. [arxiv]
- DPack: Efficiency-Oriented Privacy Budget Scheduling. EuroSys 2025. [arxiv]
- Training and Evaluating Causal Forecasting Models for Time-Series. Preprint 2024. [arxiv]
- Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences. Spotlight, NeurIPS 2024. [NeurIPS][arXiv]
- PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining. NeurIPS 2024. [NeurIPS][arXiv]
- Cookie Monster: Efficient On-Device Budgeting for Differentially-Private Ad-Measurement Systems. Distinguished Artifact Honorable Mention, SOSP 2024. [arxiv]
- PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining. TPDP workshop 2024. [arXiv full paper]
- Adaptive Randomized Smoothing for Certified Multi-Step Defence. MAT workshop @ CVPR 2024. [PDF][arXiv full paper]
- NetShaper: A Differentially Private Network Side-Channel Mitigation System. USENIX Security 2024. [arXiv]
- DP-AdamBC: Your DP-Adam Is Actually DP-SGD (Unless You Apply Bias Correction). Oral, AAAI 2024. [arXiv]
- Flowering Onset Detection: Traditional Learning vs. Deep Learning Performance in a Sparse Label Context. CCAI workshop at NeurIPS 2023. [PDF][site]
- Turbo: Effective Caching in Differentially-Private Databases. SOSP 2023. [PDF, appendix][arxiv (full version)]
- DP-Adam: Correcting DP Bias in Adam's Second Moment Estimation. RTML workshop at ICLR 2023. [arXiv]
- GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning. MLSys 2023. [arXiv][code]
- Fast Optimization of Weighted Sparse Decision Trees for use in Optimal Treatment Regimes and Optimal Policy Design. AIMLAI Workshop 2022. [arXiv full version]
- Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments. ICML 2022. [arXiv][code][demo]
- Sayer: Using Implicit Feedback to Optimize System Policies. SoCC 2021. [PDF]
- Privacy Budget Scheduling. OSDI 2021. [PDF][Long Version][Code]
- Practical Privacy Filters and Odometers with Rényi Differential Privacy and Applications to Differentially Private Deep Learning. Preprint 2021. [arXiv]
- Certified Robustness to Adversarial Examples with Differential Privacy. S&P 2019. [PDF][Code]
- Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform. SOSP 2019. [PDF]
- Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform. OSR 2019. [PDF]
- Enhancing Selectivity in Big Data. Invited paper, S&P Magazine, 2018. [PDF]
- Harvesting Randomness to Optimize Distributed Systems. HotNets 2017. [PDF]
- Pyramid: Enhancing selectivity in big data protection with count featurization. S&P 2017. [PDF][Long Version][Website]
- Improving the transparency of the sharing economy. WWW 2017. [PDF][Data][Blog post]
- Sunlight: Fine-grained Targeting Detection at Scale with Statistical Confidence. CCS 2015. [PDF][Website][The Economist, Slate]
- Synapse: New Data Integration Abstractions for Agile Web Application Development. EuroSys 2015. [PDF][Website]
- XRay: Increasing the Web's Transparency with Differential Correlation. USENIX Security 2014. [PDF][Website][NYT Bits, MIT Technology Review]
Privacy-preserving data systems
AI systems rely on large-scale data collection and aggregation, which exposes sensitive information. I develop new theory, algorithms, and system mechanisms for Differential Privacy (DP) to make end-to-end privacy-preserving systems more practical. Specifically, I have worked on new DP composition theory, system mechanisms for efficient privacy accounting and resource allocation, and new learning algorithms to train DP models. Cookie Monster serves as the DP blueprint for the Privacy-Preserving Attribution (advertising measurement) API under standardization with the W3C. A small illustrative sketch of the underlying accounting follows the list below.
- Cookie Monster: Efficient On-Device Budgeting for Differentially-Private Ad-Measurement Systems. Distinguished Artifact Honorable Mention, SOSP 2024. [paper][code]
- DP-AdamBC: Your DP-Adam Is Actually DP-SGD (Unless You Apply Bias Correction). Oral, AAAI 2024. [paper][code]
- Turbo: Effective Caching in Differentially-Private Databases. SOSP 2023. [paper]
- Privacy Budget Scheduling. OSDI 2021. [paper][code]
- Practical Privacy Filters and Odometers with Rényi Differential Privacy and Applications to Differentially Private Deep Learning. Preprint 2021. [paper]
- Privacy Accounting and Quality Control in the Sage Differentially Private ML Platform. SOSP 2019. [paper]
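For a flavour of the accounting these systems build on, here is a minimal, self-contained sketch of Rényi DP composition for the Gaussian mechanism. It is illustrative only (not taken from any of the papers above), and all function names and parameters are assumptions.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, sigma, rng=np.random.default_rng()):
    """Release a noisy value; the noise std is sensitivity * sigma (sigma = noise multiplier)."""
    return value + rng.normal(0.0, sensitivity * sigma)

def rdp_gaussian(sigma, steps, alpha):
    """Renyi DP of `steps` adaptively composed Gaussian releases (sensitivity 1) at order alpha."""
    return steps * alpha / (2.0 * sigma ** 2)

def rdp_to_dp(rdp_eps, alpha, delta):
    """Standard conversion from (alpha, rdp_eps)-RDP to (eps, delta)-DP."""
    return rdp_eps + np.log(1.0 / delta) / (alpha - 1.0)

# Account for 100 noisy releases, searching over RDP orders for the tightest bound.
sigma, steps, delta = 20.0, 100, 1e-5
eps = min(rdp_to_dp(rdp_gaussian(sigma, steps, a), a, delta) for a in range(2, 128))
print(f"Total privacy cost after {steps} releases: eps ~= {eps:.2f} at delta = {delta}")
```

Systems such as the ones above layer caching, scheduling, and per-device budgeting on top of this kind of accounting.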
Explainability and transparency for AI systems
I develop techniques to better understand, explain, and audit AI systems under different threat models. For instance, PANORAMIA aims to quantify privacy leakage from AI models without access to, or control of, the training pipeline. I have also worked on measuring the impact of training data on the test-time behaviour of AI models. By framing this as a causal inference problem, I aim to provide explanations that are faithful to model behaviour in manipulated settings. A small illustrative sketch of the hypothesis-testing view behind privacy audits follows the list below.
- PANORAMIA: Privacy Auditing of Machine Learning Models without Retraining. NeurIPS 2024. [paper][code]
- Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments. ICML 2022. [paper][code, images]
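To give a flavour of the hypothesis-testing view that privacy audits rely on, the sketch below converts a membership inference attack's true and false positive rates into an empirical lower bound on the DP parameter ε. It is illustrative only, not the PANORAMIA procedure; the names, threshold, and toy data are assumptions, and a rigorous audit would also use confidence intervals on the rates.

```python
import numpy as np

def empirical_epsilon_lower_bound(attack_scores, membership, threshold, delta=0.0):
    """
    Any (eps, delta)-DP mechanism satisfies TPR <= exp(eps) * FPR + delta for every
    membership test, so observed rates imply eps >= log((TPR - delta) / FPR).
    """
    predictions = attack_scores >= threshold
    members, non_members = membership == 1, membership == 0
    tpr = predictions[members].mean()      # attack recall on training members
    fpr = predictions[non_members].mean()  # false alarms on non-members
    if fpr == 0 or tpr <= delta:
        return 0.0
    return float(np.log((tpr - delta) / fpr))

# Toy usage with synthetic attack scores for 500 members and 500 non-members.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])
print(empirical_epsilon_lower_bound(scores, labels, threshold=2.0))
```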
Adversarial robustness
AI models show a worrying susceptibility to adversarial attacks, in which an attacker applies imperceptible changes to the input to arbitrarily influence a target model. My work laid the foundations for Randomized Smoothing (RS), the only provable robustness technique that scales to the largest AI models. Recently, I have extended RS to input-adaptive multi-step defences. RS is still an active area of research (my foundational paper has more than a thousand citations), and serves as a building block for other AI safety tools, such as enforcing fairness guarantees or creating robust watermarks and unlearnable examples. A small illustrative sketch of smoothing-based certification follows the list below.
- Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences. Spotlight, NeurIPS 2024. [paper][code]
- Certified Robustness to Adversarial Examples with Differential Privacy. S&P 2019. [paper][code]
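As a rough illustration of randomized smoothing, the sketch below classifies Gaussian-noised copies of an input, takes a majority vote, and returns an L2 certified radius using the Normal-CDF certificate popularized by follow-up work. It is illustrative only, not the exact DP-based bound of the S&P 2019 paper, and all names and constants are assumptions.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict_and_certify(f, x, sigma, n_samples=1000, alpha=0.001,
                                 rng=np.random.default_rng()):
    """Monte Carlo randomized smoothing: majority vote over noisy copies of x,
    plus a certified L2 radius around x for the winning class."""
    noisy = x[None, :] + rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    votes = np.bincount([f(z) for z in noisy])
    top_class = int(np.argmax(votes))
    # Crude lower-confidence bound on the top-class probability (a real implementation
    # would use an exact binomial bound such as Clopper-Pearson).
    p_hat = votes[top_class] / n_samples
    p_lower = p_hat - norm.ppf(1 - alpha) * np.sqrt(p_hat * (1 - p_hat) / n_samples)
    if p_lower <= 0.5:
        return top_class, 0.0                            # abstain from certifying
    return top_class, float(sigma * norm.ppf(p_lower))   # certified L2 radius

# Toy base classifier: the sign of the first coordinate.
f = lambda z: int(z[0] > 0)
print(smoothed_predict_and_certify(f, np.array([0.8, -0.2]), sigma=0.5))
```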
Causal machine learning
AI models often inform downstream decisions, using repeated forecasts on different values of a controllable feature (e.g., the price of a good) to select the best outcome (e.g., the demand yielding the highest revenue). This decision procedure implicitly assumes that models will generalize to actions outside of the training distribution. This is often not the case, leading to poor performance. I develop and evaluate causal AI models that generalize better when forecasting the effect of actions outside of the training distribution. This project is already deployed at a company. A toy simulation of the underlying problem follows the list below.
- Training and Evaluating Causal Forecasting Models for Time-Series. Preprint 2024. [preprint]
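As a toy illustration of the underlying problem (made up here, unrelated to the deployed system), the simulation below regresses demand on observed prices that themselves react to demand shocks, and compares the result with a fit on randomized (interventional) prices. The confounding flips the sign of the estimated price effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
demand_shock = rng.normal(0.0, 1.0, n)

# Observational data: the seller raises prices when demand is high, so price
# and demand share a confounder even though the true price effect is -2.
price = 10.0 + 0.5 * demand_shock + rng.normal(0.0, 0.2, n)
demand = 100.0 - 2.0 * price + 4.0 * demand_shock + rng.normal(0.0, 1.0, n)
naive_slope = np.polyfit(price, demand, 1)[0]

# Interventional data: prices set at random, independently of the demand shock.
random_price = 10.0 + rng.normal(0.0, 0.5, n)
demand_exp = 100.0 - 2.0 * random_price + 4.0 * demand_shock + rng.normal(0.0, 1.0, n)
causal_slope = np.polyfit(random_price, demand_exp, 1)[0]

print(f"naive slope:  {naive_slope:+.2f}  (biased upward by the demand shock)")
print(f"causal slope: {causal_slope:+.2f}  (close to the true effect of -2)")
```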
Recent Invited Talks
- Adversarial Robustness and Privacy Measurements using Hypothesis-tests. PrivSec Lab, LATECE, UQAM, Montréal (May 2025).
- Adversarial Robustness and Privacy Measurements using Hypothesis-tests. International Laboratory and Learning Systems, Montréal (May 2025).
- Adversarial Robustness and Privacy Measurements using Hypothesis-tests. CrySP, University of Waterloo (Apr 2025).
- Adversarial Robustness and Privacy Measurements using Hypothesis-tests. CleverHans Lab for security and privacy of machine learning, Vector Institute and UoT (Apr 2025).
- Adversarial Robustness and Privacy Measurements using Hypothesis-tests. Joint SRI and Vector AI Safety Reading Group, Toronto (Apr 2025).
- Adaptive Randomized Smoothing: Certified Adversarial Robustness for Multi-Step Defences. Mathematics of Machine Learning, Canadian Mathematical Society Winter meeting (Dec 2024).
- Security and Privacy in the age of Foundation Models. MSRA Vancouver Talk Series (Aug 2024).
- PANORAMIA: Efficient Privacy Auditing of Machine Learning Models without Retraining. BIRS workshop on Statistical Aspects of Trustworthy Machine Learning (Feb 2024). [video].
- [Tutorial] Privacy as hypothesis testing: linking Differential Privacy, membership attacks, and privacy audits. Canadian AI, Responsible AI Track, Montréal (June 2023).
- DP-AdamBC: your DP-Adam is actually DP-SGD (unless you apply Bias Correction). Bridge the gap: Differential Privacy and Statistical Analysis, Amii Workshop, Edmonton (May 2023).
Service
- Program Committees: Security & Privacy (Oakland) (2026 Associate Chair, 2023, 2022), SaTML (2025 Distinguished Reviewer Award, 2024), ICML (2025, 2022, 2020), NeurIPS (2024, 2021), OSDI (2023), USENIX Security (2021, 2020), MLSys (2021), EuroSys (2020), SoCC (2019), Systems for ML workshop (2019, 2018).
- Reviewer: Journal of Machine Learning Research (2019), ACM Transactions on Internet Technology (2016).