**Apple’s Workshop on Privacy-Enhancing Machine Learning: Major Takeaways**
A few months back, Apple hosted the Workshop on Privacy-Enhancing Machine Learning, a two-day hybrid event held on March 20–21, 2025, focused on privacy, security, and the ethical development of machine learning. Apple has now published the presentations from this workshop, revealing notable progress in the area. Below are three major takeaways from the event.
### Brief Note on Differential Privacy
A recurring theme across several of the presented papers is differential privacy, which has become Apple's preferred approach to safeguarding user information. The technique injects statistical noise into user data before it leaves the device, making it difficult to link any record back to a specific individual. Apple notes that this approach still permits valuable aggregate insights while preserving individual confidentiality.
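To make the idea concrete, here is a minimal sketch of on-device noising with the classic Laplace mechanism, a standard building block of differential privacy. This is an illustrative simplification, not Apple's actual implementation; the function names `laplace_noise` and `privatize_count` are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_count(true_value: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Add Laplace noise calibrated to sensitivity/epsilon before the value
    ever leaves the device. Each individual report is unreliable, but the
    noise averages out across many users, so aggregate statistics survive."""
    return true_value + laplace_noise(sensitivity / epsilon)
```

A smaller `epsilon` means more noise per report and stronger privacy; the server recovers accurate totals only because the zero-mean noise cancels across a large population.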
### Three Research Studies Highlighted from the Event
#### 1. Local Pan-Privacy for Federated Analytics
Presented by Apple's Guy Rothblum, this research builds on a 2010 study of pan-privacy, which aims to protect data even if the analytics system's internal state is breached. The new work shifts the focus to personal devices, showing that when a device can be inspected multiple times, collecting usage statistics without jeopardizing privacy becomes substantially harder. The researchers propose cryptographic techniques that let organizations gather meaningful statistics while keeping individual behavior concealed.
#### 2. Scalable Private Search with Wally
In this session, Apple's Rehan Rishi and Haris Mughees focus on preserving user privacy while keeping large-scale encrypted search affordable. Their system, dubbed Wally, applies differential privacy by sending authentic queries mixed in with fabricated (dummy) ones. As more users query the server concurrently, the number of dummy queries each participant must send shrinks, allowing Apple to protect privacy while efficiently serving millions of users.
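The scaling behavior described above can be sketched as follows. This is a hypothetical illustration of the general principle (a roughly fixed aggregate dummy budget for a given privacy target, split across users), not Wally's actual noise calibration; the function name and the bound used are assumptions.

```python
import math

def dummy_queries_per_user(num_users: int,
                           epsilon: float = 1.0,
                           delta: float = 1e-6) -> int:
    """Illustrative sketch: the total number of dummy queries needed to hide
    access patterns for a given (epsilon, delta) privacy target is roughly
    constant, so each user's share shrinks as more users participate."""
    total_dummies = math.ceil((2.0 / epsilon) * math.log(1.0 / delta))
    # Every user still sends at least one dummy query.
    return max(1, math.ceil(total_dummies / num_users))
```

With this toy bound, a lone user shoulders the entire dummy budget, while at the scale of millions of concurrent users the per-user overhead drops to a single dummy query.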
#### 3. Differentially Private Synthetic Data via Foundation Model APIs
Presented by Sivakanth Gopi of Microsoft Research, this work explores generating high-quality synthetic data that preserves the utility of real user data without compromising privacy. The technique, termed Private Evolution (PE), uses only API access to foundation models to generate synthetic counterparts of private datasets, matching or surpassing traditional model fine-tuning while substantially reducing privacy risk.
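A toy sketch of the PE-style loop on one-dimensional data is shown below. This is a heavily simplified illustration under stated assumptions, not the paper's algorithm: `api_generate` and `api_vary` stand in for foundation-model API calls, distance is plain absolute difference rather than an embedding metric, and the noise calibration is illustrative only.

```python
import random

def private_evolution(private_data, api_generate, api_vary,
                      iterations=5, population=64, noise_std=1.0):
    """Toy Private-Evolution-style loop: the only computation touching the
    private data is a noised nearest-neighbor vote histogram, so candidates
    evolve toward the private distribution via API calls alone."""
    synthetic = [api_generate() for _ in range(population)]
    for _ in range(iterations):
        # Each private record votes for its closest synthetic candidate.
        votes = [0.0] * population
        for x in private_data:
            nearest = min(range(population),
                          key=lambda i: abs(synthetic[i] - x))
            votes[nearest] += 1.0
        # Gaussian noise on the histogram limits what any one record reveals.
        weights = [max(v + random.gauss(0.0, noise_std), 0.0) for v in votes]
        if sum(weights) == 0:
            weights = [1.0] * population
        # Resample candidates in proportion to their noisy vote counts...
        synthetic = random.choices(synthetic, weights=weights, k=population)
        # ...and ask the model API for variations of the survivors.
        synthetic = [api_vary(s) for s in synthetic]
    return synthetic
```

The key design point carried over from PE is that the foundation model is a black box: the private data never reaches it, only the noised selection signal does.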
### Complete Studies List
In addition to the featured videos, Apple released links to all 25 studies showcased during the workshop, which encompass contributions from researchers at Apple, Microsoft, Google, and various academic institutions, including MIT and Carnegie Mellon.
This workshop demonstrates Apple’s dedication to promoting privacy-preserving technologies within machine learning, ensuring that user data remains protected while still allowing for valuable insights and innovations in the sector.