# LinkedIn’s AI Data Collection: Essential Information and How to Opt Out

In a recent announcement, LinkedIn acknowledged that it uses member data to train its artificial intelligence (AI) models without obtaining prior consent. The disclosure has raised concerns about privacy and data handling on the platform, particularly because the opt-out LinkedIn offers applies only to future AI training, not to data that has already been collected. Here’s what you should know about LinkedIn’s AI data practices, the privacy risks involved, and how to manage your data.

## LinkedIn’s AI Training and User Data

Effective November 20, 2024, LinkedIn is updating its user agreement and privacy policy to better explain how it uses personal data for AI development on the platform. According to Blake Lawit, LinkedIn’s general counsel, the platform will now explicitly inform users that their personal information may be used to develop and train AI models. This data collection happens whenever users engage with LinkedIn’s AI features, such as writing posts, adjusting settings, or simply using the platform.

LinkedIn’s revised [privacy policy](https://www.linkedin.com/legal/privacy-policy) indicates that user data may be employed to “develop and train artificial intelligence (AI) models, create, supply, and customize our Services, and gain insights facilitated by AI, automated systems, and inferences, ensuring that our Services are more pertinent and beneficial to you and others.”

Nevertheless, LinkedIn’s AI models are not exclusively trained by the company itself. Some models come from external partners, including Microsoft, which provides AI models through its Azure OpenAI service. This situation raises concerns about how user data is shared and processed among various entities.

### Privacy Risks

A central issue with LinkedIn’s new AI data practices is the risk of personal data exposure. Per LinkedIn’s [FAQ](https://www.linkedin.com/help/linkedin/answer/a5538339?hcppcid=search), users who input personal data into generative AI features may see that same data appear in generated output in unexpected ways. This could result in sensitive information being unintentionally shared or used in ways users never intended.

LinkedIn asserts that it employs “privacy-enhancing technologies” to minimize personal data within the datasets used for AI model training. However, the company has not clarified whether data already gathered can be removed from AI training datasets, leaving users with limited options regarding past data collection.

### Opting Out of AI Training

Although LinkedIn has automatically enrolled users in sharing their data for AI training, it does provide an option to opt out of future data collection. However, this opt-out pertains only to upcoming AI training and does not apply to data that has been previously collected and utilized.

To opt out of AI training on LinkedIn:

1. Access your account settings.
2. Go to the “Data privacy” section.
3. Disable the option permitting the collection of “data for generative AI improvement.”

This setting is turned on by default for most users, so you must disable it manually if you do not want your data used for AI training.
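For readers comfortable with scripting, the sketch below shows one way to check and flip that setting with browser automation rather than clicking through the menus. It is a minimal, unofficial example using Playwright for Python: the settings URL, the assumption that the page exposes a single ARIA “switch” toggle, and the saved-session file name are all assumptions based on the steps above, not a documented LinkedIn API, so verify them against your own account before relying on it.

```python
# Minimal sketch: turn off LinkedIn's "Data for Generative AI Improvement" setting
# via browser automation. This is NOT an official LinkedIn API; the settings URL and
# the page structure assumed here may change at any time.
from playwright.sync_api import sync_playwright

# Assumed direct URL for the generative-AI data setting (verify in your own account).
SETTINGS_URL = "https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    # Reuse a previously saved, logged-in session so the script never handles passwords.
    context = browser.new_context(storage_state="linkedin_session.json")
    page = context.new_page()
    page.goto(SETTINGS_URL)

    # Locate the first element exposed with an ARIA role of "switch"; whether the
    # opt-out toggle actually uses this role is an assumption about LinkedIn's markup.
    toggle = page.get_by_role("switch").first
    toggle.wait_for(state="visible")

    if toggle.get_attribute("aria-checked") == "true":
        toggle.click()  # switch data sharing for generative AI training off
        print("Opted out of future data use for generative AI improvement.")
    else:
        print("The setting is already turned off.")

    browser.close()
```

The saved session file can be captured beforehand with Playwright’s `codegen --save-storage` option, which keeps login credentials out of the script itself.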

### Exceptions for European Users

Users in the European Economic Area (EEA) and Switzerland are protected by stricter privacy regulations, such as the General Data Protection Regulation (GDPR), which require platforms to obtain explicit consent before collecting personal data or to justify the collection as a legitimate interest. As a result, users in these regions were never automatically enrolled in AI training data collection, and they will not see the opt-out setting because there is nothing to opt out of.

Additionally, users can contest the use of their personal data for training AI models that do not generate LinkedIn content—like models used for personalization or content moderation—by filling out the [LinkedIn Data Processing Objection Form](https://www.linkedin.com/help/linkedin/ask/TS-DPRO).

## LinkedIn’s AI Principles and User Responsibilities

In light of rising concerns about AI, LinkedIn has previously shared its [AI principles](https://www.linkedin.com/blog/member/trust-and-safety/responsible-ai-principles), vowing to take “significant measures to mitigate the potential risks of AI.” At the same time, the platform holds users responsible for not spreading misleading or harmful AI-generated content.

LinkedIn’s updated [user agreement](https://www.linkedin.com/legal/preview/user-agreement) cautions that AI-generated content, such as profile suggestions or post drafts, may be “inaccurate, incomplete, delayed, misleading, or not suitable for your needs.” Users are encouraged to review any AI-generated content thoroughly before depending on it or sharing it on the platform.

## The Future of AI and Privacy

The increasing deployment of AI across platforms like LinkedIn raises important questions about privacy, consent, and data use. As AI becomes more deeply woven into everyday digital experiences, users are left to stay informed and actively manage their settings if they want a say in how their data is used.