LinkedIn has been using user data to train AI models without explicitly informing users or obtaining their consent.
An important example of why and how privacy settings are such an important part of the privacy landscape: LinkedIn has turned “on” its generative ai setting by default. https://t.co/OC6hRHt0qz
— Dr. Chelsea L. Horne (@CL_Horne) September 18, 2024
The social network recently updated its privacy policy, revealing that it uses personal data to improve and develop products, train AI models, provide personalized services, and gain insights. Users in the U.S. were subject to this data scraping, while those in the EU, EEA, or Switzerland were not affected due to stricter data privacy rules in those regions.
LinkedIn confirmed that the AI models trained using this data include those for writing suggestions and post recommendations.
Please enjoy two Glaswegians sitting in the sun, nattering about LinkedIn opting 🇬🇧 users into their generative AI model without their consent. Did you know they did that? They did that. https://t.co/AjX4FeeG0B
— Heather Burns (@WebDevLaw) September 18, 2024
To mitigate privacy concerns, LinkedIn stated that it employs “privacy enhancing techniques” like redacting and removing personal information from the datasets used for AI training. Users wishing to opt out of the data scraping can do so from the “Data Privacy” section of LinkedIn’s desktop settings.
However, exercising this option does not affect any data already used for training. LinkedIn’s parent company, Microsoft, may also use the collected data to train its own AI models. The data encompasses user interactions, posts, language preferences, and feedback shared with LinkedIn.
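LinkedIn has not disclosed how its “privacy enhancing techniques” actually work. Purely as a rough illustration, redaction of personal information from free text is often implemented along the lines of the minimal sketch below; the patterns and the redact_pii function are hypothetical stand-ins, not LinkedIn’s pipeline, and production systems typically rely on trained named-entity recognition models rather than regular expressions.

```python
import re

# Hypothetical, deliberately simplified patterns. Real redaction pipelines
# cover far more PII categories (names, addresses, account IDs, ...) and
# usually use trained NER models instead of regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
    "URL": re.compile(r"https?://\S+"),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

post = "Reach me at jane.doe@example.com or 555-123-4567."
print(redact_pii(post))  # Reach me at [EMAIL] or [PHONE].
```

Redaction of this kind is lossy but not foolproof: writing style, job history, and other contextual details in posts can remain identifying, which is part of why privacy advocates argue for opt-in consent rather than after-the-fact scrubbing.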
"If you’re on LinkedIn, then you should know that the social network has, without asking, opted accounts into training generative AI models." https://t.co/f1oDI2ctEP
— Matt Dagley (@mattdagley) September 18, 2024
LinkedIn’s AI data privacy concerns
Privacy activists have voiced concerns over LinkedIn’s decision to opt users into AI training without explicit consent. Mariano delli Santi, legal and policy officer at the UK-based privacy advocacy nonprofit Open Rights Group, criticized the opt-out model as “wholly inadequate to protect our rights.” He stressed that opt-in consent should be legally mandated.
The group has called for an investigation by the U.K.’s Information Commissioner’s Office (ICO) into the default practice, at LinkedIn and other social networks, of using user data for AI training, arguing that mandatory opt-in consent is needed to protect user rights effectively. Ireland’s Data Protection Commission (DPC), LinkedIn’s lead supervisory authority under the GDPR, said that LinkedIn would issue clarifications to its global privacy policy and introduce an opt-out setting for users who do not want their data used for AI training.
This opt-out is not relevant to EU/EEA users, as LinkedIn is not using their data to train these models. The trend of repurposing user-generated content to train generative AI models is not unique to LinkedIn: companies such as Tumblr, Photobucket, Reddit, and Stack Overflow also license user data for AI model training, often making it difficult for users to opt out.
This broad usage of user data for AI underscores the growing demand for comprehensive regulatory scrutiny and user-centric consent mechanisms.