Editor’s Note: In a recent development, LinkedIn has attracted scrutiny for automatically enrolling users into its generative AI training initiatives without explicit consent. This decision echoes similar actions by other tech giants like Meta, signaling a growing trend in utilizing personal data to enhance AI capabilities. As privacy concerns intensify, the tech industry faces heightened public and legal pressure to ensure that user rights are respected. This article delves into the implications of LinkedIn’s actions and examines the broader debate surrounding data privacy, AI model training, and the effectiveness of opt-out consent mechanisms.
Content Assessment: Controversy Erupts Over LinkedIn's AI Data Usage Policies
- Information: 94%
- Insight: 92%
- Relevance: 90%
- Objectivity: 92%
- Authority: 90%
Overall Rating: 92% (Excellent)
A short percentage-based assessment of the qualitative benefit and positive reception of the recent article from ComplexDiscovery OÜ titled, "Controversy Erupts Over LinkedIn's AI Data Usage Policies."
Industry News – Artificial Intelligence Beat
Controversy Erupts Over LinkedIn’s AI Data Usage Policies
ComplexDiscovery Staff
LinkedIn has recently come under scrutiny for its decision to opt users into the training of generative AI models on their data, raising concerns among privacy advocates and legal experts. Without explicit consent, users' personal data is being used to enhance LinkedIn's AI capabilities, sparking debate within the tech community. The move follows similar actions by other tech giants such as Meta, indicating a broader trend across the industry.
According to reports by 404 Media, LinkedIn introduced a new privacy setting and opt-out form before updating its privacy policy to reflect that data collected from the platform would be used to train AI models. LinkedIn stated on its help page, “We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.” Users who wish to revoke this permission can do so by navigating to the Data privacy tab in their account settings and toggling off “Data for Generative AI Improvement.”
LinkedIn’s action aligns with that of Meta, which has also controversially admitted to using non-private user data for AI training. The timing has led to heightened awareness and criticism from the public and privacy groups. Mariano delli Santi, a legal and policy officer at the U.K.-based Open Rights Group, emphasized, “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.” He urged the U.K. privacy watchdog to “take urgent action against LinkedIn and other companies that think they are above the law.”
The AI models, as confirmed by LinkedIn, may be trained by LinkedIn itself or by other providers, such as Microsoft's Azure OpenAI service. The company states that it leverages "privacy-enhancing technologies to redact or remove personal data" from its training datasets and assures users that it does not use data from individuals in the EU, EEA, or Switzerland to train content-generating AI models. Even so, questions remain about the adequacy and transparency of such privacy measures.
Further complicating the matter, users must submit a separate LinkedIn Data Processing Objection Form to opt out of other machine learning applications beyond generative AI models. This layered approach to data privacy settings has drawn criticism for its lack of simplicity and accessibility.
This controversy unfolds at a time when LinkedIn's influence as a professional networking site is significant. The platform boasts over one billion users worldwide, many of whom rely on it for job searches, professional networking, and career development. According to studies, a comprehensive LinkedIn profile can increase the likelihood of getting a job interview by 71%. This reliance underscores the importance of understanding how individuals' data is used and ensuring that their privacy is respected.
LinkedIn's situation is part of a broader narrative of tech companies leveraging user data for AI advancements, and the implications of such practices extend to other platforms as well. Snapchat, for instance, has drawn attention for its "My Selfie" tool, which allows the creation of AI-generated selfies. With consent, these images can be used in personalized advertisements, although users have reported seeing their likeness in ads without clear prior agreement.
While Snapchat assures that no third-party advertisers have access to users’ data, the default opt-in setting has prompted backlash. A user shared on Reddit that an AI-generated image resembling them appeared in an ad without their knowledge, demonstrating the potential for misuse.
The evolving discourse on data privacy and AI model training highlights a critical juncture at which legal frameworks and corporate practices must realign to protect user interests. Companies like LinkedIn, Meta, and Snapchat must navigate these ethical and legal challenges to maintain trust and compliance in an increasingly data-driven world.
News Sources
- LinkedIn is training AI models on your data
- LinkedIn Is Using Your Data To Train Microsoft And Its Own AI Models–Here’s How To Turn It Off
- 4 Ways To Optimize Your LinkedIn Profile To Get Noticed By Recruiters
- Snapchat’s AI selfie feature puts your face in personalized ads — here’s how to turn it off
- Snapchat can put users’ AI images of faces in ‘My Selfie’ ads
Assisted by GAI and LLM Technologies
Additional Reading
- OpenAI and Anthropic Collaborate with U.S. AI Safety Institute
- 56% of Security Professionals Concerned About AI-Powered Threats, Pluralsight Reports
Source: ComplexDiscovery OÜ