Editor’s Note: In a recent development, LinkedIn has attracted scrutiny for automatically enrolling users into its generative AI training initiatives without explicit consent. This decision echoes similar actions by other tech giants like Meta, signaling a growing trend in utilizing personal data to enhance AI capabilities. As privacy concerns intensify, the tech industry faces heightened public and legal pressure to ensure that user rights are respected. This article delves into the implications of LinkedIn’s actions and examines the broader debate surrounding data privacy, AI model training, and the effectiveness of opt-out consent mechanisms.


Content Assessment: Controversy Erupts Over LinkedIn's AI Data Usage Policies

Information - 94%
Insight - 92%
Relevance - 90%
Objectivity - 92%
Authority - 90%

Overall Rating: 92% (Excellent)

A short, percentage-based assessment of the positive reception of the recent article from ComplexDiscovery OÜ titled "Controversy Erupts Over LinkedIn's AI Data Usage Policies."


Industry News – Artificial Intelligence Beat

Controversy Erupts Over LinkedIn’s AI Data Usage Policies

ComplexDiscovery Staff

LinkedIn has recently come under scrutiny for its decision to opt users into training generative AI models using their data, raising concerns among privacy advocates and legal experts. Without explicit consent, users’ personal data is being utilized to enhance LinkedIn’s AI capabilities, sparking debates within the tech community. This move follows similar actions by other tech giants such as Meta, indicating a broader trend in the industry.

According to reports by 404 Media, LinkedIn introduced a new privacy setting and opt-out form before updating its privacy policy to reflect that data collected from the platform would be used to train AI models. LinkedIn stated on its help page, “We may use your personal data to improve, develop, and provide products and Services, develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.” Users who wish to revoke this permission can do so by navigating to the Data privacy tab in their account settings and toggling off “Data for Generative AI Improvement.”

LinkedIn’s action aligns with that of Meta, which has also controversially admitted to using non-private user data for AI training. The timing has led to heightened awareness and criticism from the public and privacy groups. Mariano delli Santi, a legal and policy officer at the U.K.-based Open Rights Group, emphasized, “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.” He urged the U.K. privacy watchdog to “take urgent action against LinkedIn and other companies that think they are above the law.”

LinkedIn has confirmed that the AI models in question may be trained by LinkedIn itself or by other providers, such as Microsoft’s Azure OpenAI service. The company states that it leverages “privacy-enhancing technologies to redact or remove personal data” from its training datasets and assures users that it does not use data from individuals in the EU, EEA, or Switzerland to train content-generating AI models. Even so, the adequacy and transparency of these privacy measures remain open questions.

Further complicating the matter, users must complete an additional LinkedIn Data Processing Objection Form to opt out of other machine learning applications beyond generative AI models. This layered approach to data privacy settings has drawn criticism for its lack of simplicity and accessibility.

This controversy unfolds at a time when LinkedIn’s influence as a professional networking site is significant. The platform boasts over one billion users worldwide, with many relying on it for job searches, professional networking, and career development. A comprehensive LinkedIn profile can increase the likelihood of getting a job interview by 71%, according to studies. This reliance underscores the importance of understanding how individuals’ data is utilized and ensuring that their privacy is respected.

LinkedIn’s situation is a part of a broader narrative involving tech companies leveraging user data for AI advancements. The implications of such practices extend to other platforms as well. Snapchat, for instance, has been noted for its “My Selfie” tool, which allows the creation of AI-generated selfies. These images can, with consent, be used in personalized advertisements, although users have reported seeing their likeness in ads without clear prior agreement.

While Snapchat assures that no third-party advertisers have access to users’ data, the default opt-in setting has prompted backlash. A user shared on Reddit that an AI-generated image resembling them appeared in an ad without their knowledge, demonstrating the potential for misuse.

The evolving discourse on data privacy and AI model training highlights a critical juncture at which legal frameworks and corporate practices must realign to protect user interests. Companies like LinkedIn, Meta, and Snapchat must navigate these ethical and legal challenges to maintain trust and compliance in an increasingly data-driven world.



Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ

 

Have a Request?

If you have a question about our information or offerings, please let us know, and we will prioritize our response to you.

ComplexDiscovery OÜ is a highly recognized digital publication focused on providing detailed insights into the fields of cybersecurity, information governance, and eDiscovery. Based in Estonia, a hub for digital innovation, ComplexDiscovery OÜ upholds rigorous standards in journalistic integrity, delivering nuanced analyses of global trends, technology advancements, and the eDiscovery sector. The publication expertly connects intricate legal technology issues with the broader narrative of international business and current events, offering its readership invaluable insights for informed decision-making.

For the latest in law, technology, and business, visit ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, DALL·E 2, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of new and revised content in published posts and pages, an effort initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.