In a move reflecting growing regulatory pressure, LinkedIn, the professional networking platform, has temporarily suspended the processing of U.K. users’ data for training its artificial intelligence (AI) models. The decision follows concerns raised by the U.K. Information Commissioner’s Office (ICO) about the platform’s data handling practices, particularly its use of personal information for AI training without users’ explicit consent.
The ICO, which oversees data protection in the U.K., played a pivotal role in LinkedIn’s decision. Stephen Almond, the ICO’s executive director of regulatory risk, confirmed LinkedIn’s suspension of AI data processing in a public statement, saying, “We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.”
Almond emphasized that the ICO will continue to monitor LinkedIn, as well as other major companies involved in generative AI, such as Microsoft (which owns LinkedIn), to ensure they adhere to appropriate data protection practices. “We intend to keep a close eye on businesses offering generative AI capabilities, ensuring they have the necessary safeguards in place to protect the information rights of U.K. users,” Almond added.
LinkedIn’s AI Practices Come Under Scrutiny
LinkedIn’s suspension follows revelations that the company had been using member data to train its AI models without obtaining explicit consent, a practice introduced as part of an updated privacy policy that took effect on September 18, 2024. The practice was first reported by 404 Media, which highlighted how LinkedIn’s AI training extended across its user base, including the U.K., the European Economic Area (EEA), and Switzerland, regions with strict data protection regimes. LinkedIn has now paused the practice in these regions.
According to a statement from LinkedIn, “At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice.” This means that users in the U.K. and across Europe will not have their data used for AI training, at least for the time being.
In a separately published FAQ, LinkedIn also highlighted efforts to minimize the amount of personal data in its training datasets, stating that it uses privacy-enhancing technologies to redact or remove personal information before the data is used to train AI models. Users outside Europe can still opt out of having their data used for AI training by turning off the “Data for Generative AI Improvement” setting in their account preferences. Notably, opting out applies only to future data use; it does not undo any training that has already taken place.
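LinkedIn does not say which privacy-enhancing technologies it applies, but the simplest form of this kind of preprocessing is pattern-based redaction. The Python sketch below is purely illustrative and is not LinkedIn’s actual pipeline; real systems typically layer NER-based PII detection, pseudonymization, and other safeguards on top of anything this naive:

```python
import re

# Illustrative patterns for common personal identifiers.
# A production pipeline would use far more robust detection
# (e.g., named-entity recognition) than simple regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "URL": re.compile(r"https?://\S+"),
}

def redact(text: str) -> str:
    """Replace each detected identifier with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Reach me at jane.doe@example.com or +44 20 7946 0958."
print(redact(sample))
# -> Reach me at [EMAIL] or [PHONE].
```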
LinkedIn’s decision to opt all users into this AI training program by default, without explicit notification, raises significant privacy concerns, especially in light of the broader debate over how tech companies handle user data.
A Broader Issue in the Tech Industry
LinkedIn is not the only tech company facing criticism over its data handling practices. Just days before LinkedIn’s decision to pause AI training in the U.K., Meta (formerly Facebook) also admitted to scraping non-private user data for AI training purposes, a practice dating back to 2007. Despite public backlash, Meta resumed AI training on U.K. user data shortly after its admission.
Similarly, Zoom came under fire in August 2023 over plans to use customer content to train its AI models, a decision the company quickly reversed after users raised privacy and data security concerns. This wave of revelations underscores the increasing scrutiny AI development is facing, particularly around the use of personal data without clear, informed consent.
Growing Scrutiny of AI Data Use
The suspension of LinkedIn’s AI training in the U.K. highlights a broader issue in the tech industry—how user data is used to fuel AI advancements. The debate extends beyond LinkedIn and has caught the attention of regulators worldwide, including the U.S. Federal Trade Commission (FTC). The FTC recently published a report accusing large tech companies, particularly social media and video streaming platforms, of engaging in widespread surveillance of users with minimal oversight.
The FTC’s report described how these companies collect vast amounts of personal data, often combining it with information gathered from artificial intelligence systems, tracking pixels, and third-party data brokers. This data is then used to create detailed consumer profiles, which are often sold to advertisers or other interested parties. “The companies collected and could indefinitely retain troves of data, including information from data brokers, and about both users and non-users of their platforms,” the report stated, criticizing the lax data minimization and retention policies of many companies.
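The report does not detail how this collection is implemented, but the mechanics of a tracking pixel are simple enough to sketch. The toy Python server below (the port, path, and query parameters are all hypothetical) serves an invisible 1x1 GIF; merely loading the image is what hands over the visitor’s IP address, user agent, and any identifiers embedded in the image URL:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# A transparent 1x1 GIF: the classic "tracking pixel" payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The data collection is a side effect of serving the image:
        # the request itself reveals the visitor's IP, user agent,
        # and any campaign or user IDs embedded in the query string.
        params = parse_qs(urlparse(self.path).query)
        print("hit:", self.client_address[0],
              self.headers.get("User-Agent"), params)
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    # e.g. embedded as <img src="http://host:8080/px.gif?uid=123">
    HTTPServer(("", 8080), PixelHandler).serve_forever()
```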
Additionally, the report raised concerns over companies’ failure to fully delete user data even after users requested it, highlighting ongoing privacy risks. It accused tech giants of engaging in broad data-sharing practices, raising questions about the adequacy of their data protection measures and the lack of proper regulatory oversight.
What’s Next for LinkedIn?
As LinkedIn engages with the ICO to address the concerns surrounding its AI data practices, the company may be forced to implement more stringent privacy measures to protect users’ rights. While the suspension of AI training in the U.K. is a significant step, it remains to be seen whether this pause will lead to broader changes in how LinkedIn and other tech companies approach data use for AI development.
For now, U.K. users can be confident that their data is not currently being used for AI training, but the broader debate about privacy, consent, and AI development is far from over. Companies like LinkedIn, Microsoft, and Meta will likely face increasing pressure from both regulators and the public to ensure their AI practices respect user privacy and comply with data protection laws.