In Europe, the immediate scrutiny falls to privacy watchdogs that supervise how tech firms handle Europeans’ personal data, such as posts, images and interactions on social media. These authorities are responsible for checking whether tech firms have the right to use their users’ personal data to feed AI.
A key question that regulators are still working out is the so-called legal basis for using data: valid grounds (of which there are six under the GDPR) that tech and social media companies rely on to process users’ data for AI purposes.
“The tech companies that are scraping the internet to feed their AI systems need a reality check: Consumers should always remain in control over their personal data,” European consumer rights association BEUC said in a comment.
“The GDPR is there to steer innovation in the right direction. If it doesn’t respect people’s fundamental rights, you don’t have good innovation,” said Tobias Judin, a legal expert at Norway’s data protection authority.
For Big Tech, that means delays and difficulties. Meta, X and LinkedIn have all recently delayed their rollout of new artificial intelligence applications in Europe after an intervention by the Irish DPC. Google’s PaLM2 model is facing an inquiry by the same regulator, which also forced Google to pause the release of its Bard chatbot last year.
Those moves suggested a stark shift in strategy at the Irish authority, which under the leadership of former Commissioner Helen Dixon faced widespread criticism for being too slow to hold Big Tech to account for privacy violations. The authority saw a changing of the guard earlier this year, with two co-commissioners, Des Hogan and Dale Sunderland, taking over from Dixon. Hogan and Sunderland have sought to avoid criticism from their peers in Europe, taking a tougher line against Big Tech that better fits how other regulators want Ireland to act.