Siri privacy settlement exposes Apple's gaps in AI governance

Bryan Chuang, Commentary; Sherri Wang, DIGITIMES Asia

Credit: AFP

Apple's US$95 million settlement over Siri privacy violations has brought AI governance and accountability into sharp focus. The case highlights the growing challenges of balancing AI innovation with privacy protection, as different regions adopt varying regulatory approaches. While the European Union implements strict controls, Taiwan works to establish a balanced framework that could shape the future of AI governance.

Siri itself debuted in 2011, but the lawsuit's class period began in September 2014, when the "Hey Siri" voice-activation feature launched; since then, Siri-enabled devices have generated at least US$705 billion in revenue for Apple. While the settlement amount falls under US$100 million, a negligible sum for the tech giant, the case's significance lies in holding Apple accountable for breaching its privacy commitments. Had the lawsuit proceeded, Apple could have faced penalties of up to US$1.5 billion for Siri's privacy violations and unauthorized surveillance. Though the financial impact may be minimal for Apple, the reputational damage proves far more significant.

The growing privacy risks of AI

Recent studies have revealed that AI technologies pose substantial risks regarding improper collection and exploitation of consumer data. Privacy advocates are increasingly concerned that voice-activated devices like Apple's Siri, Google Assistant, and Amazon Alexa may be collecting excessive personal data and sharing private conversations with third parties.

In response to these concerns, the EU has taken a proactive stance, declaring that corporate self-regulation is insufficient. The EU passed the world's most stringent AI regulatory framework in May 2024. Meanwhile, Taiwan's National Science and Technology Council (NSTC) introduced a draft "Artificial Intelligence Basic Law" in July 2024. However, following the consultation period, the NSTC shifted from an EU-style tiered system to a more flexible, categorized approach with lighter regulations. The NSTC maintains that Taiwan's framework will balance the US model of innovation encouragement with the EU's risk control emphasis.

Taiwan's legislative challenge ahead

The draft of Taiwan's "Artificial Intelligence Basic Law" will undergo review in the legislative session following the Lunar New Year in 2025. Given the current political divide between ruling and opposition parties, the final legislation may differ substantially from the NSTC's proposal. With industry stakeholders strongly supporting the NSTC's version, policy lobbying is expected to become contentious. Opposition legislators might push for stricter regulations aligned with the EU framework.

Many experts contend that different AI technologies carry varying risk levels and that overregulation could hinder innovation. Yet delaying regulatory frameworks because of technological uncertainty carries its own risk: AI applications could become so deeply embedded in society that regulating them later proves far more difficult.

Generative AI's rapid ascent: setting boundaries

Bloomberg forecasts the generative AI market could reach US$1.3 trillion by 2032. However, this growth will only prove sustainable within legal and ethical frameworks ensuring fairness, privacy protection, consumer rights, and copyright compliance.

Though the US lacks comprehensive AI regulation, its innovation-friendly approach doesn't indicate regulatory absence. The Siri privacy controversy demonstrates that unauthorized data collection and sharing still face public and legal scrutiny.

In November 2020, the White House released regulatory guidance for AI, directing federal agencies to follow ten core principles when drafting AI regulations, including risk assessment, fairness, and non-discrimination. The overarching message suggests that society will accept AI innovations that benefit human welfare and create business opportunities in healthcare, finance, education, transportation, and manufacturing, provided they avoid fraud, bias, discrimination, privacy violations, and other harmful outcomes.