As a SOC 2 auditor and security, privacy, and compliance consultant for global companies, I often dig into Terms of Service (ToS). These documents, though often overlooked, are crucial for understanding how companies address the privacy and security of user data in the age of artificial intelligence (AI).
With more companies integrating AI into their products, there are growing concerns around the governance of these tools, especially when it comes to user data. One key issue involves companies quietly adjusting their ToS to allow for the collection of user data to train AI models—often without users being fully aware.
Why This Matters:
Recent examples like Zoom and Adobe show how ToS changes can create serious privacy concerns. Both companies initially appeared to claim the right to use customer data for AI training without explicit consent, prompting public backlash and forcing them to issue clarifications.

Even global players like Meta and X (formerly Twitter) faced regulatory scrutiny when they attempted to alter their privacy policies to allow AI training on user data, and were forced to backtrack under pressure from regulators such as the UK Information Commissioner's Office and authorities in Brazil.
Companies must understand that transparency is key. By proactively notifying users of ToS changes and including clear guardrails around how data will be used, organizations can avoid user backlash and regulatory trouble.
Building Governance into AI: Organizations should embed privacy-first measures into their AI systems and develop a clear framework for AI usage, ensuring that user data is handled responsibly. Legal agreements, including ToS, should be continually updated to reflect the evolving use of AI while maintaining a strong commitment to data protection.
Taking the time to design responsible AI systems and to be transparent about ToS changes not only protects users but also safeguards your brand.
Author: Sebastian Burgemejster