TikTok recently announced that its users in the European Union will soon be able to switch off its notoriously engaging content-selection algorithm. The EU's Digital Services Act (DSA) is driving this change as part of the region's broader effort to regulate AI and digital services in line with human rights and values.
TikTok's algorithm learns from users' interactions (how long they watch, what they like, when they share a video) to create a more tailored and immersive experience, one that can shape their mental states, preferences, and behavior without their full awareness or consent. An opt-out feature is a great step toward protecting cognitive liberty, the fundamental right to self-determination over our brains and mental experiences. Instead of being confined to algorithmically curated For You pages and live feeds, users will be able to see trending videos in their region and language, or a "Following and Friends" feed that lists the creators they follow in chronological order. The trending feed prioritizes content that is popular in their region rather than content chosen for its stickiness. The law also prohibits targeted advertising to users between 13 and 17 years of age, and provides additional information and reporting options for flagging illegal or harmful content.
In a world increasingly shaped by artificial intelligence, Big Data, and digital media, the urgent need to protect freedom of thought is gaining attention. The proposed EU AI Act offers some safeguards against mental manipulation. UNESCO's approach to AI centers on human rights, the Biden administration's voluntary commitments from AI companies address deception and fraud, and the Organization for Economic Cooperation and Development has incorporated cognitive liberty into its principles for the responsible stewardship of emerging technologies. But as laws and proposals like these develop, they often focus on subsets of the problem, such as privacy by design or data minimization, rather than mapping out a clear, comprehensive approach to protecting our ability to think freely. Without robust legal frameworks in place around the world, the developers and providers of these technologies can escape accountability. This is why one-off changes are not enough. Policymakers and companies urgently need to reform the business models on which the tech ecosystem is built.
A well-structured plan requires a combination of regulations, incentives, and commercial redesign focused on freedom of thought. Regulatory standards should govern user-engagement models, information sharing, and data privacy. Strong legal safeguards must be in place against interference with mental privacy and manipulation. Companies must be transparent about how the algorithms they deploy work, and have a duty to assess, disclose, and adopt safeguards against undue influence.
As with corporate social responsibility guidelines, companies should also be legally required to assess their technology for its impact on cognitive liberty, providing transparency about algorithms, data use, content moderation practices, and cognitive shaping. Impact assessment efforts are already integral to legislative proposals around the world, including the EU's Digital Services Act, the US's proposed Algorithmic Accountability Act and American Data Privacy and Protection Act, and voluntary mechanisms such as the US National Institute of Standards and Technology's 2023 AI Risk Management Framework. An impact assessment tool for cognitive liberty would specifically evaluate AI's influence on self-determination, mental privacy, and freedom of thought and decision-making, focusing on transparency, data practices, and mental manipulation. The required disclosures would include detailed descriptions of algorithms, data sources and collection, and evidence of the technology's effects on user cognition.
Tax and funding incentives can also encourage innovation in business practices and products that strengthen freedom of thought. Leading AI researchers stress that a safety-first organizational culture is essential to counter the many risks posed by large language models. Governments can encourage this by offering tax breaks and funding opportunities, such as those included in the proposed Platform Accountability and Transparency Act, to companies that actively collaborate with educational institutions to create AI safety programs that foster self-determination and critical-thinking skills. Tax incentives could also fund research and innovation for tools and techniques that surface deception by AI models.
Technology companies should also adopt design principles that embody freedom of thought. Options like adjustable settings on TikTok or greater control over notifications on Apple devices are steps in the right direction. Other features that enable self-determination, including labeling content with "badges" that identify it as human- or machine-generated, or asking users to engage critically with an article before re-sharing it, should become the norm on digital platforms.
TikTok's policy change in Europe is a victory, but it's not the endgame. We urgently need to update our digital rulebook, implementing new laws, regulations, and incentives that safeguard user rights and hold platforms accountable. We can't leave control over our minds to technology companies alone; it's time for global action to prioritize freedom of thought in the digital age.