The rules are in place. The infrastructure is in place. But who oversees how the rules are actually applied? With the new AI Office, the EU wants to give direction to generative AI. At the same time, China is showing that AI is not just a technology but also a geopolitical tool. In that context, the capacity to enforce is not a luxury but a necessity.
With the AI Act, Europe was the first to establish a legal framework for the use of AI. In the Netherlands, we see that companies and governments increasingly have to show how their AI works and where its data comes from. We also see that projects increasingly have to run within Europe itself: not only because of regulation, but because customers and governments demand security and control from European territory.
In Part 1, we explained how those rules are risk-based. In Part 2, we looked at infrastructure, the underlying computing power that makes AI possible in the first place. But legislation and data centers do not make policy by themselves. That requires interpretation, enforcement, and choices about how to deal with AI. That task falls to the new European AI Office. It must enforce the AI Act, especially for general-purpose AI models such as those behind ChatGPT and Gemini. It must help set standards, draft codes of practice, and monitor systemic risks. On paper, a solid role. In practice, however, the organization is still being built up, while expectations are sky-high. The AI Office has been led by Lucilla Sioli since summer 2025, but it still has limited capacity and is emphatically seeking collaboration with national regulators.
While Europe is still working out implementation, China is showing another side of AI. Internal documents from the company GoLaxy, surfaced through Vanderbilt University and recently published by The New York Times, show how AI is already being used at full scale in information operations. This is not science fiction: the company is developing a "Smart Propaganda System" that monitors social media at scale, tracks sentiment, and automatically generates content that closely resembles genuine human posts.
The technology is reportedly used to weaken the opposition in Hong Kong, influence elections in Taiwan, and even profile Western politicians. According to the documents, GoLaxy collected data on thousands of U.S. public figures. Although not all of this can be verified, U.S. intelligence agencies confirm the company's close ties to the Chinese government.
Precisely this type of application shows why implementation matters. Rules without direction carry no weight. The EU presents itself as the standard-setter for "trustworthy AI," but that requires more than good intentions. And there lies the rub. The drafting of the first codes of conduct, guidelines for providers of generative AI, is in full swing. The first Code of Practice for general-purpose AI appeared in July 2025; companies can apply it voluntarily to demonstrate compliance. But according to Euractiv, this is precisely where the AI Office risks losing its authority: instead of holding the pen itself, it is considering bringing in consulting firms such as the Big Four, possibly even in cooperation with the very companies it will soon supervise. That creates the risk of self-regulation through the back door.
Critics call it a "false start." Without clear direction and transparency, the AI Office risks becoming a mere spectator to the game it is supposed to lead. At the same time, there are proponents of a pragmatic approach. They point out that speed is essential: the codes must be ready as early as 2025. Moreover, bringing in outside experts, including commercial ones, can help ensure the rules are feasible and aligned with practice.
For companies and institutions, this means paying close attention. The coming months will determine how the AI rules will work in practice. Those who get involved now, through consultations, sector forums, or direct contact with the Commission, can help shape the frameworks they will soon have to comply with. And note: the AI Office will have powers to request information, carry out model assessments, and even take models off the market in case of serious risks. It will not be a paper tiger, provided it lives up to its role.
China is showing what AI can do as a strategic instrument. Europe wants to counter that with a different model. But that takes more than ideals: it takes an AI Office that is authoritative, independent, and capable. The coming months will show whether Brussels can deliver on that promise.
Want to know what this means for your organization, or how you can provide input into this process? Contact Roel (roel@castro.brussels).