Germany greenlights the EU AI Act, triggering countdown for enterprise compliance
The German Federal Cabinet has approved draft legislation to implement the EU’s AI Act, designating the Federal Network Agency (Bundesnetzagentur) as the country’s central AI supervisory authority.
Under the draft AI Market Surveillance and Innovation Promotion Act (KI-MIG), Germany will establish its national framework for regulating AI system development and deployment. The draft law now heads to the Bundestag (lower house of parliament) and the Bundesrat (upper house) for approval.
“With this law, we are implementing European requirements in a maximally innovation-friendly way and creating lean AI supervision with a clear focus on the needs of the economy,” Federal Digital Minister Karsten Wildberger said in a statement.
Distributed oversight model
Under the draft law, the Federal Network Agency will serve as the central coordinator, market surveillance authority, and notifying authority. The Bonn-based agency already coordinates Germany’s EU Digital Services Act implementation and supervises platforms including Facebook, Instagram, YouTube, TikTok, and X.
The draft law assigns AI oversight to established regulators, including the Federal Cartel Office, the Federal Financial Supervisory Authority (BaFin), and data protection authorities at federal and state levels, the statement added.
“The supervisory map has changed shape. It is no longer sensible to think in terms of a single regulator relationship for AI,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “Germany has chosen to anchor coordination inside the Federal Network Agency. That gives the system a centre of gravity. But it has not centralised enforcement power in one place.”
The distributed approach creates complexity for enterprises, Gogia said. A scoring model used in HR, credit, or embedded in a regulated device does not travel through the same supervisory channel. “That means enterprises need a classification and routing capability internally,” he said.
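Gogia’s routing point can be pictured as a simple internal lookup from use case to supervisory channel. The sketch below is purely illustrative: the use-case labels and the function are hypothetical, and the channel assignments are a loose simplification of the bodies named in the draft law, not legal guidance.

```python
# Illustrative sketch only: a hypothetical internal routing table mapping an
# AI use case to the German supervisory channel it would most plausibly fall
# under. A simplification for discussion, not a statement of the draft law.

SUPERVISORY_CHANNELS = {
    "credit_scoring":       "BaFin (financial supervision)",
    "hr_screening":         "Federal/state data protection authorities",
    "medical_device_ai":    "Sectoral market surveillance (medical devices)",
    "general_purpose_tool": "Federal Network Agency (central coordinator)",
}


def route_ai_system(use_case: str) -> str:
    """Return the likely supervisory channel for an internally classified use case."""
    return SUPERVISORY_CHANNELS.get(use_case, "Unclassified - escalate to compliance review")


if __name__ == "__main__":
    # e.g. a scoring model used in credit decisions routes to BaFin's channel
    print(route_ai_system("credit_scoring"))
```

In practice the lookup key would come from the enterprise’s own risk classification step, which is why Gogia frames classification and routing as a single internal capability.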
Germany’s approach reflects a broader EU pattern. France is moving toward coordinated decentralization, while Spain has invested in sandbox experimentation. Italy enacted a national AI law preserving sector supervision channels. “Structurally, central coordination plus sector execution is not unique to Germany. It is becoming the operating pattern,” Gogia said.
Industry pushes for EU-level reforms
Industry groups welcomed Germany’s implementation approach while calling for fundamental changes to the EU AI Act itself.
“We welcome the proposed structure, which gives the Federal Network Agency a significant coordinating role while retaining the expertise built up by the sectoral market surveillance authorities,” Sarah Bäumchen, managing director of the German Electrical and Digital Industries Association (ZVEI), told Computerworld. “However, since the AI Act is a European regulation, the pragmatic German implementation law is unable to address its severe shortcomings.”
The August 2026 deadline is a major concern for companies, Bäumchen said. “Key elements, such as harmonised European standards which specify how companies can comply with high-risk requirements, are not yet available. A postponement of the implementation deadline by 24 months is therefore necessary to prevent companies from delaying or even cancelling the introduction of AI features.”
ZVEI is calling for industrial AI to be excluded from the Act entirely. “The necessary guardrails for a safe use of AI in industrial contexts are already in place,” Bäumchen said, citing the Machinery Regulation and medical device software regulations.
The AI Act creates legal uncertainties by not aligning with existing product safety laws, the Cyber Resilience Act, and the Data Act, she said. Neither AI regulatory sandboxes nor the AI Service Desk “can provide large-scale legal certainty” to keep compliance costs acceptable, Bäumchen added.
Enterprise compliance priorities
Under the EU AI Act, companies must assess the risk level of each AI system and implement corresponding transparency and security measures. The regulation prohibits AI systems that perform social scoring of behavior and bans emotion recognition in workplaces and educational institutions. Companies developing or using high-risk AI systems must comply with requirements covering transparency, data governance, documentation, robustness, and cybersecurity when those obligations take effect in August 2026.
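As a rough illustration of that assessment step, the Act’s tiering can be expressed as a simple classification. The sketch below uses hypothetical labels and paraphrases the requirement areas listed above; real classification depends on the Act’s annexes and legal review.

```python
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's broad risk tiers, heavily simplified."""
    PROHIBITED = "unacceptable risk: banned practices such as social scoring"
    HIGH_RISK = "high risk: e.g. credit scoring, hiring, regulated devices"
    LIMITED = "limited risk: transparency obligations only"
    MINIMAL = "minimal risk: no specific obligations"


# Hypothetical checklist paraphrasing the high-risk requirement areas named
# in the article; the authoritative list lives in the regulation itself.
HIGH_RISK_OBLIGATIONS = [
    "transparency",
    "data governance",
    "technical documentation",
    "robustness",
    "cybersecurity",
]
```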
For enterprises operating in Germany, the immediate priority is building what Gogia called “a functioning compliance operating system” ahead of the August 2026 deadline.
“Most enterprises still do not have a complete inventory of AI systems across internal builds, vendor-embedded features, and informal deployments across business units,” Gogia said. Vendor governance represents a critical pressure point, as enterprises must ensure suppliers can produce technical documentation and evidence of conformity assessment.
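A minimal sketch of what one entry in such an inventory might capture, with hypothetical field names covering the system sources and vendor evidence Gogia mentions:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system in scope of the Act."""
    name: str
    origin: str                    # "internal build", "vendor-embedded", or "informal deployment"
    risk_tier: str                 # output of the internal classification step
    vendor: Optional[str] = None
    conformity_evidence: list[str] = field(default_factory=list)  # documentation the supplier must provide


# Hypothetical example entry
record = AISystemRecord(
    name="resume-screening-model",
    origin="vendor-embedded",
    risk_tier="high risk",
    vendor="ExampleVendor GmbH",   # hypothetical name
)
```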
Financial services will see scrutiny of credit scoring and underwriting automation, while employment systems are likely to generate complaint-driven enforcement due to their direct impact on individuals, Gogia said. Germany’s implementation includes a central complaint intake pathway, meaning “enforcement does not rely solely on regulator initiative. It can be triggered externally.”
Germany missed the EU’s August 2, 2025 deadline for establishing national supervisory structures due to early federal elections. The Federal Network Agency established an AI Service Desk in July 2025 and published AI literacy guidance in June 2025. More than 1,000 change proposals were considered during drafting, the ministry statement said.
Original Link: https://www.computerworld.com/article/4131303/germany-greenlights-the-eu-ai-act-triggering-countdown-for-enterprise-compliance.html
Originally Posted: Thu, 12 Feb 2026 11:57:27 +0000