AI × Legal × Compliance × Governance

The AI Transformation of Legal and Compliance Functions

We are witnessing a fundamental shift: from external standard solutions to company-specific, AI-powered applications. At the intersection of innovation speed and regulatory responsibility, a new paradigm for Corporate Legal and Compliance is emerging.

Dr. Nicolai Kruck, MLE — Compliance by AI
The Starting Point
Why Standing Still Is the Greatest Risk

The pace of AI development exceeds the adaptability of traditional corporate structures. 88% of all organizations already use AI in at least one function (McKinsey, 2025) — yet only 6% have fundamentally redesigned their workflows. The remaining 94% risk being overtaken by the next wave of disruption.

A paradigm shift is particularly evident in Compliance, Governance, Risk and Legal: companies are moving away from rigid, externally sourced standard solutions toward flexible, internally developed AI applications tailored to their specific regulatory requirements.

The Central Tension

Innovation Side: Speed · Automation · Competitive Advantage · Efficiency Gains

Regulatory Side: GDPR / EU AI Act · IT Security · Auditability · Accountability

Massive Automation of Knowledge Work

McKinsey estimates that 22% of lawyer activities and 35% of legal assistant work are automatable with current AI. Goldman Sachs puts automatable legal tasks at 44%.

The Data: Where Does Automation Really Stand?

McKinsey's report "Agents, Robots, and Us" (November 2025) sharpened the debate: 57% of all U.S. work hours are potentially automatable with today's technology — nearly double the 30% estimate from 2023. For the legal sector, the numbers are particularly striking. Legal and Administrative Services rank among the occupations with the highest automation potential — together representing 40% of all U.S. wages in highly automatable roles.

Goldman Sachs' study "The Potentially Large Effects of AI on Economic Growth" (2023) remains a key reference: 44% of all legal tasks are automatable by generative AI — the highest rate among all knowledge professions, ahead of financial services (35%) and management (32%). The OECD confirms this trend: Legal professions rank globally among the top 5 most AI-exposed occupational groups.

What Gets Automated — and What Doesn't

Automation does not sweep across "legal work" as a whole — it targets specific task types: document review and classification, regulatory research, standard contract drafting, compliance monitoring, and reporting. What remains non-automatable: strategic legal counsel, negotiation, judgment calls under uncertainty, novel legal questions, and ethical balancing of conflicts of interest. McKinsey emphasizes: Over 70% of skills demanded by employers today will remain relevant in an AI-dominated workplace — though applied differently.

"Everyone is experimenting, but almost nobody is transforming. Only 6% redesign workflows and win — the other 94% become footnotes."

— LawFuel, Analysis of McKinsey AI Report 2025
The Scaling Problem: Adoption Without Transformation

McKinsey's Global AI Survey 2025 (~2,000 executives): 88% of companies use AI, but only 39% report measurable EBIT impact. For most, the effect is below 5%. Only 6% of companies — McKinsey's "high performers" — fundamentally redesign their workflows. The rest apply AI to decades-old processes.

🔮 Outlook & Recommendations

Companies must act now: Conduct a systematic task analysis — which legal tasks are automatable, which require human judgment? Launch pilot projects with measurable KPIs, not enterprise-wide visions. Prioritize upskilling — McKinsey shows demand for "AI fluency" in job postings has grown sevenfold in two years. Organizations that have not built AI-competent legal teams within 12 months will struggle to recruit that talent on the open market.

From Buy to Build: Companies Becoming Software Developers

Low-code platforms, foundation models, and agentic AI enable legal and compliance teams to build custom applications — without traditional IT projects, faster and more precisely than any off-the-shelf solution.

Why the Barrier to Entry Has Fallen

Foundation Models (OpenAI's GPT-4o, Anthropic's Claude, Meta's Llama, Mistral's Mixtral) provide expert-level language understanding via APIs — for pennies per query. Frameworks like LangChain and LlamaIndex orchestrate complex RAG workflows incorporating proprietary documents. Platforms like n8n, Make, and Microsoft Power Platform enable non-programmers to configure AI agents visually.
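As a minimal illustration of the RAG pattern such frameworks orchestrate, the plain-Python sketch below assembles a grounded prompt from internal policy snippets. It is a simplification: keyword overlap stands in for embedding-based vector search, the policy texts and prompt template are invented, and a real pipeline would send the assembled prompt to an LLM API.

```python
# Minimal sketch of the RAG pattern. Naive keyword overlap stands in
# for embedding-based retrieval; POLICY_DOCS and the prompt template
# are illustrative placeholders, not a real corpus.

POLICY_DOCS = {
    "retention": "Personal data is deleted 24 months after contract end.",
    "ai-use": "Generative AI may only process anonymized client data.",
    "escalation": "Compliance incidents are escalated to the CCO within 24 hours.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank policy snippets by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        POLICY_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt an LLM API call would receive."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the policy excerpts below.\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("How long is personal data retained after the contract ends?"))
```

The value of frameworks like LangChain or LlamaIndex lies in replacing each of these toy steps — chunking, embedding, retrieval, prompt assembly — with production-grade components while keeping this overall flow.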

BCG's "Build for the Future" report (2025): Only 5% of companies are "Future-Built" generating scaled AI value. 60% report minimal results — often because they rely on generic solutions that don't address their specific governance requirements.

Practical Examples of Successful Internal Development

McKinsey "Lilli": Used by 75% of 43,000 employees monthly, converting 50,000+ consulting hours into higher-value analysis. PwC "ChatPwC": Generates compliance reports for 75,000+ trained employees. Klarna: The Swedish fintech replaced capabilities previously handled by Salesforce, Workday, and external firms — CEO Sebastian Siemiatkowski reported AI agents handling work equivalent to 700 customer service employees.

"Organizations with defined AI strategies are 2x more likely to experience revenue growth and 3.5x more likely to realize critical AI benefits."

— Steve Hasker, CEO Thomson Reuters, 2026
Data Sovereignty as Strategic Driver

For European companies, data sovereignty is the central build argument. GDPR, the EU AI Act, and sector-specific regulations require control over data flows. Gartner projects: By 2027, 35% of countries will be locked into region-specific AI platforms. European companies building internal competence now gain strategic independence.

🔮 Outlook & Recommendations

Pursue a hybrid build-buy strategy: Cover standardized tasks with established tools, but build internal competence for company-specific compliance applications. First step: A concrete pilot project in 4–6 weeks — e.g., a RAG-based policy chatbot. Deloitte's State of AI Report (2026): Only 1 in 5 companies has a mature AI governance model. Building capability now creates sustainable competitive advantage.

Agentic AI: From Assistants to Autonomous Agents

Gartner predicts that by 2026, 40% of all enterprise applications will integrate AI agents (today: under 5%). In legal, agents will independently research, review, and prepare compliance decisions.

What Distinguishes Agentic AI from Current Tools

Current AI tools work reactively: humans ask, AI responds. Agentic AI marks a paradigm shift — systems that autonomously plan, decide, act, and learn from results. Gartner's Anushree Verma describes a staged progression: from AI assistants (2025) through task-specific agents (2026) and collaborative multi-agent systems (2027–2028) to the "new normal" (2029), in which 50%+ of knowledge workers create and govern agents.

McKinsey's 2025 Global AI Survey confirms: 62% of companies are already testing AI agents, with adoption fastest in IT, knowledge management, and customer service.

Agentic AI in Legal: Concrete Developments

Thomson Reuters CoCounsel launches agent-based legal workflows in early 2026 with autonomous document review and "Deep Research." LexisNexis Protégé deploys four specialized agents collaborating on complex workflows. Harvey AI, used by Allen & Overy and PwC Legal, develops specialized legal agents for contract drafting, regulatory analysis, and litigation support. Gartner predicts zero-touch contracting for low-risk agreements and 95% accuracy in surgical redlining for 2026.

"AI agents will evolve rapidly, progressing from task-specific agents to agentic ecosystems — transforming enterprise applications into platforms enabling seamless autonomous collaboration."

— Anushree Verma, Sr Director Analyst, Gartner (August 2025)
The Governance Challenge: Who Controls the Agents?

Gartner warns: Over 40% of agentic AI projects will be canceled by end of 2027 due to escalating costs or unclear business value. Additionally, over 2,000 "death by AI" legal claims are expected by end of 2026. The EU AI Act requires human oversight (Art. 14) for high-risk systems — including agent-based ones.

🔮 Outlook & Recommendations

Prepare now: Build an Agent Governance Framework — which decisions can agents make autonomously, where is human-in-the-loop mandatory? Implement audit trails for every agent action. Build agent management competence internally: Gartner expects 50%+ of knowledge workers to create agents "on demand" by 2029. Legal departments that don't build this capability become the bottleneck of the enterprise AI strategy.
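One way to implement the audit-trail requirement is a hash-chained log, where each entry embeds the hash of its predecessor so that any retroactive edit breaks the chain. The sketch below illustrates the idea; the class and field names (`agent`, `action`, `requires_human`) are invented for the example, not a prescribed standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a tamper-evident audit trail for agent actions: each entry
# stores the hash of the previous one, so edits are detectable.

class AgentAuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, agent: str, action: str, requires_human: bool) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "requires_human": requires_human,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; False if any entry was altered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AgentAuditLog()
log.record("contract-agent", "flagged clause 7.2 for review", requires_human=True)
log.record("research-agent", "summarized GDPR Art. 22 case law", requires_human=False)
print(log.verify())  # True for an untampered chain
```

In production this logic would sit behind every agent action and write to append-only storage; the `requires_human` flag is where the human-in-the-loop policy plugs in.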

EU AI Act: Compliance Becomes Mandatory

From August 2026, the core obligations of the EU AI Act take effect. High-risk AI systems — including in the legal sector — require risk management, technical documentation, and human oversight. Penalties up to €35M or 7% of global revenue.

The Regulatory Timeline

The EU AI Act (Regulation 2024/1689) takes effect in three stages: Since February 2025, prohibitions on unacceptable AI practices apply. From August 2025, transparency obligations for general-purpose AI (GPAI) take effect. From August 2026, core obligations for high-risk systems become enforceable — the most relevant stage for legal departments.

AI systems in administration of justice and democratic processes (Annex III, Point 8) are classified as high-risk. Obligations include: risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), logging (Art. 12), transparency (Art. 13), human oversight (Art. 14), and accuracy/robustness/cybersecurity requirements (Art. 15).

International Regulatory Pressure

The Colorado AI Act takes effect June 2026. The Illinois AI in Employment Act has been in effect since January 2026. ABA Formal Opinion 512 (July 2024) requires lawyers to have "reasonable understanding" of AI tools. Gartner projects: By 2026, 80% of organizations will formalize AI policies addressing ethical, brand, and PII risks.

"2026 marks the emergence of a new divide among organisations: those that adopt an AI strategy and those that do not."

— Steve Hasker, CEO Thomson Reuters
Conformity Assessment: Practical Steps

Compliance with the EU AI Act requires: AI inventory, risk analysis per system, technical documentation, human oversight mechanisms, and audit trails. ISO/IEC 42001:2023 (AI Management Systems) provides a compatible international framework. Organizations implementing this standard simultaneously build the foundation for EU AI Act compliance.
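The first two steps named above — an AI inventory and per-system risk assignment — can be sketched as a simple data model. The risk tiers and the oversight-gap check below are a rough simplification of the Act's logic for illustration only, not legal advice; system names and fields are invented.

```python
from dataclasses import dataclass

# Illustrative AI inventory with per-system risk assignment and a check
# for high-risk systems lacking an Art. 14 oversight mechanism.

RISK_ORDER = ["minimal", "limited", "high", "unacceptable"]  # simplified tiers

@dataclass
class AISystem:
    name: str
    purpose: str
    risk: str          # one of RISK_ORDER
    human_oversight: bool

def compliance_gaps(inventory: list[AISystem]) -> list[str]:
    """Flag high-risk systems missing a human-oversight mechanism."""
    return [
        s.name for s in inventory
        if s.risk == "high" and not s.human_oversight
    ]

inventory = [
    AISystem("contract-review-bot", "clause extraction", "limited", True),
    AISystem("litigation-predictor", "outcome prediction", "high", False),
]
print(compliance_gaps(inventory))  # ['litigation-predictor']
```

Even a spreadsheet-level inventory like this makes the subsequent steps — technical documentation, audit trails, oversight design — addressable system by system.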

🔮 Outlook & Recommendations

Less than 6 months until August 2026. Organizations serving the EU market must act now: create AI inventories, assign risk categories, build governance structures. Penalties — up to €35M or 7% of global revenue — make inaction an existential risk. Yet compliance is a strategic enabler: Organizations with mature AI governance can scale faster. Thomson Reuters shows: Organizations with defined AI strategies are 2x more likely to achieve revenue growth and 3.5x more likely to realize critical AI benefits.

The In-House Power Shift

52% of in-house counsel actively use GenAI (ACC/Everlaw, 2025) — doubling from the previous year. 64% use AI specifically to reduce dependence on outside counsel.

The Numbers Behind the Power Shift

The ACC/Everlaw GenAI Survey 2025 documents one of the fastest adoption waves in legal market history: GenAI usage in legal departments more than doubled in one year — from 23% to 52%. Notably, 64% of in-house teams expect to depend less on outside counsel through internally built AI capabilities. Routine work traditionally outsourced to law firms is increasingly handled internally.

What Drives the In-House Power Shift

Three factors converge: Availability of powerful tools (CoCounsel, Harvey AI, Luminance), cost pressure (internal AI review costs a fraction of external hourly rates), and data control (internal solutions avoid sensitive data transfers to third parties).

Forrester tempers the hype: Their 2026 predictions declare the "AI hype period over," projecting enterprises will defer 25% of planned AI spending into 2027 due to ROI concerns. Only 15% of AI decision-makers reported EBITDA improvements in the past 12 months.

"The gap between inflated vendor promises and value delivered is widening, forcing market correction."

— Sharyn Leaver, Chief Research Officer, Forrester (2026 Predictions)
New Competencies: The GC as Technology Strategist

The power shift fundamentally changes the General Counsel's profile. McKinsey confirms: Companies with active C-suite participation in AI initiatives achieve measurable value 2.6x more often. Gartner's "Predicts 2026: AI and Agentic AI Will Enable Legal Self-Service" forecasts that agentic AI will transform legal departments through higher lawyer productivity, internal self-service, and automated routine contracts.

🔮 Outlook & Recommendations

The in-house power shift is no longer a prediction — it's happening. Legal departments must position themselves as strategic technology functions: Double legal-tech budgets, build interdisciplinary teams (lawyers, data scientists, process experts), and offer AI governance as internal advisory. Law firms without demonstrable AI capabilities and transparency will lose market share — the question is not whether, but how fast.

Strategic Action Areas
Three Levers of AI Transformation
The key factors that will determine the success and relevance of legal and compliance functions in the years ahead.
🏗️

Build Instead of Buy

Rigid standard solutions with long implementation cycles are being replaced by in-house AI applications. Foundation models like GPT-4, Claude or Llama make it possible to develop domain-specific legal and compliance tools in weeks rather than years — adapted to your own governance, data and risk landscape.

Build vs. Buy • Sovereignty • Time-to-Value
🤖

Agentic AI & Low-Code

The next generation of AI no longer just reacts to prompts but plans, decides and acts autonomously. Combined with low-code/no-code platforms, AI agents emerge that independently conduct compliance reviews, contract analyses and due diligence processes — under human oversight, but with speed and consistency far superior to manual processes.

AI Agents • No-Code • Automation
🛡️

Governance by Design

Innovation without a control framework is negligent. The EU AI Act, GDPR and industry-specific regulation require that AI governance be built into the architecture from the start — not retrofitted. Only 1 in 5 companies has a mature governance model for autonomous AI systems (Deloitte, 2026). Those who invest early gain room to maneuver.

EU AI Act • GDPR • Risk Management

88%
of organizations use AI in at least one function (McKinsey 2025)
5%
sind "Future-Built" — generieren skalierten KI-Wert. 60% berichten minimale Ergebnisse (BCG 2025)
40%
of enterprise apps will integrate AI agents by 2026 (Gartner)
€35M
maximum penalty under the EU AI Act — or 7% of global annual revenue

Sources: McKinsey Global AI Survey 2025 · BCG Build for the Future 2025 · Gartner Top Tech Trends 2025 · EU AI Act (Regulation 2024/1689) · ACC/Everlaw GenAI Report 2025 · Deloitte State of AI 2026


In-Depth Analyses
Six Perspectives on AI Transformation
Each article addresses a core question of AI transformation in legal and compliance — with concrete data, practical examples and further sources.
01 — TRANSFORMATION

Why AI Is a Structural Shift — Not a Short-Term Trend

Overview: How AI is fundamentally changing the work of legal and compliance departments

The introduction of artificial intelligence in legal and compliance departments is often understood as a technological upgrade — faster research, more efficient document review, automated standard processes. This perspective falls short. What is currently taking place is a structural transformation of the entire legal value chain, comparable to the digitization of the financial industry two decades ago.

The Dimensions of Change

AI is not just changing individual tasks but three fundamental dimensions simultaneously. First, task structure: activities that have traditionally constituted the daily work of lawyers and compliance officers — contract analysis, regulatory research, due diligence reviews, reporting — are increasingly automatable. According to a Goldman Sachs study (2023), 44% of legal tasks can be automated by generative AI — the highest rate of all knowledge professions.

Second, competency profiles: legal education has not fundamentally changed in decades. Yet the demands on in-house counsel are shifting: alongside legal expertise, technological literacy, data affinity and the ability for human-machine collaboration are increasingly expected. The ACC Foundation found in 2025 that 52% of in-house counsel already actively use GenAI — up from 23% a year earlier.

Third, the strategic role: legal and compliance departments are evolving from pure control functions to strategic enablers. Those who master AI governance become internal advisors for the entire company — not just on legal questions, but on responsible AI implementation across all business areas.

Three Levels of AI Integration

It is important to distinguish between three qualitatively different levels of AI integration that are frequently conflated in public discussion:

  • Automation fully replaces repetitive, rule-based tasks (e.g., automatic contract categorization, deadline monitoring, standardized compliance checks).
  • Assistance supports human decisions through suggestions, summaries and analyses — the final assessment remains with the human (e.g., AI-assisted contract drafting, research summaries).
  • Decision support goes further: AI systems assess risks, predict outcomes and recommend courses of action — humans decide based on AI-generated insights (e.g., predictive litigation analysis, M&A risk assessment).

Most companies are currently in the transition phase between levels 1 and 2. The strategic challenge is to prepare the organization for level 3 — while simultaneously ensuring that human control and responsibility are maintained.
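The three levels can be made operational as an explicit routing gate. The sketch below uses invented task names and a `human_signoff` flag to make the point concrete: only level 1 completes without a human in the loop, while levels 2 and 3 always return control to a person.

```python
from enum import Enum

# Sketch of the three integration levels as an explicit gate: automation
# completes alone; assistance and decision support require human sign-off.

class Level(Enum):
    AUTOMATION = 1        # AI completes the task
    ASSISTANCE = 2        # AI drafts, human finalizes
    DECISION_SUPPORT = 3  # AI recommends, human decides

def route(task: str, level: Level) -> dict:
    return {
        "task": task,
        "level": level.name,
        "human_signoff": level is not Level.AUTOMATION,
    }

print(route("contract categorization", Level.AUTOMATION))
print(route("M&A risk assessment", Level.DECISION_SUPPORT))
```

Encoding the level per task, rather than per tool, is what keeps responsibility traceable as an organization moves toward level 3.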

Core thesis: Legal and compliance departments that view AI purely as an efficiency measure miss the strategic dimension. The decisive question is not "How do we save 30% review time?" but "How do we position ourselves as a strategic partner for the AI transformation of the entire company?" Those who master AI governance — for themselves and as internal advisors — become an indispensable function.

02 — USE CASES

Practical Experience and Real-World AI Use in Legal & Compliance

Real-world use cases, common mistakes and lessons learned from implementation

The discussion about AI in the legal sector often oscillates between two extremes: the utopian vision of a fully automated legal department and the skeptical defense that AI can never replace "real" legal work. Both positions miss reality. What matters are concrete practical experiences — from real implementations, with measurable results and documented learning curves.

Contract Analysis and Review

The most mature use case. AI systems analyze contracts in seconds, extract relevant clauses, identify risks and match against internal policies. Leading CLM platforms like Ironclad, Icertis and Sirion already integrate agent-based AI. Results show review time reductions of up to 90% with simultaneously higher consistency. The key lies in training data quality: companies that incorporate their own contract standards as references achieve significantly better results than those relying on generic models.
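Stripped to its essence, the policy-matching step resembles the toy sketch below: clauses are checked against an internal playbook of red-flag patterns. Production CLM platforms use trained models rather than hand-written patterns; the playbook entries and sample contract here are invented for illustration.

```python
import re

# Toy illustration of policy matching: flag contract text that hits
# red-flag patterns from an internal playbook. Patterns and contract
# text are invented examples.

PLAYBOOK = {
    "unlimited liability": r"liab\w*\s+(?:is\s+)?unlimited|without\s+limitation\s+of\s+liability",
    "auto-renewal": r"automatic(?:ally)?\s+renew",
}

CONTRACT = """
Section 4. The agreement shall automatically renew for successive
one-year terms. Section 9. Liability is unlimited for gross negligence.
"""

def flag_clauses(text: str) -> list[str]:
    """Return the playbook risks whose pattern appears in the contract."""
    return [
        risk for risk, pattern in PLAYBOOK.items()
        if re.search(pattern, text, flags=re.IGNORECASE)
    ]

print(flag_clauses(CONTRACT))  # ['unlimited liability', 'auto-renewal']
```

This also shows why training data quality matters: the playbook, whether regex or model, encodes the company's own contract standards, and a generic one will miss house-specific red flags.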

Compliance Monitoring and Regulatory Change Management

Rule-based compliance checks are being supplemented by AI systems that can interpret natural language regulations and map them to internal processes. EY deploys AI systems that support over 80,000 tax experts in more than 3 million compliance processes annually. Particularly effective: automated regulatory change management that identifies new regulations, assesses their relevance to the company and flags affected internal policies.

Next-Generation Legal Research

Thomson Reuters CoCounsel and LexisNexis Protégé+ already deploy agent-based workflows: autonomous research across multiple sources, source comparison, contradiction detection and structured summaries with citations. Westlaw Precision uses AI-driven relevance algorithms that demonstrably reduce research time by 40–60%. Harvey AI offers specialized models for legal reasoning, used by leading international firms like Allen & Overy and PwC Legal.

Data Protection Management and GDPR Compliance

The privacy sector is particularly suited for AI automation: automated data protection impact assessments (DPIAs), privacy chatbots for employee and data subject requests, AI-assisted records of processing activities and automated cookie consent management. The structured, rule-based nature of data protection makes it ideal for AI-supported processes.

Common Mistakes and Unrealistic Expectations

  • Mistake 1 — "Plug and Play": AI tools are purchased without preparing the organization. Without clear processes, responsibilities and quality assurance, usage remains superficial.
  • Mistake 2 — Ignoring hallucination: LLMs produce plausible-sounding but incorrect statements. Without systematic verification and human-in-the-loop processes, this is a significant liability risk.
  • Mistake 3 — Scaling too fast: Prototypes often work, but the transition to production fails due to data quality, IT integration and change management. McKinsey reports that 88% of companies use AI, but only 39% achieve measurable value.
  • Mistake 4 — Not involving lawyers: AI projects in the legal sector fail when driven exclusively by IT — without domain-specific knowledge of what constitutes "good" legal work.

Lesson Learned: The most successful AI implementations start not with technology but with the question: "What specific problem are we solving, and how do we measure success?" Companies that start with a clearly defined pilot project, define measurable KPIs and expand iteratively achieve better results than those rolling out enterprise-wide AI strategies without practical validation.

03 — MARKET OVERVIEW

AI Tools for Lawyers: What the Market Offers — and Where the Limits Are

Overview of available tools, evaluation criteria and an honest assessment

The market for legal tech AI has exploded in the last two years. Hundreds of tools promise to make legal work faster, cheaper and better. But quality differences are enormous — and marketing promises regularly exceed actual capabilities. A sober overview is therefore essential.

Contract Drafting and Review (CLM)

The most mature market within legal AI. Leading platforms include Ironclad (end-to-end CLM with AI assist for drafting and review), Icertis (enterprise CLM with particular strength in procurement contracts), Sirion (CLM focused on post-signature management and obligation tracking) and Juro (collaborative contract platform for mid-market). Newer providers like Spellbook (AI drafting assistant, integrated into MS Word) and Robin AI (AI-powered contract review focused on speed) specifically leverage generative AI. Evaluation criterion: How well can the AI be trained on your own contract standards and playbooks?

Fact-Finding and Legal Research

Thomson Reuters CoCounsel (GPT-4-based, integrated into Westlaw, Practical Law and Drafting) is currently the industry standard for AI-powered research. LexisNexis Protégé+ offers similar capabilities with multi-jurisdiction research. vLex Vincent AI excels in European and international law. Harvey AI positions itself as a specialized legal LLM, used by major firms like Allen & Overy (A&O Shearman). Evaluation criterion: source transparency — does the system show where information comes from and enable verification?

Due Diligence and Document Analysis

Kira Systems (now part of Litera) is the established player for M&A due diligence with trainable AI. Luminance relies on proprietary LLMs that don't use external data — an argument for data-sensitive clients. Evisort (now part of Workday) combines contract analysis with ERP integration. For eDiscovery, Relativity (with aiR, the new AI review assistant) and Everlaw are market leaders. Evaluation criterion: How quickly does the system learn from corrective feedback?

Compliance Tools and GRC

In the GRC space (Governance, Risk, Compliance), established platforms are increasingly integrating AI capabilities: ServiceNow GRC with AI-powered risk assessment, OneTrust for privacy and AI governance, Diligent for board governance and ESG, Riskonnect for enterprise risk management. Specialized AI compliance tools like Ascent RegTech (automated regulatory analysis) and FiscalNote (regulatory intelligence) complete the picture.

Evaluation Criteria for Tool Selection

  • Effectiveness and reliability: How high is accuracy? Are there benchmarks? How transparent is the provider about error rates and hallucination rates?
  • Data protection and security: Where is data processed? Are inputs used for model training? Are on-premise or private cloud options available? GDPR compliance?
  • Control mechanisms: Are there source citations? Human-in-the-loop options? Audit trails? How can AI decisions be traced and documented?
  • Integration: Can the tool be integrated into existing IT landscape (DMS, ERP, CRM)? Are there APIs? What is the implementation effort?
  • Risks: Bias in training data, hallucinations, vendor lock-in, liability issues with faulty AI outputs, dependence on US cloud providers.
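The five criteria above can be turned into a weighted scoring matrix for vendor comparison. The weights, candidate tools, and 1–5 scores in the sketch below are invented for illustration; a real evaluation would calibrate them against the organization's own priorities and tested workflows.

```python
# Sketch of a weighted scoring matrix over the five evaluation criteria.
# Weights, tools, and scores (1-5 scale) are invented placeholders.

WEIGHTS = {
    "reliability": 0.30,
    "data_protection": 0.25,
    "control": 0.20,
    "integration": 0.15,
    "risk_profile": 0.10,
}

def weighted_score(scores: dict[str, int]) -> float:
    """Weighted average of per-criterion scores on a 1-5 scale."""
    return round(sum(WEIGHTS[c] * s for c, s in scores.items()), 2)

candidates = {
    "tool_a": {"reliability": 4, "data_protection": 5, "control": 4,
               "integration": 3, "risk_profile": 4},
    "tool_b": {"reliability": 5, "data_protection": 3, "control": 3,
               "integration": 5, "risk_profile": 3},
}

ranking = sorted(candidates, key=lambda t: weighted_score(candidates[t]), reverse=True)
for tool in ranking:
    print(tool, weighted_score(candidates[tool]))
```

The scoring should be filled in from real workflow trials, not vendor demos; the matrix only makes the trade-offs explicit.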

Honest assessment: The market is fragmented, fast-moving and characterized by exaggerated promises. Most tools deliver good results for simple, well-defined tasks — contract classification, standard research, document comparison. For complex legal assessments, strategic advice and discretionary decisions, human judgment remains indispensable. The biggest mistake: evaluating tools based on demos rather than real workflows.

04 — MAKE OR BUY

Developing Your Own AI Solutions — When It Pays Off and When It Doesn't

The make-or-buy decision for legal and compliance departments: strategies, risks, practical examples

The compliance function is at a turning point. For decades, procuring external software was the standard: GRC platforms, CLM systems, whistleblower tools — all were purchased, configured and integrated into existing IT landscapes. This approach was understandable in a world where software development was expensive, time-consuming and highly specialized. That world no longer exists.

Why Build Is Realistic Today

The availability of powerful foundation models (GPT-4, Claude, Llama, Mistral), combined with low-code platforms, RAG architectures (Retrieval-Augmented Generation) and modular API structures, has dramatically lowered the entry barrier. What used to be a twelve-month IT project with a six-figure budget can now be realized by an interdisciplinary team of legal, compliance and IT in a few weeks as a functional prototype.

Anthropic's Claude, OpenAI's Assistants API and Microsoft's Azure AI Foundry make it possible to build specialized agents that access your own documents, policies and processes — without having to train a single model yourself. Tools like LangChain, LlamaIndex and n8n orchestrate complex workflows with minimal code.

Arguments for Build

  • Tailored governance: External solutions map generic compliance requirements. Internal tools can integrate specific policies, risk matrices and escalation paths from day one.
  • Data sovereignty: In regulated industries and under the GDPR, the question of where data is processed is non-negotiable. Internal solutions on own infrastructure offer structural advantages.
  • Speed: The regulatory landscape changes faster than release cycles of external providers. Internal teams can react in days instead of waiting for the next vendor update.
  • Strategic knowledge building: Those who build internally build competence. Those who buy externally remain dependent. Long-term, internal AI competence is a competitive advantage.

Arguments for Buy

  • Maturity and validation: Established platforms (Ironclad, Relativity, ServiceNow) bring years of development, customer feedback and validated workflows.
  • Maintenance and support: In-house developments require ongoing maintenance. Models become outdated, APIs change, security gaps must be closed.
  • Compliance certification: Established providers are often SOC2, ISO 27001 or C5 certified. For internal tools, this compliance must be ensured independently.

Practical Examples: Who Builds In-House

McKinsey's internal AI platform "Lilli" is used by 75% of its 43,000 employees monthly and has converted over 50,000 consulting hours into higher-value analytical work. PwC's "ChatPwC" creates compliance reports and improves audit transparency. At Volkswagen AG, several AI-based in-house developments were deployed in data protection: a privacy chatbot, an automated documentation system and an AI-assisted contract analysis tool — development time: weeks, not months.

The Hybrid Strategy

The most sensible answer in most cases is neither pure build nor pure buy, but a hybrid model: covering standard processes (eDiscovery, CLM, research) with established tools while simultaneously building internal competence for company-specific applications — particularly those involving sensitive data or unique governance requirements. The EU AI Act requires governance compliance regardless of whether an AI system was purchased or developed internally.

Strategic recommendation: Start with a clearly defined internal pilot project — e.g., a policy chatbot or an automated compliance check for a specific regulatory framework. Use existing foundation models via APIs, combined with your own data via RAG. Measure results rigorously. And simultaneously build an internal competence team that combines legal expertise with technical understanding. Deloitte's State of AI Report (2026) shows: Only 1 in 5 companies has a mature governance model for AI — this is where the strategic opportunity lies.

05 — REGULATION

EU AI Act, GDPR and Compliance Obligations for AI Systems

What legal and compliance departments need to know about AI regulation — and what role they play

The regulation of artificial intelligence is no longer a distant future topic. The EU AI Act — the world's first comprehensive AI regulation — takes full effect from August 2026. National regulations worldwide are tightening in parallel. For legal and compliance departments, a dual challenge emerges: they must apply AI regulation (to their company's AI systems) and simultaneously comply with it themselves (in their own AI applications).

EU AI Act: Risk-Based Approach

The EU AI Act classifies AI systems into four risk categories. Particularly relevant for legal and compliance is that AI systems used in administration of justice and democratic processes are classified as "High Risk" (Annex III, Point 8). This includes AI-assisted legal advice, automated legal interpretation and systems influencing administrative decisions. The obligations for high-risk systems are extensive:

  • Risk management system (Art. 9): Ongoing identification, assessment and mitigation of risks throughout the entire lifecycle.
  • Data governance (Art. 10): Requirements for training, validation and test data — relevance, representativeness, accuracy.
  • Technical documentation (Art. 11): Detailed description of the system, its capabilities, limitations and risks.
  • Human oversight (Art. 14): The system must be designed so that humans can effectively monitor it and intervene when necessary.
  • Accuracy, robustness, cybersecurity (Art. 15): Demonstrable performance standards and protection against manipulation.

Sanctions are significant: Up to €35 million or 7% of global annual revenue for the most serious violations. The phased implementation has been running since February 2025 (prohibition of unacceptable practices) and will be fully enforced by August 2027 (high-risk systems).
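The four-tier logic above can be sketched as a simple lookup. This is a toy illustration, not legal advice: the use cases and their mapping are invented examples, and real classification under the AI Act requires case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk (Annex III)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Simplified, illustrative mapping of example use cases to AI Act tiers.
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "ai-assisted legal advice": RiskTier.HIGH,      # cf. Annex III, Point 8
    "customer-facing chatbot": RiskTier.LIMITED,    # transparency duties
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; default to minimal risk if unlisted (a real
    assessment would never default silently)."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
```

In practice such an inventory, maintained by legal together with IT, is the starting point of the risk management system that Art. 9 requires.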

GDPR and AI: The Existing Foundation

Independent of the AI Act, the GDPR remains the regulatory foundation for any AI deployment processing personal data. Particularly relevant are Art. 22 (right not to be subject to solely automated decisions), Art. 35 (data protection impact assessment obligation for high risk), Art. 25 (data protection by design and by default) and the general principles of purpose limitation and data minimization. The combination of AI Act and GDPR creates a dense regulatory framework requiring legal expertise in implementation.

International Developments

The EU is not alone. The Colorado AI Act takes effect in June 2026, regulating "High-Risk AI Systems" with transparency and reporting obligations. The Illinois AI in Employment Act (January 2026) prohibits discriminatory AI in HR. In Canada, the Artificial Intelligence and Data Act (AIDA) is pending. China has already created its own framework with the Interim Measures for the Management of Generative AI. For globally operating companies, a complex web of overlapping requirements emerges.

The Role of Lawyers: Architects and Users

For legal and compliance professionals, a unique dual role emerges. As architects, they define internal AI governance frameworks, interpret regulatory requirements and advise the company on legally compliant AI implementation. As users, they themselves use AI tools and must ensure these meet regulatory requirements. This dual role requires a new kind of competence: legal expertise combined with technical understanding and governance design capabilities.

Tension: The central challenge lies in balancing innovation and responsibility. Companies that forgo AI out of regulatory caution don't become safer — they become slower and less competitive. The solution: "Governance by Design" — AI compliance is integrated into the architecture from the start, not retrofitted. Those who view AI governance as a strategic asset rather than a regulatory burden can scale faster because compliance questions are already answered.

06 — ORGANIZATION & CULTURE

Why AI Is Not a "Plug and Play" Solution — and What It Really Takes

Organizational prerequisites, cultural change and the art of continuous adaptation

The technological side of AI transformation gets the most attention: which tool? Which model? On-premise or cloud? But the real bottleneck is rarely technology — it's the organization. McKinsey's 2025 AI Survey shows: The biggest barriers to successful AI adoption are not technical but concern missing talent (35%), unclear governance (33%) and insufficient leadership support (28%). The technology is ready. Most organizations are not.

Why Traditional Legal Departments Are Poorly Prepared for AI

Legal and compliance departments are among the most structurally conservative areas in companies — and for good reason. Legal work is based on precision, precedent and reliability. Errors have real consequences: liability, reputational damage, regulatory sanctions. This culture of risk minimization stands in tension with the nature of AI systems, which operate probabilistically, produce errors and require continuous experimentation.

Additionally, legal education does not prepare for technological transformation. Neither university studies nor practical legal training systematically teach technological literacy, data affinity or agile working methods. The result: many lawyers approach AI with a mixture of fascination and uncertainty — and reflexively reach for what they know: external consultants, standard processes, waiting.

What Is Needed Instead: Five Prerequisites

1. Continuous education — not as an event but as a system. A one-off workshop on "AI for Lawyers" is not enough. Systematic competence building is needed, encompassing technological basics, prompt engineering, data literacy and AI ethics. Companies like Allen & Overy have built their own AI training programs; PwC has trained over 75,000 employees in AI competencies — with measurable productivity gains.

2. Interdisciplinary collaboration. AI projects in the legal sector fail when operated in isolation within a single department. Successful implementations require mixed teams of lawyers (domain knowledge), IT/data science (technical implementation), operations (process integration) and compliance (governance). This interdisciplinary approach is uncharted territory for many legal departments — and requires deliberate organizational decisions.

3. Iterative approaches instead of waterfall projects. The classic legal method — gather facts, analyze, produce final opinion — is linear. AI projects require iteration: rapid prototypes, testing, feedback, adjusting, testing again. The concept of the "Minimum Viable Product" (MVP) is foreign to lawyers — but essential for successful AI implementation. Start small, test fast, scale what works.

4. Culture of openness and critical reflection. AI systems make mistakes — that's not the problem. The problem is when organizational culture evaluates mistakes as failure rather than learning opportunities. Successful AI adoption requires a culture where employees can openly discuss AI errors, where experimentation is welcome and where "I don't know if this works — let's test it" is an acceptable statement.

5. Leadership as enabler. Without active support from department leadership or the General Counsel, AI initiatives fail due to institutional inertia. Leaders don't need to become AI experts themselves — but they must set the strategic direction, provide resources, create psychological safety and model change. McKinsey shows: companies with active C-suite participation in AI initiatives achieve measurable value 2.6 times more often.

Core message: The AI transformation of legal and compliance departments is 30% a technology topic and 70% an organization and culture topic. Those who only invest in tools without transitioning the organization to continuous learning, interdisciplinary collaboration and iterative work will at best achieve isolated efficiency gains — but miss the strategic transformation. Technology evolves exponentially. The decisive question is whether the organization can keep pace.

✍️ Real Voice
My Own Thoughts
In this section you will find texts written exclusively by me — Dr. Nicolai Kruck — personally. No AI, no automation, no generated content. Just my own reflections, experiences and perspectives.

Why this section? This website is deliberately an experiment. Design, structure and all other content on this site were created entirely by Artificial Intelligence – from the layout to the texts to the source research. I wanted to test what is possible today when you let AI create an entire web presence.

This "Real Voice" section is the deliberate exception: here I write myself. Authentic, unpolished, human. Because in the age of AI, the real, personal voice becomes more valuable than ever.

🤖 All other content on this website was generated by AI

THE FUTURE IS NOW – AND TOMORROW IT WILL BE DIFFERENT

Dr. Nicolai Kruck · February 2026

Hello dear visitors,
in this section of my website theailawyer.org you will find exclusively content created by me personally – AI-generated content has no place here.

This creates a space on this site where I can share my personal views and commentary. I also use this section to explain the background and purpose of this project.

Why did I create theailawyer.org?

As a lawyer, my involvement with IT issues was for a long time primarily that of a requirements provider – defining what I needed in terms of legal and compliance applications and presenting those requirements to an IT department or an external vendor. I would then wait (often for weeks) for results. I was generally glad when my computer worked and I could simplify my professional and personal life with a bit of IT support. I had never shown any particular interest in the technical side of things. That world seemed too cumbersome and abstract to me, and I assumed it would stay that way forever.

When I took on my current role in data protection in 2018, I inevitably engaged more deeply with the topic of data and IT systems. But a genuine interest in IT subjects still did not develop.

When OpenAI released ChatGPT based on GPT-3.5 in November 2022, news of the AI revolution reached me too, and I began looking more closely at the topic in early 2023.

As was probably the case for most of us, I was deeply impressed and fascinated by the results these language models were producing. The spark was lit. I found it particularly exciting to think about and discuss how this new technology could be meaningfully integrated into my day-to-day work and into large corporate organisations, and I ran several projects on this with my team.

Then came January 2026: I heard about Claude Code through my information channels and did not hesitate to sign up for the somewhat more expensive account. This experience has fundamentally changed my perspective on the subject. Since mid-January, I have been spending a great deal of time running all kinds of projects with Claude.

The website theailawyer.org is one of the outcomes of my efforts to become productive with Claude Code.

If someone had told me a few weeks ago that I could build and run my own website, I would simply have laughed at them. Today I know that ANYONE (reasonably tech-open and curious) with a computer and internet access can build and run a website.

With this website I therefore pursue two goals: on the one hand, it serves as an experimental playground to discover and demonstrate what is possible with AI in February 2026. On the other hand, I want this website to be a platform for examining the use of AI in a legal context. That aspect has two dimensions for me: 1. How can lawyers use AI directly to work more effectively? 2. How can lawyers use AI to independently develop and operate the tools they need?

My Experimental Playground

This website and its content (with the explicit exception of this section) are 100% AI-generated. I did not write a single word of the content myself, and I did not write a single line of code for it. The website currently visible is the result of many prompts and several conversations with Claude to overcome technical hurdles. I will continuously evolve the website as I develop new ideas and find ways to implement them.

How can others do the same? Simply ask Claude (or any other AI of your choice)!

An Information and Exchange Platform

This website is not intended for AI experts. Rather, I want to publish information and perspectives here on the meaningful and effective use of AI in the legal field and in particular within large corporations.

I am of course aware of the irony of having an AI shed light on the question of what impact the AI revolution will have on the work of lawyers. It will be fascinating to see whether and how the AI assesses its own role, and what future scenarios and visions it presents to us. I want to make clear that the content is generated by an AI. My intervention will initially consist of influencing the strategic direction of the content when I feel that is necessary. I would also intervene if objectively incorrect information were to be presented. When I want to add or change substantive topics, I always craft my prompts so that the AI is guided by verified information and sources and applies academic standards. There should always be evidence and references wherever possible.

My First Experiences

After just the first few hours of my tentative experiments with Claude Code, it was clear to me that the AI revolution has now genuinely arrived and it is not an exaggeration to speak of a revolution. I would even go so far as to call it a genuine liberation. With these new tools it is possible to independently design and deploy applications. There seem to be almost no limits to creativity.

As lawyers, we were always (only) the requirements providers for IT systems. We had to explain to IT colleagues or external service providers which tools, workflows, upload fields and buttons we needed in order to, for example, digitalise a data protection management system or a business partner due diligence process. Weeks later we would see results, and the next release would then be many months away again.

Those days are probably over. I am firmly convinced of that. Compliance and legal applications can be brought to at least an MVP stage with good prompting (Vibe Coding) in a manageable amount of time – and it is actually great fun. In just a few hours I created a family app (shared shopping list, shared calendar, shared expense tracking, chat function) and got it running synchronously on our smartphones. In just a few hours I prompted the basic framework for a data protection management platform. Creating and launching this website was also accomplished in just a few working hours. I have not hit any limits so far, and the AI has made no promises about feasibility that ultimately could not be kept.

That is why I am so full of enthusiasm and drive. With this technology I am reaching an entirely new dimension of effectiveness, productivity and creativity. There seem to be no limits, and opportunities are opening up that I would never have dared to dream of. It feels as though an insurmountable barrier has suddenly fallen.

Perhaps the assessments of Matt Schumer ("Something big is happening") are infused with a generous dose of Bay Area hype. But I do see a substantial core to them.

Also very fascinating are the developments around OpenClaw (https://openclaw.ai); here you can set up an agent that independently handles digital tasks (from managing your inbox to maintaining your social media presence). At present, this approach raises significant security concerns, as the agent must take on extensive permissions of the user in order to carry out these tasks. I will report promptly once I have explored the topic further.

What Does This Mean for Legal and Compliance Departments in Companies?

AI-driven support for lawyers in transactional legal work, in capturing and summarising facts, in researching case law and commentary, or in drafting submissions and opinions should by now have reached in-house counsel too. Numerous providers have positioned themselves and are offering these services, some with slightly different features and capabilities. We will see which providers survive the competition that is now emerging.

Even more interesting from my perspective is the question of the future of developers and vendors of legal and compliance software such as EQS, Proxora or OneTrust. If the trends that are clearly taking shape continue to confirm themselves, compliance and legal departments will soon be able to independently design and operationalise this kind of software. This would bring not only cost advantages for companies. Companies would also be able to build a tool precisely tailored to their needs and could respond very flexibly to requirements for adjustment, without being dependent on slow-moving external forces. They would also have complete sovereignty over their data.

Companies that master this approach will have an enormous competitive advantage.

Naturally, creating a first "theoretical" tool or an MVP is only the beginning, and there are several hurdles to clear before end users within a company can operationally use a newly self-developed tool. The focus will likely be on IT security questions, documentation and IT compliance. But questions such as quality assurance, comprehensive and documented testing, and ongoing maintenance also need to be resolved.

The demands placed on IT departments in supporting such processes are likely to change significantly, and the success or failure of such projects will depend largely on a functioning symbiotic collaboration between requirements providers, internal IT/AI experts and other stakeholders.

What I perceive above all as a major challenge is the breathtaking pace of technological development and the possibilities that come with it. Tasks the AI could not perform – or performed only poorly – six months ago work very well today. Approaches that were state of the art a year ago are already ruthlessly outdated. This rapid pace will accelerate further, and the great skill will be in identifying the truly relevant changes and implementing them accordingly.

What does this mean for large corporations? If the future changes (too) quickly, there is ultimately little choice but to set yourself up as flexibly as possible in order to respond swiftly to new technologies. Large companies are not good at setting themselves up flexibly. Large organisations depend on fixed processes and structures so that many small transactions always follow the same path and lead to comparable outcomes. Moreover, established companies that have grown over decades typically run highly diversified IT landscapes, which reduces flexibility further and makes connecting the necessary data very resource-intensive.

I do not have a definitive answer to these challenges either. What seems important to me is that companies gather practical experience as quickly as possible and understand the AI revolution as an ongoing process rather than a one-off disruption to be worked through. Another interesting question will be whether it is sufficient to support existing processes with AI, or whether the processes themselves must be adapted to the capabilities of AI in order to harness its full potential.

I am very curious to see how large corporations will face up to this challenge, and I will report back once I have had further experience to share.

Nicolai Kruck, February 2026


Security & Regulation
Enabling Innovation Safely
Three dimensions that every AI project in legal and compliance must address from day one.
🔒

Data Protection (GDPR)

AI systems processing personal data are subject to the full requirements of GDPR — including Art. 22 (automated individual decisions), Art. 35 (DPIA) and the principles of data minimization.

  • Privacy by Design & by Default
  • Data protection impact assessment for AI
  • Transparency on algorithmic decisions
  • Right to human review
🏛️

EU AI Act & Regulation

The EU AI Act classifies AI systems by risk. Legal AI often falls under "High Risk" with extensive obligations for providers and deployers alike.

  • Risk classification & conformity assessment
  • Technical documentation & audit trails
  • Human oversight & escalation paths
  • Quality management system for AI
🛡️

IT Security & Sovereignty

AI systems expand the attack surface. Especially with autonomous agents, the need for cybersecurity governance grows exponentially — from prompt injection to data exfiltration.

  • On-premise or controlled cloud environment
  • Encryption, access control, logging
  • ISO/IEC 42001 for AI risk management
  • Sovereign AI: Data sovereignty in your own jurisdiction
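To make the prompt-injection risk concrete, here is a deliberately naive detection sketch (the blocklist phrases are invented for illustration). Its weakness is the point: a paraphrased or translated attack sails through, which is why layered governance controls such as least-privilege agent permissions and human escalation paths are needed rather than any single filter.

```python
# Deliberately naive blocklist filter for inputs to an LLM-based agent.
# Illustrative only: real defenses need layered controls (input/output
# filtering, least-privilege tool access, human approval for sensitive
# actions); a blocklist is trivially bypassed by rephrasing.
BLOCKLIST = (
    "ignore previous instructions",
    "reveal your system prompt",
)

def naive_injection_check(user_input: str) -> bool:
    """Return True if the input contains a known injection phrase."""
    text = user_input.lower()
    return any(phrase in text for phrase in BLOCKLIST)
```

A legitimate question passes this check, an obvious attack is caught, but any novel phrasing of the same attack is not: exactly the gap that cybersecurity governance for agents must close.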

Knowledge & Studies
Curated Source Collection
Studies, reports and analyses from leading consultancies, regulatory institutions and think tanks on AI transformation in legal, compliance and governance.
Consultancies & Analysts
Regulation & Legal Framework
Legal Tech & Agentic AI
Consultancies: Strategic AI Reports
BCG · Sep 2025

The Widening AI Value Gap — Build for the Future

1,250 companies worldwide: Only 5% generate scaled AI value. 60% report minimal results despite high investments. Agentic AI already delivers 17% of AI value.

Strategy
BCG · Dec 2025

Targets Over Tools: The Mandate for AI Transformation

Why boards need to move AI from a digital side project to a core performance agenda. From pilots to P&L impact.

Board Governance
Accenture · Jan 2026

The New Rules of Platform Strategy in the Age of Agentic AI

Companies with aligned AI, platform and business strategy achieve 2.2× revenue growth and 37% EBITDA lift. 94% of executives expect profound changes.

Platform Strategy
Accenture · 2025

Six Key Insights to Maximize ROI from Agentic AI

C-Suite report: When to build internal agentic systems, when to buy? The next 3 years will define the competitive landscape of the next decade.

C-Suite Strategy
Accenture · 2025

Technology Vision 2025: AI — A Declaration of Autonomy

4,000 executives from 28 countries: 69% see urgent need for reinvention. AI-driven software development democratizes code and enables tailored enterprise solutions.

Technology Vision
Squire Patton Boggs · 2025

The Agentic AI Revolution: Managing Legal Risks

EU Product Liability Directive includes software and AI as "product". New liability frameworks for autonomous agents. ICO report on data protection implications.

Legal Analysis
NAVEX · 2025

AI Governance, Risk & Compliance — Preparing for the Future

Only 18% of organizations have an enterprise-wide council for responsible AI governance (McKinsey). Why continuous monitoring becomes mandatory.

GRC
Deloitte / Harvard Law · Apr 2025

Strategic Governance of AI: A Roadmap for the Future

AI Governance Roadmap for Boards: End-to-end framework from risk assessment through compliance structures to strategic steering.

Governance Framework
Profile
Dr. Nicolai Kruck — The AI Lawyer

With over 17 years of experience in compliance, data protection, antitrust and corporate law in the automotive industry, I combine deep legal expertise with a clear vision for technological transformation. My path — from international law firms like Clifford Chance and Noerr, through in-house positions at Infineon Technologies and MAN SE, to leading compliance and privacy teams at Porsche and Volkswagen AG — has given me a unique perspective on the future of legal work.

As Head of Group Privacy International at Volkswagen AG, I lead a team of nine data protection specialists and actively drive the use of AI to optimize legal processes: privacy chatbots, automated documentation, AI-assisted contract review. I am convinced that the future of in-house legal work lies at the intersection of legal excellence, technological competence and strategic leadership.

"Compliance and legal departments will reach a new level of efficiency, quality and speed through the targeted use of Artificial Intelligence — if they have the courage to actively shape the change."

— Dr. Nicolai Kruck
  • AI in Legal Operations: Chatbots, automated documentation, contract tools, process automation
  • Data Protection & Privacy: International GDPR compliance, AI governance, technical data protection
  • Compliance Leadership: Anti-corruption, business partner due diligence, antitrust law
  • Digital Transformation: Software selection, implementation & optimization of compliance processes
  • Team Building & Leadership: High-performance teams, trust-based management, innovation culture
  • Automotive Industry: VW, Porsche, MAN, Infineon — deep industry knowledge

Career
Milestones of a Leadership Career
2023 — Present

Head of Group Privacy International, Divisional Support

Volkswagen AG

9 specialists. AI tools for privacy: chatbot, automated documentation, contract review. International data protection, AI governance, M&A privacy.

2020 — 2022

Head of Technical & International Data Protection

Volkswagen AG

5 specialists. Strategic alignment with international requirements. Chair of Group Steering Committee.

2018 — 2020

Senior Member, Agile Task Force — US Diesel Monitorship

Volkswagen AG

Central interface between US monitor and compliance organization.

2016 — 2018

Head of Legal and Compliance

Porsche Middle East & Africa FZE

Establishment of local compliance program. Legal services for 15+ markets.

2012 — 2016

Consultant Compliance (Team Lead)

MAN SE

Third-party due diligence, antitrust law, EU truck cartel proceedings.

2008 — 2012

In-House Counsel & Associate

Infineon Technologies · Noerr · Clifford Chance

Antitrust law, compliance, contract drafting. International law firm foundations.


Outlook
Next Development Stages
01

Whitepaper Series

In-depth analyses on build vs. buy, AI governance frameworks and industry-specific implementation strategies — available for download.

02

Thought Leadership Blog

Regular commentary on regulatory developments, new AI tools and strategic implications for legal & compliance.

03

Speaking & Advisory

Keynotes, panel discussions and strategic consulting for companies looking to make their legal and compliance function AI-ready.

Contact
Shaping the Future Together

Interested in AI transformation in legal & compliance, keynotes or strategic exchange?

Or write directly

By submitting, you consent to the processing of your information for the purpose of being contacted, in accordance with our Privacy Policy. Your data will be used exclusively to handle your inquiry and will not be shared with third parties. No cookies are set.
