Capstone Project Proposal
Navigating the AI Regulatory Patchwork: A Multi-Stakeholder Governance Framework for Responsible AI
Focus Area: AI Industry Regulation & Public-Private Partnerships
Intern Information
| Field | Details |
|---|---|
| Name | Isabel Budenz |
| Program | LLM International Commercial Arbitration, University of Stockholm (2025-2026) |
| Background | LLB International and European Law, University of Groningen (2022-2025) |
| Languages | German (Native), Spanish (Native), English (C2), French (B1) |
| Relevant Experience | Legal Researcher, A for Arbitration (2019-2025); Clifford Chance Antitrust Global Virtual Internship |
| Relevant Coursework | Introduction to AI and the EU AI Act; International Commercial Arbitration |
Executive Summary
The global AI regulatory landscape is fragmenting rapidly. The EU AI Act established the world’s first comprehensive framework, while the US pursues a deregulatory federal approach that conflicts with state-level initiatives. Meanwhile, public-private partnerships like the Partnership on AI and standards bodies like NIST and ISO are developing soft law frameworks that increasingly influence compliance expectations.
This project will develop a practical governance framework for AI companies navigating this complex multi-jurisdictional environment, with particular focus on public-private partnership models that can bridge regulatory gaps and build the trust necessary for AI adoption.
This regulatory and governance focus leverages Isabel's International and European Law background and EU AI Act coursework, building her expertise in cross-border AI compliance and multi-stakeholder governance.
Problem Statement
The Regulatory Fragmentation Challenge
EU AI Act Timeline (Now in Effect)
| Date | Milestone |
|---|---|
| August 1, 2024 | Entered into force |
| February 2, 2025 | Prohibited AI practices banned; AI literacy requirements effective |
| August 2, 2025 | GPAI obligations; AI Office operational; national authorities designated |
| August 2, 2026 | Full application including high-risk AI systems |
| August 2, 2027 | Obligations for high-risk AI systems that are safety components of regulated products |
Penalties: Up to EUR 35 million or 7% of global annual turnover, whichever is higher (illustrative calculation below)
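To make the exposure concrete, here is a minimal Python sketch of the "whichever is higher" calculation for the top penalty tier (prohibited practices). The turnover figure is a hypothetical example, not drawn from any company in this proposal.

```python
def max_aia_fine(global_annual_turnover_eur: float) -> float:
    """Illustrative 'higher of' calculation for the top EU AI Act penalty tier
    (prohibited practices): EUR 35 million or 7% of global annual turnover,
    whichever is higher. Lower tiers of the Act use smaller caps."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Hypothetical provider with EUR 2 billion in global annual turnover
print(f"Maximum exposure: EUR {max_aia_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```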
US Federal-State Tension
| Date | Development |
|---|---|
| January 2025 | Executive Order 14179 revoked Biden AI executive order |
| July 2025 | “Preventing Woke AI” order established federal procurement requirements |
| December 2025 | “National AI Policy Framework” order signaled federal preemption of state laws |
The December 2025 order:
- Established AI Litigation Task Force to challenge state AI laws
- Directed Commerce Department evaluation of state laws within 90 days
- Specifically targeted Colorado AI Act
- Tied federal funding to state AI policy compliance
However: 36 state attorneys general sent a bipartisan letter opposing preemption, and the Senate voted 99-1 against penalizing states.
The Trust Gap
- Enterprise AI adoption surged 115% from 2023 to 2024
- Only 62% of business leaders believe AI is deployed responsibly
- Only 39% of companies have adequate AI governance frameworks
- An estimated $4.8 trillion in value could go unrealized by 2033 without trustworthy AI governance
Business Need: [Company Name] requires a comprehensive framework to navigate multi-jurisdictional compliance, engage effectively with regulators and standards bodies, and demonstrate responsible AI practices that build stakeholder trust.
Project Objectives
Primary Objectives
- Map the global AI regulatory landscape across EU, US (federal + key states), UK, and international frameworks
- Analyze public-private partnership models in AI governance and identify effective practices
- Develop a multi-stakeholder governance framework for AI companies operating across jurisdictions
- Create practical compliance tools mapping EU AI Act and state law requirements to operational practices
Secondary Objectives
- Assess federal preemption risks for state AI laws and develop contingency guidance
- Evaluate standards alignment opportunities (NIST AI RMF, ISO 42001, EU AI Act)
- Propose engagement strategy for standards bodies and multi-stakeholder initiatives
Research Foundation
Key Regulatory Frameworks
EU AI Act
- World’s first comprehensive AI legal framework
- Risk-based approach (prohibited, high-risk, limited-risk, and minimal-risk tiers; see the sketch after this list)
- General Purpose AI (GPAI) model obligations
- Technical documentation, transparency reports, and copyright compliance required
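To illustrate how this risk-based structure could feed into the compliance mapping tools, a minimal Python sketch mapping each tier to a simplified obligation set; the obligation labels are illustrative shorthand, not a complete statement of the Act's requirements.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

# Simplified, non-exhaustive obligation sets per tier (illustrative only)
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.PROHIBITED:   ["do not place on the EU market"],
    RiskTier.HIGH_RISK:    ["risk management system", "technical documentation",
                            "human oversight", "conformity assessment"],
    RiskTier.LIMITED_RISK: ["transparency / disclosure to users"],
    RiskTier.MINIMAL_RISK: ["voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given risk tier."""
    return OBLIGATIONS[tier]
```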
US Federal Landscape
- Executive order-driven (subject to change)
- December 2025 order signals preemption intent but cannot by itself override state statutes
- NIST AI Risk Management Framework remains canonical guidance
- Sector-specific regulation (FDA, FTC, financial regulators)
State-Level Innovation
- Colorado AI Act (targeted by federal order)
- California AI transparency requirements
- Illinois Biometric Information Privacy Act
- New York City automated employment decision tools law
International Standards
| Framework | Issuer | Status |
|---|---|---|
| AI Risk Management Framework | NIST | Published; Generative AI Profile (July 2024) |
| ISO/IEC 42001 | ISO | Certifiable AI governance standard |
| AI Framework Convention | Council of Europe | First legally binding AI treaty (2024) |
| AI Ethics Recommendation | UNESCO | Global standard for 194 member states |
Public-Private Partnership Models
Partnership on AI (PAI)
- 129 organizations across 16 countries
- Responsible Practices for Synthetic Media (Adobe, BBC, OpenAI, TikTok)
- Guidance cited by NIST and the OECD as policy input
- AI Policy Forum convened for UN engagement
Standards Development Organizations
- NIST: Crosswalks aligning the AI RMF with the OECD framework and ISO/IEC 42001 (see the sketch after this list)
- IEEE: IEEE 7000-2021 standard for ethical system design
- ISO: ISO/IEC 42001 certification scheme
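A minimal sketch of how such a crosswalk could be represented as data for the compliance mapping deliverable. The NIST AI RMF core function names (GOVERN, MAP, MEASURE, MANAGE) are real; the mapped labels are simplified illustrations, not the official NIST crosswalk text.

```python
# Illustrative crosswalk: NIST AI RMF core functions -> related ISO/IEC 42001
# themes and EU AI Act topic areas. Labels are simplified for illustration;
# authoritative crosswalks are published by NIST.
CROSSWALK: dict[str, dict[str, str]] = {
    "GOVERN":  {"iso_42001": "leadership and planning",
                "eu_ai_act": "governance and accountability"},
    "MAP":     {"iso_42001": "context and risk assessment",
                "eu_ai_act": "risk classification"},
    "MEASURE": {"iso_42001": "performance evaluation",
                "eu_ai_act": "testing and monitoring"},
    "MANAGE":  {"iso_42001": "operation and improvement",
                "eu_ai_act": "risk management and incident handling"},
}

def related_requirements(nist_function: str) -> dict[str, str]:
    """Look up illustrative related requirements for a NIST AI RMF function."""
    return CROSSWALK[nist_function.upper()]
```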
Industry Consortiums
- AI Alliance (IBM, Meta, others)
- Frontier Model Forum (Anthropic, Google, Microsoft, OpenAI)
- World Economic Forum AI Governance Alliance
Scope
In Scope
| Area | Details |
|---|---|
| Jurisdictions | EU (Germany, France, Spain, Netherlands), US (federal + CA, CO, NY, IL), UK, international |
| Frameworks | EU AI Act, state AI laws, NIST AI RMF, ISO 42001, Council of Europe Convention |
| PPP Models | Partnership on AI, standards bodies, industry consortiums, regulatory sandboxes |
| Company Types | AI developers, AI deployers, GPAI model providers |
Out of Scope
- Detailed sector-specific regulation (healthcare, financial services)
- Technical AI safety research
- Individual company compliance audits
- Lobbying strategy development
Deliverables
| # | Deliverable | Description | Format | Due |
|---|---|---|---|---|
| 1 | Global AI Regulatory Landscape Map | Comprehensive overview of AI regulations across target jurisdictions | Interactive Report (40 pages) + Visual Map | Week 4 |
| 2 | Public-Private Partnership Analysis | Assessment of governance models, effectiveness, and engagement opportunities | Research Report (25 pages) | Week 6 |
| 3 | Federal-State Preemption Risk Assessment | Analysis of preemption likelihood and contingency planning guidance | Legal Memo (15 pages) + Decision Tree | Week 7 |
| 4 | Multi-Stakeholder Governance Framework | Proposed framework for AI companies incorporating regulatory and soft law requirements | Framework Document (30 pages) + Implementation Guide | Week 10 |
| 5 | Compliance Mapping Tools | Practical tools mapping EU AI Act and state law requirements to operations | Excel/Interactive Tools + Checklists | Week 11 |
| 6 | Standards Engagement Strategy | Recommendations for participating in standards development and PPP initiatives | Strategy Memo (10 pages) + Presentation | Week 12 |
Methodology
Phase 1: Regulatory Landscape Mapping (Weeks 1-4)
Week 1-2: EU Framework Deep Dive
- Analyze EU AI Act obligations by risk category
- Research member state implementation approaches (leveraging multilingual capabilities)
- Map GPAI model provider obligations
- Identify AI Office guidance and enforcement priorities
Week 3-4: US and International Analysis
- Document federal executive orders and agency guidance
- Analyze key state laws (CO, CA, NY, IL)
- Review UK AI regulatory approach
- Assess international frameworks (UNESCO, Council of Europe)
- Produce Global AI Regulatory Landscape Map
Phase 2: Governance Models Analysis (Weeks 5-7)
Week 5-6: Public-Private Partnership Research
- Analyze Partnership on AI structure, outputs, and influence
- Review NIST stakeholder engagement model
- Examine ISO 42001 certification ecosystem
- Assess industry consortium effectiveness
- Interview/survey PPP participants where possible
- Produce Public-Private Partnership Analysis
Week 7: Preemption Risk Assessment
- Analyze December 2025 executive order legal authority
- Review constitutional preemption doctrine
- Assess litigation prospects and timeline
- Develop contingency planning guidance
- Produce Federal-State Preemption Risk Assessment
Phase 3: Framework Development (Weeks 8-10)
Week 8-9: Framework Design
- Synthesize regulatory and soft law requirements
- Identify common principles across frameworks
- Design governance structure incorporating multiple stakeholder interests
- Develop implementation methodology
Week 10: Framework Documentation
- Draft comprehensive framework document
- Create implementation guide
- Develop assessment criteria
- Produce Multi-Stakeholder Governance Framework
Phase 4: Practical Tools & Strategy (Weeks 11-12)
Week 11: Compliance Tools Development
- Build EU AI Act obligation mapping tool
- Create state law compliance checklists
- Develop risk classification decision trees (see the sketch after this list)
- Produce Compliance Mapping Tools
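As a sketch of what the risk classification decision tree could look like in tool form, assuming simplified system attributes; a production tool would encode the Act's detailed conditions, exemptions, and Annex III use cases rather than the three placeholder flags used here.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Simplified, illustrative attributes of a system under assessment
    uses_prohibited_practice: bool   # e.g., social scoring
    annex_iii_use_case: bool         # listed high-risk use case
    interacts_with_humans: bool      # chatbots, synthetic media, etc.

def classify(system: AISystem) -> str:
    """Illustrative decision tree approximating the EU AI Act risk tiers.
    Checks the most restrictive conditions first."""
    if system.uses_prohibited_practice:
        return "prohibited"
    if system.annex_iii_use_case:
        return "high_risk"
    if system.interacts_with_humans:
        return "limited_risk"
    return "minimal_risk"

# Hypothetical usage
print(classify(AISystem(False, True, True)))  # high_risk
```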
Week 12: Engagement Strategy & Presentation
- Develop standards body engagement recommendations
- Create PPP participation strategy
- Prepare executive presentation
- Produce Standards Engagement Strategy
Timeline
Week 1-2   ████░░░░░░░░░░░░░░░░░░░░ EU AI Act & Member State Analysis
Week 3-4   ░░░░████░░░░░░░░░░░░░░░░ US/International Analysis → Landscape Map
Week 5-6   ░░░░░░░░████░░░░░░░░░░░░ PPP Research → Partnership Analysis
Week 7     ░░░░░░░░░░░░██░░░░░░░░░░ Preemption Risk Assessment
Week 8-10  ░░░░░░░░░░░░░░██████░░░░ Framework Development
Week 11    ░░░░░░░░░░░░░░░░░░░░██░░ Compliance Tools
Week 12    ░░░░░░░░░░░░░░░░░░░░░░██ Engagement Strategy & Presentation
Multilingual Research Advantage
Isabel’s language capabilities enable comprehensive EU member state analysis:
| Language | Jurisdictions | Regulatory Bodies |
|---|---|---|
| German | Germany, Austria | BfDI, DSK, RTR |
| Spanish | Spain | AEPD, Ministry of Digital Transformation |
| French | France, Belgium, Luxembourg | CNIL, APD, CNPD |
| English | UK, Ireland, Netherlands, EU institutions | ICO, DPC, AP, AI Office |
This enables analysis of how member states are implementing EU AI Act requirements differently—critical intelligence for companies operating across the EU.
Resources Required
Access
- EUR-Lex and member state legal databases
- US state legislation databases
- NIST, ISO standards documentation
- Partnership on AI publications and resources
- Academic databases (SSRN, journal access)
Subject Matter Expert Support
| Role | Purpose | Time |
|---|---|---|
| Primary Mentor | Weekly guidance | 2 hrs/week |
| Regulatory Affairs Lead | EU AI Act expertise | 4 hrs total |
| US Policy Counsel | Federal-state dynamics | 3 hrs total |
| Standards Participation Expert | PPP engagement | 2 hrs total |
Budget
| Item | Estimated Cost |
|---|---|
| Standards documents (ISO) | $500 |
| Conference/webinar access | $400 |
| Research database access | Existing subscription |
| Total | $900 |
Success Criteria
Deliverable Quality
- All 6 deliverables completed on schedule
- Regulatory map covers 10+ jurisdictions comprehensively
- PPP analysis includes primary research (interviews/surveys)
- Framework validated by regulatory affairs team
- Compliance tools tested and refined based on feedback
Business Impact
- Framework adopted by compliance function
- Tools deployed for active compliance monitoring
- Client advisory applications identified (3+)
- Standards engagement recommendations implemented
Thought Leadership
- Research informs company regulatory submissions
- Framework shared with industry partners
- Publication/presentation opportunity identified
Risks and Mitigation
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Regulatory changes during project | High | Medium | Build flexibility; establish monitoring protocol; focus on principles |
| Federal preemption litigation outcomes uncertain | High | Medium | Scenario planning; contingency guidance for multiple outcomes |
| PPP participation access limited | Medium | Low | Focus on public materials; identify accessible stakeholders |
| Framework complexity overwhelming for users | Medium | Medium | Tiered implementation guide; prioritization methodology |
Career Positioning Value
This project positions Isabel as an expert in cross-border AI governance and multi-stakeholder regulation:
- Regulatory Expertise: Deep knowledge of EU AI Act and US regulatory dynamics
- Policy Translation: Ability to convert complex regulations into practical compliance guidance
- Multi-Stakeholder Navigation: Understanding of how soft law and standards interact with regulation
- International Perspective: Multilingual analysis capability rare among regulatory specialists
- Industry Relevance: Framework immediately applicable to AI company operations
Career Paths Enabled:
- AI Policy Counsel at technology company
- Regulatory Affairs Specialist
- Standards Development Participant
- Think Tank Policy Researcher
- Government Affairs / Public Policy Role
Alignment with Industry Trends
This project addresses critical 2025-2026 developments:
| Trend | Project Relevance |
|---|---|
| EU AI Act full application (August 2026) | Compliance mapping tools directly applicable |
| US federal-state regulatory tension | Preemption analysis provides strategic guidance |
| AI trust gap ($4.8T unrealized value) | Governance framework addresses trust building |
| PPP influence on AI policy | Engagement strategy enables meaningful participation |
| Standards convergence (NIST-ISO crosswalks) | Framework incorporates multiple standards |
Stakeholders
| Stakeholder | Role | Engagement |
|---|---|---|
| Primary Mentor | Day-to-day guidance | Weekly 1:1 |
| Regulatory Affairs Lead | Domain expertise | Bi-weekly check-ins |
| Compliance Team | End users of tools | Feedback at Weeks 4, 8, 11 |
| Policy/Government Affairs | Engagement strategy | Week 10-12 collaboration |
| External Advisors | Validation | Ad hoc consultation |
Approval
Intern Acknowledgment
I have reviewed this proposal and commit to delivering the outlined project within the specified timeline and quality standards.
Intern Signature: _________ Date: _____
Isabel Budenz
Mentor Approval
Mentor Signature: _________ Date: _____
Executive Sponsor Approval
Sponsor Signature: _________ Date: _____
*Proposal Version 1.0 | Focus: AI Industry Regulation & Public-Private Partnerships | January 2026*