Capstone Project Proposal Comparison
Isabel Budenz - Three Project Options
January 16, 2026
Executive Comparison
| Dimension | Institutional Innovation | Regulatory & PPP | Technical Skills |
|---|---|---|---|
| Title | AI Arbitration Governance Framework | Navigating the AI Regulatory Patchwork | Responsible AI Integration in Legal Practice |
| Duration | 12 weeks | 12 weeks | 12 weeks |
| Primary Focus | How arbitral institutions adopt & govern AI | Cross-border compliance & multi-stakeholder governance | Hands-on AI tool proficiency & implementation |
| Output Type | Research + Model Rules | Frameworks + Compliance Tools | Playbooks + Training Programs |
| Learning Style | Research & Analysis | Research & Synthesis | Learning by Doing |
Strategic Positioning
| Aspect | Institutional Innovation | Regulatory & PPP | Technical Skills |
|---|---|---|---|
| Career Trajectory | AI governance specialist for dispute resolution | Cross-border AI compliance & policy expert | Legal engineer / AI implementation lead |
| Differentiation | Niche expertise (arbitration + AI) | Broad regulatory knowledge | Practical technical competence |
| Market Demand | Growing (institutional transformation) | High (EU AI Act compliance) | Very High (79% of firms have adopted AI) |
| Competition | Low (few combine arbitration + AI) | Medium (many policy analysts) | Medium-Low (few lawyers have hands-on skills) |
| Thought Leadership | High (model rules contribution) | High (framework development) | Medium (practical vs. theoretical) |
Alignment with Isabel’s Background
| Background Element | Institutional Innovation | Regulatory & PPP | Technical Skills |
|---|---|---|---|
| LLM International Commercial Arbitration | ★★★★★ Direct alignment | ★★☆☆☆ Tangential | ★★☆☆☆ Tangential |
| LLB International & European Law | ★★★☆☆ Supports analysis | ★★★★★ Core foundation | ★★★☆☆ Ethical framework |
| EU AI Act Coursework | ★★★☆☆ Regulatory context | ★★★★★ Central focus | ★★★☆☆ Compliance context |
| A for Arbitration Experience | ★★★★★ Direct relevance | ★★☆☆☆ Research skills | ★★☆☆☆ Research skills |
| Multilingual (DE/ES/EN/FR) | ★★★★☆ Institutional research | ★★★★★ EU member state analysis | ★★☆☆☆ Limited application |
| Clifford Chance Internship | ★★★☆☆ Firm context | ★★★★☆ Regulatory exposure | ★★★☆☆ Firm context |
| Legend: ★★★★★ = Perfect fit | ★☆☆☆☆ = Minimal relevance |
Deliverables Comparison
Institutional Innovation (6 deliverables)
| # | Deliverable | Pages/Format | Week |
|---|---|---|---|
| 1 | Institutional Guidelines Comparative Analysis | 30-35 pages | 4 |
| 2 | Due Process Assessment Framework | 20 pages + Tool | 6 |
| 3 | Model AI Disclosure Protocol | Protocol + Templates | 8 |
| 4 | Enforceability Analysis Memo | 15-20 pages | 9 |
| 5 | Proposed Model Rules | 10-15 pages + Commentary | 11 |
| 6 | Executive Presentation & Training | 25 slides + Guide | 12 |
Regulatory & PPP (6 deliverables)
| # | Deliverable | Pages/Format | Week |
|---|---|---|---|
| 1 | Global AI Regulatory Landscape Map | 40 pages + Visual Map | 4 |
| 2 | Public-Private Partnership Analysis | 25 pages | 6 |
| 3 | Federal-State Preemption Risk Assessment | 15 pages + Decision Tree | 7 |
| 4 | Multi-Stakeholder Governance Framework | 30 pages + Implementation Guide | 10 |
| 5 | Compliance Mapping Tools | Excel/Interactive + Checklists | 11 |
| 6 | Standards Engagement Strategy | 10 pages + Presentation | 12 |
Technical Skills (6 deliverables + 2 certifications)
| # | Deliverable | Pages/Format | Week |
|---|---|---|---|
| 1 | AI Tool Proficiency Log | 30+ pages (ongoing) | 10 |
| 2 | AI Tool Evaluation Framework | 20 pages + Scorecard | 5 |
| 3 | Prompt Engineering Playbook | 40+ pages + Prompt Library | 8 |
| 4 | ABA Opinion 512 Compliance Checklist | Checklist + Guide | 9 |
| 5 | Firm-Wide AI Policy Templates | Templates + Adoption Guide | 11 |
| 6 | Training Curriculum & Materials | Curriculum + Slides + Exercises | 12 |
| + | Clio Legal AI Fundamentals Cert | Certificate | 2 |
| + | Prompt Engineering for Law Cert | Certificate | 6 |
Skills Developed
| Skill Category | Institutional Innovation | Regulatory & PPP | Technical Skills |
|---|---|---|---|
| Legal Research | ★★★★★ | ★★★★★ | ★★★☆☆ |
| Comparative Analysis | ★★★★★ | ★★★★★ | ★★☆☆☆ |
| Policy Development | ★★★★★ | ★★★★☆ | ★★★☆☆ |
| Technical AI Understanding | ★★☆☆☆ | ★★☆☆☆ | ★★★★★ |
| Hands-on Tool Proficiency | ★☆☆☆☆ | ★☆☆☆☆ | ★★★★★ |
| Prompt Engineering | ★☆☆☆☆ | ★☆☆☆☆ | ★★★★★ |
| Compliance Implementation | ★★★☆☆ | ★★★★★ | ★★★★☆ |
| Training Delivery | ★★★☆☆ | ★★☆☆☆ | ★★★★★ |
| Stakeholder Engagement | ★★★★☆ | ★★★★★ | ★★★☆☆ |
| Framework Design | ★★★★☆ | ★★★★★ | ★★★★☆ |
Key Research Sources by Proposal
Institutional Innovation
- AAA-ICDR AI Arbitrator documentation
- ICC Commission Task Force materials
- CIArb, SCC, VIAC guidelines
- White & Case 2025 International Arbitration Survey
- New York Convention case law
- UNESCO Guidelines on AI in Courts
Regulatory & PPP
- EU AI Act (full text + AI Office guidance)
- US Executive Orders (Dec 2025 preemption order)
- State AI laws (CO, CA, NY, IL)
- NIST AI Risk Management Framework
- ISO/IEC 42001 standards
- Partnership on AI publications
Technical Skills
- ABA Formal Opinion 512
- State bar AI guidance (NY, CA, PA)
- Legal AI tool documentation
- Coursera/Clio certification materials
- Industry reports on legal AI adoption
- Prompt engineering literature
Risk Comparison
| Risk Type | Institutional Innovation | Regulatory & PPP | Technical Skills |
|---|---|---|---|
| Regulatory Change | Medium (ICC guidance pending) | High (active policy shifts) | Low (stable ethical framework) |
| Scope Creep | Medium | High | Medium |
| Access/Resources | Low (public materials) | Low (public materials) | Medium (tool subscriptions) |
| Learning Curve | Low (legal research focus) | Medium (multi-framework) | High (technical skills) |
| Stakeholder Complexity | Medium | High | Medium |
| Deliverable Ambiguity | Low (clear outputs) | Medium (framework scope) | Low (concrete artifacts) |
Overall Risk Level:
- Institutional Innovation: Low-Medium
- Regulatory & PPP: Medium
- Technical Skills: Medium-High (but highest reward)
Budget Comparison
| Item | Institutional | Regulatory | Technical |
|---|---|---|---|
| Database access | Existing | Existing | Existing |
| Standards/certifications | - | $500 (ISO) | $100 (Coursera) |
| Tool subscriptions | - | - | $500 |
| External consultation | $1,000 | - | - |
| Conference/events | $300 | $400 | - |
| Materials | - | - | $100 |
| Total | $1,300 | $900 | $700 |
Timeline Comparison
Week-by-Week Overview
| Week | Institutional Innovation | Regulatory & PPP | Technical Skills |
|---|---|---|---|
| 1 | Data collection | EU AI Act deep dive | Clio cert + conceptual learning |
| 2 | Literature review | EU member state analysis | Ethics deep dive |
| 3 | Comparison framework | US federal analysis | Initial tool exploration |
| 4 | Institutional Report | Regulatory Landscape Map | Legal research tools |
| 5 | Due process research | PPP research | Evaluation Framework + contracts |
| 6 | Due Process Framework | PPP Analysis | Prompt engineering cert |
| 7 | Disclosure protocol draft | Preemption Assessment | Prompt playbook development |
| 8 | Disclosure Protocol | Framework design | Prompt Playbook |
| 9 | Enforceability Memo | Framework documentation | Compliance Checklist |
| 10 | Model rules drafting | Governance Framework | Proficiency Log + policy draft |
| 11 | Model Rules | Compliance Tools | Policy Templates |
| 12 | Presentation + Training | Engagement Strategy | Training Curriculum |
Employer Value Proposition
What Each Proposal Demonstrates to Employers
| Proposal | Key Demonstration | Employer Benefit |
|---|---|---|
| Institutional | “I can shape industry standards” | Thought leadership, institutional credibility |
| Regulatory | “I can navigate complex multi-jurisdictional compliance” | Risk mitigation, global operations support |
| Technical | “I can implement AI tools responsibly” | Immediate productivity, training capability |
Ideal Employer Types
| Employer Type | Institutional | Regulatory | Technical |
|---|---|---|---|
| AI Company (Anthropic, OpenAI) | ★★★☆☆ | ★★★★★ | ★★★★☆ |
| Big Law Firm | ★★★★★ | ★★★★☆ | ★★★★★ |
| Arbitral Institution (ICC, LCIA) | ★★★★★ | ★★★☆☆ | ★★☆☆☆ |
| Think Tank (GovAI, FPF) | ★★★★☆ | ★★★★★ | ★★☆☆☆ |
| In-House Legal (Tech Company) | ★★★☆☆ | ★★★★★ | ★★★★★ |
| Legal Tech Company | ★★★☆☆ | ★★★☆☆ | ★★★★★ |
| Government/Regulator | ★★★☆☆ | ★★★★★ | ★★☆☆☆ |
Recommendation Matrix
Choose Institutional Innovation If:
- ✅ You want to leverage your LLM specialization directly
- ✅ You’re interested in dispute resolution careers long-term
- ✅ You want to contribute to emerging industry standards
- ✅ You prefer research-intensive work
- ✅ You want lower-risk, clearly scoped deliverables
Choose Regulatory & PPP If:
- ✅ You want broad exposure to AI governance landscape
- ✅ You’re interested in policy/government affairs careers
- ✅ You want to maximize use of multilingual capabilities
- ✅ You’re comfortable with ambiguity and evolving requirements
- ✅ You want to understand multi-stakeholder dynamics
Choose Technical Skills If:
- ✅ You want to differentiate from other legal professionals
- ✅ You’re interested in legal tech or implementation roles
- ✅ You learn best by doing rather than reading
- ✅ You want certifications to credential your AI knowledge
- ✅ You’re comfortable with a steeper learning curve
Hybrid Approach Option
If the internship allows flexibility, consider combining elements:
Recommended Hybrid: Institutional + Technical (Lite)
| Phase | Focus | Weeks |
|---|---|---|
| 1 | Tool proficiency building + Clio cert | 1-2 |
| 2 | Institutional comparative analysis | 3-6 |
| 3 | Due process framework + disclosure protocol | 7-9 |
| 4 | Model rules + prompt playbook for arbitration | 10-12 |
This combines Isabel’s arbitration expertise with practical AI skills, producing both thought leadership deliverables and demonstrable technical competence.
Summary Decision Framework
| If Your Priority Is… | Choose |
|---|---|
| Leveraging LLM specialization | Institutional Innovation |
| Broadest career applicability | Regulatory & PPP |
| Standing out from other candidates | Technical Skills |
| Lowest execution risk | Institutional Innovation |
| Highest learning growth | Technical Skills |
| Multilingual advantage maximization | Regulatory & PPP |
| Immediate employer value | Technical Skills |
| Long-term thought leadership | Institutional Innovation |
Comparison prepared January 2026
Proposal 1: AI Arbitration Governance Framework
Analyzing Due Process and Transparency Requirements for Algorithmic Dispute Resolution
Focus Area: Institutional Innovation in Law
Intern Information
| Field | Details |
|---|---|
| Name | Isabel Budenz |
| Program | LLM International Commercial Arbitration, University of Stockholm (2025-2026) |
| Background | LLB International and European Law, University of Groningen (2022-2025) |
| Languages | German (Native), Spanish (Native), English (C2), French (B1) |
| Relevant Experience | Legal Researcher, A for Arbitration (2019-2025); Clifford Chance Antitrust Global Virtual Internship |
| Relevant Coursework | Introduction to AI and the EU AI Act; International Commercial Arbitration |
Executive Summary
International arbitration is experiencing a paradigm shift. In November 2025, the AAA-ICDR launched the first AI-native arbitrator from a major institution, while the ICC, CIArb, SCC, and VIAC have all issued guidance on AI use in proceedings. This project will develop a comprehensive governance framework for AI in arbitration, analyzing due process requirements and proposing model rules that balance innovation with procedural fairness.
This institutional innovation focus leverages Isabel’s LLM specialization in International Commercial Arbitration and positions her as an expert in how arbitral institutions are transforming through AI adoption.
Problem Statement
The rapid adoption of AI in international arbitration has outpaced governance frameworks:
| Date | Development |
|---|---|
| 2024 | SVAMC and SCC issue first AI guidelines |
| March 2025 | AAA-ICDR and CIArb release AI guidance |
| April 2025 | VIAC publishes AI note |
| November 2025 | AAA-ICDR launches AI-native arbitrator |
| 2026 | ICC Task Force expected to issue recommendations |
Critical Questions Remain Unanswered:
- Do AI arbitrators satisfy due process requirements across jurisdictions?
- What transparency obligations should apply to algorithmic decision-making?
- How should parties disclose AI use in proceedings?
- What standards ensure AI-assisted awards remain enforceable under the New York Convention?
Business Need: [Company Name] requires a comprehensive framework to advise clients on AI in arbitration, evaluate institutional AI offerings, and contribute to industry standards development.
Project Objectives
Primary Objectives
- Conduct comprehensive comparative analysis of AI guidelines from 8+ arbitral institutions
- Develop due process assessment framework for evaluating AI arbitrators and AI-assisted proceedings
- Create model disclosure protocols for parties and arbitrators using AI tools
- Analyze enforceability implications of AI-assisted awards under the New York Convention
Secondary Objectives
- Assess “high-risk” AI classification implications under the EU AI Act for judicial/arbitral systems
- Propose harmonized standards for AI governance in international arbitration
- Develop training materials on AI arbitration for dispute resolution practitioners
Research Foundation
Key Institutional Developments
AAA-ICDR AI Arbitrator (November 2025)
- First AI-native arbitrator from major institution
- Trained on 1,500+ real construction arbitration awards
- Available for document-only construction disputes under $100,000
- Projected 30-50% cost reduction for parties
- Expansion planned for 2026+
Institutional Guidelines Comparison
| Institution | Document | Key Features |
|---|---|---|
| SVAMC | Guidelines on AI Use (2024) | Pioneering framework for Silicon Valley disputes |
| SCC | Guide to AI in SCC Cases (2024) | Nordic approach to AI governance |
| AAA-ICDR | Guidance on Arbitrators’ AI Use (March 2025) | Precursor to the AI arbitrator launch |
| CIArb | Guidelines on AI in Arbitration (March 2025) | Professional body perspective |
| VIAC | Note on AI in Proceedings (April 2025) | Central European approach |
| ICC | Task Force (announced Sept 2024) | Global harmonization effort |
Regulatory Context
- UNESCO Guidelines on AI in Courts (December 2025): 15 principles for judicial AI
- EU AI Act: Potential “high-risk” classification for AI in judicial contexts
- California: First U.S. state to adopt generative AI rules for courts (September 2025)
Scope
In Scope
| Area | Details |
|---|---|
| Institutions | ICC, AAA-ICDR, LCIA, SIAC, HKIAC, DIAC, SCC, VIAC, CIArb, SVAMC |
| AI Applications | AI arbitrators, AI-assisted drafting, document review, case management, predictive analytics |
| Legal Issues | Due process, transparency, party autonomy, enforceability, confidentiality |
| Jurisdictions | New York Convention states, EU (AI Act), US, UK, Singapore, UAE |
Out of Scope
- Technical AI model development or evaluation
- Domestic court AI adoption (except for comparative context)
- Commercial AI vendor product reviews
- Mediation and other non-arbitration ADR
Deliverables
| # | Deliverable | Description | Format | Due |
|---|---|---|---|---|
| 1 | Institutional Guidelines Comparative Analysis | Side-by-side analysis of AI policies from 10 institutions | Report (30-35 pages) | Week 4 |
| 2 | Due Process Assessment Framework | Methodology for evaluating AI arbitrators against procedural fairness standards | Framework Document (20 pages) + Assessment Tool | Week 6 |
| 3 | Model AI Disclosure Protocol | Template disclosure requirements for parties and arbitrators | Protocol Document + Templates | Week 8 |
| 4 | Enforceability Analysis Memo | New York Convention implications for AI-assisted awards | Legal Memo (15-20 pages) | Week 9 |
| 5 | Proposed Model Rules | Draft harmonized standards for AI in international arbitration | Model Rules (10-15 pages) + Commentary | Week 11 |
| 6 | Executive Presentation & Training Module | Summary for leadership + practitioner training | PowerPoint (25 slides) + Training Guide | Week 12 |
Methodology
Phase 1: Institutional Landscape Mapping (Weeks 1-4)
Week 1-2: Data Collection
- Gather all published AI guidelines, rules, and announcements from target institutions
- Conduct literature review of academic commentary and practitioner perspectives
- Review White & Case 2025 International Arbitration Survey AI findings
- Identify key contacts at institutions for potential clarification
Week 3-4: Comparative Analysis
- Develop comparison framework (scope, disclosure requirements, restrictions, governance); see the sketch after this list
- Analyze areas of convergence and divergence
- Identify gaps in current guidance
- Produce Institutional Guidelines Comparative Analysis
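A minimal sketch of how the comparison framework could be structured as data, assuming illustrative dimension names and placeholder entries (Institution A/B are stand-ins, not summaries of any real guidelines):

```python
# Placeholder comparison structure: one record per institution, one field per
# comparison dimension. All entries are illustrative, not actual guidance.
GUIDELINE_DIMENSIONS = ["scope", "disclosure", "restrictions", "governance"]

guidelines = {
    "Institution A": {
        "scope": "parties, arbitrators, tribunal secretaries",
        "disclosure": "required for material AI use",
        "restrictions": "no delegation of decision-making to AI",
        "governance": "non-binding guideline",
    },
    "Institution B": {
        "scope": "arbitrators only",
        "disclosure": "",  # silent on disclosure -> flagged as a gap
        "restrictions": "arbitrator retains responsibility for the award",
        "governance": "practice note",
    },
}

def find_gaps(data: dict) -> dict:
    """Return, per institution, the dimensions where guidance is missing or silent."""
    return {
        institution: [d for d in GUIDELINE_DIMENSIONS if not fields.get(d)]
        for institution, fields in data.items()
    }

print(find_gaps(guidelines))  # {'Institution A': [], 'Institution B': ['disclosure']}
```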
Phase 2: Due Process Framework Development (Weeks 5-6)
Week 5: Legal Standards Research
- Research due process requirements across major arbitration jurisdictions
- Analyze human oversight requirements in the UNESCO Guidelines and the EU AI Act
- Review case law on procedural fairness in arbitration
- Examine “right to be heard” implications for algorithmic decisions
Week 6: Framework Construction
- Develop assessment criteria for AI arbitrators
- Create evaluation methodology for AI-assisted proceedings
- Build practical assessment tool (a minimal sketch follows this list)
- Produce Due Process Assessment Framework
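One way the assessment tool could be sketched in code, assuming invented criterion names, weights, and a simple weighted pass/flag model; the real framework would draw its criteria from the institutional guidelines, UNESCO principles, and case law identified above:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One due process criterion, e.g. drawn from institutional guidelines or case law."""
    name: str
    question: str
    weight: int = 1  # relative importance; illustrative only

@dataclass
class Assessment:
    """Scores an AI arbitrator or AI-assisted proceeding against a criteria set."""
    criteria: list[Criterion] = field(default_factory=list)
    answers: dict[str, bool] = field(default_factory=dict)  # criterion name -> satisfied?

    def flagged(self) -> list[str]:
        """Criteria that are unmet and therefore need a due process review."""
        return [c.name for c in self.criteria if not self.answers.get(c.name, False)]

    def score(self) -> float:
        """Weighted share of satisfied criteria (0.0 to 1.0)."""
        total = sum(c.weight for c in self.criteria)
        met = sum(c.weight for c in self.criteria if self.answers.get(c.name, False))
        return met / total if total else 0.0

# Hypothetical criteria, for illustration only
criteria = [
    Criterion("right_to_be_heard", "Can parties respond to all material the AI relies on?", weight=3),
    Criterion("human_oversight", "Is there meaningful human review before the award issues?", weight=3),
    Criterion("transparency", "Is the AI's role disclosed to both parties?", weight=2),
    Criterion("reasoned_award", "Does the award state reasons a party can challenge?", weight=2),
]
assessment = Assessment(criteria, {"right_to_be_heard": True, "human_oversight": True,
                                   "transparency": False, "reasoned_award": True})
print(assessment.score())    # 0.8
print(assessment.flagged())  # ['transparency']
```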
Phase 3: Practical Guidance Development (Weeks 7-9)
Week 7-8: Disclosure Protocol
- Analyze existing disclosure obligations in institutional rules
- Research confidentiality implications of AI tool use
- Draft model disclosure requirements for:
  - Party use of AI in submissions
  - Arbitrator use of AI in analysis and drafting
  - AI-native arbitrator proceedings
- Produce Model AI Disclosure Protocol
Week 9: Enforceability Analysis
- Research New York Convention requirements (Article V grounds)
- Analyze “public policy” exception implications for AI awards
- Review recent enforcement decisions
- Consider jurisdictional variations
- Produce Enforceability Analysis Memo
Phase 4: Standards Development & Knowledge Transfer (Weeks 10-12)
Week 10-11: Model Rules Drafting
- Synthesize findings into proposed harmonized standards
- Draft model rules with commentary
- Align with existing institutional frameworks
- Incorporate stakeholder feedback
- Produce Proposed Model Rules
Week 12: Presentation & Training
- Prepare executive summary presentation
- Develop practitioner training module
- Present to dispute resolution leadership
- Deliver pilot training session
Timeline
Week 1-2   ████░░░░░░░░░░░░░░░░░░░░ Data Collection & Literature Review
Week 3-4   ░░░░████░░░░░░░░░░░░░░░░ Comparative Analysis → Institutional Report
Week 5-6   ░░░░░░░░████░░░░░░░░░░░░ Due Process Framework Development
Week 7-8   ░░░░░░░░░░░░████░░░░░░░░ Disclosure Protocol & Templates
Week 9     ░░░░░░░░░░░░░░░░██░░░░░░ Enforceability Analysis
Week 10-11 ░░░░░░░░░░░░░░░░░░████░░ Model Rules Drafting
Week 12    ░░░░░░░░░░░░░░░░░░░░░░██ Presentation & Training
Key Milestones
| Week | Milestone | Checkpoint |
|---|---|---|
| 4 | Institutional Comparative Analysis complete | Stakeholder review |
| 6 | Due Process Framework delivered | Legal team validation |
| 8 | Disclosure Protocol finalized | Practice group feedback |
| 9 | Enforceability Memo complete | Partner review |
| 11 | Model Rules drafted | External expert consultation |
| 12 | Project complete | Final presentation |
Multilingual Research Advantage
Isabel’s language capabilities enable access to primary sources across major arbitration jurisdictions:
| Language | Sources | Value |
|---|---|---|
| German | DIS rules and commentary, German arbitration scholarship, VIAC materials | Central European perspective |
| Spanish | Spanish Arbitration Act, Latin American institutional developments | Civil law tradition insights |
| French | ICC primary materials, French arbitration doctrine, Swiss scholarship | Global arbitration hub perspective |
| English | Common law jurisdictions, international materials, academic literature | Comprehensive coverage |
Resources Required
Access
- Kluwer Arbitration Database
- Institutional rules and guidelines (publicly available + subscription)
- Academic journal access (Journal of International Arbitration, Arbitration International)
- Case law databases (New York Convention enforcement decisions)
Subject Matter Expert Support
| Role | Purpose | Time |
|---|---|---|
| Primary Mentor | Weekly guidance | 2 hrs/week |
| Arbitration Partner | Strategic input, model rules review | 4 hrs total |
| Technology Counsel | AI regulatory consultation | 3 hrs total |
| External Arbitrator | Practitioner perspective validation | 2 hrs total |
Budget
| Item | Estimated Cost |
|---|---|
| Database access | Existing subscription |
| External expert consultation | $1,000 |
| Conference attendance (virtual) | $300 |
| Total | $1,300 |
Success Criteria
Deliverable Quality
- All 6 deliverables completed on schedule
- Comparative analysis covers 10+ institutions
- Due process framework validated by arbitration practitioners
- Model rules aligned with existing institutional approaches
- Multilingual sources incorporated in analysis
Business Impact
- Framework adopted by dispute resolution practice
- At least one client advisory application
- Training delivered to 15+ team members
- Positive feedback from stakeholders (>4.2/5)
Thought Leadership Potential
- Publication-ready content identified
- Conference presentation opportunity explored
- Contribution to ICC Task Force considered
Risks and Mitigation
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| ICC Task Force issues guidance during project | Medium | Medium | Build flexibility for incorporation; position as complementary analysis |
| Limited access to institutional decision-making rationale | Medium | Low | Focus on public materials; supplement with practitioner interviews |
| Rapid evolution of AI arbitrator offerings | Medium | Medium | Establish monitoring protocol; scope to framework principles |
| Due process standards vary significantly by jurisdiction | Low | Medium | Focus on common principles; note jurisdictional variations |
Career Positioning Value
This project positions Isabel as an expert in AI governance for international arbitration:
- Niche Specialization: Few professionals combine arbitration LLM training with AI governance expertise
- Institutional Relationships: Research creates connections with major arbitral institutions
- Thought Leadership: Model rules development demonstrates policy contribution capability
- Practical Application: Framework immediately applicable to client advisory work
- Publication Potential: Comparative analysis suitable for academic or practitioner publication
Stakeholders
| Stakeholder | Role | Engagement |
|---|---|---|
| Primary Mentor | Day-to-day guidance | Weekly 1:1 |
| Arbitration Partner | Executive sponsor | Bi-weekly check-ins |
| Dispute Resolution Team | End users | Feedback at Weeks 4, 8 |
| Technology/Innovation Team | AI expertise | Ad hoc consultation |
| External Arbitrators | Practitioner validation | Week 10 review |
Approval
Intern Acknowledgment
I have reviewed this proposal and commit to delivering the outlined project within the specified timeline and quality standards.
Intern Signature: _________ Date: _____
Isabel Budenz
Mentor Approval
Mentor Signature: _________ Date: _____
Executive Sponsor Approval
Sponsor Signature: _________ Date: _____
*Proposal Version 1.0 | Focus: Institutional Innovation in Law | January 2026*
Proposal 2: Navigating the AI Regulatory Patchwork
A Multi-Stakeholder Governance Framework for Responsible AI
Focus Area: AI Industry Regulation & Public-Private Partnerships
Intern Information
| Field | Details |
|---|---|
| Name | Isabel Budenz |
| Program | LLM International Commercial Arbitration, University of Stockholm (2025-2026) |
| Background | LLB International and European Law, University of Groningen (2022-2025) |
| Languages | German (Native), Spanish (Native), English (C2), French (B1) |
| Relevant Experience | Legal Researcher, A for Arbitration (2019-2025); Clifford Chance Antitrust Global Virtual Internship |
| Relevant Coursework | Introduction to AI and the EU AI Act; International Commercial Arbitration |
Executive Summary
The global AI regulatory landscape is fragmenting rapidly. The EU AI Act established the world’s first comprehensive framework, while the US pursues a deregulatory federal approach that conflicts with state-level initiatives. Meanwhile, public-private partnerships like the Partnership on AI and standards bodies like NIST and ISO are developing soft law frameworks that increasingly influence compliance expectations.
This project will develop a practical governance framework for AI companies navigating this complex multi-jurisdictional environment, with particular focus on public-private partnership models that can bridge regulatory gaps and build the trust necessary for AI adoption.
This regulatory and governance focus leverages Isabel’s International and European Law background and EU AI Act coursework, positioning her as an expert in cross-border AI compliance and multi-stakeholder governance.
Problem Statement
The Regulatory Fragmentation Challenge
EU AI Act Timeline (Now in Effect)
| Date | Milestone |
|---|---|
| August 1, 2024 | Entered into force |
| February 2, 2025 | Prohibited AI practices banned; AI literacy requirements effective |
| August 2, 2025 | GPAI obligations; AI Office operational; national authorities designated |
| August 2, 2026 | Full application including high-risk AI systems |
| August 2, 2027 | Safety components compliance |
Penalties: Up to EUR 35 million or 7% of global annual turnover, whichever is higher
US Federal-State Tension
| Date | Development |
|---|---|
| January 2025 | Executive Order 14179 revoked the Biden administration’s AI executive order |
| July 2025 | “Preventing Woke AI” order established federal procurement requirements |
| December 2025 | “National AI Policy Framework” order signaled federal preemption of state laws |
The December 2025 order:
- Established AI Litigation Task Force to challenge state AI laws
- Directed Commerce Department evaluation of state laws within 90 days
- Specifically targeted Colorado AI Act
- Tied federal funding to state AI policy compliance
However: 36 state AGs sent a bipartisan letter opposing preemption, and the Senate voted 99-1 against penalizing states.
The Trust Gap
- AI enterprise adoption surged 115% (2023-2024)
- Only 62% of business leaders believe AI is deployed responsibly
- Only 39% of companies have adequate AI governance frameworks
- Estimated $4.8 trillion unrealized value by 2033 without trustworthy AI governance
Business Need: [Company Name] requires a comprehensive framework to navigate multi-jurisdictional compliance, engage effectively with regulators and standards bodies, and demonstrate responsible AI practices that build stakeholder trust.
Project Objectives
Primary Objectives
- Map the global AI regulatory landscape across EU, US (federal + key states), UK, and international frameworks
- Analyze public-private partnership models in AI governance and identify effective practices
- Develop a multi-stakeholder governance framework for AI companies operating across jurisdictions
- Create practical compliance tools mapping EU AI Act and state law requirements to operational practices
Secondary Objectives
- Assess federal preemption risks for state AI laws and develop contingency guidance
- Evaluate standards alignment opportunities (NIST AI RMF, ISO 42001, EU AI Act)
- Propose engagement strategy for standards bodies and multi-stakeholder initiatives
Research Foundation
Key Regulatory Frameworks
EU AI Act
- World’s first comprehensive AI legal framework
- Risk-based approach (prohibited, high-risk, limited risk, minimal risk)
- General Purpose AI (GPAI) model obligations
- Technical documentation, transparency reports, copyright compliance required
US Federal Landscape
- Executive order-driven (subject to change)
- December 2025 order signals preemption intent but cannot by itself override state statutes
- NIST AI Risk Management Framework remains canonical guidance
- Sector-specific regulation (FDA, FTC, financial regulators)
State-Level Innovation
- Colorado AI Act (targeted by federal order)
- California AI transparency requirements
- Illinois Biometric Information Privacy Act
- New York City automated employment decision tools law
International Standards
| Framework | Issuer | Status |
|---|---|---|
| AI Risk Management Framework | NIST | Published; Generative AI Profile (July 2024) |
| ISO/IEC 42001 | ISO | Certifiable AI governance standard |
| AI Framework Convention | Council of Europe | First legally binding AI treaty (2024) |
| AI Ethics Recommendation | UNESCO | Global standard for 194 member states |
Public-Private Partnership Models
Partnership on AI (PAI)
- 129 organizations across 16 countries
- Responsible Practices for Synthetic Media (Adobe, BBC, OpenAI, TikTok)
- Guidance cited by NIST, OECD as policy inputs
- AI Policy Forum convened for UN engagement
Standards Development Organizations
- NIST: Crosswalks aligning AI RMF with OECD and ISO 42001
- IEEE: 7000-2021 ethical system design standard
- ISO: 42001 certification scheme
Industry Consortiums
- AI Alliance (IBM, Meta, others)
- Frontier Model Forum (Anthropic, Google, Microsoft, OpenAI)
- World Economic Forum AI Governance Alliance
Scope
In Scope
| Area | Details |
|---|---|
| Jurisdictions | EU (Germany, France, Spain, Netherlands), US (federal + CA, CO, NY, IL), UK, international |
| Frameworks | EU AI Act, state AI laws, NIST AI RMF, ISO 42001, Council of Europe Convention |
| PPP Models | Partnership on AI, standards bodies, industry consortiums, regulatory sandboxes |
| Company Types | AI developers, AI deployers, GPAI model providers |
Out of Scope
- Detailed sector-specific regulation (healthcare, financial services)
- Technical AI safety research
- Individual company compliance audits
- Lobbying strategy development
Deliverables
| # | Deliverable | Description | Format | Due |
|---|---|---|---|---|
| 1 | Global AI Regulatory Landscape Map | Comprehensive overview of AI regulations across target jurisdictions | Interactive Report (40 pages) + Visual Map | Week 4 |
| 2 | Public-Private Partnership Analysis | Assessment of governance models, effectiveness, and engagement opportunities | Research Report (25 pages) | Week 6 |
| 3 | Federal-State Preemption Risk Assessment | Analysis of preemption likelihood and contingency planning guidance | Legal Memo (15 pages) + Decision Tree | Week 7 |
| 4 | Multi-Stakeholder Governance Framework | Proposed framework for AI companies incorporating regulatory and soft law requirements | Framework Document (30 pages) + Implementation Guide | Week 10 |
| 5 | Compliance Mapping Tools | Practical tools mapping EU AI Act and state law requirements to operations | Excel/Interactive Tools + Checklists | Week 11 |
| 6 | Standards Engagement Strategy | Recommendations for participating in standards development and PPP initiatives | Strategy Memo (10 pages) + Presentation | Week 12 |
Methodology
Phase 1: Regulatory Landscape Mapping (Weeks 1-4)
Week 1-2: EU Framework Deep Dive
- Analyze EU AI Act obligations by risk category
- Research member state implementation approaches (leveraging multilingual capabilities)
- Map GPAI model provider obligations
- Identify AI Office guidance and enforcement priorities
Week 3-4: US and International Analysis
- Document federal executive orders and agency guidance
- Analyze key state laws (CO, CA, NY, IL)
- Review UK AI regulatory approach
- Assess international frameworks (UNESCO, Council of Europe)
- Produce Global AI Regulatory Landscape Map
Phase 2: Governance Models Analysis (Weeks 5-7)
Week 5-6: Public-Private Partnership Research
- Analyze Partnership on AI structure, outputs, and influence
- Review NIST stakeholder engagement model
- Examine ISO 42001 certification ecosystem
- Assess industry consortium effectiveness
- Interview/survey PPP participants where possible
- Produce Public-Private Partnership Analysis
Week 7: Preemption Risk Assessment
- Analyze December 2025 executive order legal authority
- Review constitutional preemption doctrine
- Assess litigation prospects and timeline
- Develop contingency planning guidance
- Produce Federal-State Preemption Risk Assessment
Phase 3: Framework Development (Weeks 8-10)
Week 8-9: Framework Design
- Synthesize regulatory and soft law requirements
- Identify common principles across frameworks
- Design governance structure incorporating multiple stakeholder interests
- Develop implementation methodology
Week 10: Framework Documentation
- Draft comprehensive framework document
- Create implementation guide
- Develop assessment criteria
- Produce Multi-Stakeholder Governance Framework
Phase 4: Practical Tools & Strategy (Weeks 11-12)
Week 11: Compliance Tools Development
- Build EU AI Act obligation mapping tool
- Create state law compliance checklists
- Develop risk classification decision trees (see the sketch after this list)
- Produce Compliance Mapping Tools
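A minimal sketch of the kind of logic a risk classification decision tree could encode, assuming heavily simplified yes/no questions; the actual EU AI Act analysis (prohibited practices under Article 5, Annex III high-risk use cases, transparency obligations, GPAI rules) is far more granular and fact-specific:

```python
def classify_eu_ai_act_risk(system: dict) -> str:
    """Toy classification following the EU AI Act's risk tiers.

    `system` is a dict of yes/no answers, e.g.
    {"prohibited_practice": False, "annex_iii_use_case": True}.
    The question names are illustrative, not the statutory tests.
    """
    if system.get("prohibited_practice"):
        return "prohibited"
    if system.get("annex_iii_use_case") or system.get("safety_component"):
        return "high-risk"
    if system.get("interacts_with_humans") or system.get("generates_synthetic_content"):
        return "limited-risk (transparency obligations)"
    return "minimal-risk"

# Example: a CV-screening tool (employment is an Annex III area)
print(classify_eu_ai_act_risk({"prohibited_practice": False,
                               "annex_iii_use_case": True}))  # high-risk
```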
Week 12: Engagement Strategy & Presentation
- Develop standards body engagement recommendations
- Create PPP participation strategy
- Prepare executive presentation
- Produce Standards Engagement Strategy
Timeline
Week 1-2   ████░░░░░░░░░░░░░░░░░░░░ EU AI Act & Member State Analysis
Week 3-4   ░░░░████░░░░░░░░░░░░░░░░ US/International Analysis → Landscape Map
Week 5-6   ░░░░░░░░████░░░░░░░░░░░░ PPP Research → Partnership Analysis
Week 7     ░░░░░░░░░░░░██░░░░░░░░░░ Preemption Risk Assessment
Week 8-10  ░░░░░░░░░░░░░░██████░░░░ Framework Development
Week 11    ░░░░░░░░░░░░░░░░░░░░██░░ Compliance Tools
Week 12    ░░░░░░░░░░░░░░░░░░░░░░██ Engagement Strategy & Presentation
Multilingual Research Advantage
Isabel’s language capabilities enable comprehensive EU member state analysis:
| Language | Jurisdictions | Regulatory Bodies |
|---|---|---|
| German | Germany, Austria | BfDI, DSK, RTR |
| Spanish | Spain | AEPD, Ministry of Digital Transformation |
| French | France, Belgium, Luxembourg | CNIL, APD, CNPD |
| English | UK, Ireland, Netherlands, EU institutions | ICO, DPC, AP, AI Office |
This enables analysis of how member states are implementing EU AI Act requirements differently—critical intelligence for companies operating across the EU.
Resources Required
Access
- EUR-Lex and member state legal databases
- US state legislation databases
- NIST, ISO standards documentation
- Partnership on AI publications and resources
- Academic databases (SSRN, journal access)
Subject Matter Expert Support
| Role | Purpose | Time |
|---|---|---|
| Primary Mentor | Weekly guidance | 2 hrs/week |
| Regulatory Affairs Lead | EU AI Act expertise | 4 hrs total |
| US Policy Counsel | Federal-state dynamics | 3 hrs total |
| Standards Participation Expert | PPP engagement | 2 hrs total |
Budget
| Item | Estimated Cost |
|---|---|
| Standards documents (ISO) | $500 |
| Conference/webinar access | $400 |
| Research database access | Existing subscription |
| Total | $900 |
Success Criteria
Deliverable Quality
- All 6 deliverables completed on schedule
- Regulatory map covers 10+ jurisdictions comprehensively
- PPP analysis includes primary research (interviews/surveys)
- Framework validated by regulatory affairs team
- Compliance tools tested and refined based on feedback
Business Impact
- Framework adopted by compliance function
- Tools deployed for active compliance monitoring
- Client advisory applications identified (3+)
- Standards engagement recommendations implemented
Thought Leadership
- Research informs company regulatory submissions
- Framework shared with industry partners
- Publication/presentation opportunity identified
Risks and Mitigation
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Regulatory changes during project | High | Medium | Build flexibility; establish monitoring protocol; focus on principles |
| Federal preemption litigation outcomes uncertain | High | Medium | Scenario planning; contingency guidance for multiple outcomes |
| PPP participation access limited | Medium | Low | Focus on public materials; identify accessible stakeholders |
| Framework complexity overwhelming for users | Medium | Medium | Tiered implementation guide; prioritization methodology |
Career Positioning Value
This project positions Isabel as an expert in cross-border AI governance and multi-stakeholder regulation:
- Regulatory Expertise: Deep knowledge of EU AI Act and US regulatory dynamics
- Policy Translation: Ability to convert complex regulations into practical compliance guidance
- Multi-Stakeholder Navigation: Understanding of how soft law and standards interact with regulation
- International Perspective: Multilingual analysis capability rare among regulatory specialists
- Industry Relevance: Framework immediately applicable to AI company operations
Career Paths Enabled:
- AI Policy Counsel at technology company
- Regulatory Affairs Specialist
- Standards Development Participant
- Think Tank Policy Researcher
- Government Affairs / Public Policy Role
Alignment with Industry Trends
This project addresses critical 2025-2026 developments:
| Trend | Project Relevance |
|---|---|
| EU AI Act full application (August 2026) | Compliance mapping tools directly applicable |
| US federal-state regulatory tension | Preemption analysis provides strategic guidance |
| AI trust gap ($4.8T unrealized value) | Governance framework addresses trust building |
| PPP influence on AI policy | Engagement strategy enables meaningful participation |
| Standards convergence (NIST-ISO crosswalks) | Framework incorporates multiple standards |
Stakeholders
| Stakeholder | Role | Engagement |
|---|---|---|
| Primary Mentor | Day-to-day guidance | Weekly 1:1 |
| Regulatory Affairs Lead | Domain expertise | Bi-weekly check-ins |
| Compliance Team | End users of tools | Feedback at Weeks 4, 8, 11 |
| Policy/Government Affairs | Engagement strategy | Week 10-12 collaboration |
| External Advisors | Validation | Ad hoc consultation |
Approval
Intern Acknowledgment
I have reviewed this proposal and commit to delivering the outlined project within the specified timeline and quality standards.
Intern Signature: _________ Date: _____
Isabel Budenz
Mentor Approval
Mentor Signature: _________ Date: _____
Executive Sponsor Approval
Sponsor Signature: _________ Date: _____
*Proposal Version 1.0 | Focus: AI Industry Regulation & Public-Private Partnerships | January 2026*
Proposal 3: Responsible AI Integration in Legal Practice
Building Technical Competence in AI Tooling and Applications
Focus Area: Technical AI Skills Development
Intern Information
| Field | Details |
|---|---|
| Name | Isabel Budenz |
| Program | LLM International Commercial Arbitration, University of Stockholm (2025-2026) |
| Background | LLB International and European Law, University of Groningen (2022-2025) |
| Languages | German (Native), Spanish (Native), English (C2), French (B1) |
| Relevant Experience | Legal Researcher, A for Arbitration (2019-2025); Clifford Chance Antitrust Global Virtual Internship |
| Relevant Coursework | Introduction to AI and the EU AI Act; International Commercial Arbitration |
Executive Summary
Legal professionals who can bridge the gap between law and technology are increasingly valuable: 79% of law firms have adopted AI tools, yet few lawyers have formal AI training. This project focuses on building Isabel’s hands-on technical competence with AI tools while producing practical resources that help legal practitioners integrate AI responsibly.
Unlike the other project proposals, which emphasize legal analysis, this project prioritizes learning by doing: working directly with AI tools, understanding their technical capabilities and limitations, and developing practical implementation guidance that meets ABA ethical standards.
This technical skills focus transforms Isabel from a legal professional who understands AI policy into one who can implement, evaluate, and govern AI systems in practice.
Problem Statement
The Legal AI Skills Gap
Adoption vs. Competence
- 79% of law firms have adopted AI tools (2024)
- Few lawyers have formal AI training
- 52% of law firm managers have shifted hiring criteria due to AI advances
- 66% of in-house legal managers seek different skills due to automation
Ethical Framework Without Practical Guidance
ABA Formal Opinion 512 (July 2024) requires lawyers to:
- Understand AI capabilities and limitations (Rule 1.1 Competence)
- Protect client information when using AI (Rule 1.6 Confidentiality)
- Keep clients informed about AI use (Rule 1.4 Communication)
- Verify AI-generated citations (Rules 3.1, 3.3 Candor)
- Establish firm-wide AI policies (Rules 5.1, 5.3 Supervision)
But: Opinion 512 provides principles, not practical implementation guidance.
State Requirements Accelerating
- New York: 2 annual CLE credits in AI competency (Q3 2025)
- Pennsylvania: Mandatory AI disclosure in court submissions
- California: Multi-jurisdictional compliance for AI cloud tools
Business Need: [Company Name] needs team members who understand AI tools practically—not just legally—to evaluate products, advise clients, and implement responsible AI practices.
Project Objectives
Primary Objectives
- Develop hands-on proficiency with 5+ legal AI tools across research, contract analysis, and drafting
- Create a comprehensive AI tool evaluation framework aligned with ABA Opinion 512 requirements
- Build a prompt engineering playbook for legal tasks with tested prompts and quality control protocols
- Develop firm-wide AI policy templates and training curriculum
Secondary Objectives (Skills Development)
- Earn AI-related certifications (Clio Legal AI Fundamentals, Coursera Prompt Engineering)
- Understand technical AI concepts (NLP, LLMs, hallucinations, bias) at practitioner level
- Build portfolio of technical artifacts demonstrating cross-disciplinary competence
Technical Learning Objectives
Conceptual Understanding
| Topic | Learning Objective |
|---|---|
| Machine Learning Basics | Understand supervised/unsupervised learning, training data, model outputs |
| Natural Language Processing | Comprehend how AI processes legal text, entity recognition, semantic analysis |
| Large Language Models | Understand transformer architecture at high level, context windows, token limits |
| AI Limitations | Deeply understand hallucinations, bias, confidentiality risks, accuracy boundaries |
| Prompt Engineering | Master techniques for effective, consistent AI outputs in legal contexts |
Practical Tool Proficiency
| Tool Category | Specific Platforms | Competency Target |
|---|---|---|
| Legal Research | Lexis+ AI, CoCounsel (Casetext) | Conduct research, verify citations, compare outputs |
| Contract Analysis | Harvey, Luminance, Ironclad | Review contracts, identify issues, generate summaries |
| Document Drafting | Claude, GPT-4, legal-specific tools | Draft legal documents with appropriate oversight |
| E-Discovery | Relativity AI, Reveal | Understand document review acceleration |
| General AI | Claude, ChatGPT, Gemini | Evaluate capabilities, understand limitations |
Certification Goals
| Certification | Provider | Timeline |
|---|---|---|
| Legal AI Fundamentals | Clio (Free) | Week 2 |
| Prompt Engineering for Law | Coursera/Vanderbilt | Week 6 |
| AI and the Law (if available) | Harvard Executive Ed | Post-project |
Research Foundation
Current Legal AI Landscape
Market Impact (2024-2025)
- 9% increase in legal research AI usage
- 17% increase in contract analysis (in-house)
- 34% jump in case law summarization
- 65% reduction in review time reported
- 85% decrease in human error
- 40% cost reduction
Leading Tools
| Category | Tool | Key Features |
|---|---|---|
| Research | Lexis+ AI | Natural language queries, citation verification |
| Research | CoCounsel | GPT-4 powered, deposition prep, timeline creation |
| Contracts | Harvey | Generative AI for law firms, M&A due diligence |
| Contracts | Luminance | ML document review, anomaly detection |
| Contracts | Ironclad | CLM with AI assistant, redline generation |
| E-Discovery | Relativity | AI-powered review, privilege detection |
| General | Claude | Long context, nuanced analysis, safety focus |
Ethical Requirements
ABA Opinion 512 Core Requirements
- Competence: Understand capabilities AND limitations
- Confidentiality: Assess data handling, opt out of training where possible
- Communication: Inform clients of AI use in their matters
- Candor: Independently verify all AI outputs
- Supervision: Establish policies, train staff, monitor use
Key Risk Areas
- Hallucinations (fabricated citations, false facts)
- Confidentiality breaches (data used for training)
- Bias in outputs (training data limitations)
- Over-reliance (failure to verify)
- Unauthorized practice (AI providing legal advice)
Scope
In Scope
| Area | Details |
|---|---|
| Tools | 5+ legal AI platforms across research, contracts, drafting |
| Tasks | Legal research, contract review, document drafting, due diligence |
| Frameworks | ABA Opinion 512, state-specific requirements (NY, CA, PA) |
| Outputs | Evaluation framework, prompt playbook, policy templates, training |
Out of Scope
- AI tool development or coding
- Deep technical ML/AI research
- Vendor negotiations or procurement
- Client-facing AI implementation
Deliverables
| # | Deliverable | Description | Format | Due |
|---|---|---|---|---|
| 1 | AI Tool Proficiency Log | Documented hands-on experience with 5+ tools, including outputs and assessments | Portfolio Document (30+ pages) | Ongoing → Week 10 |
| 2 | AI Tool Evaluation Framework | Criteria and methodology for assessing legal AI tools against ethical requirements | Framework (20 pages) + Scorecard Template | Week 5 |
| 3 | Prompt Engineering Playbook | Tested prompts for common legal tasks with quality control protocols | Playbook (40+ pages) + Prompt Library | Week 8 |
| 4 | ABA Opinion 512 Compliance Checklist | Practical checklist mapping ethical requirements to operational practices | Checklist + Implementation Guide | Week 9 |
| 5 | Firm-Wide AI Policy Templates | Model policies for AI use, data handling, disclosure, supervision | Policy Templates + Adoption Guide | Week 11 |
| 6 | Training Curriculum & Materials | Complete training program for legal professionals on responsible AI use | Curriculum + Slides + Exercises | Week 12 |
Certification Deliverables
| Certification | Evidence | Timeline |
|---|---|---|
| Clio Legal AI Fundamentals | Certificate | Week 2 |
| Prompt Engineering for Law | Certificate | Week 6 |
Methodology
Phase 1: Foundation Building (Weeks 1-3)
Week 1: Conceptual Learning
- Complete Clio Legal AI Fundamentals certification
- Study ML/NLP basics through curated resources
- Understand LLM architecture at practitioner level
- Document learning in proficiency log
Week 2: Ethics Deep Dive
- Analyze ABA Opinion 512 comprehensively
- Review state-specific AI requirements
- Study documented AI failures in legal contexts
- Begin drafting evaluation framework criteria
Week 3: Initial Tool Exploration
- Obtain access to target AI tools
- Conduct initial exploration of each platform
- Document capabilities, interfaces, limitations
- Begin systematic testing protocol
Phase 2: Hands-On Tool Mastery (Weeks 4-6)
Week 4: Legal Research Tools
- Deep dive into Lexis+ AI and CoCounsel
- Test with real-world research scenarios
- Compare outputs, verify accuracy
- Document hallucination rates, citation accuracy
- Update proficiency log with detailed findings
Week 5: Contract Analysis Tools
- Explore Harvey, Luminance, or Ironclad
- Test contract review capabilities
- Assess issue identification accuracy
- Evaluate redline and summary features
- Complete AI Tool Evaluation Framework
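A minimal sketch of how the scorecard element of the framework could aggregate ratings; the dimensions, weights, and scores below are invented for illustration and only loosely track the Opinion 512 duties, not a validated methodology:

```python
# Hypothetical scoring dimensions loosely aligned with ABA Opinion 512 duties;
# weights and ratings are illustrative, not recommendations.
DIMENSIONS = {                                # dimension: weight
    "accuracy/citation reliability": 0.35,    # competence, candor
    "data handling/confidentiality": 0.30,    # confidentiality
    "transparency of outputs": 0.20,          # communication
    "admin controls & audit logs": 0.15,      # supervision
}

def weighted_score(ratings: dict[str, float]) -> float:
    """Combine 1-5 ratings per dimension into a single weighted score."""
    return round(sum(DIMENSIONS[d] * ratings[d] for d in DIMENSIONS), 2)

tool_a = {"accuracy/citation reliability": 4, "data handling/confidentiality": 5,
          "transparency of outputs": 3, "admin controls & audit logs": 4}
tool_b = {"accuracy/citation reliability": 3, "data handling/confidentiality": 3,
          "transparency of outputs": 4, "admin controls & audit logs": 2}

print(weighted_score(tool_a))  # 4.1
print(weighted_score(tool_b))  # 3.05
```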
Week 6: Prompt Engineering Mastery
- Complete Coursera Prompt Engineering certification
- Develop and test prompts for common legal tasks:
  - Legal research queries
  - Contract review instructions
  - Document drafting prompts
  - Due diligence checklists
- Document effective techniques and failures
Phase 3: Framework Development (Weeks 7-9)
Week 7-8: Prompt Playbook Development
- Compile tested prompts into organized playbook
- Develop quality control protocols for each task type
- Create prompt templates with variables (see the sketch after this list)
- Document edge cases and failure modes
- Complete Prompt Engineering Playbook
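A minimal sketch of how a playbook entry could pair a parameterized prompt with its quality control protocol, assuming a hypothetical contract review task; the template text, variable names, and checks are placeholders rather than tested prompts:

```python
from string import Template

# Hypothetical playbook entry for a contract review task
CONTRACT_REVIEW_PROMPT = Template(
    "You are assisting a lawyer reviewing a $contract_type governed by $governing_law.\n"
    "List every clause that addresses $issue, quote each clause verbatim, "
    "and flag anything unusual compared with market practice.\n"
    "If you are not certain a clause exists, say so rather than guessing."
)

QUALITY_CONTROLS = [
    "Verify every quoted clause against the source document (no fabricated text).",
    "Confirm jurisdiction-specific statements independently.",
    "Record the tool, model version, and date in the proficiency log.",
]

prompt = CONTRACT_REVIEW_PROMPT.substitute(
    contract_type="software licensing agreement",
    governing_law="English law",
    issue="limitation of liability",
)
print(prompt)
```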
Week 9: Compliance Implementation
- Map ABA Opinion 512 to practical operations
- Develop checklist for each ethical requirement
- Create workflow integration guidance
- Complete ABA Opinion 512 Compliance Checklist
Phase 4: Policy & Training Development (Weeks 10-12)
Week 10: Policy Template Creation
- Draft firm-wide AI use policy
- Develop data handling and confidentiality protocols
- Create disclosure templates (client, court)
- Build supervision and monitoring framework
- Finalize AI Tool Proficiency Log
Week 11: Policy Refinement
- Review policies with mentor and legal team
- Incorporate feedback
- Develop adoption roadmap
- Complete Firm-Wide AI Policy Templates
Week 12: Training Program Development
- Design training curriculum structure
- Create presentation materials
- Develop hands-on exercises
- Pilot training session
- Complete Training Curriculum & Materials
Timeline
Week 1     ██░░░░░░░░░░░░░░░░░░░░░░ Foundation: Clio cert + conceptual learning
Week 2     ░░██░░░░░░░░░░░░░░░░░░░░ Ethics deep dive + evaluation criteria
Week 3     ░░░░██░░░░░░░░░░░░░░░░░░ Initial tool exploration
Week 4     ░░░░░░██░░░░░░░░░░░░░░░░ Legal research tools mastery
Week 5     ░░░░░░░░██░░░░░░░░░░░░░░ Contract tools + Evaluation Framework
Week 6     ░░░░░░░░░░██░░░░░░░░░░░░ Prompt engineering cert + testing
Week 7-8   ░░░░░░░░░░░░████░░░░░░░░ Prompt Playbook development
Week 9     ░░░░░░░░░░░░░░░░██░░░░░░ Compliance Checklist
Week 10-11 ░░░░░░░░░░░░░░░░░░████░░ Policy Templates + Proficiency Log
Week 12    ░░░░░░░░░░░░░░░░░░░░░░██ Training Curriculum + Delivery
Skills Development Tracking
Technical Skills Matrix
| Skill | Starting Level | Target Level | Assessment Method |
|---|---|---|---|
| ML/NLP Concepts | Novice | Practitioner | Quiz + explanation exercise |
| Prompt Engineering | Novice | Proficient | Playbook quality + cert |
| Tool Proficiency (Research) | Novice | Proficient | Task completion + accuracy |
| Tool Proficiency (Contracts) | Novice | Intermediate | Task completion + evaluation |
| AI Risk Assessment | Intermediate | Advanced | Framework quality |
| Training Delivery | Intermediate | Proficient | Pilot session feedback |
Weekly Skill Check-ins
Each week includes:
- Learning log: What was learned, what remains unclear
- Tool hours: Time spent with each AI tool
- Prompt experiments: Prompts tested, results documented
- Failure documentation: What didn’t work and why
Resources Required
Tool Access
| Tool | Access Type | Priority |
|---|---|---|
| Claude Pro | Subscription | Week 1 |
| Lexis+ AI | Firm subscription | Week 3 |
| CoCounsel/Casetext | Trial or subscription | Week 3 |
| Harvey | Demo access | Week 5 |
| Luminance | Trial | Week 5 |
Learning Resources
| Resource | Provider | Cost |
|---|---|---|
| Legal AI Fundamentals | Clio | Free |
| Prompt Engineering for Law | Coursera | ~$50 |
| AI and the Law readings | Various | Provided |
| ABA Opinion 512 + commentary | ABA | Free |
Subject Matter Expert Support
| Role | Purpose | Time |
|---|---|---|
| Primary Mentor | Weekly guidance | 2 hrs/week |
| Technology Counsel | AI tool expertise | 4 hrs total |
| Training Specialist | Curriculum development | 3 hrs total |
| IT/Security | Data handling review | 2 hrs total |
Budget
| Item | Estimated Cost |
|---|---|
| Tool subscriptions/trials | $500 |
| Certification courses | $100 |
| Learning materials | $100 |
| Total | $700 |
Success Criteria
Skills Acquisition
- Clio Legal AI Fundamentals certification earned
- Prompt Engineering certification completed
- 50+ hours logged with AI tools
- Proficiency demonstrated in 5+ platforms
- Can explain ML/NLP concepts accurately
Deliverable Quality
- All 6 deliverables completed on schedule
- Prompt playbook contains 30+ tested prompts
- Evaluation framework validated by technology counsel
- Policy templates approved by compliance team
- Training pilot receives >4/5 feedback
Business Impact
- Framework adopted for tool evaluation
- Policies implemented firm-wide
- Training delivered to 20+ professionals
- At least 2 tool recommendations accepted
Risks and Mitigation
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Tool access delays | Medium | High | Identify alternatives; prioritize widely available tools |
| Learning curve steeper than expected | Medium | Medium | Build buffer time; focus on breadth over depth |
| Rapid tool evolution during project | Medium | Low | Focus on principles; note tool-specific vs. generalizable learnings |
| Certification scheduling conflicts | Low | Low | Complete early; identify alternatives |
| Confidentiality concerns with testing | Medium | High | Use synthetic/public data only; follow firm protocols |
Career Positioning Value
This project transforms Isabel into a legally trained AI practitioner:
Differentiators
| Traditional Legal Professional | Isabel After This Project |
|---|---|
| Understands AI regulation | Can evaluate and implement AI tools |
| Reads about AI capabilities | Has hands-on proficiency with platforms |
| Knows ethical requirements exist | Can operationalize ABA Opinion 512 |
| Aware of prompt engineering | Has tested prompt library for legal tasks |
| Understands training needs | Can deliver AI training programs |
Career Paths Enabled
- Legal Engineer: Bridge law and technology teams
- AI Implementation Lead: Guide firm AI adoption
- Legal Tech Product Counsel: Advise AI tool development
- In-House AI Governance: Oversee responsible AI use
- Consultant: Help firms implement legal AI
Portfolio Assets
- Certifications: Demonstrable AI competence
- Prompt Playbook: Practical, tested resource
- Evaluation Framework: Methodology for tool assessment
- Policy Templates: Ready-to-implement governance
- Training Materials: Delivery capability demonstrated
Alignment with Industry Trends
| Trend | Project Relevance |
|---|---|
| 79% law firm AI adoption | Proficiency makes Isabel immediately valuable |
| ABA Opinion 512 compliance pressure | Checklist and policies address urgent need |
| NY AI CLE requirement (2025) | Training curriculum directly applicable |
| Prompt engineering as “21st-century legal skill” | Playbook demonstrates mastery |
| Legal engineer role emergence | Technical + legal competence combination |
Stakeholders
| Stakeholder | Role | Engagement |
|---|---|---|
| Primary Mentor | Day-to-day guidance | Weekly 1:1 |
| Technology Counsel | Tool expertise, evaluation validation | Bi-weekly |
| Training/Professional Development | Curriculum review | Weeks 10-12 |
| IT/Security | Data handling, tool vetting | Ad hoc |
| Legal Teams | Policy feedback, training participants | Weeks 9-12 |
Approval
Intern Acknowledgment
I have reviewed this proposal and commit to delivering the outlined project within the specified timeline and quality standards. I understand this project emphasizes hands-on technical skill development alongside traditional legal analysis.
Intern Signature: _________ Date: _____
Isabel Budenz
Mentor Approval
Mentor Signature: _________ Date: _____
Executive Sponsor Approval
Sponsor Signature: _________ Date: _____
*Proposal Version 1.0 | Focus: Technical AI Skills Development | January 2026*