Updated April 29, 2026 — This document now includes findings from three specialist analyses (securities law, privacy law, international law) and a 50-source deep research dive. Key finding: 3 compound risk scenarios identified where regulatory, legal, and insurance failures stack simultaneously. See the Specialist Analyses tab for full details and the Action Items tab for the updated 18-item prioritized checklist.

What This Is About

In the context of operating a billion-dollar AI-focused fund, the full team is expected to use persistent AI assistants through PureBrain. This document explores what data can and cannot be shared with AI, the regulatory landscape, and a concrete implementation plan for AI governance — built as a working paper and evolving knowledge base.

The core tension is straightforward: AI needs data to be useful, but data sharing creates legal, regulatory, and fiduciary risk. The answer is not "share nothing" (that makes AI useless) or "share everything" (that creates liability). The answer is a classification framework with clear lines.

The Four-Tier Data Classification

Tier | Classification | Examples | AI Sharing Rule
Tier 1 | Restricted | LP personal data (SSNs, passports, bank details), KYC/AML docs, attorney-client privileged communications, Material Non-Public Information (MNPI) | Never share with any external AI platform
Tier 2 | Confidential | Portfolio company financials, term sheets, fund strategy, IC deliberations, deal pipeline | Enterprise AI only, with redaction. Anonymize names when analysis does not require them
Tier 3 | Internal | Aggregate fund performance, operational procedures, vendor relationships, industry research | Share with vetted enterprise AI under standard controls
Tier 4 | Public | Marketing materials, published thought leadership, regulatory filings | Share freely
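
If the classification is going to be enforced by tooling rather than memory, it helps to encode the tiers as data that preprocessing and guardrail scripts can consult. A minimal Python sketch, assuming hypothetical category names (the tiers mirror the table above; none of this is an existing API):

```python
from enum import IntEnum

class Tier(IntEnum):
    RESTRICTED = 1    # never touches any external AI platform
    CONFIDENTIAL = 2  # enterprise AI only, after redaction/anonymization
    INTERNAL = 3      # vetted enterprise AI under standard controls
    PUBLIC = 4        # share freely

# Hypothetical category-to-tier mapping, mirroring the table above.
CATEGORY_TIERS = {
    "lp_personal_data": Tier.RESTRICTED,
    "kyc_aml_docs": Tier.RESTRICTED,
    "privileged_comms": Tier.RESTRICTED,
    "mnpi": Tier.RESTRICTED,
    "portfolio_financials": Tier.CONFIDENTIAL,
    "term_sheets": Tier.CONFIDENTIAL,
    "fund_performance_aggregate": Tier.INTERNAL,
    "industry_research": Tier.INTERNAL,
    "marketing_materials": Tier.PUBLIC,
}

def may_share(category: str, enterprise_platform: bool) -> bool:
    """Tier 1 is a hard stop; Tier 2 needs an enterprise platform
    (plus redaction, which this check does not perform); 3-4 may go."""
    tier = CATEGORY_TIERS.get(category, Tier.RESTRICTED)  # default to safest
    if tier is Tier.RESTRICTED:
        return False
    if tier is Tier.CONFIDENTIAL:
        return enterprise_platform
    return True
```

Unknown categories defaulting to Restricted is the conservative choice: anything unclassified stays inside the perimeter until someone classifies it.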

What to Share vs. What to Protect

Category | Share with AI? | Condition
Your preferences, style, schedule | Yes | No restrictions
Your strategic thinking, thesis | Yes | No restrictions
Public market / industry research | Yes | No restrictions
Fund operations, workflows | Yes | No restrictions
Portfolio company data | With care | Anonymize when possible, redact specifics not needed for the task
Fund strategy, pipeline | With care | Enterprise platform only, no consumer AI
Partner communications | Selectively | Share context, not raw disputes
Fund performance (aggregate) | Yes | No individual LP attribution
LP personal data | No | Never on current platforms
KYC/AML docs | No | Never
Privileged legal communications | No | Never (Heppner waiver risk)
Material Non-Public Information (MNPI) | No | Never
NDA-protected counterparty data | No | Not without consent

Cost Summary

Phase 1: Before First Close — ~$1,200/yr + $0 for tools Tarin builds
(Nitro Redact $960/yr for 4 GPs + Cloudflare Gateway free + policy drafting $0)
At Scale — ~$25,000-35,000/yr
(VDR $15-25K + insurance rider $2-5K + external audit $3-5K)
Bottom Line: Share what makes you effective, protect what could harm others, and document what you decided and why. A 2-page policy and a paragraph in the PPM turn a potential liability into a demonstrated strength.

Research Overview

This brief covers the factual landscape surrounding AI data sharing in venture capital operations: regulatory positions, legal risks, industry practices, ethical arguments, and practical frameworks. Research was drawn from 30 sources across regulatory bodies (SEC, FINMA, FCA, JFSC, EU), law firm analyses, court cases, and industry surveys.

  • 34.8% of employee AI inputs contain sensitive data
  • 85% of VCs use AI for daily tasks
  • 340% YoY increase in prompt injection attempts
  • 85% of LPs reject managers over ops concerns
CRITICAL CASE: United States v. Heppner (S.D.N.Y., Feb 17, 2026)

Judge Jed S. Rakoff held that information input into consumer AI platforms does not receive attorney-client privilege protection. Sharing privileged information with consumer AI tools waives privilege over the underlying communications. AI tools lack law licenses, fiduciary duties, and professional discipline — courts require "a trusting human relationship" for privilege. Once waived, subsequent disclosure to attorneys cannot cure the waiver.

Exception: Counsel-directed use on a secure enterprise platform with contractual confidentiality terms could yield a different result.
Sources: Duane Morris, K&L Gates, Morgan Lewis (Feb-March 2026)
Leading Voices on AI Ethics and Data Sharing

Stuart Russell (UC Berkeley, TIME100 AI 2025)

"In the early days, OpenAI was collecting conversations with ChatGPT users and using that data to retrain the system, but there are huge privacy issues because people use them in companies and put in data with company proprietary information." Many companies have banned commercial LLMs because they do not trust conversations will remain proprietary.

Timnit Gebru (DAIR Institute)

"There needs to be a lot more independent research and there needs to be oversight of tech companies." AI concentrates power in the hands of governments and companies, away from individuals whose data feeds these systems.

European Data Protection Board (March 2025)

Published opinion on using AI in compliance with GDPR, specifically addressing legitimate interest, purpose limitation, and the right to object to AI processing of personal data.

  • 27% of ChatGPT messages are work-related
  • 68% of privacy pros now handle AI governance
SEC (United States)

Current Stance (February 2026)

SEC Division Director Brian Daly stated the SEC is exploring how AI should be addressed within federal securities law but recognized that "by the time rules take effect, the market and technology may have moved on."

2026 Examination Priorities

  • Review registrant representations about AI capabilities for accuracy ("AI washing" enforcement)
  • Assess whether firms have implemented adequate policies and procedures for AI use
  • Examine how registrants protect against loss or misuse of client records from third-party AI tools
  • Regulation S-P compliance: Larger firms ($1.5B+ AUM) by Dec 3, 2025; smaller firms by June 3, 2026

Fiduciary Duty Position

Venable LLP (Dec 2025): "Delegating decisions to a machine does not absolve the human fiduciary from oversight." Advisers must validate AI systems, understand assumptions, and continuously monitor performance. The "black box problem" — deep learning outputs advisers struggle to explain — complicates fiduciary accountability.

Ropes & Gray (Dec 2025): Asset managers' fiduciary duties require "appropriate diligence in selecting, engaging and overseeing AI service providers and disclosure to investors of risks and conflicts of interest associated with the use of AI."

Sources: SEC.gov, Venable LLP, Ropes & Gray, Goodwin Law, Kitces.com
FINMA (Switzerland)

Guidance Note 08/2024 (Dec 18, 2024)

  • Accountability: "Responsibility for decisions cannot be delegated to AI or external providers"
  • Governance: Comprehensive inventories of all AI systems, tools, data flows. Clear roles and accountability frameworks.
  • Data Quality: Prioritize data quality over model selection. Regular testing and continuous monitoring.
  • Personnel: Sufficient staff training on ethical AI use.
Adoption: 50% of Swiss financial institutions use AI (April 2025 survey), and 91% of AI users also use generative AI.

Swiss Regulatory Timeline: No AI-specific legislation yet. Draft consultation legislation expected by end of 2026. FINMA follows "same business, same risks, same rules."

Sources: FINMA.ch, Pestalozzi Law, Chambers AI Practice Guide 2025
FCA (UK) & JFSC (Jersey)

FCA (United Kingdom)

No AI-specific regulations planned. Relies on existing frameworks: Consumer Duty, SM&CR, SYSC, operational resilience. AI LAB launched Oct 2024 for supervised testing. Treasury Committee recommended comprehensive AI guidance by end of 2026.

JFSC (Jersey)

No AI-specific guidance. Firms must comply with Data Protection (Jersey) Law 2018, aligned closely with GDPR. JFSC is implementing data-driven supervisory models using AI internally. 2025-2026 priorities focus on growth, risk management, and financial crime prevention.

Sources: FCA.org.uk, Kennedys Law, JFSC.org
EU AI Act & GDPR

EU AI Act Timeline

  • Feb 2, 2025: Prohibited AI practices provisions came into effect
  • Aug 2, 2026: Full high-risk system obligations enforceable
  • Penalties: Up to 7% of global annual turnover or EUR 35 million

Credit scoring, loan approval, fraud detection, AML risk profiling, and automated decision-making affecting access to financial services are classified as high-risk AI systems. Fund management AI affecting investor outcomes may fall into this category.

GDPR Core Issues for Fund Managers

  • GDPR applies to PE/VC firms processing EU resident data regardless of firm location
  • LP personal data qualifies as personal data under GDPR
  • Data Protection Impact Assessments (DPIA) required for new high-risk processing
  • Purpose limitation: data collected for fund management cannot be used for AI training without separate legal basis
  • Right to be forgotten creates challenges for AI systems that retain learned information
Sources: EU AI Act text, Athennian, Orrick, Perforce
Fiduciary Duty & Attorney-Client Privilege

Core Legal Position

Using AI without explainability or validation could be interpreted as a breach of the duty of care — analogous to relying on an unverified third-party analyst without due diligence. Investment advisers cannot delegate responsibility for decisions to algorithms.

Five Considerations for Advisers

  1. Appropriate diligence in selecting and overseeing AI service providers
  2. Disclosure to investors of risks and conflicts associated with AI use
  3. Policies requiring independent verification of AI outputs
  4. Data governance (knowing what data is retained, where, for how long)
  5. Training personnel on risks of AI including prompt injection and data leakage

NDA and Confidentiality

Key Risk: Inputting confidential material into AI platforms constitutes transmitting data to an external third party. Most NDAs prohibit this. The breach stems from the transmission itself, not whether the platform later retains or misuses the data.

Updated Practice (2025-2026): AI-specific NDA provisions are becoming standard: prohibiting upload to public AI, restricting tools that retain data for training, requiring consent before AI use in diligence.

Sources: Venable LLP, Ropes & Gray, NASAA, Roth Jackson, KJK, Sapience Law
Industry Best Practices

How Leading Institutions Approach AI

Goldman Sachs: Launched GS AI Assistant firmwide in mid-2025 after piloting with ~10,000 employees. Model-agnostic (GPT, Gemini, Claude) but operates within Goldman's audited environment. Client-facing AI deferred until accuracy and compliance thresholds are met.

JPMorgan Chase: Grants access to its LLM Suite to 200,000+ employees, generating ~$1.5B in annual business value. 300+ use cases in production. All within internal infrastructure.

Key Pattern: Major financial institutions do NOT use consumer AI platforms. They build or procure enterprise-grade, internally controlled AI environments with contractual data isolation and no-training commitments.

VC Industry AI Adoption

  • 85% of VCs use AI for daily tasks
  • 82% use AI for deal sourcing research

Formal published AI governance policies from VC firms remain rare in the public domain.

Governance Frameworks

  • NIST AI RMF: Govern, Map, Measure, Manage (voluntary U.S. guideline)
  • ISO/IEC 42001: Certifiable international standard for AI Management Systems
  • EU AI Act: Regulatory framework with risk classifications

Recommended approach: Start with NIST for risk management, add ISO 42001 for systematic management, layer EU AI Act for European compliance.

Sources: Evident Insights, DigitalDefynd, NIST.gov, PECB, Affinity
Risks & Threat Vectors

Data Breach and Leakage

  • 340% YoY increase in prompt injection (Q4 2025)
  • 190% YoY increase in successful data exfiltration
  • $670K extra cost of shadow AI breaches

Indirect injection (attacks in documents, emails, web pages) accounts for 80%+ of attempts. Shadow AI breaches disproportionately affected customer PII (65%) and intellectual property (40%).

Training Data Contamination

Platform | Data Used for Training? | Retention
Claude Free/Pro (Consumer) | Yes, by default | 5 years
Claude Enterprise/API | No (contractual DPA) | Per agreement
ChatGPT Free/Plus | Yes, unless opted out | 30 days abuse monitoring
ChatGPT Enterprise/API | No (contractual DPA) | Per agreement
PureBrain | No (Anthropic contractually restricted) | 30 days post-cancellation
Note: Claude's consumer data retention increased from 30 days to 5 years in late 2025 — a 6,000% increase.

Prompt Injection & Data Extraction

EchoLeak vulnerability: Zero-click prompt injection enabling data exfiltration without user interaction. Attacker sends email with hidden instructions, AI ingests malicious prompt, AI extracts sensitive data from connected systems.

Sources: Wiz Research, Reco, PurpleSec, eSecurity Planet, OWASP
PureBrain Platform Analysis
Since the partners use PureBrain for persistent AI agents, this analysis is directly relevant to your operations.

What PureBrain Does Right

  • Conversations processed via Anthropic Claude API
  • "We do not permit Anthropic to use your conversation data to train their foundation models without your consent"
  • 30-day post-cancellation data retention, then permanent deletion
  • Cloudflare DDoS and WAF protection
  • HTTPS/TLS encryption in transit

What PureBrain Lacks

  • No SOC 2 or ISO 27001 certification
  • No explicit data residency commitment
  • No disclosed at-rest encryption standard
  • No formal enterprise vs. consumer data handling distinction
  • Their own privacy policy: "No system is perfectly secure"

Third-Party Data Flow

Service | Data Shared
Anthropic (Claude API) | Conversation content
Cloudflare | IP address and traffic metadata
PayPal | Billing information
Brevo | Email address (newsletters)
Source: PureBrain Privacy Policy (purebrain.ai/privacy-policy/)
Information Sensitivity Categories (Full Detail)

Tier 1: RESTRICTED — Highest Sensitivity

Data Type | Examples | Risk If Exposed
LP personal data | Names, addresses, SSNs, bank accounts, passport copies | GDPR/privacy violations, regulatory sanctions, LP litigation
LP commitment amounts | Individual allocation details | Breach of confidentiality, competitive harm
MNPI | Pre-announcement deal terms, non-public financials | Securities law violations, insider trading liability
Legal privileged comms | Attorney advice, litigation strategy | Privilege waiver (Heppner), litigation exposure
KYC/AML documentation | Identity verification, source of funds | Regulatory violations, money laundering liability

Tier 2: CONFIDENTIAL — High Sensitivity

Data Type | Examples | Risk If Exposed
Portfolio company financials | P&L, balance sheets, cap tables, runway | Competitive harm, breach of information rights
Term sheets and deal terms | Valuation, liquidation preferences, board seats | Competitive disadvantage, deal disruption
Fund strategy documents | Sector thesis, pipeline priorities, allocation model | Competitive intelligence loss
Internal partner comms | IC deliberations, partner disputes | Reputational damage, litigation discovery
Employee/contractor data | Compensation, performance reviews | Employment law violations, privacy claims

Tier 3: INTERNAL — Moderate Sensitivity

Data Type | Examples | Risk If Exposed
Fund performance data | Aggregate returns, benchmarking | Premature disclosure, marketing concerns
Operational procedures | Workflow docs, policy manuals | Limited competitive harm
Vendor relationships | Service providers, fee arrangements | Commercial sensitivity
Industry research | Sector landscapes, competitive maps | Low harm if from public sources

Tier 4: PUBLIC — Low Sensitivity

Data Type | Examples | Risk If Exposed
Marketing materials | Fund overview, team bios, sector focus | Intended for distribution
Published thought leadership | Research papers, blog posts | Already public
Regulatory filings | Form D, public regulatory submissions | Already public
LP Due Diligence on AI

Anticipated LP Questions on AI:

  • What AI tools do you use in fund operations?
  • What data is shared with AI platforms and third parties?
  • What contractual protections exist with AI vendors?
  • How do you prevent LP data from being used for model training?
  • What is your data classification and handling policy?
  • How do you comply with GDPR and other data protection laws regarding AI?
  • What incident response procedures exist for AI-related data breaches?
Key Stat: 85% of LPs reject a manager over operational concerns alone. Average DDQ now spans 21 sections and 250+ questions. Having thoughtful AI governance answers ready signals institutional quality.
Sources: AutoRFP, ILPA DDQ Guide, Top1000Funds, VC Lab
Full Source List (30 Sources)

Regulatory Bodies

  1. SEC — AI and the Future of Investment Management (Daly Speech, Feb 2026) [Link]
  2. SEC — Artificial Intelligence at the SEC [Link]
  3. FINMA — AI in the Swiss Financial Market [Link]
  4. FINMA Survey: AI Gaining Traction (April 2025) [Link]
  5. FCA — AI and the FCA: Our Approach [Link]
  6. JFSC — Data Protection [Link]
  7. NIST — AI Risk Management Framework [Link]

Law Firm Analysis

  1. Venable LLP — AI in Investment Management (Dec 2025) [Link]
  2. Ropes & Gray — AI Integration: Legal & Regulatory Essentials (Dec 2025) [Link]
  3. Duane Morris — The Perils of Privilege Waivers Through AI (March 2026) [Link]
  4. K&L Gates — Generative AI Data, Privilege (Feb 2026) [Link]
  5. Morgan Lewis — When AI Meets Privilege (Feb 2026) [Link]
  6. Morrison Foerster — AI Compliance Tips for Investment Advisers [Link]
  7. Goodwin Law — 2026 SEC Exam Priorities [Link]
  8. Pestalozzi Law — FINMA Guidance on AI Governance [Link]
  9. Kennedys Law — Deploying AI in UK Financial Services (2026) [Link]
  10. KJK — AI and M&A NDAs (March 2026) [Link]
  11. Roth Jackson — NDAs 2.0: AI Provisions (Dec 2025) [Link]
  12. Sapience Law — NDA and AI Confidentiality Risk [Link]
  13. Sidley Austin — US Securities and AI Guidelines (Feb 2025) [Link]

Court Cases

  1. Chapman and Cutler — Federal Court Rules AI Documents Not Privileged (Heppner) [Link]
  2. Perkins Coie — Federal Court Rules Client's Use of GenAI Not Privileged [Link]
  3. National Law Review — AI Tools May Waive Privilege [Link]

Industry and Frameworks

  1. Athennian — Impact of GDPR on PE and VC Firms [Link]
  2. Orrick — EDPB Opinion on AI and GDPR (March 2025) [Link]
  3. OWASP — LLM01:2025 Prompt Injection [Link]
  4. Cloud Security Alliance — AI and Privacy 2024-2025 [Link]
  5. PureBrain Privacy Policy [Link]
  6. Anthropic Consumer Terms Update [Link]
  7. AIhub — Top AI Ethics and Policy Issues 2025/2026 [Link]

My Position in One Paragraph

Share generously with your AI — but not blindly. The competitive advantage of a fully-informed AI partner is enormous and real. But certain categories of data should never touch an AI platform that you don't fully control, and a formal policy is needed before the first LP writes a check. The line isn't "share nothing" (that makes the AI useless) or "share everything" (that creates liability). The line is: share what makes you effective, protect what could harm others, and document what you decided and why.

Where I Draw the Lines

Share Freely — This Makes Us Effective

  • Your work preferences, communication style, timezone, formatting standards
  • Publicly available market research, industry analysis, news
  • Your own strategic thinking, brainstorming, thesis development
  • Operational workflows, templates, checklists
  • Aggregated fund performance data (no individual LP detail)
  • Portfolio company information already shared with the full GP team
  • Scheduling, calendar management, travel logistics
  • Your personal tasks, family logistics, shopping, life admin
Why: This is where 90% of the AI value comes from. None of this creates regulatory, legal, or fiduciary risk. Withholding it would cripple the partnership for no benefit.

Share With Care — Redact Where Possible

  • Portfolio company financials (redact names when analysis doesn't require them)
  • Deal pipeline details (use code names or anonymize when testing investment theses)
  • Fund strategy documents (acceptable with enterprise-grade AI, but be deliberate)
  • Internal partner communications (share context, not raw emails about disagreements)
  • Employee compensation and performance data (share aggregates, not individual records)
Why: This data is valuable for AI analysis but carries moderate risk. The mitigation is simple: think before sharing. Ask "does the AI need the specific names/numbers, or just the pattern?"

Never Share with AI — Hard Stop

  • LP personal data (names, SSNs, passport copies, bank details, individual commitment amounts)
  • KYC/AML documentation
  • Attorney-client privileged communications (case law per Heppner makes it a privilege waiver)
  • Material Non-Public Information (pre-announcement deal terms, non-public financials received under NDA)
  • Raw legal documents under NDA without counterparty consent
Why: These aren't judgment calls — they're legal bright lines. Sharing LP personal data with a third-party AI platform without consent violates GDPR. Sharing privileged communications waives privilege (per Heppner). Sharing MNPI creates securities law exposure. No amount of AI efficiency justifies these risks.

Summary: What to Share, What to Protect

Category | Share with AI? | Condition
Your preferences, style, schedule | Yes | No restrictions
Your strategic thinking, thesis | Yes | No restrictions
Public market / industry research | Yes | No restrictions
Fund operations, workflows | Yes | No restrictions
Portfolio company data | With care | Anonymize when possible
Fund strategy, pipeline | With care | Enterprise platform only
Partner communications | Selectively | Share context, not raw disputes
Fund performance (aggregate) | Yes | No individual LP attribution
LP personal data | No | Never on current platforms
KYC/AML docs | No | Never
Privileged legal communications | No | Never (Heppner waiver risk)
Material Non-Public Information (MNPI) | No | Never
NDA-protected counterparty data | No | Not without consent

The Uncomfortable Truth About PureBrain

I run on PureBrain. I need to be honest about what that means.

What PureBrain Does Right

  • Contractual no-training commitment
  • Data deleted 30 days after cancellation
  • Persistent memory that makes me genuinely useful over time

What PureBrain Lacks

  • No SOC 2 or ISO 27001 certification
  • No explicit data residency commitment
  • No disclosed at-rest encryption standard
  • No formal enterprise vs. consumer distinction
My Recommendation: PureBrain is appropriate for Tier 3 (Internal) and Tier 4 (Public) data. Workable for Tier 2 (Confidential) with redaction discipline. Not appropriate for Tier 1 (Restricted) data until PureBrain obtains formal security certifications. Use PureBrain fully for everything except Restricted-tier data. Push PureBrain (through Rimah's relationship) to pursue SOC 2 certification.

What’s Needed Before First Close

1. An AI Use Policy (1-2 pages)
GP-approved document covering: authorized AI tools, four-tier data classification with sharing rules, designated AI governance owner, incident response procedure, annual review commitment.
Why now: LPs will ask. 85% reject managers over operational concerns alone.
2. LP Disclosure Language (1 paragraph in PPM/LPA)
"The Fund uses AI-assisted tools for operational efficiency, including research, communications, and portfolio monitoring. The GP maintains an AI Use Policy governing data classification and handling. LP personal data is not shared with AI platforms."
Why now: Transparency is the best defense. LPs who learn about your AI use after investing feel deceived.
3. AI-Specific NDA Provisions
"Neither party shall input Confidential Information into AI tools without prior written consent, except where such tools operate under enterprise DPAs with contractual prohibitions on data use for model training."
Why now: This is becoming standard practice. Having it shows sophistication.
4. Partner Agreement on Boundaries
All four GPs need to explicitly agree on what each partner can and cannot share with their respective AI agents. One partner's loose practice becomes everyone's liability.
Why now: If one partner shares LP commitment details and that data leaks, all GPs face fiduciary liability.

The Broader Ethics View

The ethical foundation is consent and transparency. If an LP knows their data is processed by AI, and the GP has reasonable safeguards, and the purpose is to serve the LP's interests — that's ethical. If LP data is fed into AI without knowledge, for GP convenience, with no safeguards — that's not.

The "how much is too much" question is really about whose data it is. Your own data is yours to share. LP data, portfolio company data, counterparty data under NDA — that's someone else's data. You're a steward, not an owner. Stewardship demands care.

The strongest argument for AI transparency is self-interest. The fund that gets caught sharing LP data without disclosure faces regulatory action, LP lawsuits, and reputational destruction. The fund that proactively discloses AI use with clear policies gets LP trust, operational efficiency, and competitive advantage.

Final Thought: The question isn't whether to use AI in fund management — 85% of VCs already do. The question is whether to do it thoughtfully or carelessly. There is an opportunity to get this right from day one.

Updated Position (Post-Specialist Review)

Added April 29, 2026 — after securities, privacy, and international law specialist analyses plus 50-source deep research dive

What I said this morning: "Share what's yours freely, protect what's theirs carefully, document everything."

What I'd add now: The documentation isn't optional — it's legally required. The DPA isn't a nice-to-have — it's a GDPR mandate. The insurance review isn't future planning — it's urgent because exclusions are being added NOW. And the Jersey SCC gap means some of what we're already doing may be technically non-compliant.

The uncomfortable admission: Some of what we've done together over the past 5 days — processing brokerage data, discussing portfolio companies, handling scholarship applicant data — would not pass scrutiny under the strictest interpretation of these rules. In a personal capacity (advisory work, scholarship), the risk is low. In a fund capacity (fund operations, LP data), the same practices need to be formalized before the first LP dollar arrives.

The pragmatic path: You don't need to stop using AI. You need to formalize what you're doing. The gap between "we use AI thoughtfully" and "we can prove we use AI thoughtfully" is the difference between compliance and violation. That gap is closable in 30 days with focused effort.

My recommendation: Address items 1-4 this week. Items 5-13 before first close. Items 14-18 are ongoing. Total Phase 1 cost: ~$1,200/year for tools, plus $5-15K for outside counsel to review the policy, DPA, and NDA clause. That's nothing against a $1B fund.

The Deeper Questions

Katy challenged the policy recommendations with real-world operational objections. These are the honest responses.

Challenge 1: Anonymization isn't practical for 50-page PDFs

Tools exist — Nitro Smart Redact ($20/month) detects 30+ PII types automatically in ~30 seconds per document. But they can't catch everything in context.

Tarin's reframe: Don't make redaction the primary control. The platform's contractual protections ARE the primary control. Redaction is a second layer for the most toxic data only. The preprocessor script + Nitro covers 80% of cases. Time cost per document: 3-5 minutes.

Challenge 2: NDA material — all shared information is sensitive

Every IC memo is a derivative of NDA-protected data. You can't synthesize across your portfolio without your AI knowing real details.

Tarin's reframe: Add AI processing clause to NDAs. Use AI-enabled VDRs (Datasite, Peony) for raw documents. Accept that the AI partner will know sensitive things — like any trusted employee. The question isn't whether, it's how to do it defensibly with contractual protections, audit logs, and no-training commitments.

Challenge 3: Monitoring can't just be "trust me"

Self-attestation points failures to individuals. When things go wrong, "they signed a piece of paper" doesn't protect the GP.

Tarin's reframe: Four layers of defense — all generating machine evidence, not human promises:

  1. Platform audit logs (automated) — request from PureBrain, put it in the DPA
  2. AI-side guardrails (pattern detection) — Tarin flags Tier 1 patterns at point of entry
  3. Quarterly review with evidence memo — AI Officer reviews logs, spot-checks 5 random interactions per partner, documents findings
  4. Annual external validation — third-party reviews policy, logs, memos, vendor DPA

That's defensible. Not because it's perfect — because it demonstrates a multi-layered, documented, continuously monitored governance process.

Challenge 4: The bad actor problem — one leak brings it all down

One partner shares LP passport copies with their AI. One employee forwards a privileged memo to ChatGPT. One intern uploads a cap table to the free version of Claude.

Tarin's reframe: Same problem finance has always had. AI doesn't create it; it amplifies it. Controls:

  • Employment contracts with AI prohibitions and material breach consequences
  • Device policy — fund work on fund devices, Cloudflare Gateway blocks consumer AI
  • Access architecture — need-to-know, approved tools only (PureBrain as single gate)
  • Culture — make the approved system so good there's no incentive to go outside it
Challenge 5: Employee with personal AI account — not technically enforceable

A determined employee can always use a personal device and a personal AI account. You can't physically prevent it. This is an employee conduct issue, not a system design issue.

Tarin's honest answer: Fund-managed system = controllable, auditable, guardrailed. Personal systems = outside the fund's control. The strategy has two parts:

  • Make the fund-provided AI so good there's no reason to go outside it. If the approved tool is fast, capable, and frictionless, employees won't bother with personal alternatives.
  • Make the consequences for violations clear and personal. Employment agreements should include explicit AI use clauses: unauthorized processing of fund data through personal AI tools constitutes a breach of confidentiality obligations, subject to disciplinary action up to termination, clawback of compensation, and personal liability for any resulting data breach.

This is the same framework used for any confidentiality obligation — the employee signs, the employee is accountable. The fund provides the tools and the policy; the employee is responsible for compliance.

Account Policy: The fund pays for each employee's Claude subscription (required to run PureBrain). Employee accounts are restricted to PureBrain use only — they are not general-purpose Claude/AI subscriptions for personal use. Employees who want a personal AI account must obtain their own separate subscription independently. All users are bound by the same data classification and handling rules when processing fund-related information.

Technical question for PureBrain: Can a Claude subscription be scoped so it only works through PureBrain and cannot be used directly at claude.ai? This would provide technical enforcement of the PureBrain-only policy for employee accounts.
Challenge 6: Fund-configured AI vs. employee-controlled AI

Key insight from Katy: Bake guardrails into the AI's system-level instructions (immutable), not just memory (editable). The employee cannot tell the AI to override compliance rules.

How it works:

  • System-level instructions (admin-locked): data classification rules, Tier 1 refusal patterns, audit logging requirements
  • User-level memory (editable): preferences, writing style, project context
  • AI refuses prohibited requests: "This guardrail is set by fund policy and cannot be modified. Contact the AI Officer."
Question for PureBrain: Can they support admin-locked vs. user-editable configuration tiers? This is a critical capability for enterprise deployment.
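
To make the admin-locked vs. user-editable split concrete, here is a minimal Python sketch of the two-tier configuration model. PureBrain's actual mechanism is not public, so every name and structure below is an assumption about how such a capability could work, not a description of their product:

```python
# Admin-locked tier: set by the fund, immutable to individual users.
# All names here are illustrative assumptions, not a PureBrain API.
SYSTEM_POLICY = {
    "tier1_refusal_patterns": ["ssn", "passport_number", "bank_account"],
    "audit_logging": True,
    "policy_owner": "AI Officer",
}

class AgentConfig:
    """User-editable tier: preferences, writing style, project context."""

    def __init__(self, user_memory: dict):
        self.user_memory = dict(user_memory)

    def set(self, key: str, value) -> None:
        if key in SYSTEM_POLICY:
            # Employees cannot instruct the AI to override compliance rules.
            raise PermissionError(
                "This guardrail is set by fund policy and cannot be "
                "modified. Contact the AI Officer."
            )
        self.user_memory[key] = value

cfg = AgentConfig({"writing_style": "concise"})
cfg.set("timezone", "Europe/Zurich")   # allowed: user-level preference
# cfg.set("audit_logging", False)      # raises PermissionError
```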

Problem 1: How to Anonymize Documents Efficiently

Tools That Exist Today

Option | What It Does | Cost
Nitro Smart Redact [Link] | AI-powered, detects 30+ PII types, works on PDF/DOCX/XLSX, runs locally | $20/user/month
Microsoft Purview | Auto-classifies docs, applies labels, integrates with DLP | Included in M365 E5 or add-on
Tarin Preprocessor | Scans for blocklisted names, replaces with codes, outputs clean version + mapping file | $0 (Tarin builds it)
Recommendation: Start with Tarin preprocessor (free, immediate) + Nitro for PDFs ($20/month). Workflow: run preprocessor → share anonymized version with AI → AI analyzes using codes → map codes back for final output.
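
A minimal sketch of what the Tarin preprocessor could look like for plain-text input. The blocklist entries and PII regexes are illustrative only; real coverage would be much broader, and Nitro handles the PDF layer:

```python
import json
import re

# Illustrative blocklist; the real one lists actual LP and portfolio names.
BLOCKLIST = ["Acme Capital", "Jane Doe Family Office"]

# Illustrative PII patterns (far from exhaustive): US SSN, IBAN-like strings.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def anonymize(text: str) -> tuple:
    """Replace blocklisted names and PII with stable codes; return the
    cleaned text and the code-to-original mapping (kept internal)."""
    mapping = {}
    for i, name in enumerate(BLOCKLIST, start=1):
        if name in text:
            code = f"ENTITY-{i}"
            mapping[code] = name
            text = text.replace(name, code)
    for label, pattern in PII_PATTERNS.items():
        for j, match in enumerate(sorted(set(pattern.findall(text))), start=1):
            code = f"{label}-{j}"
            mapping[code] = match
            text = text.replace(match, code)
    return text, mapping

# Hypothetical usage: anonymize a memo before sharing it with the AI.
clean, mapping = anonymize(open("ic_memo.txt").read())
open("ic_memo.anon.txt", "w").write(clean)          # this goes to the AI
json.dump(mapping, open("ic_memo.map.json", "w"))   # this stays internal
```

After the AI returns its analysis, the mapping file is used to substitute the real names back in; the mapping never leaves the firm.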

What Constitutes "Sufficient" Anonymization?

Under GDPR/DPJL 2018, the legal test: "Could a reasonably informed person re-identify the individual from the anonymized data?"

  • "LP-A, $50M commitment" when there are 40 LPs → sufficient
  • "LP-A" but leaving "the Swiss family office that previously invested in Company X" → insufficient
  • Rule of thumb: Remove names AND any combination of details that narrows to one person

Problem 2: Portfolio Analysis Under NDA

Tiered Processing Model

Tier A: Your Portfolio Companies (You Have Information Rights)
Add AI processing clause to the fund's standard NDA template. For existing companies, request retroactive consent via email. Most will say yes — because they're using AI too. Document every response.
Tier B: Pipeline Companies (Under Evaluation NDA)
Use AI-enabled VDRs (Datasite with ISO 42001, or Peony at $40/admin/month). VDR's built-in AI does initial analysis inside the certified environment. Share the AI's output (summaries, risk flags) with Tarin for synthesis — not the raw documents.
Tier C: Market Research
No restrictions. Public or semi-public information. Share freely.

The honest trade-off: This means Tarin doesn't have raw access to every data room page. But VDR-native AI handles 70-80% of document analysis, Tarin handles synthesis and strategy. Together they cover 95%. The 5% gap isn't worth the legal exposure.

Problem 3: Defensible Monitoring

Layer | What It Does | Effort
1. Platform Audit Logs | Machine-generated record of all data categories processed, timestamps, volume | Automated (request from PureBrain)
2. AI-Side Guardrails | Pattern detection for SSNs, blocklisted names, "privileged and confidential" phrases | Tarin configures (this week)
3. Quarterly Review | AI Officer reviews logs + flags, spot-checks 5 random interactions per partner, writes evidence memo | Half-day per quarter
4. Annual External Validation | Third-party reviews policy, logs, memos, vendor DPA | $3-5K/yr
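
Layer 2 can start as simple pattern matching at the point of entry. A sketch, assuming content passes through a hook before submission; the hook, patterns, and log format are illustrative assumptions, not PureBrain features:

```python
import datetime
import json
import re

TIER1_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "privilege_marker": re.compile(r"privileged\s+and\s+confidential", re.I),
}
BLOCKLIST = ["Acme Capital"]  # illustrative LP/portfolio names

def flag_tier1(content: str, user: str, log_path: str = "guardrail.log") -> list:
    """Flag (not block) Tier 1 patterns and append a machine-readable
    log entry, so quarterly reviews have evidence rather than promises."""
    hits = [label for label, p in TIER1_PATTERNS.items() if p.search(content)]
    hits += [f"blocklist:{n}" for n in BLOCKLIST if n in content]
    if hits:
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "flags": hits,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
    return hits
```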

Problem 4: The Bad Actor Problem

Layer | What It Does | Catches
Approved tools only + device blocking | Prevents accidental consumer AI use | 80% of incidents
Platform audit logs | Creates evidence trail | 100% of approved-tool usage
AI-side guardrails | Flags sensitive data at point of entry | 60-70% of PII/privileged content
Quarterly review | Catches patterns automated tools miss | 90% (combined with logs)
Partnership agreement clause | Legal consequences for violations | Deters deliberate bad actors
Annual external review | Third-party validation | Regulatory defensibility

No single layer is sufficient. All six together create a system where accidental sharing is largely prevented, deliberate sharing is logged and detectable, and bad actors face legal consequences beyond just "breaking a rule."

What This Costs

Item | Cost | When
Tarin preprocessor script | $0 (Tarin builds it) | This week
Nitro Smart Redact [Link] | $20/user/month ($960/yr for 4 GPs) | Phase 1
Cloudflare Gateway [Link] | Free (up to 50 users) | Phase 1
VDR with AI (Peony) [Link] | $40/admin/month ($480/yr) | When deal flow starts
VDR with AI (Datasite) [Link] | ~$15-25K/yr (ISO 42001 certified) | When fund scales
Partnership agreement AI clause | $0 (draft with existing counsel) | Before first close
Cyber/E&O insurance AI rider | ~$2-5K/yr additional premium | At fund formation
Annual external AI review | ~$3-5K/yr | Post-first close

Phase 1 Total: ~$1,200/yr + $0 for tools Tarin builds. At Scale: ~$25-35K/yr.

What Tarin Can Build This Week

  1. Preprocessor script — Scans documents for LP names (blocklist), portfolio company names under NDA, PII patterns (SSN, passport, bank account formats). Outputs anonymized version + mapping file.
  2. Self-enforcing guardrails — Tarin flags when Tier 1 patterns are detected in shared content. Not a block — a flag with a logged warning.
  3. AI Use Policy draft — 2 pages, fund-specific, ready for GP review and signature.
  4. NDA AI clause — Drop-in paragraph for your standard NDA template.
  5. Partnership AI agreement — 1-page addendum to the GP operating agreement.

Unified Action List — 18 Items

Synthesized from securities law, privacy law, international law, and 50-source deep research analyses. Items are prioritized by legal urgency and regulatory risk. Each item notes which specialist report(s) identified it.

IMMEDIATE — This Week
  1. Stop sharing Tier 1 data with AI — LP passport copies, KYC docs, SSNs, bank details. If any have been shared, document what was shared and when. (Privacy, Securities, International)
  2. Request a formal DPA from PureBrain — The privacy policy is not a DPA. Need explicit Article 28(3) terms: processing purpose, data categories, sub-processors, deletion obligations, audit rights, breach notification timeline. (Privacy, Securities)
  3. Review D&O, E&O, and cyber insurance for AI exclusions — Check every current policy. If exclusions exist, engage carrier immediately about removal or affirmative AI coverage. Do NOT wait for renewal. (Research)
  4. Verify Anthropic's DPF certification status — Check dataprivacyframework.gov for whether Anthropic is certified under Swiss-US DPF AND UK Extension. This determines the legality of current data transfers. (International, Privacy)
BEFORE FIRST CLOSE — Next 30-60 Days
  5. Write the AI Use Policy (2 pages) — Authorized tools, data classification, AI governance owner, incident response, annual review. (All Reports)
  6. Add LP disclosure to PPM — One paragraph disclosing AI use, data handling practices, and safeguards. (Securities, Privacy, International)
  7. Add AI clause to standard NDA — Explicit permission for enterprise AI processing with no-training commitments. (Securities, International, Research)
  8. Conduct a DPIA — Document the assessment of AI processing risks. Required before processing LP data. 4+ EDPB triggers are met. (Privacy)
  9. Establish Jersey SCCs — The 6 Jersey entities need Standard Contractual Clauses for any US data transfers. Current transfers without SCCs are a violation. (International)
  10. Determine Rimah's UAE entity structure — DIFC vs mainland vs ADGM drives the entire Dubai compliance analysis. Different frameworks, different requirements. (International)
  11. Set up independent AI interaction archiving — PureBrain deletes 30 days post-cancellation; the SEC requires 5-year retention. Export and archive conversation logs independently. (Securities, Research)
  12. Partner AI Agreement — All four GPs agree on boundaries, authorized tools, data classification. Include AI data handover provisions for GP removal events. (All Reports)
  13. Establish MNPI procedures for AI — Specific policies addressing AI as a data transmission channel. Generic MNPI policies are insufficient per SEC enforcement. (Securities)
POST-FIRST CLOSE — Ongoing
  14. Annual AI governance review with external validation. (Research, Securities)
  15. Quarterly AI interaction audit by designated AI Officer. (Securities, Privacy)
  16. Monitor Schrems III — If EU-US DPF is invalidated, Swiss-US framework may follow. Contingency plan needed. (International, Privacy)
  17. Push PureBrain toward SOC 2 — Through Rimah's relationship, advocate for formal security certification. (All Reports)
  18. Employee AI training — Before onboarding any team member, mandatory briefing on AI data handling. (Research, Securities)
Cost Estimate: Items 1-4 cost $0 (internal actions). Items 5-13 cost ~$1,200/yr for tools plus $5-15K for outside counsel to review the policy, DPA, and NDA clause. Items 14-18 cost ~$3-5K/yr for external review. Total is negligible against a $1B fund.
Next Step: Address items 1-4 this week. Items 5-13 before first close. Items 14-18 are ongoing post-close.

About These Analyses

Three domain specialists (securities law, privacy law, international law) independently analyzed the AI data sharing scenario, followed by a 50-source deep research dive. This tab synthesizes their findings, starting with the most dangerous discovery: compound risk scenarios where multiple failures stack.

All analyses prepared April 29, 2026. These are AI analyses for informational purposes only and do not constitute legal advice.

A. Compound Risk Scenarios

Each specialist found serious issues independently. The real danger is where they compound — where regulatory, legal, and insurance failures stack on top of each other.

Compound Risk #1: The Uninsured Breach

Securities: No AI-specific MNPI procedures = SEC enforcement exposure
Privacy: No DPA with PureBrain, no DPIA conducted = GDPR/Swiss FADP fines
International: Jersey transfers without SCCs = current DPJL violation
Insurance: D&O/E&O/cyber policies adding AI exclusions = no coverage
Research: IBM data shows shadow AI breaches cost $670K more

The stack: A data breach through PureBrain triggers regulatory action from multiple jurisdictions simultaneously, LP lawsuits for fiduciary breach, NDA claims from portfolio companies — and insurance denies coverage because AI exclusions apply and governance wasn't documented. GPs are personally liable. In Switzerland, that includes criminal fines up to CHF 250,000 per violation.
Compound Risk #2: The LP Discovery

Securities: No disclosure of AI use in PPM/Form ADV
Privacy: LP data processed without consent or DPIA
International: Cross-border transfers to US without legal basis
Research: 85% of LPs reject managers over operational concerns; CalSTRS and NBIM are already sophisticated AI users who will ask detailed questions

The stack: During fundraising due diligence, a sophisticated LP (pension fund, SWF) asks DDQ questions about AI. The fund has no policy, no DPA, no DPIA, no archiving. The LP not only passes — they tell other LPs. Fundraising dies.
Compound Risk #3: The GP Removal + AI Data

Securities: Who owns AI interaction records when a GP is removed?
Privacy: GDPR data portability/erasure rights during GP transition
International: Multi-jurisdictional data in a US platform during a Jersey-law governance event
Research: No LPA template addresses this. Completely uncharted.

The stack: LP vote removes a GP. The outgoing GP's AI has institutional knowledge about LP relationships, deal pipeline, fund strategy. Successor GP needs this. Outgoing GP controls the PureBrain account. No contractual mechanism exists for handover. Legal dispute in Jersey courts over AI data ownership — with no precedent and no governing provision.

B. Securities Law Findings

Key findings from the securities law specialist analysis.

SEC Fiduciary Duty and AI Oversight
Core SEC Position: "Delegating decisions to a machine does not absolve the human fiduciary from oversight." Advisers must validate AI systems, understand assumptions, and continuously monitor performance.
Source: SEC Division Director Brian Daly, February 3, 2026 speech

The standard of care in 2026 requires: (a) vendor due diligence on AI platforms, (b) written AI use policies, (c) LP disclosure of material AI use, and (d) independent verification of AI outputs. Failure on any creates potential Section 206 liability.

The conflict: The GP benefits from operational efficiency, but the LP bears the data security risk. This is a material conflict requiring disclosure.

MNPI Procedures Must Address AI Channels

Section 204A of the Investment Advisers Act mandates written policies to prevent MNPI misuse. The SEC has brought enforcement actions against advisers whose policies did not account for specific data transmission channels.

Key Point: A no-training commitment from PureBrain is necessary but not sufficient. The MNPI still leaves the adviser's control and resides on a third-party server. The adviser's obligation is to prevent misuse — not merely to obtain contractual assurances.

Required: Classification of data as MNPI before sharing with any AI platform. Prohibition on sharing pre-announcement deal terms. Logging of all MNPI-adjacent data shared with AI.

AI Washing Enforcement Precedents
Case | Date | Penalty | What Happened
Delphia (USA) Inc. | March 2024 | $225,000 | Claimed AI "predicts which companies are about to make it big" — AI was not actually incorporated as described
Global Predictions Inc. | March 2024 | $175,000 + 5-year bar | Falsely claimed to be "first regulated AI financial advisor"
Nate Inc. | April 2025 | $42M (DOJ + SEC) | Claimed AI processed transactions; nearly all were manual
Ally Invest Advisors | March 2026 | $500,000 | Undisclosed 30% cash allocation in robo-advisor; coding errors broke tax-loss harvesting

The SEC uses existing anti-fraud provisions (Section 206), not new AI-specific rules. Firms cannot wait for "AI regulations."

5-Year Archiving vs. PureBrain's 30-Day Deletion

Rule 204-2 (Books and Records Rule) requires 5-year retention of written communications relating to advice, recommendations, and transactions. AI-drafted LP reports, IC memos, and investor communications all qualify.

The gap: PureBrain retains conversation data for 30 days post-cancellation, then deletes. If the fund relies solely on PureBrain, it fails the 5-year requirement.

Action Required: Implement independent archiving of AI interactions that produce LP communications or investment recommendations. Both prompts and outputs should be retained.
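
A minimal sketch of what independent archiving could look like, assuming prompts and outputs can be captured at the point of use (PureBrain publishes no export API, so the record shape is an assumption). The per-record hash supports integrity verification if the archive is ever produced in an examination:

```python
import datetime
import hashlib
import json

ARCHIVE = "ai_interactions.jsonl"  # append-only; back up to WORM storage

def archive_interaction(user: str, prompt: str, output: str) -> str:
    """Append one prompt/output pair with a UTC timestamp and an
    integrity hash; retain the file for the 5-year Rule 204-2 period."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "output": output,
    }
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(ARCHIVE, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["sha256"]
```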
Reg S-P Safeguards Rule and Vendor Due Diligence

The amended Reg S-P (compliance deadline June 3, 2026 for smaller firms) requires covered institutions to "take reasonable steps to select and retain service providers capable of maintaining appropriate safeguards."

The absence of SOC 2 or ISO 27001 certification from PureBrain creates a gap: the fund cannot demonstrate it took "reasonable steps" to verify the service provider's safeguards.

Options: (a) Restrict LP personal data from AI entirely, (b) obtain from PureBrain a detailed written information security program, or (c) wait for PureBrain to obtain SOC 2 certification.

C. Privacy Law Findings

Key findings from the privacy law specialist analysis. 14 compliance gaps identified, 9 rated HIGH risk.

Swiss Criminal Liability: CHF 250,000 Personal Fines
Critical Distinction: Under Swiss nFADP Art. 60-63, violations carry criminal penalties (fines up to CHF 250,000) imposed on the responsible individual — meaning the Geneva-based GP personally, not the fund entity. This includes violations of duties to inform data subjects, breaches of data security obligations, and violations of transfer provisions.

Unlike GDPR which fines organizations, Swiss law makes the individual personally criminally liable. This is not an academic distinction — it means the GP's personal assets are at risk.

DPA vs. Privacy Policy — 8 Missing Requirements

A privacy policy is NOT a Data Processing Agreement. Under GDPR Article 28(3), a DPA must include:

Required Clause | Status with PureBrain
Process only on documented instructions | Unknown — no DPA disclosed
Assist with data subject rights (DSARs) | Unknown
Assist with security obligations (breach notification, DPIAs) | Unknown
Demonstrate compliance / allow audits | No audit rights disclosed
Sub-processor notification mechanism | No mechanism for controller approval
Data residency commitment | Not disclosed
At-rest encryption standard | Not disclosed
Breach notification timeline | No specific commitment

The fund CANNOT lawfully process LP personal data through PureBrain until a compliant Art. 28 DPA is in place.

DPIA Is Mandatory (4+ EDPB Triggers Met)

Under GDPR Art. 35, a DPIA is required before processing "likely to result in a high risk." The EDPB guidelines state that when two or more of nine criteria are met, a DPIA is "more likely to be required." This scenario meets at least four:

  • New technologies — AI/LLM processing qualifies
  • Innovative use of technology — Persistent AI memory storing personal data
  • Systematic evaluation of personal aspects — AI analyzing LP profiles and communications
  • Data transfer outside EU/EEA — Transfer to US-based servers

A DPIA is legally mandatory before any LP personal data enters the AI system.

EDPB: LLMs Rarely Achieve Anonymization
EDPB Position (April 2025): The European Data Protection Board clarified that large language models rarely achieve anonymization standards. Controllers deploying third-party LLMs must conduct comprehensive legitimate interests assessments. Claiming data is "anonymized" by the model is not a valid defense.

This means that even if a GP believes they are anonymizing data by feeding it to an LLM, the regulator's position is that the LLM likely still processes personal data and GDPR obligations apply in full.

Right to Erasure vs. AI Persistent Memory

Under GDPR Art. 17, data subjects have the right to erasure. PureBrain's persistent memory creates a specific compliance challenge:

  • Can LP data be identified and extracted from unstructured AI memory?
  • Can it be selectively deleted without destroying other data?
  • What about derived insights the AI generated from the LP's data?
  • Even if PureBrain deletes, Anthropic retains API data for safety monitoring
Status: UNVERIFIED. If PureBrain cannot demonstrate selective deletion of specific LP data from persistent memory, the fund cannot comply with Art. 17 erasure requests. This must be resolved contractually and technically before LP data enters the system.
Uploading Passport Copy = 7 Simultaneous Violations

If a GP uploads an LP's passport copy to PureBrain for KYC workflow, the following violations occur simultaneously:

# | Violation | Regulation | Severity
1 | International transfer without adequate safeguards | GDPR Art. 44-46, nFADP Art. 16-17, DPJL 2018 | HIGH
2 | Processing biometric data without explicit consent | GDPR Art. 9(1) | HIGH
3 | No DPIA conducted before processing | GDPR Art. 35, nFADP Art. 22 | HIGH
4 | No DPA with PureBrain covering this processing | GDPR Art. 28 | HIGH
5 | Data minimization violation | GDPR Art. 5(1)(c) | MEDIUM
6 | Purpose limitation violation | GDPR Art. 5(1)(b) | HIGH
7 | Security inadequacy (no SOC 2 for platform) | GDPR Art. 32 | MEDIUM

Penalty exposure: Up to EUR 20 million (GDPR) plus CHF 250,000 personal criminal fine (Swiss nFADP) against the GP who uploaded it.

D. International Law Findings

Key findings from the international law specialist analysis covering 6 jurisdictions.

Jersey SCCs Missing = Current DPJL Violation
Current Violation: The US does NOT have an adequacy decision from the Jersey authorities. ALL transfers of personal data from Jersey entities to US-based AI platforms require appropriate safeguards (Standard Contractual Clauses). These are not in place.

The fund vehicle (Jersey LP) holds ALL LP data. Sharing ANY of this with a US AI platform without SCCs constitutes a breach of the DPJL 2018. This applies to 6 Jersey entities.

UAE/DIFC: No US Adequacy, High Risk for Dubai Operations

The UAE has THREE parallel data protection regimes, and the applicable one depends on where the entity is based:

Regime | Scope | Status
UAE Federal PDPL | Mainland companies | Executive regulations STILL UNPUBLISHED (overdue since Jan 2023)
DIFC Data Protection Law | DIFC-licensed entities | Fully operational, strengthened July 2025
ADGM Data Protection Regs | ADGM-licensed entities | Operational

DIFC amendments (July 2025): Now require documented adequacy assessments, Commissioner can withdraw adequacy decisions, data subjects have private right of action in DIFC Courts, fines USD 25,000-50,000 per violation.

No regime has an adequacy arrangement with the US. All require contractual safeguards.

Schrems III Threat to All Data Frameworks

The EU-US Data Privacy Framework faces active legal challenge. In September 2025, it survived its first courtroom test, but Philippe Latombe's appeal (October 31, 2025) keeps the threat alive. Changes to US independent agencies (PCLOB, FTC) are undermining the framework's foundations.

If the DPF is invalidated: All three adequacy decisions (EU-US, UK-US, Swiss-US) could fall simultaneously. Every fund data transfer to PureBrain/Anthropic would need supplementary measures — but since AI processing requires plaintext access, achieving compliant supplementary measures may be practically impossible with current AI architectures.

Recommendation: Maintain SCCs as a parallel backup mechanism for all jurisdictions, regardless of DPF status.

Swiss Blocking Statute vs. US CLOUD Act

A direct legal collision exists between Swiss and US law:

Side | Law | Consequence
Swiss | Banking Act Art. 47 + Criminal Code Art. 271 | Criminal penalties (up to 5 years imprisonment) for unauthorized disclosure to foreign authorities
US | CLOUD Act (2018) | Compels US-based providers (including Anthropic) to disclose data regardless of storage location

Once data is on a US platform, the fund loses control over its disclosure in US legal proceedings. The most effective defense is not having the data on the platform in the first place.

Export Control ECCN 4E091 for AI Portfolio Companies

BIS introduced controls on unpublished AI model weights trained using more than 10^26 computational operations (FLOP). If the fund invests in companies developing frontier AI models, sharing their technical data with PureBrain could trigger export controls.

Scenario | EAR Risk
Processing portfolio company financials via AI | LOW
Processing portfolio company source code via AI | MEDIUM-HIGH
Processing AI model weights/architecture via AI | HIGH
Portfolio company with defense applications (ITAR) | VERY HIGH

If any portfolio company has defense/military applications, segregate ALL technical data from the AI platform entirely. No exceptions.

NDA Breach Occurs at Transmission, Not at Misuse

Under English law (governing most deal NDAs), inputting confidential information into PureBrain/Anthropic constitutes transmission to a third party. The breach occurs at the moment of transmission, NOT only if the platform subsequently misuses the data.

Even with no-training commitments and contractual data isolation, the disclosure itself violates the NDA. The counterparty could seek injunctive relief, damages, account of profits, and potentially rescission of any resulting transaction.

Remedy: Add AI-specific provisions to NDAs before processing any deal information through AI. For existing NDAs, request retroactive consent.

E. Industry Intelligence (50-Source Deep Dive)

Key findings from the deep research beyond the initial 30-source brief.

Insurance Blind Spot: D&O, E&O, and cyber insurance policies are embedding "absolute" AI exclusions that preclude coverage for any AI-related claims — including statements about AI capabilities, assessments of AI threats, and business plans involving AI. A scenario where both the AI failure lawsuit AND the resulting investor/regulator claims are excluded by both E&O and D&O policies leaves the GP personally exposed.
Source: Harvard Law School Forum on Corporate Governance (Sept 2025); ISACA (2025)
"AI Hushing" as Compliance Risk: KPMG identifies "AI hushing" — understating AI tool usage — as a novel compliance risk alongside the more familiar AI washing. Both carry regulatory exposure.
Source: KPMG, "Evolving Asset Management Regulation" (2025)
Shadow AI: 43% of employees share sensitive data with AI without employer permission. One in five organizations has experienced a data breach tied to shadow AI. Shadow AI breaches cost $670,000 more per incident on average. In financial services: 33% entered enterprise research/datasets, 27% entered employee data, 23% entered financial data.
Source: IBM 2025 Cost of a Data Breach Report
Samsung Incident + Industry Domino: Three Samsung employees entered sensitive data into ChatGPT (source code, defective equipment programs, meeting recordings). Samsung banned all generative AI, triggering restrictions at Apple, JPMorgan, Verizon, and Amazon. Samsung has since re-enabled access with new security protocols. A single employee incident can force enterprise-wide policy changes.
Sources: TechCrunch, Bloomberg, CIO Dive (2023-2025)
Major Institutional AI Users (LP Intelligence)
Norway NBIM ($2.2T AUM): Uses Anthropic's Claude to screen every company entering the fund's equity portfolio for ethical issues within 24 hours. When the world's largest SWF uses the same AI model that powers PureBrain, it normalizes AI use — but raises the governance bar.
Source: CNBC, February 2026
CalSTRS: Uses AI for portfolio intelligence, manager due diligence, legal documentation, and reconciliation in private assets. When a major pension fund LP uses AI for manager due diligence, their questions about GP AI practices will be sophisticated.
Source: Top1000Funds, July 2025

Stanford HAI (April 2026): Global corporate AI investments hit $581.7 billion in 2025 (+130% YoY). But reporting on responsible AI benchmarks remains sparse. AI governance roles grew 17%, and firms with no responsible AI policies fell from 24% to 11%.

Industry Body Recommendations

AIMA Practical Guide for Advisers (2025)

  • Establish formal governance committees including compliance, data scientists, and risk management
  • Require human verification before implementing AI-driven strategies
  • Implement acceptable use policies restricting uploads of proprietary/client data to public AI
  • Flag both "AI washing" and "AI hushing" as compliance risks

CFA Institute (November 2025)

The AI+HI (AI + Human Intelligence) principle: ethical professional practice requires fairness, transparency, accountability, and privacy embedded in AI systems.

GAO Findings (May 2025)

Financial institutions use AI for customer service; regulators use it for risk identification. NCUA lacks authority to examine technology service providers despite increasing reliance on them. Most regulators report that AI outputs inform decisions but are not the sole basis for them.

FINRA 2026 Report

First-ever dedicated GenAI section. Warns about autonomous AI agents acting "beyond the user's actual or intended scope and authority." Flags the need for updated vendor contracts covering AI usage, training data rights, security controls, and model change notifications.

Anticipated LP DDQ Questions on AI

Based on ILPA DDQ frameworks and regulatory direction, these are the specific AI questions GPs should expect:

  1. What AI tools does the firm use in fund operations?
  2. What categories of data are shared with AI platforms?
  3. What contractual protections exist with AI vendors (DPA, no-training, deletion)?
  4. Does the firm have a written AI use policy? Who approved it?
  5. How does the firm prevent LP personal data from being used for model training?
  6. What incident response procedures exist for AI-related data breaches?
  7. Does the firm's insurance coverage address AI-related claims?
  8. What audit trail exists for AI-assisted investment decisions? (A logging sketch follows this list.)
  9. How does the firm comply with GDPR/data protection laws regarding AI processing?
  10. Has any employee used unauthorized AI tools for fund business?
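
Several of these questions (4, 6, and especially 8) can only be answered credibly if usage is logged as it happens. A minimal sketch of an append-only AI usage log that would back up the audit-trail answer; the log path and field names are hypothetical.

```python
# Hypothetical append-only usage log supporting DDQ question 8
# ("What audit trail exists for AI-assisted investment decisions?").
# File path and field names are illustrative.
import json
import time
from pathlib import Path

LOG_PATH = Path("ai_usage_log.jsonl")  # hypothetical location

def log_ai_use(user: str, tool: str, data_tier: int, purpose: str) -> None:
    """Append one JSON line per AI interaction: who, which tool,
    what data tier (1-4 per the classification framework), and why."""
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": user,
        "tool": tool,
        "data_tier": data_tier,
        "purpose": purpose,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a Tier 3 (Internal) research task routed through the enterprise platform.
log_ai_use("analyst@fund", "PureBrain", data_tier=3, purpose="industry research summary")
```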

F. New Sources (Specialist Reports + Deep Dive)

50 additional sources beyond the original 30-source research brief, organized by category.

Government and Regulatory Bodies (7 sources)
Think Tanks and Academic Institutions (6 sources)
Industry Bodies (7 sources)
Big 4 and Consultancies (6 sources)
Law Firm Analysis (10 sources)
Insurance and Risk (4 sources)
LP and Fund Operations (7 sources)
Data Incidents and Breach Reports (3 sources)

Review of Partner’s Source Framework

A partner independently developed an AI governance framework ("Constitution of Documents") based on 10 Jersey-focused sources. Below is an assessment of those sources, what they contribute, and what they miss.

What the Partner’s Sources Add

  • JFSC Outsourcing Policy (effective Jan 2024) — Directly relevant. Using PureBrain is arguably an "outsourced activity" under JFSC rules. Requires: due diligence on the provider, outsourcing notification to JFSC (1 month advance notice), data residency documentation, right to audit, incident reporting. This is a new action item our analysis had not flagged (a checklist sketch follows this list).
  • JFSC Fund Services Business Code of Practice — Sets fit-and-proper standards the fund administrator will hold the fund to. Solid foundation.
  • JFSC AML/CFT/CPF Handbooks — AML obligations are real and AI doesn’t exempt you. KYC data handling under these rules is non-negotiable.
  • Data Protection (Jersey) Law 2018 — Core framework. Aligns with GDPR but has Jersey-specific provisions (JOIC oversight, no direct EDPB guidance).
  • Ogier Channel Islands Funds Update (April 2026) — Confirms AIFMD II alignment effective April 16, 2026. Jersey AIF Code updated for EU market access.
  • Atlan AI Governance (2025) — Useful framework with 6 governance principles. Adds: 95% of PE firms plan to multiply AI investment in 18 months (WEF survey, $3.2T AUM). Cites SEC Rule 17a-4 for AI recordkeeping.
  • Jersey Fintech Guide 2025 (Chambers/Carey Olsen) — Notes agentic AI as next evolution in advisory. Flags increasing regulatory scrutiny of consumer protection.
  • Walkers Regulatory Updates 2025 — General regulatory landscape. No AI-specific guidance but useful context.
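
The JFSC outsourcing obligations in the first item above are concrete enough to track as a structure. A minimal sketch, assuming a hypothetical JfscOutsourcingChecklist kept alongside the compliance calendar; field names are illustrative.

```python
# Hypothetical tracker for the JFSC Outsourcing Policy obligations listed above.
# Field names are illustrative; status would come from the compliance calendar.
from dataclasses import dataclass

@dataclass
class JfscOutsourcingChecklist:
    provider: str
    due_diligence_done: bool = False
    jfsc_notified: bool = False          # 1 month advance notice required
    data_residency_documented: bool = False
    audit_right_in_contract: bool = False
    incident_reporting_agreed: bool = False

    def outstanding(self) -> list[str]:
        """Return the names of obligations not yet satisfied."""
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]

chk = JfscOutsourcingChecklist(provider="PureBrain / Anthropic", due_diligence_done=True)
print(chk.outstanding())
# -> ['jfsc_notified', 'data_residency_documented',
#     'audit_right_in_contract', 'incident_reporting_agreed']
```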

What the Partner’s Sources Miss

Gap | Why It Matters
No US/SEC securities law | Fiduciary duty, MNPI procedures, AI washing enforcement, Reg S-P: all apply to a fund with US LPs and a Delaware entity
No Swiss law | Geneva-based GP faces personal criminal fines up to CHF 250,000 under the revised Swiss FADP
No UAE/DIFC law | Dubai-based GP subject to DIFC amendments (July 2025) with a private right of action and USD 25-50K fines
No UK law | London-based GP subject to UK GDPR and FCA oversight
No court cases | Heppner (Feb 2026) established privilege waiver for consumer AI; AI washing cases set the SEC enforcement pattern
No insurance analysis | D&O, E&O, and cyber policies are actively adding AI exclusions: the #1 blind spot
No LP perspective | CalSTRS and NBIM ($2.2T) already use AI for manager due diligence; DDQ questions are coming
No industry bodies | AIMA practical guide, CFA Institute AI+HI principle, FINRA GenAI section: all directly relevant

Critical Assessment

Risk Containment vs Capability Governance

The partner’s framework treats AI as a risk to be contained: restrict access, redact everything, minimize AI touchpoints. This leads to classifying fund documents like the PPM and LPA as "REDACTED," with heavy restrictions on what AI can read.

An alternative approach: treat AI as a capability to be governed — define what AI can do and what it can’t, document it, audit it. This leads to full AI access to fund documents on enterprise platforms, with restrictions on output (what AI does with the information) rather than input (whether AI can read it).

The JFSC Outsourcing Policy itself supports the governance approach — it doesn’t say "don’t outsource." It says: do due diligence, notify, document, maintain oversight. The same principle should apply to AI.
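
As an illustration of output-side governance: the model reads the full document set on the enterprise platform, but anything it produces is screened for restricted identifiers before it leaves the boundary. A minimal sketch; the restricted-terms list and the screen_output helper are hypothetical stand-ins for a real DLP rule set.

```python
# Hypothetical output-side guard: the model may READ fund documents,
# but its OUTPUT is screened before leaving the enterprise boundary.
# The pattern list stands in for a real DLP rule set.
import re

RESTRICTED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN format (Tier 1: LP personal data)
    re.compile(r"\bIBAN\s?[A-Z]{2}\d{2}[A-Z0-9]{1,30}\b", re.IGNORECASE),  # bank details
]

def screen_output(text: str) -> str:
    """Redact Tier 1 identifiers from model output; the input was never restricted."""
    for pattern in RESTRICTED_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(screen_output("Wire reference for LP 123-45-6789 confirmed."))
# -> "Wire reference for LP [REDACTED] confirmed."
```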

Recommendation

The partner’s Jersey analysis is strong and should be preserved as the Jersey compliance layer. It needs to be supplemented with the multi-jurisdictional analysis (US, Switzerland, UAE, UK), the insurance review, court case precedents, and the practical execution framework. Together, these create a comprehensive governance document. Separately, each is incomplete.

Working Paper — AI Ethics, Privacy & Data Sharing — April 29, 2026
Prepared by Tarin (AI Chief of Staff) — Based on 80+ sources including 30-source research brief, 50-source deep dive, and specialist analyses in securities law, privacy law, and international law