SFV IC Agent
The thesis

The bet.

AI-native financial infrastructure means the companies rebuilding back-office and middle-office financial workflows with LLMs and agents at the core — not as a feature on top of a traditional SaaS product, but as the load-bearing logic of the system. Underwriting, treasury, AP/AR, compliance, credit memos: the work that used to require a person is now done by an agent that ships with examiner-ready audit trails. The buyer is the CFO, the chief credit officer, the BSA/AML officer; the budget being displaced is real headcount and real legacy software.

Why now is not “LLMs got better.” It is that, between 2023 and 2025, multimodal models crossed three specific thresholds at once: messy financial-document understanding became reliable enough for human-in-the-loop production use, conversational interfaces stopped embarrassing the buyer’s brand, and structured extraction from government forms hit examiner-acceptable accuracy. The category went from demo-able to deployable inside the same eighteen months that incumbent vendors were still figuring out their AI roadmaps. That window is the bet.

Where we look

Five subsegments.

See market map →
Underwriting

AI underwriting & credit decisioning

Companies rebuilding origination, credit analysis and decisioning workflows around LLMs — for consumer, SMB, embedded and SBA lending.

Examples: Casca · Taktile
Load-bearing question

Will banks and lenders trust AI-rendered credit decisions in front of regulators, or stay constrained to AI-as-assistant?

FinOps

Agentic FinOps

Treasury, spend management and procurement automation where AI agents own approval, fraud, and policy workflows end-to-end.

Examples: Ramp
Load-bearing question

Does the agent layer become the primary surface area for the CFO stack, or is it a UX skin on a card-and-software platform?

Back-office

AI-native back-office

AP/AR, reconciliation, close, audit and the broader controller stack — rebuilt around language-model document understanding, not OCR.

Examples: Rillet · Laurel
Load-bearing question

Can an AI-native ERP or close stack actually displace NetSuite at venture-backed companies before the incumbents bolt on credible AI?

Embedded

Embedded finance

Embedded finance infrastructure with AI orchestration on top of BaaS rails — credit, payments, treasury inside a vertical product.

Examples
Load-bearing question

Where does AI orchestration justify a dedicated infrastructure layer above the existing BaaS plumbing?

Compliance

Compliance & KYC/AML

AML, KYC, KYB, sanctions and regulatory-change monitoring — agents that clear alerts and produce examiner-ready audit trails.

Examples: Sardine · Bretton AI (fka Greenlite)
Load-bearing question

Will regulated FIs trust an agent to clear an alert, or only to triage one — and is the difference a moat or a ceiling?

How we score

The rubric.

Six dimensions · 30 points
AI centrality
1

AI is a feature label on a traditional SaaS product

3

AI handles a meaningful but optional workflow

5

AI is the product; remove it and nothing is left

We are looking for products that disappear without their LLMs. Auto-coding, conversational intake, multi-source evidence synthesis, narrative generation. Cosmetic AI is the largest category in the market — a 1 or 2 score is the default, not the exception.

Workflow depth
1

Surface-level tool (notifications, summaries)

3

Owns one full workflow end-to-end

5

Owns multiple connected workflows; replaces a role

Depth is the difference between a copilot and a system of record. The companies we want own the credit narrative, not the dashboard above it. Multiple connected workflows are how a product crosses from automation to displacement.

Data loop
1

No proprietary data accumulating

3

Some proprietary data, weak loop

5

Strong loop: more usage → better model → more usage

The most over-claimed moat in fintech. We probe whether data is actually entering model updates or just sitting in a warehouse. A real loop changes the unit economics of every additional customer; a fake one is just storage.

Founder-workflow fit
1

Founders have no domain background

3

Founders have adjacent domain experience

5

Founders have done the exact job being automated

An ex-credit-officer building underwriting tools is a different bet than an ex-Google PM building underwriting tools. We score adjacency vs. exact-job experience explicitly. Engineers building for engineers usually outperform — but not in regulated finance.

Traction signal
1

Pre-revenue or pilots only

3

Real ARR, mixed retention

5

Real ARR with strong NDR and logo quality

Real ARR with named logos beats a TAM model with conviction every time. We anchor on disclosed numbers only — no inference from round size, no extrapolation from headcount, no benchmarking to the median Series A.

SFV thesis fit
1

Tangential to the thesis

3

Clearly within the thesis

5

Central to the thesis; SFV should know this company

We score how well a company matches the AI-native financial infrastructure brief — not how interesting the company is in general. Adjacent breakthroughs in horizontal AI tooling don’t earn a 5 here.
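The six-dimension, 30-point rubric above can be totaled with a minimal sketch. The dimension names mirror the page; `score_company` and the example scores are illustrative assumptions, not the agent's actual scoring code.

```python
# Six dimensions, each scored 1-5, summed to a 30-point total.
DIMENSIONS = [
    "AI centrality",
    "Workflow depth",
    "Data loop",
    "Founder-workflow fit",
    "Traction signal",
    "SFV thesis fit",
]

def score_company(scores: dict[str, int]) -> int:
    """Sum six 1-5 dimension scores into the 30-point rubric total."""
    assert set(scores) == set(DIMENSIONS), "score every dimension exactly once"
    assert all(1 <= s <= 5 for s in scores.values()), "each dimension is 1-5"
    return sum(scores.values())

# Hypothetical scorecard for a strong thesis-fit company:
example = {
    "AI centrality": 5,
    "Workflow depth": 4,
    "Data loop": 3,
    "Founder-workflow fit": 5,
    "Traction signal": 3,
    "SFV thesis fit": 5,
}
print(score_company(example))  # 25 out of 30
```

The hard floor-and-ceiling structure (a 1 is the default for cosmetic AI, a 5 must be earned per dimension) is enforced by the range check rather than by weighting.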

Category-level risks

What kills the thesis.

01

Regulatory backlash on AI in lending and compliance

An OCC, CFPB or state regulator examines an AI-rendered adverse action and finds the explanation chain inadequate, or a fair-lending audit shows disparate impact in an AI-assisted approval flow. The category response — model risk management documentation, bias testing, examiner-ready audit trails — becomes a hard floor that compresses the gap between AI-native entrants and incumbents who can throw lawyers and consultants at the problem.

02

Incumbent counter-attack via M&A

nCino, Moody’s, FIS, Fiserv, SAP, Coupa or BILL acquire two or three of the strongest AI-native entrants in their adjacency, fold them into existing distribution, and the standalone-platform thesis collapses into an OEM thesis. Brex → Capital One in January 2026 already showed this is the live exit path. Good for early holders, harder for companies betting on independent scale.

03

The “AI-native” label getting diluted as everyone bolts on AI

Within 12–18 months every legacy vendor claims AI-native status, the buyer signal collapses, and procurement teams stop treating AI-nativity as a tiebreaker. The thesis still works — but the marketing wedge disappears. Companies whose moat is real workflow ownership keep compounding; companies whose moat was being first to say the words don’t.

Research stack

How a memo gets made.

Step 1
User input
Company name
Step 2
Claude Agent SDK
claude-opus-4-6 · streaming query()
Step 3
Exa MCP
Four research tools
web_search_exa · company_research_exa · linkedin_search_exa · crawling_exa
Step 4
System prompts
system · memo template · rubric · operator context
Step 5
File output
output/<slug>-memo.md · output/market-map.md
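Step 5's `output/<slug>-memo.md` convention can be sketched as a small helper. The `memo_path` name and the exact slug rule (lowercase, runs of non-alphanumerics collapsed to hyphens) are assumptions for illustration, not the agent's actual implementation.

```python
import re

def memo_path(company: str) -> str:
    """Build the memo output path, output/<slug>-memo.md, from a company name."""
    # Lowercase, collapse any run of non-alphanumerics to a single hyphen,
    # then trim leading/trailing hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", company.lower()).strip("-")
    return f"output/{slug}-memo.md"

print(memo_path("Bretton AI (fka Greenlite)"))  # output/bretton-ai-fka-greenlite-memo.md
```

A deterministic slug keeps memo runs idempotent: re-researching the same company overwrites its prior memo instead of accumulating near-duplicate files.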