Finance AI · Agentic Systems · Operator Perspective

Finance AI from
inside the work.

Not theory. Not vendor pitch. A working controller's view of how agentic AI is reshaping the finance function — projects, technical issues, implementations. Four published articles. An AI readiness assessment. A prompt generator for real controller work.

Read the Thesis
Controller leverage with AI: 3–5× output leverage on written work · 4 published articles
Agentic Close Automation · Technical Accounting Memos · Reconciliation Agents · Stablecoin Treasury Controls · SOX for AI Systems · Flux Analysis Automation · Revenue Recognition AI · Controller Career Divide · Finance LLM Review Gates · Agent Risk Frameworks

How Agentic Systems Actually
Move Through Finance Work

Not marketing diagrams. Actual workflow patterns — with the control points, review gates, and failure modes included.

Workflow: three data sources (GL system, sub-ledger, bank statements) feed a Data Ingestion agent (normalize, validate, flag), which hands off to a Reconciliation agent (match, diff, score) and a Flux Analysis agent (explain, flag, draft). All agent output passes a control gate — human review, controller sign-off required. Flagged items route to an exception queue for investigation. Outputs: the close package (signed, filed, archived) and flux commentary (AI draft, human edited).

Where Agents Are Actually
Changing the Work

Mapped by maturity, domain, and what kind of human oversight is still required.

Live Now

Technical Accounting Research

Agent traverses ASC, IFRS, peer disclosures, and prior research to assemble relevant guidance. Controller validates and decides. What used to take 4-6 hours now takes 45 minutes — the same quality, radically different time cost.

ASC Research Memo Drafting Low Risk
Live Now

Flux Commentary Drafting

Agent ingests period-over-period data, identifies material variances, generates first-draft explanations by account. Controller edits and approves. Best use of AI in close because it scales: 50 accounts take the same time as 5.

Close Process High Leverage Review Required
Live Now

Policy & Procedure Drafting

Give the agent the existing policy, the new standard or business change, and the audience. It drafts the redlined update. Controller and legal review. Cuts the first-draft cycle from days to hours on even the most complex policy changes.

Policy Design SOX Docs Moderate Risk
Emerging

Reconciliation Exception Triage

Agent classifies unmatched items (timing difference, system error, potential fraud, duplicate) and routes accordingly. Auto-clears the obvious 80%. Surfaces the 20% that needs eyes with a ranked brief. Still requires tuning — false negative rate matters here.

Rec Automation Rule-Based High Volume
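The classify-and-route logic described above can be sketched as a rule-based first pass. This is a minimal illustration — the field names, categories, and thresholds are made up, and a real deployment would calibrate them against the team's own error data:

```python
from dataclasses import dataclass

@dataclass
class UnmatchedItem:
    amount: float                    # absolute difference between the two sides
    age_days: int                    # how long the item has been open
    has_bank_match_next_day: bool    # same amount clears within one day
    duplicate_of_cleared: bool       # identical to an already-cleared item

def triage(item: UnmatchedItem) -> str:
    """Rule-based first-pass classification; thresholds are placeholders."""
    if item.duplicate_of_cleared:
        return "duplicate"            # candidate for auto-clear
    if item.has_bank_match_next_day and item.age_days <= 2:
        return "timing_difference"    # candidate for auto-clear
    if item.amount >= 10_000 or item.age_days > 30:
        return "investigate"          # ranked queue for human review
    return "system_error_suspect"     # route to IT/ops review

items = [
    UnmatchedItem(120.0, 1, True, False),
    UnmatchedItem(50_000.0, 45, False, False),
]
print([triage(i) for i in items])  # ['timing_difference', 'investigate']
```

The point of the sketch is the shape, not the rules: the obvious categories auto-clear, and everything else surfaces with a reason attached — which is what makes the false-negative rate tunable.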
Emerging

Project Status Synthesis

Agent pulls project tickets, email threads, meeting notes, and milestone trackers and drafts a weekly status update with RAG (Red/Amber/Green) scoring. Useful for controllers running 3+ concurrent projects. Still needs human sanity check on the RAG ratings.

Project Mgmt Synthesis Low Stakes
Proceed With Caution

Reserve Calculation Assistance

Agent can run the math, stress-test assumptions, and draft the methodology memo. It cannot make the reserve judgment — that requires management intent, legal exposure awareness, and audit relationship context the model doesn't have. Use with extreme human oversight.

Reserves High Stakes Expert Review

Four Diagrams Every
Controller Should See

The leverage model, the judgment stack, the career divide, and the AI tool layers — visualized.

Leverage Matrix

AI output amplification by work type. Highest on written/volume work. Near zero on judgment and relationships.

Chart: flux commentary, policy first drafts, accounting research, and project synthesis rank highest; rec exception triage sits mid-range; reserve analysis and audit negotiations bottom out at roughly 1.5×.
The Judgment Stack

What agents can handle vs. what requires a human with signing authority. The dividing line is accountability, not complexity.

Audit defense & negotiations
History · relationship · credibility · legal exposure
Human Only
Management intent & estimates
Reserve adequacy · impairment · restructuring
Human Only
Technical accounting positions
Research: AI · Position + defense: Human
Human Gates
Close package sign-off
AI produces · Human reviews · Human signs
Human Gates
Flux commentary & memos
AI drafts · Human edits · Low error cost
AI-Assisted
Research & synthesis
AI leads · Human validates key claims
AI-Led
The Career Divide — 2024 → 2030

Two trajectories from the same starting point. The gap between controllers who internalize AI fluency and those who don't is already visible — and compounds monthly.

Chart: indexed output, 2024–2030 (baseline to +150%). Both trajectories start from the same 2024 point; the AI-fluent controller's curve pulls away from the controller without AI integration, and the gap widens every year.

The gap is already visible in teams with both kinds of people. It is not theoretical and it is not coming — it is here.

The Controller's AI Stack

The layers a working finance controller actually uses. The governance layer matters most and gets the least attention — most AI-in-finance failures are missing review protocols, not model failures.

Governance
Prompt versioning · Review protocols · Audit evidence retention · Control documentation · Change management for prompts
Applications
Memo drafting workflows · Flux analysis templates · Policy redline process · Project status gen · Research synthesis · Stakeholder translation
Tools
Claude / GPT-4o · NotebookLM · Custom GPTs · Cursor / code AI · Perplexity · Deep Research
Context
Personal reference materials · Prior research & writing · ASC / IFRS standards · Project history · Control matrix
Foundation
Large Language Models · Reasoning models · Retrieval systems

What Actually Works.
What Doesn't. No Spin.

The operator view. Based on actual use in finance project work, not demos.

What Agents Do Well
First-draft production at scale

Memos, commentary, policy redlines, status reports, issue briefs. The output needs review, but the blank-page problem is gone. Senior controllers spend time editing instead of drafting.

→ 3–5× leverage on written deliverables
Research synthesis across large document sets

Pulling relevant guidance from ASC, IFRS, external precedents, and your own prior research — all in parallel. Still requires validation, but it eliminates the manual scavenger hunt.

→ 4–6 hour research task → 30–45 minutes
Pattern recognition in high-volume data

Reconciliation exception classification, flux anomaly detection, duplicate transaction flagging. Consistent and tireless in ways humans aren't at the end of a close sprint.

→ Most valuable in high-volume, rule-based work
Explaining complexity to non-finance stakeholders

Feed the agent a technical memo and ask it to produce an executive summary, a board talking point, or an answer to a specific question from legal. Genuinely useful in project work.

→ Underrated use case
What Agents Fail At (Structurally)
Judgment under audit exposure

When the answer requires knowing what your auditor will accept, how aggressive your historical posture has been, and which positions are worth defending — the model has none of that context and confidently fills the gap anyway.

→ Dangerous precisely because outputs look right
Anything requiring management intent

Reserve adequacy, impairment triggers, restructuring determinations. These positions depend on facts in management's heads, not documents. Agents work from documents.

→ Controller must own these, period
Negotiating with external parties

Auditors, regulators, counterparties. The output of an AI can inform your position; it cannot be your position. The relationship, the history, and the credibility are yours.

→ AI is your prep, not your presence
Knowing what it doesn't know

The single most dangerous property of current LLMs for accounting work: confident outputs in areas of genuine uncertainty. The model doesn't know your company, your auditor, your prior positions, or your regulatory environment — and doesn't caveat the gap well.

→ Your review gate is not optional
Quick Reference · Can AI Lead This Task?
Low Stakes / Correctable
High Stakes / Signed Output
High Volume
AI-Led
Flux commentary
Research synthesis
Rec triage routing
Human Gates
Close package review
Policy sign-off
SOX control testing
Low Volume / Judgment
AI-Assisted
Memo drafting
Policy redlines
Project status
Human Only
Reserve judgment
Audit defense
Management estimates
April 2026 · Volume I · ~8 min read

What the AI Controller Actually Looks Like (From Inside the Work)

Most finance AI content is written by people who've never closed a book, run a reserve analysis, or defended a technical position under audit pressure. This is the other kind of content.


The Part of the Job Nobody Writes About

Ask a controller what they do, and they'll tell you they close the books. That's technically true and practically incomplete. The real senior controller job is the other thing: the projects, the technical issues, the implementations, the one-off questions that land on your desk because they don't fit anywhere else in the organization.

New revenue recognition system needs a data validation protocol? That's you. Auditor questioning the reserve methodology? That's you. New entity onboarding with no accounting policy precedent? That's you. ERP cutover with a six-month integration window? That's you. The procedures manual doesn't cover any of this — which is why it takes senior judgment, and why it's where the real hours go.

This is the territory I've been working in. And for the past year, I've been using AI heavily in that work. Not experimenting — actually using it, every day, across real deliverables. This is what I've learned.

The skill that used to separate senior from junior was domain depth. Now it's domain depth plus AI fluency — and the gap between controllers who've internalized both and those who haven't is becoming the most significant career divide in the function.

What It Actually Changes

The simplest way to describe the shift: one senior controller with AI can now produce the output that used to require a small team. A research memo that took a day takes two hours. A policy redline that took a week takes an afternoon. A project status synthesis that required four people's inputs and a coordinator takes one person with the right prompts and a careful review.

That's not hypothetical — it's the actual experience. The leverage ratio of a competent controller just went up by a factor of three to five on written, research, and synthesis work. The people who've internalized this are doing more, moving faster, and spending their senior judgment on higher-order decisions. The people who haven't are still doing manually what their peers are doing with AI.

This is the divide. It's not theoretical and it's not coming — it's already here, in every finance team that has both kinds of people. The gap compounds every month.

What It Doesn't Change (And Why This Matters)

Here's where most AI-in-finance content gets it wrong by omission: the list of things that genuinely don't change is long and important.

Judgment under audit exposure. Management intent. Reserve adequacy. Positions you'll defend in a confrontational audit conversation. Any decision where being confidently wrong is worse than being cautiously uncertain. These require context the model doesn't have: your auditor's historical positions, your company's risk tolerance, your CFO's view on aggressiveness, your relationship with the regional examiner, the prior-period matter that's still in the back of everyone's mind.

The test I use — developed fully in "The Signature Test" below — is accountability: if a human has to sign their name to the output and take responsibility for it in front of an auditor, a board, or a regulator, the AI is a drafting tool, not a decision tool.

The Control Problem Nobody's Naming Yet

Here's what the accounting standards community hasn't caught up to: when an agent drafts your flux commentary and a controller reviews it, what exactly is the control? When the underlying model is updated between periods, is that a change in a key financial reporting system? What does an ITGC look like for a tool that's helping produce work that goes into financial statements?

The Timeline Reality

Big 4 firms will have formal opinions on AI in close processes in roughly 18 months. Regulators will have guidance in approximately three years. Controllers are being asked to make these governance decisions right now with no authoritative framework. The answer will be written by practitioners — not standard-setters — for at least the next two years.

These questions don't have good answers in the current literature, which means they're landing on controllers' desks with no framework — and, as the timeline above suggests, no authoritative guidance is coming soon enough to wait for.

That's the actual frontier of this work — not "will AI replace controllers" but "how do controllers govern AI in the finance function before anyone tells them how to." The answer will be written by the people doing it, not the people advising on it.

The Signature Test

The most useful mental model I've developed for navigating all of this: if a human has to sign their name to the output and take responsibility for it in front of an auditor, a board, or a regulator — the AI is a drafting tool, not a decision tool.

This isn't a conservative posture — it's the correct technical framing for how these systems actually work and where they fail. The signature creates accountability. The accountability requires judgment. The judgment requires context. The context requires a human who has it.

This test doesn't say AI is useless in high-stakes work. It says AI is preparation, not principal. Use it to research the position, draft the memo, stress-test the logic, structure the presentation. But the position is yours. The memo has your name on it. You sign it, you own it.

Why I'm Writing Here

This site is the working log of a controller who uses AI seriously — not as an innovation story, but as a tool for getting complex, judgment-laden work done. I'll write about specific projects, specific failure modes, specific technical questions, and what actually happened when I used an agent for something it wasn't obviously suited for.

No hype. No skepticism theater. The honest working view from inside the function.

If you want to connect and discuss — find me on LinkedIn.

How to Actually Deploy Agents
in a Finance Function

Not a vendor roadmap. A practitioner's sequence, based on what breaks and what doesn't.

01
Phase One · Audit Your Current Work
Map the Work Before You Automate It

Most AI implementations fail not because the technology doesn't work, but because the underlying process was already broken. Before any agent deployment, document what the work actually is — not what the procedure manual says, what actually happens.

  • Identify the 10 highest-volume written deliverables your team produces monthly
  • Classify each: data → analysis → draft → review → sign-off. Where does the time go?
  • Flag which steps require external information the agent can't access
  • Mark every step where a human signature creates legal or audit accountability
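One minimal way to represent that work audit is a simple inventory with accountability flags. Everything here is illustrative — the deliverable names, hour figures, and field names are placeholders, not data from a real control matrix:

```python
# Illustrative work inventory; names, hours, and fields are placeholders.
inventory = [
    {"deliverable": "flux commentary", "monthly_hours": 10,
     "stages": ["data", "analysis", "draft", "review", "sign-off"],
     "signature_stage": "sign-off",     # accountability attaches downstream
     "needs_external_info": False},
    {"deliverable": "audit response letter", "monthly_hours": 4,
     "stages": ["analysis", "draft", "review", "sign-off"],
     "signature_stage": "draft",        # the substance IS the signature
     "needs_external_info": True},
]

# Draft-stage automation candidates: the signature sits downstream of the
# draft, and the agent can access everything the draft needs.
candidates = [w["deliverable"] for w in inventory
              if w["signature_stage"] != "draft"
              and not w["needs_external_info"]]
print(candidates)  # ['flux commentary']
```

Even this crude structure forces the two questions the phase is about: where the time goes, and where the signature lives.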
02
Phase Two · Start With Low-Stakes, High-Volume
Earn Trust With the Easy Wins First

The biggest mistake is deploying agents in high-judgment areas first because that's where the time is. Start with work where errors are correctable and volume is high. Build the review discipline before you raise the stakes.

  • First candidates: flux commentary, policy FAQs, project status synthesis, research first drafts
  • Establish review protocols before deployment, not after — what specifically does the reviewer check?
  • Track error rates by type; this is your calibration data for deciding when to expand scope
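Tracking error rates by type can be as simple as a tagged review log. A sketch, using hypothetical workflow names and error categories:

```python
from collections import Counter

# Hypothetical review log: one entry per AI draft reviewed, tagging any
# substantive error found (None = clean). Categories are illustrative.
review_log = [
    {"workflow": "flux_commentary", "error": None},
    {"workflow": "flux_commentary", "error": "wrong_driver"},
    {"workflow": "research_memo",   "error": "citation_misread"},
    {"workflow": "flux_commentary", "error": None},
    {"workflow": "research_memo",   "error": None},
]

def error_rates(log):
    """Error rate per workflow — the calibration data for scope decisions."""
    totals, errors = Counter(), Counter()
    for entry in log:
        totals[entry["workflow"]] += 1
        if entry["error"]:
            errors[entry["workflow"]] += 1
    return {wf: errors[wf] / totals[wf] for wf in totals}

print(error_rates(review_log))  # e.g. flux_commentary ≈ 0.33, research_memo = 0.5
```

The rates themselves matter less than the trend by workflow type — that is the evidence for when scope can safely expand.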
03
Phase Three · Build Your Control Framework
Document the Agent's Role in Your Process

For any workflow where agent output influences financial reporting or audit evidence, you need a documented control. Not a general AI policy — a specific description of what the agent does, what the human reviewer verifies, and what the escalation path is when the output is wrong.

  • Treat the agent as a preparatory tool, document accordingly in your control matrix
  • Define "review" specifically — what constitutes adequate verification for each workflow type
  • Version-control your prompts — a prompt change is a process change and should be treated that way
  • Consider audit evidence: can you demonstrate what the agent produced vs. what you filed?
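A minimal version of prompt version control is a registry that hashes the prompt text and records a change note, so any edit is detectable and documented like a process change. This sketch is illustrative, not a prescribed tool:

```python
import datetime
import hashlib

# Minimal prompt registry: every change gets a content hash and a change
# note, so a prompt edit is documented like any other process change.
registry = []

def register_prompt(name: str, text: str, change_note: str) -> dict:
    entry = {
        "name": name,
        "sha256": hashlib.sha256(text.encode()).hexdigest()[:12],
        "changed": datetime.date.today().isoformat(),
        "note": change_note,
    }
    registry.append(entry)
    return entry

v1 = register_prompt("flux_commentary", "Draft flux commentary for ...",
                     "initial version")
v2 = register_prompt("flux_commentary",
                     "Draft flux commentary for ... Flag reclasses separately.",
                     "added reclass flagging after a review miss")
print(v1["sha256"] != v2["sha256"])  # True — any text change is detectable
```

The hash is the useful part: it lets you state, in audit evidence, exactly which prompt version produced a given output.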
04
Phase Four · Expand Deliberately
Move Up the Judgment Stack Carefully

Once you have reliable performance in low-stakes workflows and a proven review discipline, you can selectively expand. The boundary to never cross: the agent does not make decisions that require management judgment, legal interpretation, or audit defensibility.

  • Candidate expansions: reserve calculation support (math + stress-testing, not the conclusion), technical accounting position development (research + structure, not the position), audit response drafting (framework + language, not the substance)
  • Maintain the human gate at every point where a signature creates accountability
  • Brief your auditors proactively — they will ask, and having the answer ready is better than discovering the question in fieldwork

Articles & Essays

Operator-grounded. Four pieces published. Click any card to read.

01
Published

What the AI Controller Actually Looks Like (From Inside the Work)

The opening thesis. What changes, what doesn't, and what nobody's saying out loud about the career divide coming in finance.

02
Published

The Controller's AI Stack — What I Actually Use and Why

NotebookLM, base models, prompt versioning, the review discipline — and what doesn't make the cut. A real kit, not a product review.

03
Published

SOX Wasn't Designed for Agents (And Nobody's Saying It Out Loud)

When an agent drafts your flux commentary, what exactly is the control? The ITGC gap, the prompt-change problem, and a framework for today.

04
Published

Stablecoins Are Going to Land on the Controller's Desk Before Anyone's Ready

Merchant acquiring is moving to stablecoin rails. Revenue recognition, FX treatment, reserve methodology — the accounting questions with no clean answers yet.

I'm a finance controller working in payments and acquiring finance. My day-to-day is not just the close — it's the projects, the technical issues, the implementations, and the cross-functional problems that don't fit cleanly into anyone else's lane.

That work spans revenue recognition, scheme economics, reserve methodology, FX exposure, pricing and margin analysis, and the full controller project stack: ERP implementations, SOX control redesign, close acceleration, new entity integrations, technical accounting position development. I've also written the Payments Controller Handbook — a reference-grade resource for controllers in the acquiring finance function — and the Finance PM Handbook on how controllers run change projects.

I've been using AI heavily in finance project work for the past two years. Not experimentally — as an actual tool for getting complex, judgment-laden deliverables done faster and better. I built this site because the content on AI in finance that I was finding was almost entirely written by people who've never had to close a complex subledger, defend a reserve in a tough audit conversation, or make a technical accounting call with real consequences.

This is the other kind of content. Operator-grounded, specific, honest about what doesn't work. If that's useful to you — read, share, and find me on LinkedIn.

Want to Connect?

Find me on LinkedIn. Whether you're thinking through the same problems or just want to discuss what's happening in the space.

Connect on LinkedIn →
Educational Disclaimer: The content on this site is for educational and informational purposes only. Nothing here constitutes professional accounting, legal, tax, or financial advice. All views are those of the author in a personal capacity and do not represent the views of any employer, client, or affiliated organization. Accounting standards, regulatory guidance, and technology capabilities referenced herein are subject to change. Readers should consult qualified professionals before making any accounting, legal, or business decisions. References to AI tools, workflows, and frameworks reflect personal experience and should not be relied upon as authoritative guidance.

Diagnostic Tool

Controller AI
Readiness Assessment

How ready is your finance function for agentic AI — really? Twelve questions. Four categories. Scored results with specific gaps and recommendations.

12 Questions · 4 Categories · ~4 Minutes
Workflow Awareness — do you know what's actually automatable?
AI Fluency — are you and your team actually using it?
Control Framework — can you govern AI use through an audit?
Strategic Positioning — are you building toward this or reacting?
April 2026 · Volume II · ~7 min read

The Controller's AI Stack — What I Actually Use and Why

Not a product review. The actual tools, prompts, and review practices I use in real finance project work — and why the stack looks the way it does.

The Question I Get Most

When other controllers find out I use AI heavily in my work, the first question is almost always: what are you actually using? Not in a trend-chasing way — in a "I want to try this and I don't know where to start" way. So here it is. Not a curated list of tools I think sound impressive, but what I actually open on a given day and what it does for me.

The short answer is that the stack is simpler than people expect, the tools are less exotic than the press coverage implies, and the thing that makes it work isn't the tools at all — it's the review discipline. More on that at the end, because it's the part that matters most and gets the least attention.

The Stack, Layer by Layer

Layer 1 — The base model. I use Claude (primarily) and GPT-4o depending on the task. For long-context work — reading a full contract, working through a complex accounting standard, analyzing a lengthy policy document — Claude is my default. For faster iteration tasks, GPT-4o. I've largely stopped trying to optimize which model to use for each task; at current capability levels both are good enough that the prompt matters more than the model choice.

Layer 2 — Context loading. The single biggest leverage point in my stack is NotebookLM. I've built notebooks for my most frequently referenced materials: the relevant ASC sections I work with regularly, publicly available accounting guidance and standards, prior work I've written, and current project reference materials. When I research a question, I'm not starting from a blank model — I'm starting from a model that already has my specific context. The output quality difference is substantial.

Layer 3 — Structured prompt workflows. Over 18 months I've built a library of prompts for my recurring high-effort tasks: the research memo template, the policy redline workflow, the flux commentary generator, the project status synthesizer. These aren't clever prompts — they're well-structured ones that tell the model exactly what output format I need and what judgment I'll apply on review. I keep them in a versioned markdown file.

Layer 4 — Code and analysis. For anything quantitative — model builds, sensitivity tables, data manipulation — I use Cursor (AI-assisted coding). Being able to write a Python script to process a large transaction file or build a reserve model without spending three hours in Excel has changed how I approach analytical work. The barrier between "thing I can do in Excel" and "thing that requires IT" is mostly gone.
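The kind of one-off script meant here is modest. A sketch that summarizes a transaction extract by account — the column names are assumptions about the export format, and the inline sample stands in for a real file:

```python
import csv
import io
from collections import defaultdict

# Stand-in for a large transaction export; column names are assumed.
sample = io.StringIO(
    "account,amount\n"
    "4000-Revenue,-1200.50\n"
    "4000-Revenue,-800.00\n"
    "6100-Fees,150.25\n"
)

# Sum amounts by account — the Excel-pivot task, scripted.
totals = defaultdict(float)
for row in csv.DictReader(sample):
    totals[row["account"]] += float(row["amount"])

print(dict(totals))  # {'4000-Revenue': -2000.5, '6100-Fees': 150.25}
```

Against a real file you would pass `open("extract.csv")` instead of the StringIO; the point is that a ten-line script replaces an hour of spreadsheet manipulation and is re-runnable next period.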

How I Actually Use Each Tool

Technical accounting research. This is where I get the most time back. The workflow: load the relevant standard sections into NotebookLM, write a precise question that includes the specific fact pattern, ask for the guidance hierarchy. I get a first-pass research synthesis in 15-20 minutes that would have taken 3-4 hours of Codification navigation. Then I spend 45-60 minutes validating, extending, and converting it into a memo. Net time saved: significant. Quality: equal to or better than manual, because the research phase is more systematic.

Flux commentary. I feed the model the period-over-period trial balance, a description of known business drivers, and a prompt that specifies what good flux commentary looks like. First-draft commentary for every material account in one pass. I edit for accuracy and add context the model can't know. Editing takes 20-30 minutes. Drafting used to take 2-3 hours.
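The variance-selection step that precedes drafting can be sketched in a few lines. Placeholder account names and thresholds — not a real materiality policy:

```python
# Prior- and current-period balances; names and figures are illustrative.
prior   = {"revenue": 1_000_000, "cogs": -600_000, "t&e": -20_000}
current = {"revenue": 1_150_000, "cogs": -610_000, "t&e": -45_000}

MATERIAL_ABS = 50_000   # absolute-dollar threshold (placeholder)
MATERIAL_PCT = 0.10     # 10% of the prior balance (placeholder)

# An account earns commentary if it clears either threshold.
material = []
for acct in current:
    delta = current[acct] - prior[acct]
    if abs(delta) >= MATERIAL_ABS or abs(delta) >= MATERIAL_PCT * abs(prior[acct]):
        material.append((acct, delta))

print(material)  # [('revenue', 150000), ('t&e', -25000)]
```

Note the dual threshold: t&e moves only $25k but more than doubles, so it gets commentary while a $10k move in cogs does not. That is exactly the filtering the model is given before it drafts.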

Policy drafting. Most useful when there's an existing document to redline. Load the current policy, describe the change driver, ask for tracked changes with rationale. For complex changes affecting multiple sections, this goes from a multi-day task to a few hours of editing. The model is good at structural logic but not always good at institutional nuance — so I review every proposed change against how the policy is actually interpreted in practice.

Project status synthesis. I usually run 3-5 concurrent finance projects. Weekly status updates used to be 90 minutes. Now I feed the model a structured dump of the week (I keep a rolling notes doc in each project) plus the prior update and any new issues. It drafts. I edit. Total: 20-25 minutes. The output is consistently more readable than what I was writing manually because it naturally prioritizes and doesn't bury the lead.

The Review Discipline That Makes This Work

This is the part that doesn't get written about, and it's the most important part.

Every AI output that enters my work has a review step. Not a skim — a review. For a research memo, I check every ASC citation: does the cited paragraph actually say what the model says it says? More often than you'd expect, the paragraph is real but the interpretation is slightly off. For flux commentary, I verify the model's characterization of each variance against what I know to be true. For policy redlines, I read every proposed change against the existing language.

The review step isn't a friction cost — it's the actual work. AI produces the draft. You do the job. The question is whether you've shifted time from production to judgment, which is always the right direction.

I've also started logging AI errors I catch — not every minor imprecision, but the substantive ones: wrong citation, incorrect interpretation of management intent, missed nuance. This log does two things: it calibrates how much I trust model output in different task types, and it's the foundation of control documentation if anyone ever asks about my process.

What Doesn't Make the Cut

Finance-specific AI tools (the dedicated accounting applications). Most are wrappers around the same base models with a finance-themed interface and a significant price premium. I use base models directly with my own context loaded. The underlying model quality isn't better; the context loading is often worse.

AI for anything audit-facing without a human principal review. Not because the output is necessarily wrong — because the accountability structure requires human ownership. I'll use AI to draft the response; I write the response.

Autonomous agents for financial data. I've tested setups where agents read data, make decisions, and write to outputs without a review step. Not ready. The failure modes are subtle enough that you wouldn't catch them without the kind of review that defeats the purpose of the autonomy.

Building Your Own Stack

Start with one workflow. Pick the one that costs you the most time in first-draft production. For most controllers that's flux commentary or technical accounting research. Build a repeatable prompt for it. Use it on every instance for 30 days. Develop the review checklist. Then decide if it earns a permanent place.

Invest in context before tools. A good prompt with your own reference materials — relevant standards, guidance you've researched, prior work you've written — loaded in context will outperform a specialized tool that knows nothing about your domain. NotebookLM is free and effective. Build your notebooks first.

Document your prompts like process documents. If you change a prompt, note what changed and why. If an AI output enters something reviewed by auditors, be able to describe your process, including what instructions you gave the model and what your review step was.

The stack isn't the point. The leverage is. A simple stack with disciplined review practice will outperform a sophisticated one used carelessly every time.

May 2026 · Volume III · ~9 min read

SOX Wasn't Designed for Agents (And Nobody's Saying It Out Loud)

When an agent drafts your flux commentary, what exactly is the control? The questions your auditors will start asking — and the answers the literature doesn't have yet.

The Problem Nobody's Naming

In the last two years, controllers at companies of every size have quietly started using AI tools in their close and reporting processes. Flux commentary drafted by Claude. Technical accounting research synthesized from ASC text by a language model. Reconciliation exceptions triaged by an agent. Policy documents redlined by a model that read the original and the change driver.

None of this is controversial in practice — the outputs are good, the time savings are real, and the humans are reviewing before anything gets filed. But here is the question that finance teams are not answering publicly, often because they haven't asked it privately: when this work product enters your close package or audit evidence, what exactly is the control?

SOX was designed for a world where material financial reporting processes were performed by people and systems whose change management, access controls, and testing regimes were well-understood. PCAOB AS 2201 contemplates people following procedures and software executing coded logic. It does not contemplate a probabilistic language model that produces different outputs to the same input, whose underlying parameters may change between periods without a traditional system change, and whose "procedure" is a natural language prompt that a controller may have updated last Tuesday.

This gap is real. It is landing on controllers' desks right now. And the accounting community's guidance on how to address it is, charitably, nascent.

What Happens When an Agent Assists in Your Close

Walk through a concrete scenario. Your team uses Claude to draft first-pass flux commentary. The model is given the trial balance movement, a description of known business drivers, and a prompt specifying the output format. A senior accountant reviews the output, edits for accuracy, and incorporates it into the management commentary. The CFO reviews and approves. The commentary goes into the MD&A disclosure.

Under your current ICFR framework, you likely have a control that says something like: "Management reviews and approves period-end flux commentary prior to inclusion in external reports." The control owner is the CFO or Controller. The evidence is the sign-off.

But what did the control actually catch? If the AI subtly mischaracterized a variance — attributed a revenue decline to business mix when it was actually an accounting reclassification — would the review step have caught it? Maybe. Would it always? That depends on whether you've defined what the review step entails. This is the first problem: the existing control may be nominally present but functionally weaker, because the reviewer is now editing rather than originating, and editing tends to anchor on what's already there.

The ITGC Question

IT General Controls provide assurance that systems used in financial reporting are operating reliably and that changes to those systems are controlled. Standard ITGC categories: access controls, change management, computer operations, data backup/recovery.

Change management is the sharpest edge. When a model provider updates the underlying model, that is a change to a system influencing your financial reporting process. Does it need to go through your IT change management process? The answer isn't obvious. The model change wasn't initiated by your company, wasn't communicated through your normal change channels, and may have changed your AI-assisted workflow outputs without anyone noticing.

The Prompt Change Problem

When your controller updates the prompt used to generate flux commentary, is that a process change requiring documentation and approval? A prompt is the instruction set for a key part of your reporting process. Changing it could change the outputs. Most companies are treating prompt changes informally — undocumented, no change management. That is a control gap.

Access controls. Who can modify the prompts used in close-critical processes? Is there segregation of duties between the person writing the prompt and the person reviewing the output? At most companies right now: no. The same person writing the prompt is reviewing the output. This may be acceptable given the overall review structure, but it's a question an auditor could reasonably ask.

Audit trail. For a traditional automated control, there's a system log. For AI-assisted work, what's the evidence? You have the human's review sign-off. You may or may not have a record of what the model produced before the human edited it. You almost certainly don't have a record of the specific model version. If an error surfaces a year later and auditors want to understand the process, what can you show them?
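This evidence gap can be closed cheaply. As a minimal sketch (the schema and every field name here are hypothetical, not a standard), each AI-assisted work product could get one structured record capturing the model version, the prompt version, and a fingerprint of the unedited output:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class AIWorkEvidence:
    """One evidence record per AI-assisted work product (hypothetical schema)."""
    process: str            # e.g. "Q2 flux commentary"
    model_version: str      # e.g. "Claude Sonnet 4"
    prompt_version: str     # ties back to the prompt change log
    raw_output_sha256: str  # fingerprint of the unedited model output
    reviewer: str
    reviewed_at: str        # UTC timestamp of the sign-off

def make_evidence(process, model_version, prompt_version, raw_output, reviewer):
    # Hash the pre-edit draft so its content can be proven later;
    # the full text itself is archived separately from the close package.
    digest = hashlib.sha256(raw_output.encode("utf-8")).hexdigest()
    return AIWorkEvidence(
        process=process,
        model_version=model_version,
        prompt_version=prompt_version,
        raw_output_sha256=digest,
        reviewer=reviewer,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )

record = make_evidence(
    "Q2 flux commentary",
    "Claude Sonnet 4",
    "prompt v1.3",
    "Revenue declined 4% quarter over quarter, driven primarily by...",
    "J. Smith, Controller",
)
print(json.dumps(asdict(record), indent=2))
```

Hashing the pre-edit draft means you can later demonstrate exactly what the model produced before human editing, which is the piece of the audit trail most teams are not keeping today.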

What Your Auditors Will Start Asking

If your auditors don't know you're using AI in your close process: they'll find out eventually, through inquiry or when an error surfaces. The conversation where you disclose this in the context of a clean audit is better than the conversation in the context of a restatement.

If your auditors know but haven't asked pointed questions yet: they're likely still developing their views. This is the window to shape the conversation — to show them your control framework before they form opinions about what they'd need to see.

The questions they'll eventually ask: What AI tools are used in your financial reporting process and for what tasks? How do you ensure the outputs are accurate? What controls exist over changes to the AI tools or prompts? What would you do if you discovered an AI output error in a prior period's disclosures? How do you evidence a key control when it involves AI assistance?

Most controllers cannot fully answer these today. That's fine — it's early. But controllers who have a thoughtful answer are in a materially better position.

A Framework for Today

Document AI use in process narratives. Every process narrative that includes AI assistance should say so explicitly. "The period-end flux commentary is drafted using an AI language model based on trial balance movement data and known business drivers. The output is reviewed and approved by [role] prior to inclusion in external reports." Simple, honest, creates the documentation base for auditor inquiry.

Define review specifically. The generic "management reviews" control is insufficient when the output being reviewed is AI-generated. Document what the reviewer actually checks: every material variance characterized by the model against source data; all citations or references verified; judgment areas flagged for additional scrutiny. Specific enough that a different person could perform it to the same standard.

Version-control your prompts. Treat a prompt used in a key control process as a process document. Keep a version log: what changed, when, why, who approved. This costs almost nothing and closes a real gap.
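The log can be as simple as an append-only file. A minimal sketch, with hypothetical field names (`prompt_id`, the JSON Lines format, and the approver field are illustrative choices, not a requirement):

```python
import json
from datetime import date

def log_prompt_change(path, prompt_id, version, what_changed, changed_by, approved_by):
    """Append one entry to an append-only prompt change log (JSON Lines file)."""
    entry = {
        "prompt_id": prompt_id,        # e.g. "flux-commentary"
        "version": version,            # e.g. "1.4"
        "date": date.today().isoformat(),
        "what_changed": what_changed,
        "changed_by": changed_by,
        "approved_by": approved_by,    # ideally someone other than the author
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_prompt_change(
    "prompt_changes.jsonl",
    prompt_id="flux-commentary",
    version="1.4",
    what_changed="Added instruction to cite GL account numbers for each variance",
    changed_by="A. Lee, Senior Accountant",
    approved_by="J. Smith, Controller",
)
```

Append-only matters: the point is a trail an auditor can walk, not a file anyone can quietly rewrite.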

Document the model version when it matters. For key control processes, note the model name and version in your audit evidence. "Claude Sonnet 4, prompt version 1.3, [date]" is a defensible audit trail.

Brief your auditors before year-end. Don't wait for the audit to surface your AI use. Schedule a discussion during interim procedures. Show them your process, your control documentation, your thinking on the ITGC questions. Most audit teams will appreciate the transparency.

The Gap That Needs Filling

The honest conclusion: the control framework for AI-assisted financial reporting is being written right now, by practitioners, in real time, without authoritative guidance. PCAOB hasn't issued staff guidance. The Big 4 have internal positions that haven't been published. The SEC has been signaling concern but hasn't given specific control requirements.

Controllers are the de facto standard-setters for this question for the next 12-24 months. The frameworks we build, the conversations we have with auditors, the documentation we create — these will form the template that eventually gets codified into guidance. That's an unusual amount of agency to have in the accounting world. It's worth using it thoughtfully.

May 2026 · Volume IV · ~8 min read

Stablecoins Are Going to Land on the Controller's Desk Before Anyone's Ready

Merchant acquiring is moving to stablecoin rails. The accounting questions are real, the guidance is sparse, and the timeline is shorter than most controllers think.

The Timeline Is Shorter Than You Think

Most controllers outside of crypto-native companies think of stablecoin accounting as a niche problem — someone else's issue, relevant to fintechs and exchanges but not to mainstream corporate finance. That view is becoming wrong faster than most people realize.

The migration of payment infrastructure toward stablecoin rails is not a future event — it's happening now, in production, at scale. PayPal's PYUSD is live for merchant payments. Visa and Mastercard are running live stablecoin settlement pilots. Stripe acquired Bridge specifically to accelerate stablecoin payment infrastructure. Several large acquiring processors are actively testing stablecoin-denominated merchant settlement.

The controller who works in payments, treasury, or acquiring finance and thinks "we'll deal with this when it arrives" is likely already behind. The question is not whether stablecoins will require accounting attention; it's whether you'll be building the framework or responding to an urgent CFO request after the fact.

What's Actually Arriving

Merchant settlement in stablecoins. Some acquirers are beginning to offer merchants the option to receive settlement in USDC rather than via ACH. The accounting question: is this a change to the liability held between authorization and settlement? What's the carrying value of a stablecoin receivable? How does a peg break affect the settlement obligation?

Treasury diversification into stablecoins. CFOs are asking whether holding a portion of working capital in stablecoins (for faster cross-border movement, for yield on idle cash) makes sense. The accounting question: what standard applies? Fair value? Amortized cost? Neither maps cleanly.

B2B payments on stablecoin rails. Cross-border vendor payments, intercompany settlements, and supply chain finance are moving toward stablecoin infrastructure because it's faster and cheaper than correspondent banking. Each transaction has an accounting moment: what rate do I use, how do I recognize FX gain/loss if any, what's the functional currency question?

Customer-facing acceptance. Some consumer-facing businesses are beginning to accept USDC as payment. Revenue recognition question: is USDC-denominated revenue recognized at spot rate at transaction date? What if the stablecoin depegs between transaction and settlement?

The Accounting Questions Without Clean Answers

Here is where the honest assessment has to happen: the GAAP framework for stablecoins is incomplete. Not "evolving" in the sense of getting better — incomplete in the sense that the relevant standards weren't written with stablecoins in mind, and the analogies are imperfect enough that reasonable accountants disagree.

The FASB issued ASU 2023-08 in December 2023, requiring fair value accounting for certain crypto assets. But ASU 2023-08 explicitly applies to cryptocurrency that meets the definition of an intangible asset under ASC 350 — and there is active debate about whether a fiat-backed stablecoin meets that definition or falls somewhere else (a financial instrument? a deposit? a prepaid claim?). The FASB deliberately excluded "crypto assets that provide the holder with enforceable rights to receive a specified amount of fiat currency" from the scope of ASU 2023-08. That exclusion should cover dollar-pegged stablecoins, but the standard's application still requires judgment.

So where does that leave the controller holding USDC on behalf of merchants, or carrying PYUSD as a treasury asset? Absent clear authoritative guidance, it leaves them making a defensible policy election that is consistently applied and well documented.

Revenue Recognition and FX Treatment

If your company accepts USDC as payment, the revenue recognition question is technically an ASC 606 question. The transaction price is the amount of consideration you expect to receive. For a USD-pegged stablecoin at a 1:1 peg, the answer seems simple. But the peg is not contractual — it's maintained by the issuer's reserves and redemption mechanism. A temporary depeg creates variable consideration that, strictly applied, would require a constraint analysis under ASC 606-10-32-11.

The Peg Break Scenario

March 2023: USDC briefly traded at $0.87 following SVB's failure (Circle held approximately $3.3B in reserves at SVB). Companies that had recognized revenue at $1.00 and held the stablecoin through the depeg had a real measurement question with no established policy to apply. Building the policy before the next event is the right sequence.
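The exposure in a scenario like that is easy to quantify even while the treatment is unsettled. A worked example with hypothetical figures, using the March 2023 depeg price:

```python
# Hypothetical figures; the open question is the treatment, not the arithmetic.
units_received = 100_000    # USDC received as payment, revenue booked at $1.00/unit
recognized_revenue = units_received * 1.00

depeg_price = 0.87          # USDC's approximate low during the March 2023 SVB event
balance_at_depeg = round(units_received * depeg_price, 2)
measurement_exposure = recognized_revenue - balance_at_depeg

print(f"Revenue recognized:  ${recognized_revenue:,.0f}")
print(f"Balance at $0.87:    ${balance_at_depeg:,.0f}")
print(f"Exposure:            ${measurement_exposure:,.0f}")  # $13,000
```

Whether that $13,000 is an impairment, an FX-style remeasurement, or simply a disclosed exposure is exactly the policy question with no established answer.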

The FX question is equally contested. ASC 830 applies to transactions denominated in a currency other than the entity's functional currency. If a company's functional currency is USD and it holds USDC — is that a foreign currency transaction? The intuitive answer is no, but USDC is not USD. It is a claim on a reserve, redeemable for USD at 1:1, subject to the issuer's solvency and operational continuity. Whether that claim should be treated as a USD equivalent or as a financial instrument denominated in a synthetic unit that happens to be pegged to USD is not settled.

My current view: a well-designed, dollar-pegged stablecoin with strong reserve transparency is sufficiently USD-equivalent that applying ASC 830 remeasurement would produce misleading financial information — showing volatility that doesn't correspond to any real economic exposure. But this is a judgment call that needs documentation, and one where your auditors' position matters as much as yours.

Reserve Treatment for Stablecoin Holdings

The reserve question is one of the least-discussed and most practically consequential aspects of stablecoin accounting. If your company holds stablecoins as a treasury asset or as a float against pending merchant settlements, what reserve — if any — do you carry against that balance?

The peg break history makes this non-trivial. If you're treating your stablecoin balance as a USD equivalent with no reserve, you need a policy basis for that treatment. The most defensible basis is a combination of: the issuer's published reserve attestation, the historical stability of the peg, and an assessment of the issuer's operational resilience. None of these are guaranteed, which is why a small credit reserve or disclosure of the exposure is increasingly standard practice for material balances.
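One way to size such a reserve is a simple expected-loss calculation. This is a sketch, not a prescribed method, and the inputs are purely illustrative: the 13% severity mirrors the March 2023 USDC low of roughly $0.87, while the 1% probability is an assumed figure, not a published estimate.

```python
def stablecoin_reserve(balance, depeg_probability, loss_given_depeg):
    """Expected-loss style reserve: balance x probability of a depeg x severity."""
    return round(balance * depeg_probability * loss_given_depeg, 2)

# Illustrative inputs only: probability and severity are assumptions.
reserve = stablecoin_reserve(5_000_000, depeg_probability=0.01, loss_given_depeg=0.13)
print(f"Reserve against $5.0M balance: ${reserve:,.0f}")  # $6,500
```

Even a rough calculation like this forces the two judgments that matter: how likely you think a depeg is, and how deep you think it would go.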

For companies holding stablecoins specifically against merchant settlement obligations, there's an additional question: if the stablecoin depegs between the time you receive it and the time you settle, who absorbs the difference? The answer should be in your contracts with merchants and in your accounting policy — preferably before the scenario arises.

What to Do Now

Build the policy before the transaction. Get ahead of the treasury and payments teams — they're evaluating stablecoin solutions now. A one-page policy covering: classification of the instrument, recognition approach, FX treatment election, peg break procedures, and disclosure requirements will serve you far better than making these calls under deadline pressure.

Pick your standard of analogies. Since there's no perfect guidance, you're reasoning by analogy. The strongest analogies: demand deposits (for reserve-backed fiat stablecoins), money market instruments (for yield-generating stablecoins), prepaid assets (for network-specific tokens). Document which analogy you've applied and why. Consistency matters as much as the choice.

Have the auditor conversation now. Your external auditors are forming views on stablecoin accounting as their clients encounter it. Getting into that conversation now — sharing your policy, getting their feedback, understanding where they'd push back — is vastly better than presenting a completed policy to ratify at year-end under time pressure.

Monitor the standard-setters. The FASB has stablecoin and digital asset accounting on its technical agenda. SEC Staff Accounting Bulletin 122 rescinded SAB 121's controversial on-balance sheet treatment for custodied crypto. This space is moving fast enough that a policy written today may need to be revisited within 12 months.

The controllers who handle this well are those who engage with it as a genuine accounting problem — uncertain, judgment-intensive, requiring real analysis — rather than either avoiding it or dismissing it as a crypto curiosity. The accounting questions are real. The transactions are coming. The framework is yours to build.

Prompt Tool

Finance Prompt Forge

Optimized, copy-ready prompts by workflow type. Pick a category, add context, get three variants — Quick, Research, and Deep Dive. No API needed.
