The compliance officer of a 35-person boutique investment advisory has the firm’s draft 2026 marketing copy on her desk. Two of the proposed lines mention “AI-driven” portfolio insights. She has spent the morning reading recent SEC enforcement summaries, including the 2024 settlements with Global Predictions Inc. and Delphia USA Inc., both firms found to have falsely claimed AI capabilities in their marketing.
She knows what the firm’s models actually do. She knows what the marketing claims will imply they do. The gap between those two is the thing she has to write a memo about by Friday. Her concern is specific: claims the firm cannot substantiate.
How fast is the sector actually moving?
The Federal Reserve’s monitoring of AI adoption in the US economy puts the financial sector at approximately 30 percent operational adoption by the end of 2025, with year-on-year growth of around 30 percent as of November 2025. Alongside professional services at 33 percent, financial services is one of the two highest-adopting sectors in the US economy.
The size pattern matters at this firm scale. McKinsey’s 2025 Global Survey on AI shows 88 percent of respondents say their organisations are regularly using AI in at least one business function, but the scaling phase is concentrated in larger firms. Nearly half of respondents from companies with $5 billion-plus revenue have reached the scaling phase, compared to 29 percent of those under $100 million.
For 10 to 50 staff firms (boutique advisory, fintech, specialist payments, small investment management), AI use is concentrated in the experimentation-to-piloting phase. Active use of generic tools (Microsoft Copilot, ChatGPT for non-client-data work) is widespread. Production deployments on regulated workflows (AML, KYC, fraud detection, compliance document review) are still uncommon but growing.
The joint Bank of England/FCA AI Survey provides ongoing visibility into adoption patterns across the UK financial sector, though it does not publish figures by employment band. The pattern at SME scale tracks the overall sector trajectory: regulated workflows lag general productivity workflows by 6 to 12 months.
What is the SEC’s “AI washing” enforcement angle?
The SEC has begun bringing enforcement actions over what the New York State Bar Association has called “AI washing”: firms making AI capability claims they cannot substantiate. Recent named settlements include Global Predictions Inc. and Delphia USA Inc., both found to have falsely claimed AI capabilities in marketing material. The exposure is on unsubstantiated claims; AI use itself is not the trigger.
The mechanic behind the enforcement is straightforward. For registered advisers, the Marketing Rule (Advisers Act Rule 206(4)-1) prohibits advertisements containing untrue statements of material fact, and SEC Rule 10b-5 prohibits material misstatements and omissions in connection with the purchase or sale of securities; the Global Predictions and Delphia settlements were brought under the Advisers Act. Marketing copy that says a firm uses “AI-driven” portfolio insights when the underlying methodology is statistical regression from 2003 is a material misstatement. The audit trail is the practical defence. Firms that can demonstrate, with logs and model documentation, what their AI actually does are not exposed. Firms that cannot are.
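A minimal sketch of what “logs and model documentation” can mean in practice: every model decision is recorded with the inputs, model version, and score that produced it, so a reviewer can later reconstruct what the system actually did. The field names and file format here are illustrative assumptions, not a prescribed standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_name: str, model_version: str,
                 inputs: dict, score: float, outcome: str) -> None:
    """Append one audit record per model decision (JSON Lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "inputs": inputs,    # the features the model actually saw
        "score": score,      # the raw model output
        "outcome": outcome,  # e.g. "flagged_for_review", "cleared"
    }
    # Hash of the serialised record gives a cheap integrity check.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "aml_monitor", "2026.01",
             {"customer_id": "C-1042", "txn_amount": 9400.0},
             score=0.87, outcome="flagged_for_review")
```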
The “AI washing” frame matters because it shifts the regulatory question from “are you using AI?” to “can you substantiate what you say it does?”. A firm using AI legitimately on AML or document review faces almost no SEC exposure as long as the marketing matches the reality. A firm with thin or absent AI use plus aspirational marketing copy is exposed regardless of intent.
For a 35-person boutique advisory writing 2026 marketing copy, the practical move is to audit every “AI” claim in the draft against what the firm can demonstrate. If the underlying model is statistical, the marketing should say statistical. If the firm uses generative AI to draft client communications, it can say so. The principle is that claims must trace to documented capability.
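One way to make “claims must trace to documented capability” mechanical is a capability register that every line of marketing copy must point at. The register entries, claim strings, and matching rule below are invented for illustration; the check is the point, not the names.

```python
import re

# Hypothetical capability register: every marketing claim must trace to an
# entry that is documented, and "AI" wording must match what the entry does.
CAPABILITY_REGISTER = {
    "stat_portfolio_model": {"uses_ai": False,
                             "doc": "regression-based analytics, documented 2023"},
    "genai_client_drafts":  {"uses_ai": True,
                             "doc": "LLM-drafted client letters, human-approved"},
}

DRAFT_CLAIMS = [
    ("AI-driven portfolio insights", "stat_portfolio_model"),
    ("Generative AI assists our client communications", "genai_client_drafts"),
    ("Proprietary machine-learning alpha engine", None),
]

for copy, cap_id in DRAFT_CLAIMS:
    cap = CAPABILITY_REGISTER.get(cap_id)
    claims_ai = bool(re.search(r"\bAI\b|machine.learning", copy, re.IGNORECASE))
    if cap is None:
        print(f"REWRITE: {copy!r} traces to no documented capability")
    elif claims_ai and not cap["uses_ai"]:
        print(f"REWRITE: {copy!r} says AI; register says {cap['doc']}")
    else:
        print(f"OK:      {copy!r} ({cap['doc']})")
```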
Three use cases that work at SME scale
Three use cases produce measurable returns at 10 to 50 staff financial services firms today: AML and customer due diligence automation, fraud detection and payment risk scoring, and regulatory compliance document review with regulatory change monitoring. Each operates on structured data the firm already has, integrates with existing compliance systems, and keeps human reviewers in the final-determination position.
AML and customer due diligence automation is the highest-volume use case for advisory and payments firms. AI systems analyse transaction data for patterns and anomalies, automate Suspicious Activity Report generation, and verify customer identities against multiple data sources. Strategy.com’s analysis of AML compliance frames the operating model: AI flags candidates for investigation; compliance staff make the final determination. ML models for transaction monitoring must remain “reasonably designed to detect suspicious activity” under BSA/AML rules.
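A minimal flag-don’t-decide sketch, assuming simple per-customer baselines rather than a production transaction-monitoring model: the code nominates outliers for a human queue and decides nothing itself.

```python
from statistics import mean, stdev

def flag_candidates(history: list[float], recent: list[float],
                    z_threshold: float = 3.0) -> list[tuple[float, float]]:
    """Return (amount, z_score) pairs for recent transactions that sit far
    outside the customer's historical baseline."""
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for amount in recent:
        z = (amount - mu) / sigma if sigma else 0.0
        if abs(z) >= z_threshold:
            flagged.append((amount, round(z, 1)))
    return flagged

history = [120.0, 90.0, 150.0, 110.0, 130.0, 95.0, 140.0]
recent = [125.0, 9400.0, 105.0]
for amount, z in flag_candidates(history, recent):
    # Routed to a human queue, never auto-filed as a SAR.
    print(f"Review queue: £{amount:,.2f} (z = {z})")
```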
Fraud detection and payment risk scoring is the second use case, particularly for fintech and payments firms processing thousands of transactions daily. AI scores transactions in real time, flags high-risk patterns (unusual merchant, high velocity, card-not-present), and routes flagged transactions to human review. The competitive return is in reduced false declines as well as reduced fraud. A processor handling £10 million in monthly volume at 0.5 percent fraud loss saves £25,000 per month in direct losses if the fraud rate drops to 0.25 percent, before counting revenue recovered from fewer false declines.
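A sketch of the real-time scoring pattern described above, with invented feature weights and thresholds. The middle “human review” band is what protects against false declines: only high-confidence scores auto-decline.

```python
# Hypothetical weighted rule score; production systems learn these weights.
def risk_score(txn: dict) -> float:
    score = 0.0
    if txn["card_not_present"]:
        score += 0.3
    if txn["txns_last_hour"] > 5:                           # high velocity
        score += 0.3
    if txn["merchant_category"] not in txn["usual_categories"]:
        score += 0.2
    if txn["amount"] > 3 * txn["avg_amount"]:               # unusual size
        score += 0.2
    return score

def route(txn: dict) -> str:
    s = risk_score(txn)
    if s >= 0.7:
        return "decline"
    if s >= 0.4:
        return "human_review"  # the band that keeps false declines down
    return "approve"

txn = {"card_not_present": True, "txns_last_hour": 7, "amount": 900.0,
       "avg_amount": 120.0, "merchant_category": "crypto",
       "usual_categories": {"grocery", "fuel"}}
print(route(txn))  # -> "decline" (0.3 + 0.3 + 0.2 + 0.2 = 1.0)
```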
Regulatory compliance document review and change monitoring is the third use case. AI monitors regulatory updates (FCA announcements, SEC rule changes), flags updates relevant to the firm, and scans incoming compliance documents (client agreements, attestation forms) for compliance gaps. The use case is high-value because compliance work in many smaller firms is done by staff with high turnover; AI reduces the manual document review burden materially.
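A toy version of the document-scan half of this use case, assuming a register of required clauses per document type. Production tools use NLP rather than keyword patterns; the flag-and-route control flow is what the sketch shows.

```python
import re

# Hypothetical clause register: names and patterns are invented examples.
REQUIRED_CLAUSES = {
    "client_agreement": [
        ("fee disclosure", r"\bfee(s)?\b.*\bdisclos"),
        ("complaints process", r"\bcomplaint"),
        ("data protection", r"\bdata protection\b|\bGDPR\b"),
    ],
}

def scan(doc_type: str, text: str) -> list[str]:
    """Return the required clauses the document appears to be missing."""
    return [name for name, pattern in REQUIRED_CLAUSES[doc_type]
            if not re.search(pattern, text, flags=re.IGNORECASE)]

agreement = "Fees are disclosed in Schedule A. Complaints go to the CCO."
for gap in scan("client_agreement", agreement):
    print(f"Possible gap, route to a human reviewer: {gap}")
# -> flags only "data protection"
```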
What does the regulatory framework actually require?
The federal framework for US financial services AI deployment includes UDAAP (under the Consumer Financial Protection Act) for automated customer interactions, FCRA for credit scoring disclosures, GLBA for privacy and information security around nonpublic personal information, and BSA/AML for transaction monitoring. At the state level, the Colorado AI Act takes effect June 30, 2026, and California’s CCPA covers automated decision-making.
The Bank of England’s TRUSTED framework names the bar for UK firms: Targeted, Reliable, clearly Understood, Secure, stress-Tested, supported by Ethical guidance, and Durable. Stress-testing is the element most SME deployments fail to address before going live. A firm that has tested its AML model under benign conditions but never under adversarial conditions has not stress-tested it.
The Colorado AI Act is the most prescriptive new state-level law for financial services. From June 30, 2026, developers and deployers of high-risk AI systems must make public-facing disclosures, notify consumers of AI use, conduct impact assessments, and use “reasonable care” to prevent algorithmic discrimination. California’s CCPA already requires pre-use notice, a right to opt out of automated decision-making, and a right to appeal automated decisions. Smaller firms with multi-state customer bases face the higher bar of the strictest applicable state law.
The federal-state interaction matters. UDAAP-driven enforcement (FTC, CFPB) and SEC enforcement on AI claims operate alongside state law. Firms with multi-state exposure must build for the strictest applicable rule at the customer level.
Why is “stress-tested” the gate most firms miss?
Stress-tested is the part of the Bank of England’s TRUSTED framework that most SME deployments quietly skip. A model that performs well under benign conditions can fail in market stress, regulatory change, or adversarial inputs. The cost of finding out in production rather than in testing is the cost the framework exists to prevent.
For an AML model, stress testing means running the model against synthetic adversarial inputs (deliberately structured to evade detection) and against historical scenarios from the most recent stress events. For a fraud detection model, it means running against periods of market volatility and known fraud campaigns. For a credit decision model, it means running against the demographic edges of the customer base and the regulatory definitions of protected characteristics.
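A sketch of one adversarial stress case for an AML monitor: structuring, i.e. splitting a large transfer into amounts just under a reporting threshold. The naive per-transaction rule below is exactly the kind of model that passes benign testing and fails this case; the threshold value and monitor interface are assumptions for illustration.

```python
import random

REPORTING_THRESHOLD = 10_000.0

def synth_structuring(total: float, n: int, seed: int = 7) -> list[float]:
    """Split `total` into n transactions, each just below the threshold."""
    rng = random.Random(seed)
    base = total / n
    assert base < REPORTING_THRESHOLD, "split finer to stay under threshold"
    return [round(base * rng.uniform(0.9, 1.1), 2) for _ in range(n)]

def naive_monitor_flags(txns: list[float]) -> list[float]:
    """A per-transaction threshold rule: passes benign tests, fails here."""
    return [t for t in txns if t >= REPORTING_THRESHOLD]

txns = synth_structuring(total=45_000.0, n=5)
caught = naive_monitor_flags(txns)
print(f"{len(txns)} structured txns, {len(caught)} flagged")  # 5 txns, 0 flagged
# A stress-tested monitor aggregates per customer per day and catches this.
```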
Most boutique firms don’t have the in-house capability to run this kind of testing themselves. Vendors selling specialist financial AI tooling typically include some level of stress testing in their compliance documentation. Firms using generic AI on regulated workflows generally don’t get this layer at all. The fix is either to use specialist tooling with documented stress testing, or to procure stress testing as a separate service from a compliance technology vendor.
What does the maths look like for a 35-person advisory?
AML and customer due diligence automation pays back within a quarter at typical advisory firm scale. A 35-person firm onboarding 100 to 200 customers per month saves 5 to 10 hours per week on CDD at typical compliance staff rates of £50 per hour, or £250 to £500 per week recovered. An implementation cost of £3,000 to £8,000 therefore pays back in 8 to 16 weeks on mid-range assumptions, 6 weeks at best and 32 at worst.
Fraud detection at processor scale pays back faster. A processor handling £10 million in monthly volume at 0.5 percent fraud loss is losing £50,000 per month to fraud. A model that halves the fraud rate to 0.25 percent recovers £25,000 monthly. An implementation cost of £5,000 to £15,000 pays back within the first month at that recovery rate, and inside three months even if the model delivers only a quarter of the projected improvement.
Regulatory document review pays back on a similar curve at typical compliance team scale. A 10 to 50 staff firm with one or two compliance staff saves 5 to 10 hours per week on document review at £50 per hour, worth £250 to £500 per week recovered. An implementation cost of £2,000 to £5,000 pays back in 8 to 12 weeks on mid-range assumptions, 4 weeks at best and 20 at worst.
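The payback arithmetic behind the three paragraphs above, made explicit so the best- and worst-case bounds are visible:

```python
# Payback bounds for the three use cases, using the figures quoted above.
def payback_weeks(impl_cost: float, weekly_saving: float) -> float:
    return impl_cost / weekly_saving

# AML/CDD: £250-£500/week recovered, £3,000-£8,000 to implement
print(payback_weeks(3_000, 500), payback_weeks(8_000, 250))  # 6.0, 32.0 weeks

# Fraud: £10m monthly volume, fraud rate halved from 0.5% to 0.25%
monthly_recovery = 10_000_000 * (0.005 - 0.0025)             # £25,000/month
print(15_000 / monthly_recovery)                             # 0.6 months

# Document review: £250-£500/week recovered, £2,000-£5,000 to implement
print(payback_weeks(2_000, 500), payback_weeks(5_000, 250))  # 4.0, 20.0 weeks
```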
The caveats are sharper in financial services than in other sectors. ROI calculations assume clean structured customer data; many smaller firms have data fragmentation that adds 2 to 3 months of pre-deployment work. Audit-heavy validation and ongoing monitoring consume 20 to 30 percent of nominal time savings. Smaller firms also face higher per-customer compliance overhead than larger firms, which limits scale-economy returns.
What is the actual next move?
The next move for the 35-person advisory is the marketing audit described above: check every “AI” claim in the 2026 draft against what the firm can substantiate, downgrade the language wherever the underlying methodology is statistical, and state plainly where generative AI is genuinely in use.
Once the marketing audit is done, the next operational move is to pick the highest-friction regulated workflow (usually AML/CDD for advisories, fraud for payments, regulatory document review for any compliance team) and run a 60-day pilot on a specialist platform with stress testing documented. The audit trail and stress test log become the firm’s defence if the regulator asks, and the basis for the firm’s wider AI policy.
The compliance officer with the marketing draft and the SEC enforcement summaries on her desk is doing exactly the right work. The 30 percent sector adoption tells her the firm is well within the band where AI is normal. The SEC enforcement actions tell her the firm’s marketing must match its reality. The advice from peer compliance officers in the same band is consistent: write the policy from real internal evidence, and update the marketing copy to match.
If you would like to walk through this for your firm specifically, book a conversation.