AI Is Rewiring Insurance and Financial Services Faster Than the Law

Banks adopt AI at 52% while insurers lag at 8%, but deepfake fraud and new regulators are reshaping how financial services use AI in 2026.

Artificial intelligence reshaping insurance and financial services in 2026
  • South African banks have adopted AI at 52%, while insurers sit at just 8%, according to a joint FSCA and Prudential Authority report.
  • AI-related incidents reported worldwide jumped 56.4% in 2024, according to Stanford’s 2025 AI Index.
  • A California jury hit Meta and YouTube with $3 million in damages in March 2026 over algorithmic recommendations.
  • Insurers are quietly moving from “silent” AI cover to explicit “affirmative” AI policies as deepfake claims rise.

The Claims Desk Has Already Been Replaced by an AI Pipeline

The traditional insurance claims desk is being dismantled in real time. Webber Wentzel partners Kim Rew and Jered Shorkend describe a workflow where AI drafts every reply before the handler reads the email, public chatbots trained on policy wordings handle the bulk of broker queries, and AI assistants schedule meetings and generate the minutes automatically. Fraud screening now runs upstream of the human reviewer, not downstream. Even the claims themselves are starting to look different: collisions involving driverless taxis, medical aid approvals based purely on AI diagnoses, and professional indemnity notifications from financial advisors whose deepfaked likenesses are being used to push fraudulent investment schemes.

Banks are leading the AI adoption curve in financial services. According to a November 2025 joint report from South Africa’s Financial Sector Conduct Authority and the Prudential Authority, 52% of banks have integrated AI into their operations, while only 8% of insurers have done the same. That gap is closing fast: insurers told regulators they plan to expand AI heavily into underwriting and claims management over the next 24 months. The pressure to automate is intensifying as consumer-facing AI products mature and back-office economics make the case for rebuilding every legacy financial workflow around large language models.

Deepfake Fraud and AI Liability Lawsuits Are the New Risk Layer

The same technology powering claims automation is also creating an entirely new category of risk for insurers to underwrite. Stanford University’s 2025 AI Index recorded a 56.4% jump in AI-related “incidents” worldwide in 2024 alone — and the legal exposure is starting to crystallize. In March 2026, a California jury found Meta and YouTube liable for $3 million in damages over their recommendation algorithms. Tesla has been held liable for a fatal accident involving its Autopilot system, and Air Canada was forced by a tribunal to honor a discount that its chatbot had mistakenly promised a customer.

The deepfake threat is hitting financial services hardest. South Africa’s FSCA has flagged deepfake videos of well-known figures endorsing fraudulent schemes — a trend already linked to the final liquidation of at least one financial service provider. Criminals are now stitching together stolen real ID numbers with AI-generated names and faces to create “synthetic identities” capable of bypassing know-your-customer onboarding. In response, the Association for Savings and Investment South Africa and the South African Insurance Association have jointly stood up a Computer Security Incident Response Team to share intelligence on emerging attack methods across the sector.

Regulators Are Catching Up — Slowly — While Insurers Move to Affirmative AI Cover

Lawmakers are scrambling. The European Union’s AI Act is now the global benchmark, Denmark is considering copyright protection for individual likenesses against deepfakes, and South Africa’s draft National AI Policy Framework is expected to enter formal public consultation soon — though final approval is unlikely before the 2026/2027 financial year. In the gap, regulators and industry bodies are setting the rules themselves. The FSCA and Prudential Authority are now urging financial institutions to adopt board-level AI oversight, deploy recognized “explainability methods,” and clearly disclose to consumers when AI is influencing decisions on credit or insurance pricing.

The most immediate change for buyers of insurance is happening in the policy wording itself. Most current policies cover AI risks through what the industry calls “silent cover” — AI is not explicitly mentioned, but the underlying risks fall under general policy language. As AI claims rise and disputes about coverage scope multiply, the entire industry is moving toward “affirmative cover” that targets AI risks directly. South Africa’s Information Regulator has separately reported a 40% increase in data breach incidents in 2025 versus the prior year, accelerating the timeline. For any business adopting AI in 2026, the message from regulators and insurers is the same: governance frameworks, POPIA compliance, and explicit AI cover are no longer optional. The companies that treat AI as a regulated financial product — not a productivity hack — will be the ones still standing when the first wave of AI liability lawsuits reaches the courts.

FSCA | Webber Wentzel | Stanford AI Index

Tags

#AI #insurance #fintech #regulation #deepfake