Want to make a billion dollars in insurance? Figure out how to write California homeowners insurance profitably at an affordable premium. Want to waste your time and money? Try to replace actuaries with AI.
Yet several well-funded startups, backed by tens of millions of dollars, are effectively trying to do just that. Why are they pursuing this, and what don’t they understand?
It starts with a fundamental misunderstanding of what actuaries actually do, and what can and cannot be materially improved by AI. Much of the current buzz assumes that the time-consuming parts of actuarial work are mechanical: gathering and loading data, building models, and selecting parameters. They are not. Actuaries have been performing these functions with increasing efficiency for decades, without AI.
The real work begins after the analysis. Why did you make that assumption? Would another actuary reach the same conclusion? What does the result imply for the business? Who needs to understand it, and what actions should follow? These questions, including documentation, communication, governance, and ultimately judgment, define actuarial value. They are not mechanical, and they are not easily automated.
AI has a role to play. In many areas, it can be quite valuable. But the idea that AI will replace actuaries or fundamentally change core actuarial decision-making reflects a misunderstanding of where the real risks and opportunities lie.
Actuaries are already highly efficient at building analytical models, particularly in spreadsheets and other pre-built actuarial modeling tools. Excel, for all its limitations, remains an extraordinarily powerful tool in experienced hands. Models can be built quickly, iterated on easily, and understood by peers. Having AI generate spreadsheets may save some setup time, but it does not materially improve the speed or quality of actuarial analysis. At best, it is incremental. At worst, it introduces unnecessary complexity that makes models harder to review and validate.
Data ingestion is also not the bottleneck many assume. Mature processes already exist that efficiently move and organize data for analysis, from simple structured file imports to SQL pipelines, ETL workflows, and built-in functions in Snowflake and Databricks. While data quality can always improve, AI does not represent a step change relative to well-designed conventional approaches.
Most importantly, AI does not replace actuarial judgment. The selection of development factors, trend assumptions, credibility weights, and methodologies requires experience, context, and accountability. These decisions involve interpretation and trade-offs, not just computation. No serious practitioner is delegating them to AI, and it is difficult to imagine management or regulators accepting that approach.
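To make the distinction concrete, here is a minimal sketch of the mechanical side of one of those tasks: computing age-to-age development factors from a cumulative loss triangle. The triangle values are illustrative, not real data. The point is that the arithmetic below is already trivial to automate; the actuarial work is deciding which averages to rely on, when to override them, and why, and nothing in this code does that.

```python
# Illustrative cumulative paid losses by accident year,
# evaluated at 12 / 24 / 36 months of development.
triangle = {
    2021: [1000, 1800, 2100],
    2022: [1100, 2000],
    2023: [1200],
}

def age_to_age_factors(tri):
    """Volume-weighted age-to-age factors for each development period.

    For each age, sum losses at the next age across all accident years
    that have matured that far, and divide by the matching losses at
    the current age. This is the purely mechanical step; selecting or
    adjusting the factors is a judgment call left to the actuary.
    """
    max_len = max(len(row) for row in tri.values())
    factors = []
    for age in range(max_len - 1):
        num = sum(row[age + 1] for row in tri.values() if len(row) > age + 1)
        den = sum(row[age] for row in tri.values() if len(row) > age + 1)
        factors.append(num / den)
    return factors

factors = age_to_age_factors(triangle)
print([round(f, 4) for f in factors])  # [1.8095, 1.1667]
```

A dozen lines of ordinary code reproduce the computation; the value an actuary adds comes after this step, exactly as the surrounding argument claims.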
AI creates value when it is embedded within structured, human-designed workflows, not when it is used as a standalone or ad hoc tool.
Within a structured workflow, validation is one of the clearest opportunities. AI tools can review spreadsheets and models to identify inconsistencies, errors, and anomalies. Even experienced actuaries make mistakes in complex models, and an additional layer of review can meaningfully improve confidence in results.
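The simplest checks of this kind do not even require AI. As a hedged sketch, the function below flags one classic input error an automated review layer might catch before any model runs: cumulative paid losses that decrease across development ages, which usually signals a data or formula mistake. The triangle is illustrative only; an AI-assisted reviewer would run many such checks, plus fuzzier ones on formulas and labels.

```python
# Illustrative cumulative loss triangle containing a deliberate error:
# the 2022 row drops from 1100 to 950, which cumulative paid losses
# should never do.
triangle = {
    2021: [1000, 1800, 2100],
    2022: [1100, 950],
    2023: [1200],
}

def find_decreasing_cells(tri):
    """Return (accident_year, age) pairs where cumulative losses fall.

    A decrease in a cumulative triangle is almost always a data or
    spreadsheet error, so each hit is flagged for human review rather
    than silently corrected.
    """
    issues = []
    for year, row in tri.items():
        for age in range(1, len(row)):
            if row[age] < row[age - 1]:
                issues.append((year, age))
    return issues

print(find_decreasing_cells(triangle))  # [(2022, 1)]
```

The check is deterministic and auditable, which is precisely why it belongs inside a structured workflow rather than inside a black box.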
AI is also effective at interpreting unstructured data. Policy language and reinsurance contracts are good examples. These documents are often dense and complex, yet they must be translated into structured inputs such as limits, attachments, effective dates, and claim triggers for modeling. AI can assist in extracting these elements and mapping them into structured model parameters within a defined system. With actuarial oversight, this can significantly improve efficiency.
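As a sketch of what that structured target might look like, the class below represents an excess-of-loss reinsurance layer with the elements the text mentions: limit, attachment, and effective dates. The field names and the schema are illustrative assumptions, not any real system's design; in practice an AI extraction step would propose these values from contract language and an actuary would review them before they feed a model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExcessOfLossLayer:
    """Illustrative structured target for contract-term extraction."""
    limit: float        # per-occurrence limit of the layer
    attachment: float   # retention below the layer
    effective: date
    expiration: date

    def ceded_loss(self, ground_up: float) -> float:
        """Loss ceded to this layer for a single occurrence."""
        return max(0.0, min(ground_up - self.attachment, self.limit))

# A "5M xs 10M" layer expressed as structured model inputs.
layer = ExcessOfLossLayer(
    limit=5_000_000,
    attachment=10_000_000,
    effective=date(2025, 1, 1),
    expiration=date(2025, 12, 31),
)
print(layer.ceded_loss(12_000_000))  # 2000000.0 ceded to the layer
```

Once contract language is mapped into fields like these, the downstream modeling is conventional and transparent; the extraction step is where AI assistance, under review, earns its keep.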
Documentation is another area of value. Actuarial consulting work often lacks consistent explanations of assumptions, methodologies, and data sources. AI can help backfill this information within a controlled workflow, improving transparency and auditability. In a regulated environment, that is a meaningful benefit.
It is also important to distinguish between AI and automation. They are not the same. Conventional automation, building software to perform repeatable actuarial functions, has been evolving for decades and will continue to do so. These solutions are tried and true, they add tremendous efficiency, and it makes no sense to replace them with AI-based alternatives. On the other hand, AI can accelerate the process of creating these applications. Tools that assist in coding and development can materially increase the speed at which actuarial models and workflows are built, automated, and improved.
That is quite different from the idea of an AI agent accessing proprietary data, building its own actuarial models, selecting assumptions, and producing a report without any structured system or oversight. One is an extension of disciplined engineering. The other is a breakdown of it.
When used outside of structured systems and controls, AI can actually make actuarial work worse.
The most significant risk is the move toward black-box models. Replacing transparent, well-understood analyses with opaque AI-generated outputs is a step backward. Actuarial work requires explainability. Results must be defensible to management, auditors, and regulators. Black-box models, no matter how sophisticated, undermine that requirement.
AI can also lead to overly complex models. Automatically generated spreadsheets or code can introduce layers of logic that are difficult to trace and validate. Complexity without clarity is not an improvement. It is a liability.
There is also the issue of false confidence. AI-generated output often appears polished and complete, which can mask subtle errors. In actuarial work, small mistakes can compound into large financial or regulatory consequences. A model that looks right but is wrong is dangerous.
There are also real security and data governance concerns. Actuarial work often involves sensitive financial and policyholder data. Careless use of AI tools can expose that data in ways that are difficult to detect and even harder to remediate.
If AI is not the solution to everything, what does an optimal actuarial model look like?
It starts with structure. Data should flow from core systems into a standardized, normalized database with built-in validation and visualization tools (perhaps including AI-based tools). From there, pre-built analytical engines that implement established actuarial methods operate on that data. These models should be transparent, repeatable, and subject to version control.
Actuaries then apply professional judgment within this framework by making assumptions, adjusting parameters, weighing the relative value of each method, and interpreting results. This remains the core of the profession.
Policy terms and other inputs can be structured and fed into the system, with AI assisting in interpretation and ingestion where appropriate. The result is a workflow that is automated where it should be, but still controlled, transparent, and auditable.
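The judgment step in that workflow can be sketched in a few lines: several pre-built methods each produce an ultimate-loss indication, and the actuary, not the system, selects the weights. The figures and method names below are illustrative only; the point is that the selection is explicit, recorded, and auditable.

```python
# Illustrative ultimate-loss indications from pre-built method engines.
indications = {
    "paid chain ladder": 10_400_000,
    "incurred chain ladder": 10_900_000,
    "Bornhuetter-Ferguson": 10_600_000,
}

# Weights chosen by the actuary, recorded alongside the result so the
# selection itself is transparent and reviewable.
weights = {
    "paid chain ladder": 0.25,
    "incurred chain ladder": 0.35,
    "Bornhuetter-Ferguson": 0.40,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9
selected = sum(indications[m] * weights[m] for m in indications)
print(round(selected))  # 10655000
```

The computation is trivial by design; what matters is that the weighted selection, the core judgment the text describes, lives in the open rather than inside an opaque model.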
In this model, AI plays a supporting role within the system. It enhances validation, improves documentation, accelerates development, and assists with data extraction. It does not replace the system, and it does not replace actuarial judgment.
Much to the chagrin of the AI-Actuary startups, humans will not augment systems designed by AI. AI will operate within systems designed by humans.
The idea that AI will handle actuarial work from start to finish is not a vision. It is a misunderstanding of where the real work and risk reside. Firms whose business plan is premised on this assumption will fail.
The challenge in actuarial work has never been performing the math. It has been ensuring that data is reliable, that models are validated, structured, and transparent, and that decisions are made with sound judgment and accountability.
AI can help, and in some cases meaningfully. But it is not a substitute for the systems, controls, and expertise that underpin actuarial work.
Insurance firms that succeed will not be those that try to replace actuaries with AI. They will be the ones that build better systems (automated when practical) that are structured, governed, and scalable, and then use AI to enhance those systems in targeted, practical ways.