A commutations overview: Effectively managing reinsurance programs

Overview
A reinsurance commutation is, in essence, an early termination of a contract of reinsurance in return for a mutually agreed upon consideration.  The parties to the commutation intend to terminate the reinsurance contract and, thus, to “unwind” the entire reinsurance transaction to a mutually agreed “as of” effective date.  After the commutation is complete, there is no ongoing reinsurance cover in place and future risks are borne by the cedant on a net basis.  It is possible that a new reinsurer may be brought in to handle the prospective risks through a new reinsurance arrangement; however, this can be problematic based on market conditions and other factors.  For example, if the underlying business is performing poorly, any available replacement terms might be onerous and/or restrictive.
The reasons for a commutation vary; however, the key categories include:

  • The cedant has strategic reasons to exit a particular line of reinsured business;
  • There are credit concerns regarding a particular assuming reinsurer, typically due to the perception of an above-average insolvency risk;
  • The commutation improves a reinsurer’s underwriting results, since the price of the commutation is often less than the carried reserves;
  • There could be favorable tax advantages as a result of the commutation for either or both parties;
  • There is a disagreement over the appropriate reserves to be carried, or in general, over the reinsurance contract’s terms.

Whatever the motivation, one fundamental fact should be emphasized—a commutation is a risky undertaking given the inherent variability of loss reserve development and the pattern of reported claims over time.  For example, commutations negotiated during the early 1990s may have seemed, at the time, to be “good” deals.  However, the continuing pattern of adverse casualty and environmental development has recast many of these transactions in disastrous terms.  Therefore, a fundamental assumption regarding commutations should be that they be entered into with considerable caution and a fair degree of general skepticism.  Any actuarial assumptions regarding the development of reserves over time should include a reasonable range of possible outcomes in order for all parties to fully understand the potential risks.
Additionally, counterparty credit risk is implicit in every reinsurance transaction; a cedant pays premiums to a reinsurer immediately with the expectation of receiving indemnification for losses over time.  For decades, that business expectation was taken for granted; however, the environmental debacle of the late 1980s has changed that belief permanently.  During the 1990s alone, more than 20 companies exited the London and International market due to insolvencies.  The domestic markets have also been affected by catastrophic and latent losses that have yielded several insolvencies.  Burdened by this pattern of increasing insolvencies, the concept of reinsurance “security” has grown in prominence and caused a “flight to quality” that favors the larger, better capitalized reinsurers.  Clearly, the best means of mitigating this inherent credit risk lies in adhering to proactive monitoring and controls to ensure that the most financially viable panel of reinsurers possible is utilized.
The insolvency of a US, UK, Bermuda or other alien reinsurer brings with it a patchwork of varying regulatory proceedings.  These include Rehabilitations, Liquidations, Involuntary Schemes of Arrangement (similar to rehabilitations), Cut-off Schemes (similar to liquidations) and many other forms.  Regardless of the venue or form of the regulatory proceeding, one key fact bears careful consideration: the ultimate financial failure of a reinsurer will create a de minimis financial distribution, over time, to the cedant/creditor.
As a general matter, the insolvency proceedings over the past decade have produced ultimate recoveries to cedants of less than 40 cents on the dollar, with many yielding nil to this class of creditor.  Clearly, a primary company is in a much better position to trade a commutation today for the inherent uncertainty of a protracted insolvency administration.  Therefore, in this context, a recovery of 70 to 80 cents on the dollar today is a far better alternative than waiting for the result of an insolvency proceeding.
The most well managed primary companies understand this fundamental issue and are proactive in identifying these risks and establishing internal protocols to manage an active commutation program.  Those who hesitate are disadvantaged relative to their more proactive competitors.  In this context, he who hesitates is lost.
In the current market environment, there are a number of solvent reinsurers seeking to commute business with cedants and actively offering a variety of solicitations.  These are often solvent companies that have entered into a voluntary run-off and are using commutations in conjunction with their business model to reduce assumed liabilities quickly and efficiently.  Thus, primary carriers receive a never-ending cascade of such solicitations from these (solvent) run-off entities.  Our advice to the reader is to view these offerings critically and focus your commutation efforts exclusively on the financially impaired universe of reinsurers.
In addition, there are several other factors that should be considered when determining what and when to commute. For example:

  • The reinsurance terms of a commutation should provide enough of an economic benefit to warrant the cedant’s assumption, on a net basis, of the entire book of underlying business.  The proffered terms must provide for a significant degree of IBNR recoverable, with a discount factor present for the time value of money.  Often, the cedant is unwilling to entertain a discount, or may even seek a premium (say, 150% or more) for those “commercial” commutation deals involving solvent reinsurers.  Clearly, however, assuming reinsurers will rarely commute a cover without some degree of discount present.  Thus, there are conflicting demands at work in this dichotomy.
  • The surplus impact to the cedant of unwinding the transaction must not be overly onerous—despite the potentially favorable economics of the situation.  In the ultimate “triumph of accounting over economics”, many primary carriers are holding recoverables from reinsurers that are clearly on the path to failure as they are unwilling to take a financial “haircut” today for a commutation due to the immediate financial statement and surplus implications.
  • Quota share reinsurance is often purchased to provide surplus relief to a cedant.  Thus, an “unwinding” of this type of reinsurance transaction through a commutation could cause some undesirable financial statement presentation issues to the cedant and adversely impact their appetite to commute.
  • Each side to the transaction should consider obtaining the financial statements of the respective parties involved (if available) and doing some high-level modeling of what the impact of the transaction could be to their balance sheet.  This will also allow for some measure of the appetite of the respective parties for a commutation, for example, whether there appear to be cash flow constraints or borderline IRIS ratio issues.  Such analysis could also bring to light potential rating agency impacts or DOI implications that could affect their desire to transact.

Pricing of the Commutation
In terms of pricing the commutation, there are a number of factors that must be considered.  Usually, calculations begin with a determination of the cost to the reinsurer of not commuting.  This cost is the difference between the following two quantities:

  • The present value of expected future paid losses (using an after-tax discount rate appropriate to the company and line of business)
  • The present value of the tax benefit related to the unwinding of the federal tax discounted reserves (using the IRS prescribed discounting procedure)

Next, the cost of commutation is calculated by subtracting from the cost of not commuting the value of the tax on the underwriting gain or loss generated by the commutation.  This is the result of the takedown in reserves and payout of the final cost of commutation.  This final cost of commutation represents the break-even price and reflects no loading for risk or profit.
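To make the arithmetic concrete, the sketch below works through this calculation with purely hypothetical figures.  The discount rate, tax rate, reserve amounts and payout pattern are illustrative assumptions only, and the tax effects are deliberately simplified (the tax benefit of unwinding the IRS-discounted reserves is spread evenly over the payout period, and the underwriting gain is approximated using the pre-tax cost of not commuting).

```python
# Illustrative sketch of the break-even commutation price described above.
# All inputs are hypothetical and the tax treatment is deliberately simplified.

def present_value(cash_flows, rate):
    """Discount a series of end-of-year cash flows at a flat annual rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Hypothetical assumptions
expected_paid_losses = [400_000, 300_000, 200_000, 100_000]  # future paid losses by year
after_tax_yield = 0.04             # after-tax discount rate for this company and line
tax_rate = 0.21                    # assumed corporate tax rate
irs_discounted_reserves = 920_000  # reserves discounted per the IRS prescribed procedure
carried_reserves = 1_000_000       # statutory carried reserves subject to takedown

# Cost of NOT commuting: PV of expected future paid losses, less the PV of the
# tax benefit released as the federal tax-discounted reserves unwind (here the
# benefit is spread evenly over the payout period for simplicity).
pv_paid = present_value(expected_paid_losses, after_tax_yield)
annual_tax_benefit = tax_rate * irs_discounted_reserves / len(expected_paid_losses)
pv_tax_benefit = present_value([annual_tax_benefit] * len(expected_paid_losses), after_tax_yield)
cost_of_not_commuting = pv_paid - pv_tax_benefit

# Cost of commuting: subtract the tax on the underwriting gain created by the
# reserve takedown and payout (the gain is approximated here as carried
# reserves less the cost of not commuting).
underwriting_gain = carried_reserves - cost_of_not_commuting
break_even_price = cost_of_not_commuting - tax_rate * underwriting_gain

print(f"Cost of not commuting:        {cost_of_not_commuting:,.0f}")
print(f"Break-even commutation price: {break_even_price:,.0f}")
```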
It is in this calculation that significant scrutiny should be given to the assumptions used for determining IBNR reserves, discounting losses and determining taxes.  A thorough commutation analysis should include a range of outcomes, as mentioned earlier with respect to reserve development estimates, as well as a thorough examination of possible tax scenarios including the realization of carry-forwards and the potential impact of the alternative minimum tax.
These calculations obviously require significant expertise and judgment.  As a general matter, the most difficult commutation negotiations involve either a purely unseasoned book of underlying business or, conversely, one that has substantial ceded losses.  This difficulty is largely due to the impact either of these situations has on the process described above.
General Considerations
Although it seems obvious, it is imperative that all parties mutually agree to an effective date for the commutation that will serve as the basis for any valuation procedures including, for instance, the present value of the ceded loss reserves.  The date should coincide with the natural flow of the underlying business.  For instance, it should not be materially impacted by premium booking and renewal activity.  The date can coincide with the Treaty Year Anniversary, but it can also differ.  For 12/31 anniversary treaties, experience has shown that a 3/31 commutation date often works well, whereas, a 6/30 or 9/30 date is often problematic given the high degree of general premium bookings around those dates.
While the general inclination in a commutation is to commute all effective years of assumption between the two parties, it is possible to commute only one or two years through a so-called “lasering” transaction.  This is almost always the norm in a long-term relationship between two financially solvent parties, that is, in situations where the inducement to commute is based purely on business considerations.  For example, a large public carrier may want to commute only a few years of a long-term treaty program with a large reinsurer due to a strategic exit from a certain line of business.
In terms of negotiating strategy, there is a tendency in the Property & Casualty industry to “abandon” the commutation process to various parties within an organization (i.e. Legal, Actuarial, Finance, Claims, etc.).  Although each of these parties is required in the overall process, it is a critical error not to have the direct negotiating done by key “business people”.  The best commutations are achieved by having a key businessperson—with the authority to make a deal—work with their business counterpart in the other organization.  The workout team (or consultants) would do the detailed “file” work on the account and hold the working-level discussions with the other party.  Clearly, however, a senior business leader must communicate the initial and final offers.  Reinsurers (and Rehabilitators) want to transact business with decision makers within an organization.
An additional, and potentially substantial, “wrinkle” to any commutation agreement is the presence of contentious claims (or even those that might be subject to an Arbitration proceeding).  It is possible to craft a commutation agreement that “carves out” a certain ceded account to allow the dispute/resolution process to move forward unimpeded.  It is more desirable, however, to attempt to resolve these matters within the context of an overall “global” commutation, if at all possible.  The key caveat is that overall economic goals of the transaction should not be sacrificed to achieve a resolution of any one claim.
Resource Requirements and Strategy
As a practical reality, there is an ideal time “window” in which cedants and reinsurers tend to focus on reinsurance commutations the most—that being during the latter half of the year and particularly during the fourth quarter.  Year-end allows the reinsurer to reduce its liabilities and allows the cedant to clean-up legacy recoverable issues.  (There is bonus potential in getting deals done!)
A commutation can be a very laborious process, with several from recent experience taking upwards of six months from inception to ultimate resolution.  It can be a business annoyance as resources necessary to support the timeline of a commutation effort are often conflicted by various other duties and time requirements.
As a general matter, commutation and “work-out” activities require a very different approach and mind-set than do ongoing business activities.  The latter are based on long-term relationships and the opportunity for both parties to enjoy a mutually advantageous and profitable relationship over time.  A commutation, on the other hand, is a contentious matter, which will (often) lead to the collapse of any long-term relationships.
This is an important point that bears emphasis; the degree to which internal management resources are used in a commutation effort could impact longer-term goal realization.  For this reason, companies often tend to utilize a workout unit, or utilize outsourced resources and advisors, to do the key day-to-day activities related to the commutation.
Other Aspects of the Commutation Process
One of the most important steps in the entire process is that paid loss recoverable balances be fully reconciled with the cedant (and broker) to the effective “as of” date.  As a practical matter, accounts often experience delays in reporting between the cedant and reinsurer, and these delays can have a profound impact on actuarial estimates of ultimate reserves.  Further, since a commutation represents a full and final release between both parties, any “surprise” transactions that are discovered after the agreement (those not properly reported or reconciled) could have a tremendous impact on the perceived economics of the transaction.  This risk is more prevalent in the context of excess of loss rather than quota share reinsurance.
In the same vein, assuming reinsurers should perform a thorough audit over both the premium and loss processes as soon as practicable.  This detail work is critically important at the outset of the commutation cycle as it will profoundly impact the IBNR reserve estimation process and could help discern trends and adverse developments essential to properly valuing the transaction.  It is an opportunity for the commutation team to review some of the underlying account files in detail to better gauge the cedant’s due diligence and claims handling procedures.  Some of the questions that should be asked are:

  • How timely and thorough are the bordereaux reports from the cedant?
  • How effective are the system controls in place?
  • What reconciliation efforts from prior reviews are present?
  • How effective are the controls for capturing and ceding all premium related transactions?  Has a full universe of transactions been reported?  Is any non-subject business to this agreement being ceded to the reinsurer?
  • Have all endorsements been reported?

To the extent the business in question was sourced by an intermediary, it is efficacious to utilize the placement broker to assist in the commutation effort.  Despite the representations by placement brokers that they are “unbiased” in such matters, there is a strong tendency to work most diligently on behalf of whichever party (cedant or assuming reinsurer) is likely to produce future business for them.  (There is, however, an ongoing economic cost to the broker from having to administer the claims throughout the period of a run-off.)  To the extent the book represents a “one-off” transaction, their efforts will be minimal at best.  The broker partner is critical in assisting in the accounting and claims reconciliation process that underlies this effort and should be actively utilized to assist in the reconciliation between ceded loss bordereaux reports and the reinsurer’s records.  Delays are often caused by the intermediary themselves; thus, it is imperative that reconciled and timely data be used at the outset of these negotiations.
Broker-developed IBNR estimations are not recommended.  Their use will rarely be viewed as objective and, for the reasons discussed above, their quality is often lacking.  Likewise, the cedant should utilize their own draft Commutation Agreement rather than one developed by the placement broker (or the counterparty).
As mentioned, the estimate of IBNR will have a significant impact on the cost of commutation.  The settlement patterns of the line of business and its age at commutation most directly affect the amount of IBNR that will be estimated.  Clearly, less mature and/or longer-tailed lines of business will likely have more IBNR (as a percentage of total incurred loss) and will often rely more heavily upon the Bornhuetter-Ferguson methodology of loss reserve estimation.  This methodology utilizes an expected loss ratio, along with historical development patterns (either based upon the actual underlying experience or from some similar industry benchmark data), to estimate expected IBNR.  More mature and/or shorter-tailed lines will likely have less IBNR (as a percentage of total incurred loss) and will be more susceptible to methods like the standard Chain Ladder development method based upon either actual experienced loss development or industry benchmarks.
Either method, if used appropriately, should produce reasonable estimates of IBNR.  In most cases, it is recommended that at least four different estimates be made of the IBNR reserve: two Bornhuetter-Ferguson estimates based on paid and incurred losses, and two Chain Ladder estimates based on the same data.  The results from all of these estimates should be examined together and any differences reconciled before a final estimate of IBNR is made.
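For readers unfamiliar with the mechanics, the short sketch below applies both techniques to a single, invented accident year.  The reported losses, premium, expected loss ratio and development factor are assumptions chosen only to illustrate the formulas, not benchmarks.

```python
# Minimal sketch of the two IBNR estimation techniques mentioned above,
# using hypothetical reported losses and development assumptions.

reported_to_date = 6_000_000   # cumulative reported (incurred) losses
earned_premium = 10_000_000
expected_loss_ratio = 0.75     # a priori loss ratio for Bornhuetter-Ferguson
cdf_to_ultimate = 1.60         # selected cumulative development factor

# Chain Ladder: develop reported losses to ultimate with the selected factor.
cl_ultimate = reported_to_date * cdf_to_ultimate
cl_ibnr = cl_ultimate - reported_to_date

# Bornhuetter-Ferguson: apply the expected unreported percentage (1 - 1/CDF)
# to the a priori expected losses, then add reported losses for the ultimate.
expected_losses = earned_premium * expected_loss_ratio
pct_unreported = 1 - 1 / cdf_to_ultimate
bf_ibnr = expected_losses * pct_unreported
bf_ultimate = reported_to_date + bf_ibnr

print(f"Chain Ladder IBNR:         {cl_ibnr:,.0f}")
print(f"Bornhuetter-Ferguson IBNR: {bf_ibnr:,.0f}")
```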
If a dispute arises regarding the IBNR estimate, the most likely candidates for further analysis would be the applicability of the expected loss ratio from the Bornhuetter-Ferguson technique and/or the applicability of the selected loss development pattern from either technique.  These assumptions should be as reflective of the actual underlying business as possible; the use of industry data in their place is often a distortion for books with special underwriting guidelines, markedly different pricing philosophies or atypical case reserving/claims handling practices.
The choice of discount factor should be relatively objective and based upon objective external data points as much as possible.  It should reflect current yields; however, it should also be an after-tax yield specific to the company’s tax situation.  It should also take into account any change in the tax situation that may be caused by the commutation itself.
As a general caveat, the party initiating a commutation should draft the first version of the operative Commutation Agreement to be provided in the deal.  Always work off of an agreement that is advantageous to your position.  It is always a better negotiating ploy to make modifications away from your standard agreement than to attempt to modify an opponent’s agreement to include your required verbiage.  The following contractual features need to be addressed in any agreement:

  • Offset. Do not allow for any offset with any other reinsurance agreements (ceded or assumed) that may exist with the other party, or their affiliates.  This agreement must stand solely on its own and have no impact to other contractual relationships or entities.
  • Governing Law. To be determined by a variety of factors.
  • Funding Clause. Insist on including contractual verbiage that is quite specific as to the mechanics of funding the consideration of the Agreement.  Words to the effect of:  “Within 48 hours of the mutual execution of this Agreement, Reinsurer will wire transfer $XX to the Reinsured’s account ABCXXX in full settlement of the Commutation Agreement (as per the Consideration Clause) representing a full and final release of the Subject Business of this Commutation Agreement.”  Time and again, the agreement should be specific as to the business commuted and that it is a full and final release.
  • Specificity. The Agreement should clearly specify which contracts are being commuted and for what terms.  You want to preclude the other party from attempting to later reinstate some features for a recovery (i.e. alleged accounts that were somehow “not included”, etc.).
  • DOI Approval. It is possible, given the potential financial materiality of the agreement, that regulatory approval may be required.  Keep this contingency in mind during the negotiation timeline.

Finally, to the extent that there are any Letters of Credit present, there are two options:

  • If unused to date, simply cancel them with the reinsured and the bank;
  • If partially or fully used to date, the Letter of Credit draw downs can be used as a partial method of funding the overall Commutation proceeds.

In the context of a reinsurer that undergoes regulatory intervention and insolvency, it is possible that the commutation may be deemed a “preference” transaction (an assertion that a transaction such as a commutation completed within 90 to 180 days of the insolvency filing date is adverse to creditors generally).  The impact could be to unwind the deal.  While this is a very uncommon risk, it does bear close scrutiny in the context of a teetering reinsurer.  Again, time is of the essence.  When everyone knows there is a problem, it is too late to act.


Steve McElhiney is the President of EWI Risk Services, Inc., a reinsurance intermediary based in Dallas and a subsidiary of NL Industries, a diversified industrial company.  He also serves as President of Tall Pines Insurance Company of Vermont, an affiliated captive insurance company.  His insurance industry experience has spanned over two decades with groups including Fireman’s Fund, TIG, and Overseas Partners US Reinsurance Company.
Mark Jones is the Director of Research & Development and a Consulting Actuary with Perr&Knight. His primary responsibility is the development of new products and services for all areas of the firm. Mark has a broad background including experience with ratemaking, regulatory compliance, competitive analysis, catastrophe modeling and reserving for most personal and commercial lines of business. He also has experience in the development and implementation of predictive models, dynamic financial analysis tools for reinsurance applications, financial forecasting applications for budgeting and retention analysis, rate monitoring tools for most lines of business and ad hoc statistical studies for claims investigation, premium audit and loss control. Mark’s skill set includes an extensive knowledge of insurance data systems and company operations, as well as Visual Basic, SQL, R and other software.

Harnessing network effects: A Web 2.0 primer for the insurance industry

 
Introduction
The ascent of man from simple hunter/gatherer to progenitor of the global economy can be directly attributed to our innate and profound ability to build profitable and self-sustaining networks.  And as with all complex and dynamic systems, information is just as important a constituent in any man-made network as the more tangible economic nodes such as buyers, sellers, goods and physical infrastructure.
To recap: Ancient trade networks helped us to survive harsh prehistoric times, and in turn contributed to the advancement of language.  More complex networks then gave rise to nation-states, artisan crafts and the elite classes for whom information was both privilege and power.  Even more complex networks eventually manifested through the industrial revolution to give us the enterprise, wherein information was monetized as intellectual property and principles of mass production.
But within the past century, parallel developments in Information Theory and digital technology, along with massive increases in computing power, have sparked a new paradigm—the information revolution.  In this new age, information is equally critical for production as other traditional commodities, and contributes directly to the value of products and services.
The dawn of the Information Age “can be seen globally as the surreptitious replacement of citadels—which tend to restrict the flow of information—by less viscous environments, and the subsumption of information within capital.”[i]  In few industries is this as evident as insurance, where information derived from voluminous amounts of data drives every key decision from the boardroom to the underwriter’s desk.
The information revolution—powered by instantaneous modes of communication—has justly prompted a major shift in the very fabric of capitalism, such that we are now largely operating within a network economy.  Whereas ownership over physical property and ideas belonged solely to the enterprise during the industrial era, products and services are now created and value is added through large scale social networks. Economies of scale stem from the size of networks instead of the enterprise, and the value of centralized decision making and expensive bureaucracies is greatly diminished.
Newer, more agile business models are supplanting formerly rigid power structures as more pervasive networks blur the line between a business and its environment.  Value is now intrinsically tied to connectivity and the openness of systems.
In the network economy, “Understanding how networks work will be the key to understanding how the economy works.”[ii] Such an undertaking is greatly simplified when one understands network effects and Web 2.0.
Network Effects
A network effect (sometimes also referred to as a network externality) is simply the effect that a user of a product or service has on the value of that product or service to other users.  A product or service displays positive network effects when more usage of the product by any user increases the product’s value for other users, and sometimes all users.
Recognition of the importance of such effects in building enhanced and profitable economic networks was spurred by 19th and 20th century innovations in communications, which gave us the telephone, the Ethernet and the internet.
Bell Telephone employee N. Lytkins introduced the term network externality in a 1917 paper covering the importance of network effects in building the telephone industry.  The paper explained how more users of the relatively new invention would increase the value of owning a telephone for all users.
Robert Metcalfe, inventor of the Ethernet, furthered the study of network effects through Metcalfe’s Law, which states that the value of a communications network is proportional to the square of the number of connected users of the system (n²).
Network scientist David Reed, however, postulates that there are even greater values to be exploited, as explained in Reed’s Law.  According to Reed, the effects are more akin to 2ⁿ than to n², since benefits increase on the basis of the combinations among the users and the total many-to-many possibilities made possible by the internet.
Metcalfe’s Law, according to Reed, only accounts for one-to-one possibilities.  In Reed’s Law, the utility of networks, and of social networks in particular, can scale exponentially with the size of the network.  Thus the internet is now the prime amplifier of network effects.
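A trivial calculation makes the difference in scaling plain; the figures below carry no units and are intended only to contrast quadratic with exponential growth.

```python
# Toy comparison of network-value scaling under Metcalfe's Law (n^2) and
# Reed's Law (2^n). Only the relative growth rates are meaningful.
for n in (10, 20, 30, 40):
    metcalfe = n ** 2   # value driven by one-to-one connections
    reed = 2 ** n       # value driven by possible subgroups (many-to-many)
    print(f"n={n:>3}  Metcalfe={metcalfe:>6,}  Reed={reed:>16,}")
```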
There are multiple types of network effects:

  • Direct network effects are the simplest type to recognize, wherein the value of a good or service increases as more people use it.  The most classic example of a direct network effect involves the telephone.  As the network of people using telephones swells, so too does the value of owning a telephone since there are more people available to call.
  • Indirect network effects are activated when the usage of a good spawns the production of complementary goods, which in turn adds value to the original product or service.  For instance, the addition and increasing quality of web-enabled software increases the value of the internet itself.
  • Cross-network effects are also referred to as two-sided network effects since increases in usage by one set of users increases the value of a complementary product to another divergent set of users.  Google exemplifies this effect since any increase in the number of users raises the value of placing advertisements on Google.  In turn, Google takes the money from advertisers and invests in additional services for the users.
  • Social network effects are also sometimes referred to as local network effects.  In this model, the value of products or services is not necessarily increased by the number of users.  Rather, each consumer is influenced by the decisions of a subset of other consumers connected through a social or business network.  The extent of network clustering and amount of information each customer possesses becomes relevant in this model.  Progressive’s MyRate program employs social network effects by enabling policyholders to compare their driving habits online to those of similar policyholders.

Such effects—especially when compounded—drastically improve the efficacy of n-sided markets, or those that connect two or more different groups of customers/users to sellers/partners.
The insurance industry is a prime example of an n-sided market.  Consider therein the multitude of networked mechanisms including insurance groups and companies, agencies, brokerage firms, risk retention groups, departments of insurance, technology providers, business consultants and policyholders – just to name a few.
Consider also the industry’s absolute reliance on data, the massive amount of potential information contained within that data, and the fact that information contributes to the overall value of goods, and therefore the collective system.  The amount of intrinsic information/value then in such a system is inherently vast, but that value can be further amplified and exploited by applying positive network effects.
And no other school of thought is enabling the application of positive network effects better than Web 2.0.
Web 2.0
The term “Web 2.0” refers to the current evolutionary stage of web principles and practices that amplify online collaboration and empower end-users to create valuable networks of shared information. Tim O’Reilly, well-recognized Web 2.0 thought leader, further explains that:

Web 2.0 is the business revolution in the computer industry caused by the move to the internet as a platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them.[iii]

The emergence of Web 2.0 was not planned. Rather, its core conceptual and technological underpinnings were derived from closely examining internet companies that survived the dotcom bubble of the late 1990s and ultimately emerged as clear market leaders and innovators over the span of the last decade. But the recent codification of Web 2.0 principles and practices is enabling a bustling new era of user-centric, network-enabled software applications. This Web 2.0 systemization dictates strategic positioning of the web as a platform, user positioning wherein users control data, and a broad set of core competencies, which include:

  • Cloud Computing, which describes the provision of computing resources (software applications, networks, servers and data storage) as a service delivered through the internet. This can be viewed in sharp contrast to more conventional and dated means of provisioning, wherein businesses manage their own networks, servers and data stores, and IT staff is required to install, update and trouble-shoot software on individual devices. There are three basic service models for cloud computing:
    • Software as a Service (SaaS), wherein a consumer uses a service provider’s software applications on-demand, running on a cloud infrastructure. In this model, consumers do not manage the underlying infrastructure of networks, servers, data stores, operating systems or individual application capabilities. Users can, however, control software configuration settings and add modular software components. Google Apps is a key example of SaaS.
    • Platform as a Service (PaaS), wherein a consumer uses a service provider’s cloud infrastructure to deploy software applications. In this model, consumers do not manage the underlying infrastructure of networks, servers, data stores or operating systems. Consumers do retain control over the deployed software applications and hosting environment configurations. To this end, SalesForce.com enables developers to create new applications that can either add to existing SalesForce.com functionality, or create new functionality.
    • Infrastructure as a Service (IaaS), wherein a consumer utilizes the fundamental computing resources of a service provider, including data storage and network capabilities. In this model, the consumer can deploy and run any software of its choosing, including operating systems and applications. The consumer has little control, however, over the infrastructure itself, except with respect to select networking components such as firewalls. Many companies, such as Google, provide this infrastructure in tandem with use of their software. Other providers simply provide this service via data centers. This model can be likened to renting physical warehouse space, wherein the consumer has complete control over physical goods, as well as itemization and inventory techniques and shipping mechanisms. The consumer has very little say though as to how the warehouse is operated by its owner.
  • Software above the level of a single device, which postulates that applications that are limited to a single device, such as a personal computer, are far less valuable than applications that integrate services across any device that provides internet access. Software that serves multiple platforms displays positive network effects.
  • Architecture of participation, which describes the nature of systems that are designed for user contribution. One of the fundamental tenets of Web 2.0 is that users create value by contributing information to systems as a side-effect of ordinary usage. End-users contribute by creating hyperlinks to connect disparate information sources, and by adding to online information bases and SaaS feature-sets. Programmers are also enabled to contribute to cheaper and more agile open-source code and software standards through the architecture of participation.
  • Harnessing collective intelligence, which involves the systematic collection, categorization and analysis of broad sets of usage patterns and user contributions to create actionable intelligence and increase value for all users. Collective intelligence systems tap the expertise of a group rather than an individual for decision making purposes.  For example, PredictWallStreet.com focuses one million unique monthly visitors on predicting whether a stock will close up or down. Resulting algorithms are able to outperform the market, which individual analysts typically can’t do. Diversity of opinion, independence, decentralization and aggregation are required to effectively harness the wisdom of crowds.
  • The importance of data, which describes the increasing significance of proprietary data and associated databases as a core competitive advantage, as opposed to storage and transfer technologies. Such technologies are becoming cheaper, more agile and more ubiquitous by the day, enabling companies to produce more accessible and more participatory data sources that can be quickly and continuously augmented to increase their value.
  • Rich User Experience, which makes clear that web applications must be able to provide a user interface and base functionality that perform just as well as – or better than – more traditional, device-dependent software.  Recent advances in mainly open-source technologies such as AJAX are enabling developers to build web applications that accomplish this directive. And by using the web as their platform, Web 2.0 systems are able to provide an enhanced set of network-enabled, value-generating features not typically found in non-web native software, including:
    • Blogs, or Web logs, which are online journals or diaries hosted on a Web site and often distributed to other sites or readers using RSS, or syndicated feeds. Blogs may be of a personal nature, or intended for a business audience. When used for business purposes, blogs are prime, easy to use enablers of thought leadership. Blogging software, such as WordPress.com and Blogspot.com, is often free, and enables blog subscribers or readers to post comments in an open environment for further discussion.
    • Mash-ups combine content from existing online sources to create new services.  For example, a mash-up might retrieve policyholder data from a networked database and display the locations of the policyholders elsewhere on a web-enabled Google map.
    • Podcasts are a multimedia form of a blog, typically containing audio or video content.  Podcasts are a method of broadcasting that does not depend on scheduled broadcasting times.  Rather, podcasts can be streamed or downloaded and played on demand.  iTunes is the most popular aggregator of podcasts. Because iTunes and the iPod were early enablers, the term “podcast” is a mash-up of the terms “broadcast” and “iPod.”
    • RSS (Really Simple Syndication) allows internet users to subscribe to online distributions of news, blogs, podcasts, or other information. Aggregators such as iGoogle and MyMSN combine RSS feeds from multiple sources to provide personalized access from a single portal. (A minimal sketch of consuming an RSS feed appears after this list.)
    • Social networking refers to sites such as LinkedIn, which allow members to communicate, form groups, and access other members’ personal information, skills, talents, knowledge or preferences. Such sites have experienced explosive growth within the past few years, and collectively boast membership in the hundreds of millions. Social networking concepts can also be applied to other types of web applications.
    • Web services enable communication between disparate systems in order to automatically pass information and conduct transactions.  For instance, an insurer and an insurance agent might use web services to communicate over the internet and update each others’ various systems without the need for multiple, manual updates.  Web services also enable service oriented architecture (SOA) which builds interoperable services around business processes.
    • Wikis are systems for collaborative publishing, which allow many authors to contribute to an online document or discussion.  The foremost example of a popular wiki is Wikipedia, which greatly exemplifies the principals of architecture of participation and harnessing collective intelligence.
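As a concrete illustration of the RSS item above, the following sketch pulls a feed and lists its entries.  The feed URL is hypothetical; only the standard RSS 2.0 channel/item/title/link structure is assumed.

```python
# Minimal sketch of consuming an RSS feed. The URL is hypothetical; RSS 2.0
# wraps entries in <item> elements containing <title> and <link>.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/insurance-news/rss.xml"  # hypothetical feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    print(f"{title} -> {link}")
```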

Also fundamental to Web 2.0 is the ability for users to create hyperlinks and post comments. To this end, Web 1.0 can be considered “read-only” from the end-users’ perspective. In the former model, web masters prescribed static hyperlinks to connect disparate web sites, or to navigate to different pages within the same website. Additionally, end-users were merely provided access to site content, lacking the ability to provide open and transparent feedback by way of comments and replies. This model ultimately reflected the limited role of consumers during the industrial and mass-media dominated eras.
Conversely, Web 2.0 can be considered “read-write” from the end-users’ perspective. In this model, users are afforded the ability – and indeed imbued with the obligation – to create hyperlinks. Or as the aforementioned Tim O’Reilly so elegantly writes:

As users add new content, and new sites, it is bound to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition and intensity, the web of connections grows organically as an output of the collective activity of all web users.[iv]

Web 2.0 end-users are also provided dialectically transparent feedback mechanisms packaged with site content, typically in the form of commenting functionality. Most Web 2.0 site content, and almost all blogs, includes the ability to post comments, thus removing the barriers between author and reader and between all readers. As such, content consumers can initiate further, elucidated discussion, and keep authors and content providers honest. Further to this, many web applications that enable users to share hyperlinks also provide mechanisms for commenting on the hyperlinked content.
Deriving Value from Web 2.0 Enabled Network Effects
Much has been written (and even lamented) about the insurance industry’s apparent sluggishness to adopt and implement new technologies, particularly in the realm of Web 2.0. In fairness, the insurance industry represents a massive and necessarily risk-averse n-sided market, subject to more rigorous standards and complexities than most other industries.
But we have reached a tipping point where much of the risk involving Web 2.0 has already been assumed by leaders such as Google and Amazon.com. Web 2.0 companies are thus beginning to target the insurance industry with new technologies and methodologies at break-neck speed, and it is widely believed (for a variety of reasons) that companies who do not make the switch to Web 2.0 will ultimately suffer, mired in decreased competitive positioning. So let us discuss some of the ways in which insurers can derive value from Web 2.0 enabled network effects:

  • Embrace the cloud. Cloud computing represents a sea change in modern business operations, and one which the insurance industry must embrace sooner rather than later. As it is, modern businesses must concentrate, first and foremost, on their core competencies. And no other facet of insurance operations is more distracting than maintaining dedicated, internal resources for software maintenance, network and server architecture, and data storage. Cloud computing provides software and operating platforms locked in states of perpetual beta, in which improvements are constantly made and rolled out without interruption to the end-user. Additionally, the storage of data off site negates the need, almost entirely, for companies to employ and manage network administrators, and often keeps data more secure than many insurance companies could achieve internally. The development of cloud computing also means that business objectives are no longer limited by IT objections based on the availability of limited internal technology or IT competencies. In past decades, such restrictions enabled IT staff to dictate the ultimate reach of many business decisions. But cloud-based systems are malleable, built and customized directly in support of business operations, not IT proficiency. Lastly, and perhaps most importantly, insurers in today’s economy must operate leaner and meaner than in more fiscally liberal times. Resources must be dedicated to business imperatives, and not unnecessary software, server and data storage licensing costs and expenditures. Cloud computing ultimately presents companies with enormous potential for cost savings. (To this end, the City of Los Angeles will save $13 million in software licensing and manpower costs over the next five years, simply by adopting Google Apps hosted solutions.)
  • Mobilize your workforce. Web 2.0 software that serves multiple devices transcends geographic limitations, thus drastically increasing productivity and improving collaborative business processes. So make sure that your workforce is capable of performing any base function from anywhere (reasonably speaking) in the world. For instance, mobile claims adjusters, who can process claims in real time and on-site, present drastic improvements in efficiency. Furthermore, any member of a workforce should be able to access and edit information or internal documents directly from a database through the web. This stands in stark contrast to the former models of e-mailing documents back and forth between onsite and offsite workers, or downloading documents from the company’s server for use on a different device, which effectively creates new versions of the documents for each instance of use.
  • Establish architecture of participation. Again, one of the most beneficial aspects of Web 2.0 is user generated content, given that users create value. So implement systems that support and employ this effect. Encourage all nodes in your network, including consumers, to contribute to your total information base through wikis and other forms of online discussion. Give your end-users the ability to comment on and create hyperlinks to all pertinent web pages and information bases, and watch the value of your systems and inherent information grow exponentially.
  • Harness collective intelligence. Collective intelligence is market intelligence, so make sure that your systems are capable of collecting and analyzing usage patterns, user feedback and other data created through use of your systems. A simple example to consider is the ease with which online surveys can be created, conducted and analyzed entirely online. Savvy companies are relying increasingly on such efficient mechanisms to gather user feedback and turn the wisdom of crowds into actionable business intelligence.
  • Focus on creating unique, hard-to-duplicate sources of data. Again, the increasing ease and ubiquity of data collection, storage and analysis is enabling companies to instead focus on the production of richer, deeper data sets as a competitive advantage. And in addition to using such data sets for improved rating and underwriting techniques, insurance organizations can leverage valuable data sets to sell to other organizations. Although this activity centers mainly on service providers, the potential spans all organizational types in an industry that views data as one of its most important and valuable assets.
  • Provide a rich user experience. Consumers are relying more and more on the internet to locate coverage options, receive and compare quotes and manage their policies, all online in real time. Further to this, consumers are looking for quicker, easier means of self-service, which Web 2.0 is enabling far better than past methods. So a rich user experience, when done right, can contribute both to new customer acquisition and customer retention. Additionally, Web 2.0 is proving to be the most effective tool available to an organization’s branding and marketing wings. Contemporary, successful marketing and branding smartly focuses its attention on blogs, podcasts and other forms of interactive social media to reach target audiences faster, more efficiently, and more cheaply than ever before. And because the branding and marketing processes become two-way, consumers feel more comfortable and provide more direct feedback in a climate where consumers otherwise cynically view non-inclusive marketing techniques as indifferent to their actual wants and needs. Furthermore, Web 2.0 enabled marketing techniques allow real-time analysis of the ultimate success of marketing efforts. And finally, with Web 2.0 web applications providing user interfaces that rival those of traditional software, replacing legacy systems becomes a much easier sell.

Conclusion
This article by no means attempts to provide a final, definitive resource for network effects and Web 2.0, nor does it claim to provide an exhaustive list of all pertinent elements and methodologies. This article does aim, however, to highlight the benefits of Web 2.0 enabled network effects for an industry that is primed and ready for major innovation.
The article’s author would urge the reader to use this information as a starting point – either for debate, or for new considerations for operational excellence. Readers who wish to pursue this topic further would be well served by reading Amy Shuen’s Web 2.0: A Strategy Guide.
References


[i] Hookway, Branden (1999). Pandemonium: The Rise of Predatory Locales in the Postwar World.  New York: Princeton Architectural Press.
[ii] Kelly, Kevin (1998). New Rules for the New Economy: 10 Radical Strategies for a Connected World.  New York: The Viking Press.
[iii] O’Reilly, Tim (2006, December). Web 2.0 Compact Definition: Trying Again. California: O’Reilly Media Inc.  Retrieved October 15, 2009 from http://radar.oreilly.com/archives/2006/12/web_20_compact.html.
[iv] O’Reilly, Tim (2005, September 30). What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. California: O’Reilly Media Inc.  Retrieved October 15, 2009 from http://oreilly.com/lpt/a/6228.


Josh Struve is the Digital Marketing Manager at Perr&Knight, as well as the Managing Editor of the Journal of Insurance Operations.

Adapting to change, driving change: Insurance companies keep their competitive edge sharp with BPM

When a well-known Fortune 500 insurance provider needed a change agent to increase efficiencies in its growing operations, it turned to a four-person team and a business process management (BPM) solution for help.
The company needed to maintain its hallmark level of customer service and improve operations while managing the growth of a burgeoning company with more than $59.8 billion in assets. The BPM team introduced new process improvement technology to the company’s wellness claims group, which worked with more than 400,000 external account relationships billing on a biweekly or monthly cycle.  And after the first 90 days of the BPM project, the group’s payroll reconciliation costs had shrunk by 12 percent.
Similarly, Xbridge offers insurance in the UK to small businesses, landlords, shops and restaurants. While smaller in scale than the example above, Xbridge faced similar issues. In just 18 months its call center staff grew from only six people to more than 50, and the company’s manual processes were breaking down as the organization grew.
Managing that dramatic growth has been a driver for the adoption of BPM. With BPM, people understand where they fit into the process and how they impact the customer. With BPM, management receives performance metrics in real time allowing them to make critical decisions that they couldn’t have made previously.
Insurers like these are turning to BPM in growing numbers to reduce expenses, gain greater efficiencies in processing and managing claims, add new products and services, deal with regulations, and maintain an edge against their competitors.
And now with the global financial community completely turned upside down and a government overhaul underway, the insurance industry will more than ever before look for ways to better manage their operations and overcome new adversities in the market. BPM will no doubt play a vital role in this effort.
The idea behind BPM is to systematically improve an organization’s business processes. BPM software helps to make business processes more effective, more efficient, and more capable of adapting to an ever-changing environment. If executed properly the results of a successful BPM project can be enormous.
For example, one large U.S. insurance provider deployed a process to optimize invoice reconciliation and was able to reduce the error rate of handling paper invoices by 30 percent. Another streamlined customer service processes that spanned four business groups, increasing workload capacity by 192 percent.
BPM is one of the fastest growing software categories on the market. In a recent survey of 1,400 CIOs by Gartner Executive Programs, the top business priority identified by CIOs was business process improvement. For individuals or organizations that are being asked to investigate process improvement, BPM is a term frequently associated with that effort.
The insurance industry has quickly adopted BPM not only as a solution to specific, immediate process improvement objectives, but also as a platform that gives insurers the ability to tackle diverse process improvement initiatives and realize the following benefits:

  • Enable collaboration across and beyond the enterprise. Automatic work routing and notifications across groups, outside agents, and customers reduces the time, errors and complexity of executing processes.
  • Enable Straight-Through Processing. Business rules in processes can help automate the routing and processing of tasks – often reducing the amount of human intervention needed by over 80 percent.
  • Gain real-time visibility and control over processes. Managers can view real-time process performance and proactively manage bottlenecks.
  • Extend the value and life of core systems. Leverage existing applications by reading and posting transactions while introducing more efficient Web-based forms and interfaces.
  • Ensure that the process that is documented – is the process that is executed. Process models that actually run the process provide consistency, adherence and audit trails to ensure compliance with regulations like SOX, HIPAA, etc.
  • Respond to change faster. Revise processes to respond to organization or regulatory changes – in days, if necessary.

BPM broken down
BPM is the understanding, visibility and control of business processes. A business process represents a discrete series of activities or tasks that can span people, applications, business events and organizations. Based on this definition, you could logically relate BPM with other process improvement disciplines. That assumption is valid – there is certainly a described process (or methodology) that should be followed to help an organization document their business processes and understand where they are being used throughout their business. During discovery, everyone agrees on how the current process is defined. The ‘as-is’ process is then used as a basis for determining where the process can be improved.
However, simply documenting what the process looks like does not give the business managers (those responsible for the actual results) control over the process. The real value of BPM comes from gaining visibility and control of the business process. By applying technology, BPM software can activate the process, orchestrate the people, data and systems that are involved in the process, and give the business managers a view into how the process is operating, where bottlenecks are occurring and highlight possible process optimizations. Process operational metrics are automatically collected by the BPM software. Business metrics and key performance indicators (KPIs) can also be measured to add specific process or organizational context to the data.
Armed with data on how the process is currently operating, business managers can use any process improvement technique to optimize the process. The next generation process will drive maximum performance and efficiency. The impact of an improved business process can be realized in many ways, including reductions in cost, improved customer satisfaction, increased productivity by allowing reallocation of resources to more value added tasks, or by compliance with industry or regulatory requirements.
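To ground these ideas, the fragment below sketches, in highly simplified form, what “activating” a process and capturing an operational metric might look like.  The process, task names and assignees are hypothetical; a real BPM suite would model the process graphically and orchestrate the routing itself rather than relying on hand-written code.

```python
# Illustrative sketch only: a toy claim-handling process whose execution
# automatically records a cycle-time metric. Task names and assignees are
# hypothetical; a real BPM suite would orchestrate people and systems for us.
import time
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    assignee: str  # a person, role or system responsible for the step

class ClaimProcess:
    """A discrete series of activities spanning people, systems and events."""

    def __init__(self):
        self.tasks = [
            Task("validate_claim", "intake_system"),
            Task("adjust_claim", "claims_adjuster"),
            Task("approve_payment", "claims_manager"),
        ]
        self.metrics = {}

    def run(self):
        start = time.time()
        for task in self.tasks:
            # In a real suite this would route work to the assignee and wait
            # for completion; here each step is simulated instantly.
            print(f"Routing '{task.name}' to {task.assignee}")
        # Operational metrics are captured as a side effect of execution.
        self.metrics["cycle_time_seconds"] = time.time() - start

process = ClaimProcess()
process.run()
print(process.metrics)
```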
The description above represents the promise of BPM – process ‘nirvana’. Most companies are far from achieving this level of process capability. Business managers have limited visibility, especially for processes that may cross outside the borders of their department or extend outside the organization. Individual work activities may be processed in a first in – first out fashion, rather than being based on an optimized global prioritization. For organizations that have expanded or grown by acquisition, each business unit may perform similar processes, but each completing the work using specialized processes that don’t allow sharing of human and technology resources. Not knowing the current status of work paralyzes the business because managers cannot predict when work will be completed, who will complete it, if there are problems and how much the work is costing the company.
The term “Process-Driven” means that a person or organization has a passion for superior business performance through process innovation. Process-Driven organizations are those that understand how their work is getting done and focus on finding opportunities to make it better. They focus on the business and the results. They leverage technology, process improvement methodologies and best practices while embracing change to drive the processes that support their business. BPM is a business-oriented architecture that allows process owners to set improvement goals and orchestrate actions across the company to achieve those goals.
The evolution of process technology
The term BPM has evolved from a history of usage in related business process fields such as business process improvement, business process reengineering, and business process innovations. Just as these process disciplines have changed, BPM systems or suites have evolved similarly to other management systems. These advances can be mapped at the lowest level to the technology itself. Understanding these relationships is important to help ‘place’ a BPMS in the hierarchy of an organization’s systems.
The operating system of the computer is an example of the very lowest level of a management system. Database management systems (DBMS) are the primary controller of data. Widespread use of computers in business heralded business applications that managed functional areas. At this point, organizations found that the data that supported their business was organized in silos, driven by the functional applications adopted by the company. Examples of these types of applications include Enterprise Resource Planning (ERP) Systems, Customer Relationship Management (CRM) systems and Order Management (OM) systems.
Organizations found themselves with a ‘four wall’ scope. It was difficult to share data and work between different departments because the applications enforced a department-level scope. Unfortunately, most business processes spanned systems, departments and sometimes external business partners. In addition, businesses were forced to operate the way the application was developed, rather than by the way they defined their own processes.
These applications were difficult – if not impossible – to modify, and it was typically a lengthy and costly undertaking. Technology came to the rescue again, and tools like workflow management systems and Enterprise Application Integration (EAI) suites were introduced. These tools allowed work and data to be routed and synchronized across an organization, but they simply served as conduits. It was difficult to tie the activities back to a higher level business process. However, they did serve as an enabler of BPM because they provided cross-system accessibility.
BPM evolved because of an increase in process focus. Organizations realized that they could set themselves apart from their competition by optimizing their business processes. BPM suites are integrated software facilities that enable organizations to adopt and implement business process management. They foster process characteristics like efficiency, effectiveness, and agility. In order to accomplish this, they must contain features that support the following:

  • A graphical modeling capability that can be used by both business owners and process analysts to create both workflow components and higher-level business processes. Modeled processes should include human resource parameters, business event definitions and system activity steps.

Figure: A graphical process modeling environment

  • The ability to simulate one or many business processes, using test, historical and in-flight process data.
  • A facility to create user interface forms and reports.
  • A facility to create business process rules and allow their use to drive process flow and decisions.
  • The ability to integrate with external systems, including many standard technologies or systems.
  • The ability to send and receive business and system event messages.
  • An embedded capability to capture and manage process performance and business indicators as they correlate to the business processes being executed.
  • The ability to create graphical scoreboards for reporting business process metrics in real-time (also referred to as Business Activity Monitoring or BAM); a brief sketch of this capability follows the list.

Figure: A graphical scoreboard from a leading BPM suite

  • A shared business process repository to house all process and process-related artifacts.
  • Tools for the administration of the business process engine or server.
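
As a sketch of the scoreboard/BAM capability noted in the list above (the event fields, the SLA threshold, and the metrics chosen are assumptions for illustration), the snippet below rolls raw process completion events up into the kind of figures a real-time scoreboard would display.

```python
from statistics import mean

# Hypothetical completed-process events emitted by a BPM engine
events = [
    {"process": "claim_intake", "cycle_time_hours": 4.0, "outcome": "paid"},
    {"process": "claim_intake", "cycle_time_hours": 30.0, "outcome": "denied"},
    {"process": "claim_intake", "cycle_time_hours": 7.5, "outcome": "paid"},
]

SLA_HOURS = 24  # assumed service-level target for the scoreboard

def scoreboard(events):
    """Summarize process events into the kind of KPIs a BAM display would show."""
    return {
        "volume": len(events),
        "avg_cycle_time_hours": round(mean(e["cycle_time_hours"] for e in events), 1),
        "sla_breaches": sum(1 for e in events if e["cycle_time_hours"] > SLA_HOURS),
        "paid_ratio": sum(1 for e in events if e["outcome"] == "paid") / len(events),
    }

print(scoreboard(events))
```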

Where does BPM fit?
The adoption of BPM by insurance providers and brokers involves a major shift in the way their organizations will operate. BPM technology and best practices and methodologies associated with it cannot be assigned solely to the IT staff. The organization’s leadership team must demonstrate a commitment to BPM and its benefits in order to effect change and adoption throughout the organization. Change is never easy, but with BPM, the benefits can be easily demonstrated to build momentum throughout the organization.
First and foremost, organizational leadership and business managers must take ownership of the business processes that support the company and their specific organizational groups. These organizational groups are responsible for the performance of the company.
BPM enables them to start small, achieve outstanding process results and optimizations in a pilot project, and then apply the technology to other projects. In fact, deploying a process “as is” in a BPMS can – without making any other changes – lead to a twelve percent productivity improvement. This significant gain just sets the stage for further improvement. The ease with which an organization can deploy a new process or update an existing process is a key differentiator in a BPM suite.
A BPM suite that offers a shared process repository will enable all groups within an organization to leverage the process successes that have already paved the way for BPM adoption. In addition, it is essential that insurance organizations adopting BPM employ a more iterative approach to the development and delivery of process applications. Because processes change so frequently and because new requirements emerge as process improvement expands across organizational boundaries, an iterative development approach has proven to be the most successful model for delivering process applications.
Of course, an insurance IT department must be willing and able to integrate and support the BPM technology. This is simplified by the fact that most leading BPM solutions are themselves service-oriented and fit into a Service Oriented Architecture (SOA) seamlessly. In fact, BPM implementations are often the leading “consumers” of the services made available by SOA initiatives – providing concrete business value and impact. Furthermore, the Object Management Group (OMG) is actively driving the definition and adoption of industry accepted standards for all aspects of BPM functions. This eases the IT adoption of the technology by increasing the interoperability of your processes as well as the portability of technology assets. For companies already using process improvement methodologies like Six Sigma or LEAN, a BPM suite adds new measurement and control capabilities that help scale the application of process improvement methodologies across the entire organization.
Insurance organizations that have been successful with making BPM an integral part of their way of doing business have often decided to create BPM Centers of Excellence (COE). At inception, the COE may have been part of the IT organization, but as the enterprise evolved into a more process-driven entity, the COE became a more structured group of individuals that could contribute to BPM projects for the entire organization.
Gartner reports common themes of COE charters to include:

  • Streamline internal and external business processes
  • Maintain control and accountability
  • Provide end-to-end visibility
  • Increase automation

BPM-related services that the COE can provide to the organization include:

  • Coaching and facilitating
  • Promoting best practices
  • Delivering process training and education
  • Maintaining a business process knowledge base

Regardless of how an organization decides to implement BPM, it is important to build momentum by making process successes visible to all levels of the organization. Groups and individuals in the organization will become aware of contributions they can make to the organization by leveraging BPM to optimize their business processes.
Typical insurance process challenges
All insurance companies, regardless of the market segment they serve, share common processes, including business and market development, product development and maintenance, product promotion, and distribution. The processes that would most benefit from BPM, however, vary by segment and according to product types offered by the company.
Accordingly, each market segment has different processes that they would like to improve. The priority assigned to improvement efforts may be based on transaction volume, complexity of work, error rates or overall client pain. Examples of business processes that are prime candidates for business process improvement, by market segment, include:

  • Life. Application submission, underwriting, policy issuance (new business), inquiry management, call center services, internet services (customer services), contract maintenance, policy loans (maintain contracts), billing, payment processing (collect premium), agent commission management (create/maintain distribution system) – prioritized by individual and group products (respectively)
  • Healthcare. Manual receipt processing, claims data entry, claims manual adjudication, payment or denial, auto-adjudication, pend management (claim management), recovery of overpayment, claim adjustments/refunds/voids, subrogation (claim adjustment), group processing, membership processing, premium billings, billing/payment reconciliation (membership), claim status, verification of coverage, verification of benefits (customer services) – prioritized by group, managed care (HMO), individual, indemnity and dental products (respectively)
  • Property & Casualty. First notice of loss, analyze coverage, conclude claims (adjudicate claims), submit application, underwriting, policy issuance (issue contracts), endorse policies

Success stories
Because of the variety, volume and mixed complexity of insurance-related business processes, opportunities for process improvement abound. Many insurance companies have adopted BPM suites to create competitive advantages in their businesses. While the areas in which they use such suites are varied, the results are predictable: process efficiency that provides quick and measurable ROI, outstanding visibility into the operational aspects of the processes, enabling further optimization and the agility to quickly identify and change business processes to stay ahead of the competition.
Xbridge, the United Kingdom’s leading online insurance and finance broker, turned to BPM to adapt to the rapid growth the company was experiencing. A small company of a little more than 120 employees, Xbridge competes and usually wins against bigger names in the industry. Unlike large organizations that implement BPM on top of legacy systems, Xbridge brought in BPM to automate its existing manual process for handling and responding to leads.
Three years ago, Xbridge managed more than 1,500 inquiries a month in the company’s call center of six employees, which provides an alternative to potential customers who prefer to work with a person instead of online. Conventional wisdom at the time implied that every inquiry was a potential lead and should receive a personal response. However, within a week of automating the process with BPM, they discovered that was not the case. Today Xbridge manages more than 20,000 inquiries a month with a staff of 60 employees in their call center. With automation, the response to inquiries has improved greatly; in addition, Xbridge is able to better qualify inquiries to determine which merit a personal response, based on their ability to offer the prospective customer an appropriate product.
For another Fortune 500 insurance provider on the opposite end of the size spectrum, invoice reconciliation posed a process management challenge. Of the approximately 500,000 monthly invoices sent to customers, up to 30 percent would be disputed in some way. The challenge for the reconciliation team was to resolve each dispute before the next billing cycle, typically a window of less than 30 days. If a dispute was not resolved by the next bill, the customer frequently disputed it again the following month, resulting in wasted effort and customer frustration.
By leveraging BPM, an invoice reconciliation process was developed and deployed in just 90 days. The initial deployment is well on its way to reducing by 80 percent the number of full-time equivalents (FTE) required to handle paper invoices. The BPM system provided the platform that enabled the process design required to automate task assignment, perform time and value-based prioritization and provide proactive notification and interactive approval controls, all in an integrated working environment.
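As a rough illustration of the time- and value-based prioritization described above (the field names, weights, and dates are invented for the example, not the provider's actual logic), the sketch below scores open disputes so that high-dollar items nearing the next billing cycle rise to the top of the work queue.

```python
from datetime import date

# Hypothetical open invoice disputes awaiting reconciliation
disputes = [
    {"invoice": "A-1001", "amount": 12_500.00, "next_bill_date": date(2009, 7, 1)},
    {"invoice": "A-1002", "amount": 800.00,    "next_bill_date": date(2009, 6, 20)},
    {"invoice": "A-1003", "amount": 4_300.00,  "next_bill_date": date(2009, 6, 22)},
]

def priority(dispute, today=date(2009, 6, 15)):
    """Higher score = work it sooner: large amounts and near-term billing dates dominate."""
    days_left = (dispute["next_bill_date"] - today).days
    urgency = max(0, 30 - days_left)      # more urgent as the 30-day window closes
    return dispute["amount"] * 0.001 + urgency * 2

work_queue = sorted(disputes, key=priority, reverse=True)
for d in work_queue:
    print(d["invoice"], round(priority(d), 1))
```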
The key takeaway from these examples is this: in a mature market like insurance, no process is off limits when evaluating the improvements that can be introduced by a BPM suite. Improvements that increase the efficiency, effectiveness and agility of existing processes enable insurance providers to outperform their competitors in both the cost to provide products and services and by offering outstanding service to both agents and end consumers.


Wayne Snell is Senior Director, Marketing for Lombardi Software, developers of the Teamworks Business Process Management Suite. Wayne manages all aspects of Lombardi’s public and customer communications. His more than twenty years of experience include senior-level marketing, product management and technical implementation positions with BEA Systems (now part of Oracle), start-up services provider Symphion, Viasoft (now owned by Allen Systems Group), and Computer Associates.

P&C underwriting automation: It’s time to optimize and modernize

Background
The property & casualty insurance industry continues to face challenging market conditions.  Premium rates continue to drop while at the same time the economic slump results in exposure basis reductions.  In the face of this premium shrinkage, carriers are trying to hold the line on expenses even as they strive for higher submission and policy counts to keep premium revenue up. Agents who face their own revenue pressures are now shopping more risks around and demand greater ease of doing business from their carriers. At the same time, underwriters are under constant pressure to improve underwriting quality and discipline. Through it all, internal processes are cumbersome, key systems are inflexible, and any changes involve major commitments of people, time, and money with uncertain results.
Challenging times indeed!  I’m not fond of “perfect storm” analogies, but if you feel like George Clooney trying to get his fishing boat up over that wave, or Mark Wahlberg at the end, stretched out in his survival suit one hundred miles from land, we need to talk!
It is time to modernize and optimize your underwriting processes, even in the face of challenging times.  There are technologies and methods emerging that can do all kinds of interesting things, but before selecting the technology, we have to figure out what our new process should be.  So let us consider what carriers really want their underwriting process to be, and then look at what technologies can get us there.
The carriers speak out
We conducted three separate research studies in which we surveyed commercial carrier CEOs and senior managers for their input on pain points, emerging technologies and underwriting management systems.  Let us share with you some of our key findings.
1. Strategic Technology Investments to Combat the Soft Market – A Survey of Commercial Insurance Executives (Conducted by The Ward Group)
Meeting technology expectations of agents and employees is significant and often overlooked.  Beyond the profit and loss improvements that technology investments are expected to deliver, there is a growing expectation among agents and employees, especially among younger professionals, that technology should be easy to use, friendly, and cutting-edge.
In this survey conducted by The Ward Group, commercial carrier CEOs were asked ten questions about technology implementation, how technology helps them compete against other insurance companies, and the use of technology for underwriting activities.
The findings clearly show that technology is recognized as a powerful competitive weapon.  Eighty-five percent of executives polled indicated that technology can play a “significant role” or a “more than average role” in their companies’ ability to compete against other carriers.
Additional benefits that these executives expected from technology investments and a modernized underwriting system were:

  • Improved underwriting productivity and reduced underwriting expense
  • Reduced loss ratio
  • Ease of doing business
  • Better individual risk selection and pricing
  • Better understanding of the entire book of business
  • Streamlined processes and reduction of expenses
  • Meeting expectations of agents and employees

The survey participants also provided, in their own words, what they believe are the most important ways to implement new systems or to invest in new technologies that will help in a soft market:

  • “Technology is key to accomplishing underwriting and processing more efficiently….”
  • “Make it quick and easy for the agent to do business and they are more apt to use your products in a soft market.”
  • “If agents have to rekey to do business with us, they will place the business elsewhere.”
  • “Automating underwriting rules will speed up policy processing and shorten turnaround time.”
  • “Quickly understand at what price level a risk can be written and still make a profit.”
  • “New systems and technologies…allow underwriters more time to review the risk and make more qualified underwriting decisions.”
  • “Improved efficiencies give underwriters the opportunity to review more submissions.”
  • “Technology can differentiate a company from competitors.”

2. Mid-Tier Carrier CEO Study (conducted by Phelon Group)
When surveyed about pain points, the predominant concern for mid-market P&C carriers (48%) is how to get profitable business on the books in the softening market.  Executives recognize that their current underwriting processes are grossly inefficient, partly due to processes based on outdated legacy systems.  However, they believe that their intellectual capital lies in their existing systems and analytics, and they are unwilling to walk away from that competitive advantage.  Carriers are looking for ways to leverage this asset and to further codify their knowledge to get profitable business on the books.
Regarding underwriting challenges, executives chose the following priorities:

  • Improving ease of doing business with agents (33%)
  • Automation of underwriting (30%)
  • Straight-through processing (30%)
  • Integration with predictive analytics systems
  • Management visibility
  • Sharing of best practices

The participants shared with us some of their perspectives:

  • “Getting the business on the books and pricing properly with respect to risk is my main concern. Profitability is key.”
  • “Our legacy systems create huge inefficiencies and the bodies we need to process the underwriting are too heavy.”
  • “It is hard to establish true and profitable pricing in a softening market…We need better tools to analyze trends and create pricing that accurately reflects the market.”
  • “We looked to the market for a 3rd party solution, but we could not find one that met our customization requirements. Whatever we would choose would have to integrate with our system to leverage the investments we have already made in customizing our policies and pricing.”
  • “We operate in a highly competitive market and need to make it easier to work with our agents.”

3. Magic Wand Survey (Conducted at NAMIC Commercial Lines UW Seminar)
Earlier this year, we asked senior and underwriting managers, “If you had a magic wand, what top benefits would you want from an underwriting automation system?”

  • The overwhelming winner was increased efficiency and productivity. It received the most votes overall and the most first-place votes.
  • Tied for second place were both speed & agility and ease of use (for underwriters). Managers are looking for user-friendly, intuitive systems that will make it easy to do their jobs without adding complexity or requiring extensive training. At the same time, they are looking for agility, the ability to change their rules, data, and processes quickly to respond to changing market conditions.
  • The fourth most popular response was ease of doing business with agents.

These four responses all address the need for better workflow and systems in the underwriting process. Additionally, our surveyor shared with us some insightful comments.

  • “We want increased premium capacity with the same number of staff, for profitable growth.”
  • “I want to reduce the number of people handling a submission and cut down on the back-and-forth questions between underwriters and agents.”
  • “We want to improve the customer experience.”
  • “The ideal system would allow our customer – the agents – to interact with our associates and view the system together.  This would allow us to provide better service.”

The remaining responses included integration of disparate systems into a unified underwriting desktop, management visibility, discipline & consistency, scaling the business, and predictive modeling & analytics. (It is interesting to note that all of these items received some first and second place votes.)
How much we’ve spent, how little we’ve changed!
A few months ago I was involved in a discussion about the challenges of tracking submission activity and turnaround times. It reminded me of how little we have accomplished over the last 25 years. The question had to do with what to use as the received date/time for a submission – when it was received in the mailroom/imaging station, when the underwriting assistant got it, or when the underwriter got it.  I realized that I had that same discussion with my business users 25 years ago. While certain steps had been automated in and of themselves, we still have basically the same processes, the same steps, the same people!
With today’s capabilities, a submission could be received through upload or agent portal entry (including supplemental data and attachments). Any necessary web services could already have been run in accordance with carrier rules (e.g., address scrubbing, geo-coding, financials, etc.) and attached to the account.  It could immediately appear on the assistant’s or underwriter’s work queue.  Submission tracking from that point could auto-magically be done by the system and available in real-time through a dashboard.
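A minimal sketch of that kind of intake flow, with the service names and data fields assumed for illustration: a submission arrives electronically, is enriched by whatever services the carrier's rules call for, and lands on a work queue with its tracking timestamp recorded by the system rather than by hand.

```python
from datetime import datetime

def scrub_address(submission):     # stand-ins for real enrichment web services
    submission["address_verified"] = True
def geo_code(submission):
    submission["geo"] = (42.44, -71.23)
def pull_financials(submission):
    submission["financial_score"] = 72

ENRICHMENT_RULES = {                # which services run, per carrier rules (assumed)
    "commercial_property": [scrub_address, geo_code, pull_financials],
}

work_queue = []

def receive_submission(submission):
    """Enrich the submission and queue it; tracking starts automatically."""
    submission["received_at"] = datetime.now().isoformat()
    for service in ENRICHMENT_RULES.get(submission["line"], []):
        service(submission)
    work_queue.append(submission)   # appears on the underwriter's queue immediately
    return submission

print(receive_submission({"line": "commercial_property", "insured": "Acme Mfg"}))
```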
But that is not where most carriers are today. Generally, we have automated various individual steps, but the overall workflow is still a manually-controlled one, performed by the mailroom, imaging, clerical, and underwriting staff.
For example, we’ve spent millions of dollars to go paperless, but in many companies, underwriters still are pulling up electronic images and re-typing data into another system, just like we used to do with mailed-in or faxed-in paper. This is wonderful document management and forwarding, but is still the same old workflow. In fact, underwriting team members may be re-typing information into their rating engine and/or quoting system. They are probably re-typing into multiple web services like D&B, Choicepoint, geo-coding, engineering survey, loss control vendors, etc. And maybe they are still typing to get loss history, customer id’s, submission file labels, and who knows what else.  (Take a little test: How often does your entire staff enter, type, or write the insured name, whether it is in a system, on a letter or form, on a label, in a web service, etc.?  Once, twice, four times, seven times, ten times, more?)
How many of us still pass paper from one person to the other – underwriter to assistant, rater, or referral underwriter? How many of us still take hours or days to acknowledge receipt of a submission, to collect the supplemental data needed to underwrite it, to generate a quote, to get the agent’s feedback? How long does it normally take to resolve a 30-second issue between the underwriter and the agent or the underwriter and his/her supervisor? How long does it take to send, receive, research, make a decision, and reply to a referral?  On the other hand, how long would it take if all the information were presented to the underwriter and manager in context, a click or two away, and the transmission was instant?
And that’s just what we do to ourselves.  How does an agent feel about how we help him/her provide service to their customer?
Our business processes are constrained by our old systems and our old patterns. Our systems treat underwriting as a data entry process for policy administration instead of a unique workflow with its own set of players, sources of information, processes, and rules.  And our ideas tend to be limited to this view of what is possible.
We need to break free of this mindset – to be able to see what is possible. Let’s start with a list of workflow “don’ts”, things that underwriters and agents shouldn’t have to do or use anymore:

  • Tracking sheets
  • Typing in data from a paper or from one system to the other
  • Waiting for a paper file to be pulled or received
  • Having to close one submission in the system to be able to access another
  • Re-typing data into another system to get a loss control, loss history, credit report, MVR, VIN validation, etc.
  • Searching through the underwriting manual or old emails to find that company directive on writing xxx LOB in yyy territory
  • Agent entering 5 screens of data only to find out you don’t write that class at that size in that state
  • Waiting two hours for an agent/underwriter to get back in the office to check their files on something
  • Losing the account because someone misplaced the submission paperwork
  • Finding out six months later that an agent/underwriter shouldn’t have quoted that account because it was outside your appetite or their authority level
  • Collecting information, printouts, separate documents, and the underwriter’s notes to pass it on to the referral underwriter
  • Reconciling your agent portal quote with your backend rating quote

Now let’s review what types of emerging technologies are ready for prime-time and then think about how we can do things better.
Emerging technologies
Over the last several years, there have been many exciting advances in technology and what you can do within the context of business operations. By and large, most of these are still on the wish-list for carriers and, for that matter, for insurance systems vendors. But these emerging technologies provide the new foundation to break free of the older system/technology constraints that have kept us stuck in our old workflows.
Service-Oriented Architecture
One of the most basic innovations is Service-Oriented Architecture (SOA). SOA breaks application systems into separate “services” that can receive input parameters, run, and return their result set to whoever invoked them. Each service acts like a building block that can be used and re-used in various contexts, like a Lego block. This allows applications to be assembled from appropriate services: You’d like to check the customer’s financial status? Just plug in a Dun & Bradstreet report. SOA provides a more flexible and more sustainable way to set up your enterprise applications.
Note that retrofitting existing applications into a service-oriented architecture can be challenging. Some major subsystems (e.g., rating, policy issuance) may be able to be broken out into services to let the legacy system play with newer SOA applications, but a full rework of legacy applications is rarely practical.
However, SOA is clearly the best practice now and all new applications, whether built in-house or acquired from solution providers, should be service-oriented architecture solutions.
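To illustrate the building-block idea (the service names, payloads, and rating factors below are invented for the example and are not real D&B or carrier interfaces), the sketch composes a simple quote flow from independent services, each of which could be reused or replaced without rewriting the application.

```python
# Each function stands in for an independent, reusable service behind a web-service interface.
def financial_status_service(company_name):
    # in a real SOA this would call an external provider; here it returns a canned result
    return {"company": company_name, "credit_tier": "B"}

def rating_service(class_code, exposure, credit_tier):
    base = {"5403": 1.10, "8810": 0.35}.get(class_code, 1.0)
    modifier = {"A": 0.95, "B": 1.00, "C": 1.10}[credit_tier]
    return round(base * exposure * modifier, 2)

def quote_process(company_name, class_code, exposure):
    """The composite application: assembled from services rather than hard-wired logic."""
    financials = financial_status_service(company_name)
    premium = rating_service(class_code, exposure, financials["credit_tier"])
    return {"company": company_name, "premium": premium}

print(quote_process("Acme Mfg", "5403", 250_000))
```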
Web 2.0 & rich internet applications
Web 2.0 or Rich Internet Applications are generic terms that refer to the use of technologies and methods to bring new levels of interactivity and real-time behavior into browser-based applications.  Examples are the use of blogs, wikis, chat, social-networking, photo/video, and voice.
What’s new is not so much the technical capabilities themselves, but the new forms of mass use that have sprung up as internet access has expanded past critical mass. People have been sharing files and chatting over the internet for decades.  But now it is so common and standard that internet applications are being built around these capabilities, with documents and streaming video and chatting as a part of the application interaction. Witness Facebook, dating services and even the NBC Olympics website.
Similarly, Web 2.0 offers new options for how we do business in the insurance space. We can incorporate chat, real-time notes, flexible file/video attachments wherever they can improve the quality and/or speed of the process.
Configurability
Configurability refers to the ability to specify or change details of a system without having to touch the underlying base code of the system. The concept is not new – vendors have talked about being configurable for a couple of decades.  But both the breadth and the ease of configuration have improved dramatically in the last several years.
In the past, configurability usually referred to the ability to redefine the values of a few fields to fit a carrier’s specific data requirements, or a control table that would direct processing between a few pre-defined paths. But now you have the ability to truly define or redefine any and all the data elements, values, supplemental data, screens and screenflow, edits, risk selection and appetite rules, underwriting guidelines and best practices, straight-thru processing, assignments, referrals, users, permissions, letters of authority, and the internal and external services you want to perform. Before, you could tweak your hard-coded process with a few variations.  Now you can configure virtually your whole process for each line of business, geography, distribution channel, and even each individual.
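A toy example of the difference, with the configuration content invented for illustration: the carrier's appetite by line and state lives in a configuration structure (in practice maintained through the vendor's configuration tools), so changing the appetite means changing configuration, not base code.

```python
# Configuration, not base code: in a real suite this would be maintained through
# the vendor's configuration tools rather than edited by programmers.
APPETITE_CONFIG = {
    ("workers_comp", "TX"): {"max_exposure": 5_000_000, "excluded_classes": {"7219"}},
    ("workers_comp", "CA"): {"max_exposure": 2_000_000, "excluded_classes": {"7219", "5506"}},
}

def in_appetite(line, state, class_code, exposure):
    """The application logic never changes; only the configuration does."""
    rule = APPETITE_CONFIG.get((line, state))
    if rule is None:
        return False                  # no appetite defined for this line and state
    return exposure <= rule["max_exposure"] and class_code not in rule["excluded_classes"]

print(in_appetite("workers_comp", "TX", "8810", 1_000_000))   # True
print(in_appetite("workers_comp", "CA", "5506", 500_000))     # False: excluded class
```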
Configuration has gotten more powerful and much easier.  In the past, the “configuration” was done by the vendor’s programmers, either in native programming code or through a proprietary pseudo-code.  Today, the advanced solutions in the marketplace offer point-and-click configuration tools that allow business system analysts or developers to specify what should happen.
Configurability is another best practice that carriers should insist on as they look at new solutions.  (But make sure you get to see and try it – everyone says they are configurable, but what they actually offer varies widely.)
Rules & workflow engines
Rules and workflow engines allow the definition of specific business rules and/or process workflows separate from the system’s data and screen handling.
This segregation of the rules and/or process steps allows for easier modification of the rules and/or process without having to change the underlying base code. For instance, if the carrier decides to tighten their underwriting rules, change their assignment rules, or tweak their scheduled credit ranges for a specific territory and class, the change may be made to the appropriate rule or workflow, and the application will automatically absorb that change every place the application uses that rule or workflow.
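A minimal sketch of that separation, with the rule values invented for the example: the schedule credit range lives in a rule definition outside the application logic, so a change to the rule is picked up everywhere the application evaluates it.

```python
# Rule definitions held outside the application logic (for example, in a shared rules repository).
SCHEDULE_CREDIT_RANGES = {
    ("TX", "5403"): (-0.10, 0.10),    # territory/class-specific range
    "default": (-0.25, 0.25),
}

def credit_within_range(state, class_code, requested_credit):
    """Every screen, referral check, or audit that needs this rule evaluates the same definition."""
    low, high = SCHEDULE_CREDIT_RANGES.get((state, class_code), SCHEDULE_CREDIT_RANGES["default"])
    return low <= requested_credit <= high

print(credit_within_range("TX", "5403", 0.08))   # True under the original +/-10% range
# Tightening the rule for this territory and class is a change to the definition only:
SCHEDULE_CREDIT_RANGES[("TX", "5403")] = (-0.05, 0.05)
print(credit_within_range("TX", "5403", 0.08))   # now False everywhere the rule is used
```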
In addition, these engines permit separate and more effective management and facilitate re-use of the business rules and workflows across the carrier’s entire business operation. (Rules and workflow engines are different from each other, though in some installations they overlap, but are similar in how they relate to the business application.)
Separate external rules and workflow engines have been available for many years. But, in reality, their effectiveness in insurance applications has often been limited. Traditionally they have been toolkits with little or no applicable insurance content out-of-the-box.  As a result, you would have to build a new application from scratch, or you would have to integrate the external rules/workflow into your existing legacy systems. Either approach involves significant cost and time. In addition, often you would find that you can’t efficiently invoke the rule/workflow engine everywhere you would like without prohibitive performance overhead (e.g., invoking a rules engine at the field level).
In recent years, however, modern configurable solutions are increasingly emerging with embedded rules and workflow capabilities.  These products offer the necessary level of rule and workflow management while also providing standard insurance rules and workflow out-of-the-box, allowing configuration of company-specific rules and practices, and performing efficiently at any level in the application (e.g., pre-screening, field-level, screen-level, assignment, quote, referral, etc.). This can enable the carrier to implement a modern solution with configurable, embedded rules and workflow in a much more reasonable time and cost.
Underwriting 2.0 – the platform of the future
Okay, so that’s the technology with all of its marketing glory. But let’s be real. What can these emerging technologies do for our process?  Can they bring all our islands of automation into a coherent, efficient underwriting process? How will they really improve productivity and quality for the underwriter and the agent? What can the modern process be like?
Above we reviewed some of the “don’ts” that have plagued our workflows for the last few decades. Let’s start looking forward and defining some “do’s” as principles for our future underwriting process.

  • Everything you need to see in one place (not in different systems, email, the fax room, your in-basket, the document management system, etc.).
  • Everything in 1-3 clicks – everything! (account, submission/policy header, application, correspondence, attached files, web service reports, external system data (loss history, loss control, payment history), underwriting worksheets, rating worksheets, predictive analytics model results, rating, rating factors, quotes, ….)
  • Everything is accessed and updated real-time.
  • Everyone is notified of everything relevant, immediately.
  • Underwriters and/or agents can work with each other, not at each other, in one process (notes, chat, shared view/update, instant update and communication).
  • Multiple accounts open at once (a click away).
  • Straight-thru processing for the clear winners and losers and, for the rest, everything set up in one desktop for the underwriter.
  • Automatic advice and reminders for the underwriter based on account characteristics or activity.
  • Intuitive, easy-to-learn, easy-to-use  (insurance terminology, no Save buttons, even a configurator designed for real insurance people and processes).
  • No re-entry – ever.
  • Configurability to keep the system current with the business needs and opportunities.

These are all possible today. The technologies are available now, and people are using them to do exactly these kinds of things (though not always in insurance). And they don’t require tens of millions of dollars and years of waiting. The first step is realizing that this is the business process you want.
What you need is a single platform, an integrated desktop, a control station for the underwriter and the agent that has all the necessary steps and resources right there. Tasks that don’t require human intervention happen automatically ahead of time. Tasks that require professional judgment or decision are automatically queued up for the underwriter and agent – with all the appropriate research, background information, and pre-analysis needed to make the best possible decision available at the click of a mouse. This new desktop and process is integrated with and leverages the carrier’s existing systems, data, rating, forms, models, and knowledge resources. Communication and collaboration with others is instantaneous and part of the account record. The platform and the process are intuitive for underwriters and agents. And all aspects of the desktop and the process can be adjusted, added to, or redirected as fast as the market changes.
Let’s now explore in more detail how this type of platform works and what it delivers.
Agent productivity & ease of doing business
Underwriting 2.0:  The agent can upload a submission from their agency management system or can easily enter a submission from scratch. The entry process and screens are intuitive so agent training and errors are minimal. The agent desktop provides quick pre-qualification and risk appetite feedback so the agent doesn’t waste time submitting risks that the carrier is not interested in. Supplemental data is prompted for at entry while the submission is still in front of the agent. Electronic documents, photographs, loss runs, and notes can quickly be added to the submission as a part of entry. When the agent submits the risk, it goes directly to the appropriate assistant’s or underwriter’s desktop and the receipt is confirmed to the agent instantaneously. The entire process of submitting a risk, including supplemental data and attachments, only takes 5 – 15 minutes from the agent’s desktop to the underwriter’s desktop.
Quotes (including multiple quotes and quote options), agent responses, re-quotes, bind requests, and binders are prepared and delivered in real-time. Now the agent and underwriter can work through a rush quote much more efficiently and accurately, collaborating and communicating together on the same system.
For example, the new platform utilizes immediate alerts and notifications, notes, live chat (like Instant Messenger), email correspondence, shared viewing and update of the account.  This enables the agent and underwriter to resolve questions and move the account along as fast as possible without time-wasting email, fax, and voicemail delays and constant account pick-up/put-downs and handoffs.
The Result:  The agent wants to bring business to you because he/she can get a confirmation, quote, and binder from you faster and more efficiently than with any other carrier. Both sides benefit and help each other succeed.
Underwriter productivity
Underwriting 2.0:  The system automatically prepares the risk for the underwriter’s consideration. Leveraging its SOA foundation, the platform can pre-assemble carrier system data (e.g., loss history, loss control, payment history), web service data (e.g., MVR, Xmod, financial, geo-code, etc.), and predictive analytics results, or it can allow the underwriter to select what information is appropriate for this risk.  In addition, the desktop analyzes the submission either to highlight risk conditions or characteristics for the underwriter’s attention or to require a referral based on the carrier’s underwriting best practices, knowledge base, and the underwriter’s letter of authority.  Given all of this information about the account, and using its embedded rules capability, the desktop advises the underwriter or automatically drives the appropriate processing for the risk.  The platform can screen out clear winners and losers for straight-through processing before the underwriter has spent any time on the risk, present the remaining accounts to the underwriter with best-practice advice, or flag an account to require a referral.
These features allow the underwriter to spend more of his/her time underwriting, concentrating on the risk characteristics and the appropriate price. Everything the underwriter needs is on the desktop, just clicks away –  the complete application, attachments, notes and chat, external web reports, underwriting guidelines and best practice checklists, rating and pricing, quote and bind capabilities, issuance, endorsements, cancellations, renewals, and dashboard visibility.
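A simplified sketch of that triage step, with the score thresholds and letter-of-authority limit invented for the example: clear winners go straight through, clear losers are declined automatically, and everything else is routed to an underwriter or referred.

```python
def triage(submission, underwriter_authority_limit=1_000_000):
    """Route a prepared submission: straight-through, decline, underwrite, or refer."""
    score = submission["risk_score"]          # assembled from carrier data, web services, models
    premium = submission["estimated_premium"]

    if score >= 90 and premium <= underwriter_authority_limit:
        return "straight_through_bind"        # clear winner, no underwriter time spent
    if score < 40:
        return "auto_decline"                 # clear loser
    if premium > underwriter_authority_limit:
        return "refer_to_senior_underwriter"  # exceeds the letter of authority
    return "underwriter_review"               # present with best-practice advice

print(triage({"risk_score": 93, "estimated_premium": 400_000}))
print(triage({"risk_score": 65, "estimated_premium": 2_500_000}))
```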
For example, the underwriter prepares any worksheet items that have not been prefilled and generates one or more quote options and proposals in real-time.  If a referral is required, the full account and all of the backup information can instantly be placed in the referral underwriter’s queue for their review and decision.
Once the quote is released to the agent, an alert pops up on the agent’s desktop and an email is sent to the agent with the quote attached to notify them immediately.  The agent and underwriter can now collaborate through chat or notes and can share views and updates of the risk.  This helps the underwriter to instantly respond and modify the quote if appropriate, lets the agent accept the right quote, and lets the underwriter close the business in real-time.
Finally, when the underwriting process is completed, all the policy information and documents are passed to the carrier’s existing systems of record so the existing processes and systems are not disrupted. Throughout the entire underwriting process, all information and actions are saved in a detailed audit trail for reference by the underwriter, the referral underwriter, a claims adjuster, loss control, billing, and auditors.
The Result: The underwriter spends more time underwriting, handles more quotes, and writes more business. Setup activity is automatic, incorporation of web data and carrier knowledge happens in real-time, communication is instantaneous, and the agent gets their response as quickly as possible. Ultimately, agents bring you more business because you get them an answer first.
Underwriting quality/discipline
Underwriting 2.0: The underwriting desktop needs to enforce quality as well as productivity. Quality underwriting is the key to an insurance carrier’s profitability. This platform will use its embedded rules engine and the external data from web services and the carrier’s systems to guide and enforce best practices throughout the underwriting process.  Every step of the process is assisted by contextual business rules that advise the underwriter and/or drive the process – the initial screening of the risk, the analysis of the risk characteristics, the knowledge-based reminders, assigning appropriate tiering/rating/pricing factors, checking electronic letters of authority, and automatic referral flags.
The Result: Quality is built right into the process. Underwriters are advised and directed in accordance with the carrier’s guidelines and best practices every step of the way.  Rather than relying on the underwriter to find and use paper- or email-based directives and after-the-fact audits, or forcing all risks through a referral process to ensure senior underwriters’ review, the desktop will lead every underwriter through the carrier’s approved risk analysis and pricing regimen. The carrier’s book will be accurate, consistent, and auditable.
Incorporating predictive analytics
Underwriting 2.0: Predictive analytics brings sophisticated analysis into the underwriting process, but only if it is used. Rather than modeling being a separate activity that involves additional work, the new platform will incorporate predictive analytics. The underwriting desktop can then directly apply model results to screen risks out, qualify them for straight-through processing, alert the underwriter to the key risk characteristics, pre-fill rating and pricing factors, and/or mandate referral processing. Having the best information and analysis available lets your underwriters assign the best price – aggressive pricing for the winning accounts, and defensive pricing for the marginal accounts.
The Result:  Incorporating predictive analytics into the underwriting process helps the underwriter write better business at the best price.  Precision pricing on top of informed risk selection and underwriting quality will produce the most profitable book of business.
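One hedged illustration of how model output might be applied inside the desktop (the score bands and factors are invented for the example): the same model result can qualify a risk for straight-through processing, pre-fill a suggested pricing factor, or mandate a referral.

```python
def apply_model_result(model_score):
    """Translate a predictive model score into concrete desktop actions."""
    if model_score >= 0.80:
        return {"action": "qualify_for_stp", "suggested_pricing_factor": 0.95}
    if model_score >= 0.50:
        return {"action": "underwriter_review",
                "suggested_pricing_factor": 1.00,
                "highlight": "model flags moderate loss propensity"}
    return {"action": "mandatory_referral", "suggested_pricing_factor": 1.15}

print(apply_model_result(0.87))
print(apply_model_result(0.42))
```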
Actionable knowledge management
Underwriting 2.0:  So often, a carrier’s underwriting knowledge and experience is locked up in senior underwriters’ heads or buried in underwriting manuals and email archives. The new platform leverages this intellectual capital within the underwriting process. Knowledge items captured and presented within the context of specific risk criteria become actionable – suggesting attention to specific characteristics, requiring specific action, enforcing a referral, or performing an automated function.  Every underwriter will receive the benefit of the carrier’s best underwriters’ guidance and best practices as they are underwriting an account.
The Result:  Retaining the knowledge of our senior underwriters and training our junior underwriters is one of the major challenges in our industry today. Capturing and presenting underwriting knowledge through the underwriting desktop protects and leverages this most valuable asset, giving your junior underwriters the benefit of your best underwriters’ wisdom and experience where it matters most, right within the underwriting of the account. Actionable knowledge management will improve the quality of the book of business, preserve the carrier’s knowledge assets, and enable easier training of junior underwriters.
Visibility
Underwriting 2.0:  In today’s insurance world, everyone needs to know how they are doing against their goals. The new platform will track and display everything that has been processed through a real-time dashboard. Both individual underwriters and underwriting management have detailed, easy-to-read, and configurable displays of key metrics such as item and premium counts, ratios, and turnaround time. Further drill-down into those metrics is also available with a few clicks of the mouse.
The Result: Underwriters and managers now have real-time statistics that reflect what is being processed and written, enabling them to recognize and respond to their own progress as well as market changes and opportunities.
Configurability
Underwriting 2.0: Even while the new streamlined process is being laid out, changes are inevitable. As such, the new platform can’t be a rigid solution that requires costly and time-consuming intervention to manage any such changes. It needs to be able to incorporate new information, new rules, new knowledge, and new services with ease – through simple configuration – in order to keep the underwriting process current.
A truly configurable system enables changes to data, screens, edits, rules, documents, and screenflow to be implemented quickly and accurately by business analysts with only modest technical skills. When the market changes, the carrier’s appetite or capacity shifts, or new opportunities arise, the underwriting desktop can be changed on the fly to match.
The Result:  The ability to respond quickly to market changes and to direct your products and underwriting attention toward new opportunities before your competition provides a clear competitive advantage.
Modernize, optimize, transform – start now
Can you underwrite business as efficiently and effectively as you think you should be able to?  Or, are you constrained by your existing processes and systems?
Are your underwriters spending most of their time underwriting?  Or are they chasing information and doing an hour of setup and data entry for every half-hour of true underwriting?
Do your agents consider you their carrier-of-choice because you make their job easier and help them succeed?  Or do they think you are hard to do business with, so you have to constantly press them for their quality submissions?
Are you leveraging your underwriting knowledge and best practices to write the best business at the best price?  Or are you just doing pretty well with what you have to work with? Do you even really know?
Modernizing and optimizing your process can transform your business.

  • Because you help agents to be more productive in getting answers to their customers, more business will come in.
  • Underwriters will be able to focus on underwriting and handle more submissions in less time with better quality. Yes, underwriters will be able to write more business – and better business – at the best price.
  • Managers will finally be able to see across all lines of business, react in real-time, and deploy a true enterprise underwriting strategy.

These platforms are all within our reach today, but only if we are willing to transform how we process our business.
Stop looking at the underwriting process as just data entry for the policy administration system – it is a unique business process with a unique set of demands and goals.
Stop investing your energy and resources in small enhancements to the same constraining workflow – tantamount to “paving the cowpaths” – and start thinking differently about how you would process if you had that magic wand.
The best time to modernize and optimize is when it helps you lead, not when you are trying to catch up. The possibilities are here, now. And if you don’t seize them, your competition will. The first step is to define the business process you want. So get started – thinking, talking, planning, and acting.


Edward Gray is the Director of Customer Solutions for FirstBest Systems in Lexington, MA, where he works with customers to develop a shared vision for how an underwriting management system can bring real-world productivity and quality benefits to the carrier’s internal and agency operations. Ed has more than twenty-five years of insurance expertise in Information Technology and Business Operations with carriers and brokers, including roles as CIO, COO, and Senior Vice President of Operations. He has extensive hands-on experience in system and business process architecture and re-engineering in policy administration, claims, billing, reinsurance, accounting, and management reporting areas, so he has seen what does (and doesn’t) deliver real value to the insurance organization. Ed would be happy to hear your thoughts on the underwriting process.

Controlling claims costs: A long look at litigation expenses

Background

Primary general casualty insurers are justifiably concerned with the costs of defending lawsuits against policyholders. Payments to defense attorneys are a measurable percentage of earned premiums, and next to the costs of staff claim personnel, legal fees are the largest segment of loss adjustment expense. The amount of defense costs is particularly significant since these expenses are related to a relatively small portion of total claims. Typically, 20 to 25% of an insurer’s claims are in litigation requiring the use of defense attorneys. Legal defense costs and litigation percentages are even higher for professional liability insurers.
Focusing on the amounts of paid expenses, insurers perpetually seek methods, approaches, and schemes to contain these costs. Litigation management manuals and monitoring reports are published and updated, in some companies, by dedicated personnel. This article presents and assesses an inventory of favored approaches insurers have used over the past half century. The observations and comments are based upon reviews of insurer claim files in the course of a career as a claim professional and as a consultant retained by insurers. Suggestions are provided for the use of time-tested practices that shift non-lawyer work from defense counsel to staff claim personnel.

Financial relationships: Tough to monitor

The amounts of paid and projected payments to defense counsel make plain the costly issue executives face. Those amounts call for decisive action, but that action often begins with declaring a single condition or element to be the cause of the problem. This narrow focus then drives monolithic strategies aimed at the presumed cause.
Some insurers have decided that defense attorneys’ hourly rates are too high and have designed strategies to lower them. These strategies include the use of fixed-fee schedules in which attorneys agree to handle certain types of lawsuits, usually the less complex ones, for agreed-upon prices. A broader variation is the use of an annual retainer wherein a law firm agrees to handle a loosely defined number of lawsuits of all types in return for fixed monthly payments. Another approach is simply to shop around and assign the work to the attorney with the lowest hourly rate; a cheap-labor strategy.
It is difficult for an insurer to monitor the fixed fee or retainer contractual arrangements to determine whether they actually reduce defense costs. Also, for all of the financial arrangement strategies there is a potential loss of quality in terms of the level of defense services provided. This is an acute problem, given the duty of an insurer to defend and the duty the defense attorney has to his client, the policyholder. Cut-rate defenses can backfire into bad faith actions against the insurer and professional liability actions against the defense attorney.
The underlying reason why these strategies typically fail is the fallacy that the hourly rates charged by defense attorneys are too high. Nationwide, insurance defense attorneys charge about $130 per hour, with higher hourly rates often found in big cities. Actually, insurance defense rates are not high, compared with the hourly rates of most other legal practice areas such as work relating to the Securities and Exchange Commission, mergers and acquisitions, labor law, corporate litigation, real estate syndications and domestic litigation. Quite often, the hourly rates attorneys charge insurance companies for defense work are upwards of 40% lower than what they charge insurers for corporate work.
The use of staff employee defense attorneys is a fine extension of the do-it-yourself approach. The hourly cost of staff attorneys, including support staff and overhead, is approximately $80-$90. Therefore, companies enjoy a $40-$50 an hour savings for every hour of defense work shifted from an independent attorney to a staff attorney. This is an excellent method to lower defense costs, but its use is limited to those insurers who have sufficient geographic concentrations of lawsuits to keep a staff attorney busy. To be cost effective, there must be sufficient billable defense work to shift at least 1,900 hours per year from an independent counsel to a staff counsel.
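A quick worked check of the arithmetic above, using the article's own figures as inputs (the midpoint staff cost and the 1,900-hour threshold come from the text; everything else is simple multiplication):

```python
independent_rate = 130      # typical insurance defense hourly rate, per the article
staff_cost_per_hour = 85    # midpoint of the $80-$90 fully loaded staff attorney cost
hours_shifted = 1_900       # minimum annual hours the article cites for cost effectiveness

savings_per_hour = independent_rate - staff_cost_per_hour
annual_savings = savings_per_hour * hours_shifted
print(f"${savings_per_hour}/hour saved, roughly ${annual_savings:,} per year per staff attorney")
# about $45 per hour, or roughly $85,500 per shifted block of 1,900 hours
```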
The use of staff counsel can be undermined by hiring less experienced attorneys who cannot otherwise obtain employment in the market which charges $130 an hour for services. This error occurs as insurers seek to lower the hourly cost of staff counsel operations below a reasonable market value. Because of the difference in competencies, real or perceived, and potential conflict-of-interest considerations, staff attorneys often handle only the routine, less explosive cases. Given the necessary concentration of work, staff counsel can contribute to the reduction in overall defense costs. To work, however, staff attorneys must be competent and experienced to be an equivalent alternative to independent counsel.

Other approaches

A naïve approach, perhaps taken out of frustration, finds insurance companies forming advisory councils with defense attorneys or defense organizations to discuss and design plans to lower defense costs. This approach is doomed. Defense costs are an expense to insurers and revenue to the attorneys. Does anyone really believe that attorneys are interested in determining how they can earn less?
More directly, the “cost of defense” settlement will reduce payments to independent counsel. The notion is to pay, as loss, an amount up to what it would cost in expense to defend a threatened lawsuit. This can absolutely reduce legal expenses, but it will certainly raise loss payments. A perversion of this concept is to assign a settlement value to virtually any asserted claim. In practice this does happen, and its subscribers will defend the concept as financially prudent. Notes to claim files often describe the conclusion of a settlement negotiation as agreeing to pay an amount to avoid the cost of defense. Letters from defense counsel will also suggest a settlement amount to consider paying as the cost of defense. This is wrong since the insurer violates the insuring agreement to pay (only) those sums for which the policyholder is legally liable. It also fosters the notion that the insurer is an easy or liberal payer of claims; a perception that will bring demands to pay something for anything. The appeal of this approach should disappear with the presence and use of staff counsel.

Back-end techniques

In the mid-1990s, responding to pressure to lower legal defense costs, claim executives intensified and developed specific techniques directed toward lowering the billed amounts of defense counsel.
For decades, insurers have published general guidelines setting forth the duties and obligations of defense counsel selected to provide defenses to policyholders. This proper practice grew in scope in many unintended and often negative ways. Over time, the guidelines replaced the case-specific letter of assignment provided by the staff claim handler to the selected defense counsel. (The recommended elements of a proper assignment letter are provided later in this article.) This was an initial step in reducing the affirmative role of insurer claim personnel in the management of litigated files. It fostered the concept of abandoning the file to defense counsel. Insurers recognized this as a bad practice, but few focused on change. Actually, the use of guidelines is sound. It was the replacement of assignment letters, and the often eventual removal of the claim person’s role, that was bad. From defense counsel’s view, their primary duty was to the policyholder. With little or no direction from the claim person, counsel had no choice but to do what they were left to determine was necessary and in the best interest of the defendant policyholder. This evolved into counsel performing non-lawyer work, including taking the lead in gathering documents such as medical records; determining the need for and arranging medical examinations; providing periodic status reports to the claim department; initiating the valuation of claims, including recommendations for the amounts of case reserves; effectively deciding whether and when to try a case to verdict or to settle; conducting negotiations; and essentially handling all aspects of the claim.
The unintended changes in the scope of the role of defense counsel described above resulted in a greater number of billed defense hours and higher defense charges per case. This began the use of general guidelines as a post-billing hammer to adjust downward the number of hours charged. For example, if a billed item was not specifically included in the attorney’s responsibilities as set forth in the guidelines, the charges were deducted and not paid. Often, no one addressed whether the charged work was necessary to the defense of the policyholder, or the fact that no one other than the attorney had elected to obtain the work. This approach marked a deterioration of the working relationship between insurer staff claim handler and defense counsel. This is not a good thing.
To further manage litigation and defense costs, insurers properly required attorneys to provide budgets of estimated costs, typically through discovery and exclusive of trial, which is a sound management idea. In the wrong hands, however, the budget became a substitute for management of the file by insurer staff. The budget also provided an opportunity to automatically disallow any charges that exceeded it. These cuts to bills overlooked the issue of necessary work discussed in the preceding paragraph.
The practice of simply eliminating charges for work already performed is moving the management of legal defense costs toward a health-insurer model. The defense attorney (the provider) submits an itemized bill for services; the insurer unilaterally compares and adjusts the charges against litigation management guidelines and pre-work estimated budgets (the fee schedules); and then sends a check in payment along with a marked and adjusted bill (the explanation of benefits, or EOB). This approach has not proven successful in lowering defense costs. It has, however, eroded the relationship between insurer and defense counsel.
Completing the back-end approach is the creation of staff and vendor auditors of attorney bills. These auditors receive and scrutinize bills to identify variances from guidelines and budgets, along with often subjective determinations of overcharges. The attorney then receives the lowered, audited amount. Some vendors are paid a percentage of the savings. One vendor has confided that all attorneys intentionally over-bill; perhaps this is a sign of “gaming” the system in expectation of bill reductions. One attorney remarked that the current adversarial relationship with insurer clients is prompting a move away from tort defense.

What can be done?

There is no magic bullet or quick fix to contain legal defense costs, and the so-called litigious society is not going away. Insurers will continue to be buyers of expert services and, in the case of litigation, those services are provided principally by independent tort defense lawyers. Therefore, the guiding principle for insurers should be to hire lawyers to do only the work that requires a lawyer’s services and not to ask them to do work that could be done by claim people. This means minimizing the need for lawyers in the first place by working to control the number of lawsuits filed against policyholders. It also means working to adjudicate matters through alternative dispute resolution forums such as binding arbitration.

Pre-litigation strategy

The control of defense costs begins before the suit is filed. The handling of claims should be directed, to the extent practical, toward limiting lawsuits to those claims where the loss amount demanded is greater than the insurer is willing to pay. Lawsuits filed because the insurer has been slow in investigating or negotiating often result in unnecessary defense costs and should be avoided. Of course, this excludes cases where the lawsuit constitutes the first notice of a matter.
As a quick test, claim management personnel should review the claim file upon receipt of a lawsuit to determine whether the claim adjuster has been responsive to the claimant or his attorney. If it is a case in which the insurer would pay some amount to settle, has this been communicated or has an offer been made? Based upon independent studies, a conservative finding is that 5 to 10% of all lawsuits – and their resulting expenses – were probably unnecessary. This first test identifies the need for training and, perhaps, changes in the supervision process so that claim personnel maintain greater contact and communication with third parties or their attorneys and thereby avoid the filing of unnecessary litigation. Of more direct and immediate consequence, claim personnel can request an extension of time to file an answer, permitting an evaluation of the claim and perhaps a successful settlement. In the latter case, a successful and justified negotiation eliminates the need to retain counsel and avoids legal expenses altogether. Any extension of time must be confirmed in writing by plaintiff’s counsel.

Alternative dispute resolution/arbitration

Even after working to limit lawsuits to genuine differences, a claim may reach the point where there is an impasse between the amount demanded and the amount the insurer is willing to pay. At this point, the matter requires adjudication, and two courses are available to the parties. The alternative to the filing of a lawsuit is binding arbitration before an agreed forum. The American Arbitration Association is an example of an organization equipped to facilitate arbitration. Arbitration requires the consent of both parties and is typically decided by a three-member panel of arbitrators: each side appoints one panelist, and the two selected arbitrators agree upon the third member, the umpire. Arbitrations typically produce lower legal fees and are decided in a much shorter time. An insurer that has not attempted arbitration is missing an opportunity to reduce legal expenses.

Assignment to counsel

Another critical point in managing legal expenses occurs when an unavoidable lawsuit is initially assigned to defense counsel. The initial assignment is the first opportunity the insurer has to direct the work of attorneys, and it often sets the stage for the insurer-attorney relationship over the course of the litigation.
Insurers typically assign work to attorneys through the use of a letter of transmittal. The extent and quality of assignment letters vary greatly from insurer to insurer. At one extreme, the letter consists of a few brief sentences typically telling the attorney to file an appearance and do whatever is necessary.
This type of letter does not restrict, define or limit the attorney’s activities, nor does it provide the insurer’s assessment of the claim and plan for future activity. As a result, it invariably produces a multiple-page letter of first impression in which the attorney reviews the file he has just received from the insurer. There is no benefit in paying someone to tell you what you already know. These “feedback” letters conservatively cost between one and two hours of attorney time charges, or from $130 to $260 for every suit assigned.

The assignment letter

Insurers who effectively manage and control litigation use a very detailed, case specific letter of assignment which tells the attorney how to proceed instead of leaving the assignment open-ended and undirected. Following are some guidelines regarding the specific points that should be included in the insurer’s initial assignment letter:
  • Coverage. Identify the coverages and limits of liability of the policy involved in the case. Discuss any coverage questions or state affirmatively that there are no coverage issues.
  • Identification of plaintiffs and defendants. Review the relationships of all parties to the litigation and identify any additional parties to be joined.
  • Identify the insured defendant. Specify the defendant(s) for whom a defense is owed. If the defendant is other than a named insured, explain the basis for coverage and defense.
  • Facts. Review the facts of the claim, including physical evidence, official records, witnesses’ versions of what happened and the position of the plaintiffs and defendants.
  • Damages. Outline the claimed damages and provide an assessment as to the accepted damages.
  • Current evaluation. Give the insurer’s evaluation of liability and damages, including potential claims for indemnity or contribution.
  • What the insurer will do. List any additional activities planned by the company, including additional investigation to be obtained and a timetable plan for disposition.
  • What defense counsel will do. In addition to filing an Appearance and Answer, list the items of requested Discovery. Request that the attorney simply acknowledge receipt of the assignment and limit any further comments to only those parts of the assignment letter which the attorney disagrees with or finds deficient.
  • Request an estimate of defense fees and expenses. Require the attorney to submit a budget of expected future costs. This will allow the claim supervisor to compare his own expectations as to defense costs to those of defense counsel. A wide variance signals the need for discussion with counsel. The defense attorney’s budget should be compared to actual costs as the case moves forward. The budget should be updated over time.
This type of letter supports the goal to manage legal expenses. It also ensures that the file supervisor has performed an up-to-date assessment of the claim and has a clear plan for the future handling of the case. Both purposes served by the letter should ultimately produce financial benefits.
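To make the checklist concrete, the following is a minimal sketch, in Python, of how a claim department might verify that a draft assignment letter addresses each required element before it goes out; the element names and the sample draft are purely hypothetical and are not drawn from any insurer's actual workflow.

    # Hypothetical sketch: verify a draft assignment letter covers the checklist above.
    REQUIRED_ELEMENTS = [
        "coverage",           # coverages, limits, and any coverage questions
        "parties",            # plaintiffs, defendants, and parties to be joined
        "insured_defendant",  # who is owed a defense and on what basis
        "facts",              # evidence, records, and witness versions
        "damages",            # claimed damages and assessment
        "evaluation",         # insurer's view of liability and damages
        "insurer_actions",    # planned investigation and disposition timetable
        "counsel_actions",    # pleadings to file and discovery requested
        "budget_request",     # estimate of defense fees and expenses
    ]

    def missing_elements(letter: dict) -> list:
        """Return checklist items the draft letter has not yet addressed."""
        return [item for item in REQUIRED_ELEMENTS if not letter.get(item)]

    draft = {"coverage": "CGL, $1M per occurrence; no coverage issues",
             "facts": "Rear-end collision; police report obtained"}
    print(missing_elements(draft))  # the supervisor completes these before sending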

Lawyer and non-lawyer work

After the initial transmittal of the suit, the level of legal expenses is related directly to the amount of work performed by the defense attorney, which should be limited to only those activities that require the services of an attorney. These typically include the preparation and filing of pleadings and interrogatories, appearance at trials and motions and the taking or defense of depositions. The insurer should recommend or approve all affirmative depositions. As noted earlier, defense attorneys should not perform work that can be done by adjusters, such as ordering and obtaining items of investigation and conducting negotiations. Insurers can review their closed suit files and paid attorney bills to determine the extent of work performed by attorneys that could have been performed by staff claim personnel. By identifying line item time charges for work not requiring an attorney, the insurer can develop an estimate of the amount of money paid to lawyers for performing work not requiring a lawyer.

Estimating the savings

Step 1. To estimate potential savings on attorneys’ fees, first estimate the number of avoidable lawsuits each year and multiply that by the historical average defense cost per closed litigated claim.
Step 2. Figure what can be saved by writing comprehensive assignment letters and thus avoiding long attorney “feedback” letters by multiplying the number of lawsuits per year times the average hourly cost of attorneys. (This assumes that the attorney spends only one hour on the response letter.)
Step 3. Take the average estimated number of hours of work per lawsuit that was unnecessarily performed by a lawyer and multiply that by the average hourly attorney fee. If no improvement is needed in an area, enter zero and consider your situation unique, extraordinary, and probably illusory.
The sum of these three figures provides an estimate of the money to be saved by eliminating unnecessary litigation and attorneys’ fees. Studies have shown that unnecessary attorney activities (step 3) average five to twelve hours of charges per case. At $130 an hour, this adds $650 – $1,560 to the cost of defense for the insurer for each case.
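As a rough illustration of the arithmetic, the short Python sketch below runs the three steps with invented figures; every number is a placeholder to be replaced with the insurer's own data, and scaling the Step 3 per-case figure across the annual suit count is an assumption added here to produce an annual total.

    # Illustrative only: the three-step savings estimate with invented figures.
    avoidable_suits_per_year = 50        # Step 1: lawsuits judged avoidable
    avg_defense_cost_per_suit = 9_000    # historical average per closed litigated claim

    suits_per_year = 400                 # Step 2: all suits assigned each year
    hourly_rate = 130                    # assumes one hour per "feedback" letter

    unnecessary_hours_per_suit = 8       # Step 3: non-lawyer work billed per case

    step1 = avoidable_suits_per_year * avg_defense_cost_per_suit
    step2 = suits_per_year * hourly_rate
    step3 = suits_per_year * unnecessary_hours_per_suit * hourly_rate  # scaled to a year

    print(f"Estimated annual savings: ${step1 + step2 + step3:,}")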
Alternatively, insurers can probably skip the process of reviewing files and estimating potential savings on defense costs. Savings are possible for every company. Carriers that adopt the policy of providing prompt evaluation and responsive communications to third parties, sending explicit letters of assignment and not paying lawyers to do work that can be done by staff or other non-lawyer parties will find that the dollar savings are there.


Jim Cerone is an independent consultant to the insurance industry, former Executive Vice President of the Travelers Property Casualty Corporation and former President of the technical services division of its claim organization.  Prior to joining Travelers, Mr. Cerone served as Senior Consultant and Equity Principal of Milliman & Robertson, Inc. (M&R). At M&R, he founded and directed the claims management consulting practice, specializing in consulting with management on a wide range of strategic and organizational issues. His background also includes service as Vice President and Consultant with Tillinghast, Nelson & Warren, Inc. and Kramer Capital Consultants, and senior executive positions with three other U.S. insurers: Commercial Union, John Hancock, and American Reserve. In these positions, he was responsible for organizational design, acquisitions, automation, training and education, and the general management of large-scale claim operations.

Straight-through processing: A best practice comes of age in the insurance industry

Can I buy it in a box? While strangely reminiscent of a whimsical Dr. Seuss children’s book, it’s a very serious question posed by all constituents within the insurance industry. Straight-through processing, or STP as it is more commonly known, has been the Holy Grail of the insurance industry since computers were first introduced decades ago. The goal has been the same from the start – streamline business processes to reduce friction along the value chain and lower transaction costs for all stakeholders.
A study done some years ago concluded that eight cents of every insurance dollar is spent on redundant, non value-added activities performed at each link in the value chain. In an industry where combined ratios hover at or above 100 percent, reducing the expense ratio by even two to four percent can mean the difference between a profit and a loss for many companies. In an industry seemingly perpetually mired in the dynamics of soft and hard market cycles, maintaining profitability through pricing is almost impossible. The only controllable component is the expense side of the equation. STP has the potential to make a significant impact on reducing expenses for the entire industry.
But, while streamlining the insurance process is a great concept and one flag that everyone dutifully salutes, there is no one-size-fits-all STP solution for all organizations engaged in the business of insurance. Based on history, geography, lines of business, coverages, core administration systems, IT infrastructures, and even the demographics of individual policyholders and books of business, each insurer is different – which makes finding a single silver bullet almost impossible. However, there are “many paths to enlightenment” as Chinese proverbs have been known to preach, and STP is but one way insurers can begin to work the kinks out of the process, so to speak.
As we venture down this path, it is important that STP is fully and clearly defined and that the obstacles to achieving STP are identified and overcome. Just keep in mind as you embark on any STP initiative that achieving success depends upon a number of factors.
Defining STP
There are many misconceptions still lingering about STP, including the thought that perhaps it is something one can go out and buy, straight out of the box. Unfortunately, you won’t find a nicely packaged STP CD and manual on the shelf at your local Best Buy.
The main idea behind STP is simple: The completion of insurance processes from beginning to end with minimal, if any, human intervention. In other words, STP is the fully-automated initiation and completion of an insurance transaction from start to finish. It is data centric, event-driven and requires minimal human intervention. And ultimately, it is dependent upon the integration of systems, data standards and data requirements.
The beauty of STP is also too often in the eye of the beholder. Back in the heyday of mainframes, insurance carriers conceived the bright idea of installing terminals in the agents’ offices so agents could do data entry instead of clerks at the carrier’s office. It was a great deal for the carrier, but not so great for the independent agent repeating the process for each carrier he or she represented. Agents’ offices soon began to look like computer museums with a line-up of different terminals connected to the various carriers they represented.
Today, this multiple-terminal approach has been replaced by carrier websites. Instead of installing expensive hardware and providing costly data connections, the carrier simply has to provide an Internet portal for the agent. The agent still performs the data entry function for the carrier and now also gets to pay for the Internet connection. This is often described as STP by insurance carriers, and as something entirely unprintable by agents.
This situation specifically gave rise to STP’s constant companion, SEMCI, or Single Entry Multiple Company Interface. Until these two concepts are reconciled, STP will not be a truly industry-wide solution with benefits for all parties participating in the process.
Consider that when a single insurance policy is sold, multiple parties are instantly involved. Information about that single policy must often be shared and exchanged between agencies, brokerages, MGAs, carriers and even third-party data providers. In a manual process situation, that means each and every touchpoint is just one more opportunity for something to go wrong due to human error, the differences in the way systems format data, or simple incompatibility. In an STP scenario, the data is passed from one entity to another without human intervention, without re-keying of data or without system interpretations. Data integrity and security are maintained while policy processing speed is improved and resources once tied up in a time-consuming process are freed for work on other critical projects.
At its heart, even though each company may go about achieving it differently, each STP initiative must involve automated decision making, automated workflow, integrated production systems, integrated external data sources, and integrated internal data. While overall operational benefits can result for the entire insurance organization, the areas of customer service, claims and underwriting will yield the most dramatic results from an STP initiative.
There are a number of new technologies and standards that can help facilitate the implementation of STP and are key enablers that simplify the process.
BPMS & other enablers
Business Process Management (BPM) has been enabled by a class of software designed to systematically manage the flow of units of work through an organization as defined by the steps required to complete them. Successful completion of these units of work, or tasks, may be dependent upon documents, data, or both and may include both manual and automated steps in a process. BPM software applications incorporate a number of tools for modeling, designing, executing, managing, monitoring, and optimizing all aspects of a business process. Most leading vendors in the BPM market incorporate all of the component functions of BPM into suites, collectively referred to as Business Process Management Suites (BPMS).
This category of software is growing rapidly and is expected to generate significant growth in sales and revenue over the next five years and beyond. It is the focus of specialized annual reports by analysts such as Gartner and Forrester due to its increased popularity and the demand for information about it. It has also attracted a large number of vendors and the current products in the market generally trace their roots to one or more applications that have evolved as the core components of a BPMS.
The basic components of a BPM Suite include:
  • Business Process Optimization (BPO). BPO involves using simulation tools that allow analysts to design workflows and test them with various assumptions about the volume, time, and resources required to process tasks. Once implemented, data from the implemented processes are fed back into the model (referred to as round tripping) to identify opportunities for improvement based on actual results.
  • Business process design. Business process design is the graphical representation of the conceptual model designed during the optimization process which can be fleshed out to include business rules, links to other applications, and any user interfaces to capture information during the execution of the workflow.
  • Business process execution. Business process execution is the workflow or orchestration engine that executes the workflow and handles all the integration with other applications, including user interfaces, if required.
  • Business Process Intelligence (BPI). Deploying automated workflows without the means to monitor and measure performance is like flying an airplane without any instruments. The tools of BPI include standard reports available on-demand and dashboards to monitor real-time activity in order to proactively manage work as it flows through the business.
  • Enterprise Content Management (ECM). ECM provides for storage and retrieval of electronic documents, including digitized paper files, faxes, emails, voice recordings, video, photos and other graphic images and data files or any electronic file used to support the processing of the business.
BPM Suites generally fall into two major categories: those that primarily target human-centric processes, including associated documents, and those that support system-centric or event-driven processes. The former represents the current state of the insurance industry in which transactions typically involve documents, manual processes, and decisions made by people in specialized roles and often with specialized skills, e.g. underwriters and claims examiners. The latter will become increasingly important as the industry evolves toward STP. But for the foreseeable future, BPM Suites for the insurance industry will need to incorporate both models.
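For readers who think in code, the following is a minimal, vendor-neutral sketch in Python of the orchestration idea behind a BPM suite: a defined sequence of steps, some automated and some routed to a person, executed in order with a simple audit trail. All step names and behavior are invented for illustration and do not represent any particular BPMS product.

    # Vendor-neutral sketch of BPM-style orchestration; names are illustrative.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Step:
        name: str
        automated: bool
        action: Optional[Callable[[dict], dict]] = None  # used by automated steps

    def run_workflow(steps, work_item, audit_log):
        for step in steps:
            if step.automated:
                work_item = step.action(work_item)
                audit_log.append((step.name, "completed automatically"))
            else:
                # A human-centric step parks the item in a work queue; a real BPMS
                # would resume the flow when the person completes the task.
                audit_log.append((step.name, "routed to human work queue"))
                break
        return work_item

    steps = [
        Step("validate submission", True, lambda w: {**w, "valid": True}),
        Step("order third-party report", True, lambda w: {**w, "report": "ordered"}),
        Step("underwriter review", False),
        Step("issue policy documents", True, lambda w: {**w, "issued": True}),
    ]

    log = []
    run_workflow(steps, {"policy": "H-1001"}, log)
    for entry in log:
        print(entry)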
In addition to BPM, a number of advances in system design and functionality are available to create STP applications without the need for extensive programming to interface with other applications. These include Service-Oriented Architecture (SOA) and Web Services. Most vendors support both SOA and Web Services in their latest offerings.
Case study: Strickland Insurance
Greg Ricker, vice president of information systems and chief information officer for Strickland Insurance Group, leads his company’s real-world efforts to achieve STP. The ongoing project has put Ricker’s more than twenty years of experience working in various information systems roles, product and application development positions, as well as management, infrastructure and technical services roles, to the test. During his tenure with Strickland, the company has implemented a number of key systems initiatives to automate and streamline their insurance processing and brokerage operations.
“The goal was for Strickland to be able to process business without duplicate entry or manual intervention,” said Ricker. “I wanted to be able to get it all done within the same day, most of the time within minutes or seconds in fact. I wanted automated processing from end-to-end.”
In order to accomplish those goals, Ricker knew there were certain steps he would have to take to achieve STP at Strickland. Ricker’s solution incorporated standards, infrastructure and architecture, and a set of applications already in house at Strickland – including the ImageRight content management and workflow system and other solutions running policy administration, policy issuance, rating and underwriting. The project also involved the integration of those applications and the establishment of a monitoring and reporting system that lets Ricker keep a close eye on company performance.
“What we’re really trying to do is drive the cost out of the transactions, right?” asks Ricker. “Then to get started you need to focus on how to eliminate redundancy and manual processes, while creating and routing only those tasks that need manual intervention. It is important to route tasks to the most efficient operator, whether that is a large account manager, an underwriting specialist or what have you.”
Ricker emphasizes the importance of documenting existing processes at the outset of any STP initiative so that you have a baseline or benchmark to measure against and also just in case you need to revert to an old process or workflow before the project’s completion. Ricker and his team identified steps in critical company workflow, including exception processing, since it can impact both internal and external customers.
Once the goals were identified and existing processes were well-documented, Ricker and his team focused on data standards, which is the place Ricker passionately argues any STP initiative must start.
“You have to make sure you are using a consistent format of data, whatever that may be – XML, a proprietary format, whatever,” said Ricker. “You have to confirm data integrity and make sure that you are getting all the information you need the first time, because you can’t massage the data once you finally do get it all. And, you need to verify your methods of obtaining data. Is it a daily electronic upload? Does it happen with each new transaction? Finally, you need to document and communicate all the standards at work within your company and processes, especially if you are working with third parties.”
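The data-integrity point lends itself to a simple illustration. The Python sketch below, with hypothetical field names rather than actual ACORD tags or Strickland formats, shows the gatekeeping Ricker describes: an inbound record is either complete and well-formed the first time or it is returned to its source rather than being patched downstream.

    # Illustrative inbound-data check; field names are hypothetical, not ACORD tags.
    REQUIRED_FIELDS = {"policy_number", "effective_date", "insured_name", "premium"}

    def validate_submission(record: dict) -> list:
        """Return a list of problems; an empty list means the record may proceed."""
        problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
        if "premium" in record:
            try:
                float(record["premium"])
            except (TypeError, ValueError):
                problems.append("premium is not numeric")
        return problems

    incoming = {"policy_number": "BOP-2231", "insured_name": "Acme Hardware"}
    issues = validate_submission(incoming)
    if issues:
        print("Return to sender:", issues)   # fix at the source, not downstream
    else:
        print("Accepted for automated processing")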
At Strickland, Ricker also went through an evaluation process to determine if the company’s infrastructure would handle the demands of an STP initiative. Ricker’s team’s tasks included ensuring capacity, implementing redundancy and confirming bandwidth. Additionally, Ricker feels that having automated scheduling software in place is a must to resolve and work around issues like timing dependencies including batch print and file generation for third parties.
For Strickland, which operates in both the admitted and non-admitted markets, the next step was system design. They needed to know exactly which applications would be involved and how they would be integrated. And, they needed to establish the flow of work through the completed system.
“You need to leverage what you’ve got,” said Ricker. “Take an inventory and decide what you already own and what you already have implemented. Plus, you need to deal with the change management issues involved. You know, you get employees who will look at this and say, ‘How can a computer make business decisions?’ and ‘What about my job?’”
Business rules engines are the answer to the question about decision-making. Business rules engines are one of the critical components in any STP initiative, as they can validate business rules and continue the processing of an insurance transaction. Most business rules engines can accept data from multiple sources, including a policy production system, an agency management system, a document management or imaging system and a data warehouse as well.
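To show what validating business rules and continuing the processing can look like in practice, the following is a minimal Python sketch of the rules-engine idea: rules are expressed as data (a condition and a disposition) and evaluated against a transaction assembled from several source systems. Every rule, threshold, and field name here is invented for illustration rather than drawn from any production system.

    # Minimal rules-engine sketch; rules, thresholds, and fields are invented.
    RULES = [
        ("more than two prior losses",    lambda t: t.get("prior_losses", 0) > 2,     "refer"),
        ("coverage above $500,000",       lambda t: t.get("coverage", 0) > 500_000,   "refer"),
        ("protection class worse than 8", lambda t: t.get("protection_class", 1) > 8, "decline"),
    ]

    def evaluate(transaction: dict) -> str:
        """Return 'straight-through', 'refer', or 'decline' for a transaction."""
        outcome = "straight-through"
        for name, condition, disposition in RULES:
            if condition(transaction):
                print(f"rule fired: {name} -> {disposition}")
                if disposition == "decline":
                    return "decline"
                outcome = "refer"   # keep evaluating; a later rule may decline
        return outcome

    # The transaction could be assembled from a policy system, an agency upload,
    # an imaging system, and third-party data before the rules are applied.
    quote = {"coverage": 300_000, "prior_losses": 3, "protection_class": 5}
    print(evaluate(quote))   # prints the fired rule, then 'refer'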
As you can imagine, Strickland’s push to achieve STP has been a complicated process, and one that Ricker concedes is still not 100 percent complete. However, for Ricker and Strickland, the benefits far outweigh any pain involved in the implementation.
“We have reduced cycle times,” said Ricker. “We’ve improved accuracy and today we have the absolute best resource working each task. Now manual intervention occurs only on tasks that truly require manual intervention, and that means more transactions are processed faster.”
And don’t think that Ricker is sitting back today enjoying the fruits of his labors – he’s watching the process very carefully.
“It’s important to keep score,” said Ricker. “You can’t manage what you can’t measure, right? So metrics are crucial. I want to be able to see at a glance whether this is working or not, and if I know that, I can publish those results and share them with management, employees, key vendor partners and with other constituents downstream as well.”
Square pegs and round holes
Unfortunately, the differences in carriers discussed above mean there is no single method that will help every insurance organization achieve STP, no silver bullet. The process for every carrier will be different, and it can, or more probably will, involve different technologies, departments and processes.
“It’s precisely in that kind of environment in which standards flourish because as long as all parties within the value chain have implemented standards, the path doesn’t matter,” said Rick Gilman, vice president for Pearl River, New York-based ACORD. “One carrier may want to have its agents work through their website, pulling data from the agent’s management system; another might want to keep the agent centered on their agency management system and have the data fed into the carrier’s system. Either way, or for any other scenario, standards support those choices.”
But standards have also faced adoption challenges in the insurance industry and differences in systems, integration, communication and processes continue to proliferate. Could those very differences be the reason STP has failed to go mainstream in the insurance industry?
“I believe the word ‘failed’ is a strong word to describe the state of STP in insurance for underwriting,” said Deb Smallwood, co-founder of Smallwood, Maike & Associates, a boutique strategic advisory and consulting firm providing services to both insurance carriers and solution providers to the industry. “Slow to embrace, adopt and realize the full potential and power of STP are more accurate descriptions. However, this is for underwriting. In the world of claims, for example, the industry is further behind and the word ‘failed’ is probably more accurate.”
In spite of Smallwood’s objection, STP has not achieved marquee status within the insurance industry as a mandatory part of insurance processing. Many insurance organizations are still working with outdated systems and developing manual workarounds on a daily basis, and the insurance industry’s tendency to view technology with a wary eye has not been helpful either. Moving toward system integration, consolidation of workflows and processes, and the possible elimination of manual intervention brings change management challenges along for the ride, and the lack of useful data standards compounds the difficulty.
“We’ve simply got to address standards,” says Ricker. “We’ve got to stop talking about it, embrace them and deploy them.” ACORD’s data standards, and services that provide the translation of those standards into the format of the receiving system, are available to facilitate STP today. These standards and services can alleviate the need for insurance carriers to invest in the wholesale replacement of legacy systems.
“Data standards are important to any insurance company looking to share information with business partners, customers, or even internally between systems,” said Gilman. “At the core, standards are an agreement on terms and definitions, i.e., what do you mean by ‘premium’ and how do you transport that information? If a company is looking to adopt STP, which by its definition is entering information into a system once and then moving it through the value chain without having to re-enter the same data, then you need standards. The alternative is having to build one-off solutions for each and every system and/or business partner that information needs to reach.”
The payoff
A recent Celent study on STP defined some of the benefits for carriers, and indicated some carriers can reduce cycle times by up to 80 percent, improve hit ratios by 20 percent, reduce workload by up to 75 percent, and reduce paper costs by up to 40 percent. Those are some pretty significant numbers for carriers looking to squeeze more blood from the proverbial stone.
Additional benefits include improved customer service, stronger data integrity, better resource utilization, and ultimately the Holy Grail of ROI, reduced operational costs. However, none of this will occur without the willingness to invest in the technology to implement it and the adoption of standards that enable it.
There are certainly segments of the industry that are poor candidates for STP. Highly complex commercial risks and reinsurance are typically treated as one-off deals which do not lend themselves to extensive automation. Nor is it necessary to engage in a wholesale replacement of the current systems and infrastructure. There are a number of small steps that can be taken incrementally to introduce STP into specific business units and/or product lines that will have a positive impact on the bottom line. A small step here, a small step there, and pretty soon you’re well down the path.
“I believe the business and technology leaders understand the need, but it takes time and money to clearly re-engineer the business processes, enhance the systems and harvest the data necessary to develop business rules, define workflow and to enhance the rating engines, implement predictive analytics to service and price accurately using STP along with system integration to back end systems,” said Smallwood. “I have seen companies do the first pass in less than one year where STP represents around 20 to 40 percent of completion. But it is taking companies up to three years to implement all of the pieces to get up to 80 or 90 percent STP.”
The moral of the story: the shortest distance between two points is a straight line, so even if the road to achieving straight-through processing is rocky, it is worth it.


Phil Hargrove is insurance technology advisor for ImageRight, a leading provider of content management and workflow solutions for the insurance industry. He has over 35 years of experience in insurance operations, information technology, and intellectual property.  Prior to joining ImageRight, he served as a vice president of business development for GE Insurance. Hargrove also has many years of experience as the senior IT leader for a number of organizations from Fortune 50 to entrepreneurial ventures, including the commercial insurance division of GE Insurance Solutions and Johnson & Johnson Health Management, a subsidiary of Johnson & Johnson.  He has held management positions with major software vendors and his broad experience provides him with a unique perspective for his role at ImageRight as a champion of innovation in the company’s products and services. He can be reached for comment or further information at phargrove@imageright.com.

Complemented Core Capabilities: How small insurers can adapt and thrive

Our products are services
Insurers are service businesses. Although insurers use the same vernacular as manufacturers and refer to their “products,” they do not create a tangible product. Instead, insurers agree to provide services to their clients when certain events occur. The original insurance service, in its simplest terms, was fulfilling a promise to pay. This simple promise has expanded over the years into a broad array of services. To deliver these services, many insurers developed internal personnel and technology infrastructures that were substantial and complex. Whether large or small, successful or struggling, established or start-up, almost every insurer operates within this proprietary service business model: to provide services they build, own, and control the infrastructure and resources that provide the services.


This proprietary business model has been a competitive advantage for organizations large enough to create the needed service delivery infrastructure, and a barrier to entry for start-ups and for insurers seeking to open new lines of business, develop product variations, or expand geographically. But this past advantage has now been turned on its head. Technology and market forces have converged in recent years to offer small insurers an affordable opportunity to control their service infrastructure – without having to build and own the resources. Small insurance businesses can now effectively deploy a different business model: they can staff their core functions internally and use technology and insurance service providers as a key strategic factor, at a variable cost, to complement and extend their core capabilities.
Technology and insurance services can be dedicated to the insurer as though they are part of its internal infrastructure, but matched just to the extent of the insurer’s needs rather than drawing resources as embedded overhead. This Complemented Core Capabilities approach enables smaller insurers not only to manage infrastructure costs effectively but also to compete, grow, and thrive in ways that were previously beyond their grasp. This business model would have significant strategic and structural cost advantages even in the older, quieter insurance business of decades past. In the current competitive business environment, which includes market forces such as rapidly changing technology, increasing difficulties in recruiting and retaining insurance talent, and tightening regulatory restrictions, the model becomes even more compelling.


The competitive environment
The property and casualty insurance industry faces a deepening and potentially long-term soft market. The soft market appears to be firmly entrenched across all lines of business.[i] Some analysts characterize the current market cycle as “painful and destructive.”[ii] Moreover, the market may stay this way until 2015 or 2016 and inevitably produce impairments in insurers that are less able to compete.
In addition to this soft market, costs are increasing on several fronts. This includes, for example, the effect of regulatory changes such as the Gramm-Leach-Bliley Financial Services Modernization Act and Sarbanes-Oxley requirements, disaster planning regulations after Hurricane Katrina, changes in accounting standards, and more that have added layers of regulatory and market-conduct burdens. The possibility also looms of a federal regulatory role increasing the industry’s already expensive and cumbersome regulatory environment.[iii] These changes and others present challenges for insurer staff and their technology. Unfortunately, both the ability to add expert staff and the readiness of legacy technology are problematic.
The insurance industry faces a rapidly and significantly shrinking employee base, and the competition for talent has become acute.[iv] The numbers are sobering. Deloitte Consulting notes the following:
[T]here is an impending shortage of “critical talent” in the insurance industry – the talent that drives a disproportionate share in a company’s business performance. Depending on an insurer’s business strategy and model, these can be the underwriters, claims adjusters, sales professionals, actuaries, and others who can make the difference between 10 percent and 20 percent annual growth – or between underwriting profit and loss. The looming talent crisis is about to become much worse due to two emerging trends: the retirement of Baby Boomers, who begin turning 62 in 2008, and a growing skills gap.[v]
In 2006, 80% of the chartered property and casualty underwriters and 70% of property and casualty claim adjusters were over 40. And replacements aren’t arriving in large enough numbers. By 2014, Deloitte predicts the industry will face a talent gap of 23,000 underwriters and 85,000 claim adjusters. This would be a crisis in any business environment, but the current soft market means that an insurer’s ability to compete will depend on finding flexible sources of talent and expertise. The shortage of talent is occurring at all levels, including executive and middle management. This crisis affects all insurers, but smaller businesses with fewer resources to compete on salaries and other incentives will have an acute disadvantage.


In addition to the increasing shortage of insurance expertise, insurers face a technology bind between new and legacy technologies. The average policy administration, claim, and billing system is 24 years old.[vi] Like investors increasing holdings in a stock that has dropped in value, companies have continued to add enhancements and modifications to their legacy platforms rather than making a stop-loss move to Web services systems with a clearly brighter upside. Insurance executives do see the problem. KPMG’s 2007 survey indicated that improving technology is second only to strategic acquisitions as a target for capital deployment and a major factor affecting their capabilities for future growth.[vii]
Small mutual insurers have great difficulty allocating the threshold amount of capital to join the “club” that benefits from rapid changes in technology. The National Association of Mutual Insurance Companies identified this capital deficit as a developing crisis for small insurers a few years ago,[viii] and if anything the situation has worsened. Adding insult to injury, customers expect more. IBM recently surveyed more than three thousand property and casualty policyholders and noted that insurers must change their traditional business models and technology to reach increasingly Internet-savvy customers who have increasing expectations for instantaneous transactions and information.[ix] Web-based product distribution presents both a significant competitive challenge to insurers who lack access to this customer channel and a significant cost, deployment, and maintenance challenge to insurer technology infrastructures and staff.[x] The event horizon of the new market created by these forces is rapidly moving closer. According to the Gartner Group, just five years from now only insurers that overcome the challenges of increasing regulation, an aging talent base, and inflexible systems will remain competitive.[xi] Without a solution to enable them to remain competitive, small insurers will simply slip further behind their larger competitors. The solution path for small insurer survivors and “thrivers” lies in a Complemented Core Capabilities approach that leverages technological opportunities, internal core insurance strengths, and external availability of variable-cost insurance expertise.
Complemented Core Capabilities
A Complemented Core Capabilities approach builds on shifting business process outsourcing or BPO (retaining another company to perform distinct business activities for you) from a tactical to a strategic component of your business model. BPO has proven to be a successful strategy in financial service industries other than insurance.[xii] In deploying a Complemented Core Capabilities strategy, an insurer focuses its limited resources and talents on its core competencies while complementing those capabilities with leading technology and insurance services from outside the company. This is a strategic response to the market forces specified in the first part of this article. Insurers have looked outside their internal resources for help on individual service issues for decades. The claim process has particularly been an area where insurers have utilized outsourced services. Independent adjusters and appraisers have long been a staple of the property and casualty industry. Although the insurance industry has outsourced some services on a tactical basis, it nonetheless has lagged far behind other industries in deploying BPO to develop a strategic advantage. Smaller insurers facing today’s fierce competitive pressures now have an opportunity to leverage a Complemented Core Capabilities strategy to transform their capabilities to compete and succeed.
Technology capabilities
Transformational opportunities have emerged for insurers over the past five years due to two key fundamental changes to property and casualty insurance technology: (1) maturation of Web services architecture and (2) business process management tools.[xiii] Indeed, 80% of 2008’s technology development projects may be focused on these technologies.[xiv] With so many businesses now investing in these technologies, the changes they bring are inevitable. They are particularly suited to enable insurers facing resource challenges to extend their core capabilities and maintain and grow their market position. Small insurers, however, are traditionally cautious and change-averse. We’ve all heard decade after decade of hype about the great changes the newest new technology will bring. But a cynical approach is strongly contradicted by the facts on the ground – the performance of the technology in real work settings – and is especially counterproductive in this environment, for inaction actually produces the real risk of an insurer’s obsolescence and inability to compete as more and more competitors adapt.[xv] These  technologies have demonstrated capabilities, robust performance, and cost savings in crucial areas such as implementing and changing business rules. The question is no longer whether these technologies will dominate the market but rather which insurers will embrace the opportunity.
Web services technology takes what used to be complex business operations represented by millions of lines of code and breaks them down into reusable building blocks. Previously, to change the business process, add a product, enter a state, or implement any significant change, the insurer had to invest substantial time and resources changing those lines of code and testing those changes. Web services architecture allows insurers to capitalize on the fact that the business of insurance consists of patterns that repeat throughout business requirements across the various departments and functions of the insurer. This architecture provides very generic, reusable building components that produce a simple, easily configured environment that focuses on the business operations and facilitates rapid and flexible product development. Where business processes previously were forced to match system capabilities, now systems are easily configured and changed to match the business process.
Business process management technology automatically tracks and coordinates transactions and processes, allows and triggers manual intervention as required, extracts data and transfers it to appropriate users, executes transactions across multiple systems, and facilitates straight-through processing of transactions without human intervention under defined criteria. The heart of business process management solutions is process and workflow automation in accordance with rules, maintained in a rules engine, that define the sequences of tasks, responsibilities, conditions controlling the processes, and process outputs, among other aspects of the processes.
Business process rules engines have even transformed the cumbersome process of configuring and maintaining rating engines.[xvi] Systems streamline processes by limiting human involvement to just those aspects of transactions that require exception decisions and actions by automatically handling transfer and execution of process tasks in accordance with defined conditions. By reducing the time and resources required to complete processes, the system reduces cost. Moreover, reducing manual touches reduces transaction time, improves service, and reduces errors.[xvii]
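One concrete way to picture the configuration-over-code shift is rating. In the hypothetical Python sketch below, the rate relativities live in a table that business staff can maintain, while the calculation itself never changes; the base rate and every factor are invented solely to illustrate the idea.

    # Illustrative only: rating driven by a configurable table rather than code.
    BASE_RATE = 500.0
    FACTORS = {
        "territory":  {"urban": 1.25, "suburban": 1.00, "rural": 0.90},
        "class":      {"office": 0.85, "retail": 1.00, "restaurant": 1.40},
        "deductible": {"500": 1.10, "1000": 1.00, "2500": 0.90},
    }

    def rate_policy(risk: dict) -> float:
        premium = BASE_RATE
        for characteristic, table in FACTORS.items():
            premium *= table[risk[characteristic]]  # unknown values raise, forcing review
        return round(premium, 2)

    print(rate_policy({"territory": "urban", "class": "retail", "deductible": "1000"}))
    # Changing a relativity means editing the FACTORS table, not the rating logic.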
Straight-through processing (STP) is just one of many areas where the combination of Web services architecture and business process management solutions is making an impact on the property and casualty insurance industry. STP means the end-to-end execution of a business process, such as policy rating, quoting, and issuance, with little or no human interaction.
According to John Del Santo of Accenture:
[T]op performing carriers are now turning STP into reality and profiting handsomely along the way. These carriers are implementing rules-driven platforms that enable STP across the entire insurance policy life-cycle – from sales illustration to policy administration. The scalable technology with which these platforms are built is enabling these carriers to drive major transformational initiatives – a feat that their competitors are racing to repeat.[xviii]
Access to business process management technology is the obvious essential first step in winning that race.
Core capabilities
Increasing the focus on core competencies to increase business value is not a new concept to anyone who has ridden in an elevator with a newly minted MBA. The Complemented Core Capabilities approach builds on the basic core competency argument that capabilities reflected in skills and knowledge sets define the unique elements of a business and its competitive position in the marketplace. Non-core competencies, on the other hand, do not offer an opportunity for significant differentiation from your toughest competitors, even when performed to expectations.
Non-core does not, however, mean unimportant or unnecessary. Poor execution of non-core functions can, of course, impair competitive position, but there is no business advantage in your doing it well, if someone else can do it well or better for you as their own core competency. This risk of a non-core business done poorly has led a naturally risk-averse insurance industry to sequester non-core capabilities in house. That, in this market, is a strategic mistake and is based on the fallacy that only ownership delivers control. A model that delivers control without ownership, such as the Complemented Core Capabilities approach, can increase focus on core competencies with controllable risk. The risk, indeed, lies elsewhere. In an environment where talent in both core and non-core functions is becoming harder to find and more expensive to acquire and retain, insurers become particularly vulnerable; one commentator articulated this as follows:
Given the limited corporate resources and executive attention, if you focus on core competencies, who focuses on the other non-core but necessary elements of the value chain?[xix]
Stated differently, if an insurer’s success depends upon its core capabilities, should it divert its resources and energy from those core competencies to maintain capabilities in non-core functions? Complemented Core Capabilities as strategy is more than tactically shifting staff time from non-insurance tasks. For example, automation of tasks and transfer of knowledge through the creation of system rules in business process management systems may alleviate some pressure on an insurer’s staff.[xx] Nevertheless, increasing competition, rapid technology changes, an evolving regulatory environment, and demand for ever more innovative insurance products will still challenge the capabilities of the insurers’ employees.[xxi] Insurers who are willing to realistically assess their inadequacies and needs, and turn to experts outside their organization to complement their core capabilities, will be in a better position to survive and prosper than those that continue to stretch their executives and staff to cover broader and broader areas of responsibilities and execute processes and tasks beyond their core skills.[xxii] In addition, by looking outside the company for resources to enhance expertise and capabilities, insurers can access talent on an as-needed, variable-cost basis rather than adding to overhead or, alternatively, proceeding without the expertise because a full-time resource cannot be justified financially or acquired competitively.
An opportunity: Reduced-risk transformational change
The maturation of Web service and business process management technologies, the increasing availability of outsourced insurance talent and services, and the convergence of those capabilities into variable cost options for insurers present small insurers the opportunity to transform themselves into formidable competitive platforms. Michael Sutcliffe of Accenture characterizes the challenge and opportunity as follows:
High-performance businesses revisit and adapt their operating models as required to sustain competitive advantages over time. Outsourcing can allow companies to build new business capabilities rapidly, expand into new geographic markets and change internal systems and processes to support new business models. It reduces the risk associated with implementing transformational change.[xxiii]
Market forces will inevitably transform small insurers over the next five years. The crucial question for each company is whether it executes that transformation purposefully and strategically, or shifts reactively to wherever the market forces drive it. Each insurer has an opportunity to rethink the way it does business and find ways to extend its current capabilities and develop new ones. Some are embracing that opportunity.
Case study: Unity Life[xxiv]
Unity Life of Canada has embraced a business model that enabled the Toronto-based mutual life insurer to grow from $2 million to $50 million in settled premium in only four years. The company has been transformed from a struggling life insurance company into an innovative provider of unique insurance products through unique distribution channels. It accomplished its transformation by focusing its staff exclusively on core competencies while enhancing its capabilities in all processing and non-core functions by establishing strategic relationships with outside experts.
“About five years ago, we decided if we were going to survive and prosper in an environment where the larger mutuals had de-mutualized and there were mergers and acquisitions going on, we had to do something significantly different,” says Tony Poole, senior vice president of sales and marketing at Unity Life. So Unity Life management decided to create a virtual insurance company, spinning off its back-office operations into a separate company, now called Genisys.
“We recognized this was a completely different business model,” Poole says. “It would free us up to really do what we do best – our core competency – which is the manufacturing and distribution of products.” The idea was to transform Unity Life’s back office from an expense-driven operating division of a life insurance company into a revenue generator, Poole says. With Unity Life as its original customer, Genisys – an end-to-end business processing outsourcer (BPO) to the life insurance industry – has since attracted several more customers, including CIBC, BMO Life, Gerber Life Insurance Co., and Manulife Financial. Unity recently finished its transformation by divesting itself of Genisys to better execute its new business model.
Freed from day-to-day back-office operations, Unity Life also outsourced human resources and legal, valuation, and actuarial services. “We said, ‘What is our core expertise? It’s manufacturing, marketing and distribution of products,’” Poole says. By outsourcing the valuation and actuarial functions, Poole noted that Unity Life obtained best-of-breed talent that it otherwise couldn’t afford as a small insurer. Indeed, Unity Life now functions very successfully with a core group of executives and employees who focus on developing profitable new business and retaining profitable current accounts, while complementing those core capabilities with technology and insurance services from expert providers. Unity Life has successfully executed a Complemented Core Capabilities strategy to transform itself from a small, struggling insurer to a thriving competitor.
The strategy seems particularly suited to markets characterized by commodity products, tight margins, and industry consolidation, according to Mike McGuin, senior marketing specialist at Toronto-based Genisys. He notes, “When you look at the landscape, insurance companies need to redefine their core competencies to continue to be viable down the road. That’s why outsourcing is an option they should look at – to reduce expenses, redefine their processes, and leverage best-of-breed technology that they would not be able to afford otherwise.”
Case study: SureProducts Insurance
SureProducts Insurance Agency, a Monterey, California, property and casualty program manager, has employed an aggressive Complemented Core Capabilities strategy to profitably underwrite and service approximately $10 million of California-based property and casualty insurance. Utilizing a rule-based platform provided by its sister company, ISCS, Inc., SureProducts manages and services the business with just four employees: a senior executive with deep underwriting expertise, a senior executive with substantial claim expertise, a field underwriter, and an office manager. All other functions are performed on an outsourced basis.
SureProducts has integrated its business rules into the ISCS system rules engine to facilitate a modified straight-through processing approach. “Our system enables us to function almost completely on an exception basis,” says Ernie Weilenmann, vice president, Underwriting. “We spend our time making decisions to assure that we write good business, not processing policies or managing a back-end infrastructure.” Steve Broom, vice president, Claims, manages the claim function in a similar fashion. He notes, “We rely heavily on outside adjusters to perform claim tasks, but I can make the key decisions on our claims and be confident that our service requirements will be met.”
Managing a $10 million book of business with just four employees may seem like a fantasy when companies of similar size require 20+ employees, but SureProducts’ track record proves the model works. It has consistently written to a combined ratio of less than 65% over the last five years. Moreover, in the critical area of operating expenses, its total cost for company infrastructure and outsourced services is 10% of premium.
Getting there
Conceptual barriers can keep an insurer from “getting there.” Although the reasons vary from company to company, the major factors are fear of losing control and internal cultural resistance to outsourcing.[xxv] These barriers can seem insurmountable until market forces shatter them, by which point it may be too late to adapt. But research shows that fears of outsourcing are most often not realized: far from placing their businesses at risk by seeking expertise from outside sources, a large majority of companies report that their processes and capabilities improved, according to research by Accenture.[xxvi] The researchers also observed that “Outsourcing provides the opportunity to reach beyond a company’s typical boundaries with internal staff and leverage new thinking and alternative ways of effective change.” Moreover, outsourcing unquestionably introduces a rapid infusion of advanced technology and ongoing access to enhancements. These findings point to concrete improvements and stronger competitive positioning. Still, it is change, and cultural resistance to change can be overcome only by insurer executives who are committed to ensuring that their carriers can compete in the future and who lead their companies through that change.[xxvii] Anxiety over loss of control should diminish considerably once the insurer understands the capabilities provided by business process rules engines. As Andy Scurto, President of ISCS, Inc. relates,
Insurers are just beginning to grasp the potential of Web services and rules-based technology. Many executives think that because they now have a Web-enabled front end to their system, they have all the functionality they need. But if they deploy a Web services and rules-based platform, they can have every person who services their business, whether an employee, agent, outsourced service provider, or vendor, work on the insurer’s platform through a Web portal and then assure through business rules in the rules engine not only that those individuals meet service standards but also that exceptions to those standards are immediately escalated to company management. That is a more reliable and controlled environment than they have now.
It may seem counterintuitive to those still wedded to the owning-is-control model, but the reality is that small insurers laboring to compete on client-server technology (even with a Web front end bolted on) while asking staff to stretch themselves across diverse functional areas have less control over their businesses than those moving to a Complemented Core Capabilities strategy built on advanced Web services and business process management technology. We must remember that while we call what we create “products,” we are more accurately executing business processes that deliver services according to designed rules. With that framework in mind, we can better adapt to the changing market. The small insurers that thrive in the immediate future will be those that get beyond their cultural barriers, adapt to new business models, and embrace the transformational opportunities now available to them. The reward is a competitive business delivering value to its customers, gainful employment for staff, and a significant return on investment for its owners. That’s worth striving for.
References


[i] See, “All Signs Pointing To Firmly Entrenched Soft Market,” National Underwriter Property & Casualty, October 29, 2007, p. 8.
[ii] See, “P-C Industry First-Half Profits Way Up, But Flat Premium Growth Raises Concern,” National Underwriter Property & Casualty, October 1, 2007, p. 8.
[iii] See, “Regulation of the Property/Casualty Insurance: The Road to Reform,” Public Policy Paper, National Association of Mutual Insurance Companies.
[iv] See, “How Insurance Companies Can Beat The Talent Crisis,” Deloitte Development LLC, 2006.
[v] See, “Waging a War for Industry Talent,” Insurance Journal, September 3, 2007.
[vi] “3 Reasons To Replace Legacy Systems,” Best’s Review, May 2007, p. 82.
[vii] See, “KPMG’s Annual Insurance Industry Survey,” KPMG LLP, September 11, 2007.
[viii] See, “Focus On The Future Options For The Mutual Insurance Company,” National Association of Mutual Insurance Companies, January 1, 1999. One option that NAMIC suggested was for small mutual insurers to move from a business model where they owned their service platforms to an environment that enabled them to share services with other insurers.
[ix] See, “Climate Change,” Insurance Journal, September 3, 2007. See also, “The Steady Evolution of Online Service,” Insurance Networking News, November 1, 2007.
[x] See, “Outsourcing to Play Larger Role Among Insurance Companies,” Outsourcing Center, January 2003.
[xi] See, “Staying Competitive,” TechDecisions, November 2007, p. 30.
[xii] See, “The great transformation: Business process outsourcing as the next step in the evolution of Financial Services,” The Point (Accenture), Volume Three, Issue 6, 2003.
[xiii] See, “Tipping Points in Insurance Automation,” ISCS, Inc., 2006.
[xiv] See, “Unlocking the Power of SOA with Business Process Modeling,” CGI Group, Inc., 2006, citing predictions by the Gartner Group.
[xv] See, “3 Reasons to Replace Legacy Systems,” Best’s Review, May 2007.
[xvi] See, “Rating Systems Move Out Into the Open,” Insurance Networking News, October 16, 2007.
[xvii] See, “A User’s Guide to BPM,” Doculabs, 2003.
[xviii] “Welcome to STP,” Best’s Review, October 2007, p. 121.
[xix] “Outsourcing Helps Firms to Focus on Core Competencies,” International Association of Outsourcing Professionals, 2006.
[xx] See, “3 Reasons to Replace Legacy Systems,” Best’s Review, May 2007, p. 83.
[xxi] “BPO in Insurance Sector: Pains and Prescription,” Wipro Technologies, 2002, p. 5.
[xxii] See, “Outsourcing to Play Larger Role Among Insurance Companies,” Outsourcing Center, January 2003.
[xxiii] See, “Creating an operating model for high performance: The role of outsourcing,” Outlook (Accenture), May 2004.
[xxiv] The Unity Life of Canada case study is excerpted from “Industry Moves toward Global Sourcing,” Insurance Networking News, February 1, 2005.
[xxv] “BPO in Insurance Sector: Pains and Prescription,” Wipro Technologies, 2002, p. 6.
[xxvi] “Driving High Performance Through Outsourcing: Achieving Process Excellence,” Outlook (Accenture), 2005, p. 5.
[xxvii] “Driving High Performance Through Outsourcing: Achieving Process Excellence,” Outlook (Accenture), 2005, p. 2.

Tom Trezise is president of Convergent Insurance Services. Tom possesses an extraordinary depth of experience in the property casualty business, including operational areas, technology, financial concerns, and contractual issues facing insurers, reinsurers, third party administrators, and intermediaries. From start-up insurers to large international insurers, Tom has served the insurance industry for over 28 years as an insurance executive, general counsel, and trial attorney. He has led organizations with more than 1,000 employees and managed multimillion dollar budgets. His roles have included VP Liability Claims with USF&G and VP Commercial Claims St. Paul Companies/USF&G. With XL Vianet, Tom was a member of the senior management team that launched an Internet-based commercial insurance business start-up, from business plan development through business process design, technology platform decisions, Web-tool design, and business operations.

The Claims Management Process: Outsourcing for competitive advantage

Introduction
Competitive success in the insurance industry is predicated on the innovative efforts of today that are designed to favorably shape the operating models, processes, products and customer relationships of tomorrow. An abundance of operational support alternatives permits executives to compare providers, analyze the insurance value chain and identify efficiency, enhancement and organizational transformation opportunities. Outsourcing is one mechanism for achieving these gains. An outsourced claims management process can transform an insurance company, but it requires thoughtful planning, careful partnering, a well-managed transition and skilled execution.
Competition and change
The insurance industry is competitive and change is constant. The competitive playing field is global and the competition is robust. Industry leaders deliver shareholder value by growing revenue, generating profit and producing above-average returns on equity in a constantly changing environment. Predictions for the nature of change expected over the next ten to fifteen years are diverse, insightful and highlight the need for new ways of thinking about insurance operations. In a recent survey of insurance executives, 70% of respondents expect significant change while 30% expect incremental change during this period; not one respondent envisioned the future to be the same as the present or expected the need for change to go unnoticed. The survey suggests that “mega trends will force the industry to innovate; old modes of thinking threaten the industry’s ability to innovate; interlopers will increasingly disrupt traditional insurance operations; industry leadership will require experimentation in operating models, processes, products and customer relationships; and strategic investment in innovation today is critical to success” in the future. Innovative change starts with the most senior executives. A recent survey of global CEOs reported that more than 40% of respondents lead business model innovation within their organizations, while 38% lead operational innovation efforts and just over 30% lead product and service innovations. Executives must provide leadership within their organizations to prepare for the future. The role of the executive, the need for innovative change and the need to experiment with operating models and processes to remain competitive in the long term should drive insurance executives to evaluate the insurance value chain for opportunities to leverage innovative sourcing alternatives.
Sourcing alternatives
“The modern value chain is the collection of processes and services that are linked together to create, develop, sell, deliver, process and service an insurance policy over the life of the contract.” Organizations can leverage outsourcing relationships that complement internal processes to transform their businesses and produce operational and performance gains. Outsourced processes are typically made available through “full service”, “prime vendor with subcontractors” or “selective” outsourcing arrangements. Organizations that outsource can also gain process capabilities through joint ventures, consortia and other resource-pooling models. Further, an organization that offers an internally built process to the marketplace in effect creates an independent profit center, responsible for delivering services internally and externally.
Sourcing alternatives are categorized based on the location of the process or service. The in-house domestic model describes when processes are internally built and maintained with personnel and resources in the same country as that of the service recipient. In the in-house non-domestic model, also known as captive offshore, an organization launches its own operation in a different country, including both near-shore and offshore outsourcing. When an external provider conducts the process with personnel and resources in the same country as the service recipient, the outsourced domestic model is being used. An outsourced non-domestic model refers to the performance of processes by an external provider that delivers the service with personnel and resources in a different country than the service consumer.
Due to licensing and oversight requirements, claims management service providers have traditionally been outsourced domestic service providers. The market for claims management services in the United States is fragmented, although a small number of firms hold considerable market share. Claims management service providers have been providing outsourced claims management solutions to insurance companies for more than forty years.
Claims management service providers as a sourcing alternative
The existence of a small number of large, well qualified commercial property and casualty claims management service providers in the United States may be attributed to the explosive growth of the alternative risk financing market during the latter part of the 20th century, and the subsequent broader adoption of these services by other bearers of underwriting risk. During this period, we saw large individual commercial insurance buyers self-funding increasingly larger amounts of underwriting risk in the form of self-insured retentions and large deductibles. As a result, they came to desire total control of the claims management process in order to gain process flexibility, fee and expense transparency, and reduced loss adjustment expenses. A few savvy insurers recognized a market opportunity and spun off their (superior) claims operations to create stand-alone profit centers that today are viable alternatives to the traditional insurance company claims operation.
Good claims management service providers offer competitive advantage through the maximization of efficiency, enhancement and transformation gains produced by their own sourcing strategies. The claims management process comprises several sub-processes, including claims investigation, loss reserving, financial management, litigation management (litigation planning), medical cost containment (managed care strategies), recovery management (subrogation, salvage, second injury fund), settlement, regulatory compliance and information management; each represents an opportunity to provide a superior service that positively influences loss and expense ratios. In addition, other sub-processes involving call center functions, independent adjusting, loss fund administration (loss payment issuance), defense work, medical bill review, nurse case management, PPO network management, structured settlements, and others may offer additional gain if managed effectively. The most successful outsourcers focus on planning for business outcomes, partnering for performance, strong governance for a smooth transition and execution with innovation.
Planning and the claims management service provider
The outsourcing planning process considers the desired level of impact the outsourcing engagement will have on the organization. Outcomes are commonly classified into three broad categories: efficiency, enhancement and transformation. Efficiency planning centers on identifying opportunities for cost reductions while maintaining high quality and high process availability; this approach leverages the service provider’s scale of operations, technical capabilities and management proficiency. Enhancement goes a step further: the objective is to optimize a process in a way that gives an organization a tangible advantage or a new degree of functionality not previously available. Transformation is the most ambitious objective and directly affects the fulfillment of business strategy by altering the organization through significant changes to the business model. Outsourcing may produce all three outcomes; however, transformation is the ultimate objective of the most successful outsourcing engagements – experienced outsourcers recognize that “value lies in turning costs into capabilities” – a product of successful transformation. Transformation requires an alignment of the service recipient’s and service provider’s business strategies. A successful transformation results in the ability to “innovate and dramatically improve the very competitiveness of the organization by creating new revenues, outmaneuvering the competition, and even changing the very basis on which a corporation operates.” Transformation involves a much higher level of risk and is generally approached as a partnership of equals; the planning, implementation and realization of benefits require high-level interaction, investment and trust.
Claims management service providers can deliver transformational change to commercial property and casualty insurance companies at all stages of the insurance company’s life. Well-planned and executed arrangements favorably impact income, profit and return on equity during the launch, growth, maturity and decline of an insurer. For example, during launch, an executive team that outsources the claims management process can focus on strategy and tactics that increase value rather than on developing a claims management organization; the expertise, scale and scope of the claims management service provider are available for immediate implementation. The fixed expenses of developing a claims operation are avoided as they are converted into variable expenses, freeing the organization to invest capital in the activities producing the highest return on investment. This approach is most successful when speed to market is imperative. One real-life example involves an insurance company that had committed to the “virtual insurance company” model to launch and underwrite property and general liability insurance on a single-state basis. The executive team understood the benefits of the model, and by contracting with a claims management service provider it was able to immediately access the deep expertise, scalability and broad scope of claims-related services that the provider offered. Additionally, the company’s capital was not tied up in office leases for claims staff, their salaries and benefits, advanced claims management information technology, errors and omissions insurance and other related expenses. Service fees paid to the claims management service provider were variable, based on a percentage of gross written premium with a very manageable fixed expense component.
During the growth phase, an executive team may choose to outsource the claims management process as it expands product lines, geographic range or total premium writings. To use another real-world example, let’s examine an insurance company that has committed to an all-lines (workers’ compensation, general liability, automobile liability and property) national expansion strategy after years of regional operation as a workers’ compensation insurer. The challenges of expanding the existing regional monoline claim operation to manage claims arising from new product lines on a national basis are many. By using a third-party claims management service provider, the existing claims organization’s regional capabilities simply augment the outsourcing arrangement without disruption to the existing operation.
For a mature insurance company confronted with the constraints of legacy information systems, outdated claims processes, underperforming managed care arrangements and other inefficiencies, the outsourcing of the claims management process offers a viable alternative. A third example is an insurance company that maintained its own claims organization yet gave policyholders the option of selecting an approved claims management service provider instead, as is common in the individual risk management account market. The mature insurance company continues to underwrite profitable policyholders while giving up only nominal incremental revenue on claims management services.
Insurance companies with discontinued operations are confronted with the same challenges as the start-up. As claims volume declines, the allocation of capital to claims office leases, professional compensation, management information systems, errors and omissions insurance and other related expenditures must be scaled back. Consider an insurance company in runoff. A claims management service provider is capable of assuming claims handling responsibility for all open and closed claims. The runoff insurance company shifts much of the fixed expenses to variable expenses while gaining all of the advantages of the world-class claims management process offered by the claims management service provider. It is possible for the claims management service provider to carve out the existing claims organization, assume staff and facilities, and manage the runoff claims activity as a standalone unit.
Large, competent claims management service providers deliver transformational change to insurance companies. Designated adjuster, dedicated adjuster and dedicated unit staffing models based on traditional scale and scope of control caseload models provide great flexibility to executive teams. Claims management service providers are capable of generating mutually beneficial partnerships.
Partnering and the claims management service provider
The focus now shifts to the selection of a service provider through a formal vendor selection process and contract negotiation. Successful outsourcers no longer view external service providers as cynically as perhaps they once did, as “the enemy – dishonest, untrustworthy characters, totally focused on sucking as much money as possible” from the service recipient. Instead, these organizations “create mutually beneficial relationships with trusted providers who understand their industry, respect their corporate cultures, and put mutual interest before self interest.” They frequently consider these factors above price.
Proper evaluation of service providers requires the creation of a detailed statement of work and contract as part of a provider evaluation packet. Ideally, the packet is offered to fewer than four relevant service providers. Responses should define a detailed approach to the work and a price for doing so. Proposals should be scored against predetermined criteria, and negotiations can thereafter begin with finalists. This differs from the traditional request for proposal (RFP) process, which offers a list of open-ended questions that invite widely disparate answers that are often difficult to evaluate objectively. The service provider and service recipient learn little about each other this way, and vendor selection becomes a highly subjective process. Experienced outsourcers, however, approach service providers with proven capabilities and structure the evaluation process to actually test the most important selection criteria. Notably, experts “ranked the contract as the most important management tool in the first year of a[n outsourcing] relationship.” Arrangements aimed at organizational transformation involve the most risk and as such should focus on the successful alignment of organizations to achieve a well-defined, mutually understood vision. Relationship management, service delivery, service level expectations and price must also be considered. The most successful relationships are conducted as quasi-partnerships with mutual benefits acknowledged by each party.
To generalize, and perhaps oversimplify, in our experience the less successful outsourcing relationships are approached as typical vendor/buyer arrangements. Although a best practice involves some form of the evaluation process described above, most continue to be undertaken through a rigid but less effective RFP process or worse, some form of informal decision making based on who has the best sales pitch.
In their responses, prospective claims management service providers should clearly define how they would align their operational capabilities with the insurer’s strategic vision and describe how they would complement the existing claims organization, leverage existing service provider and service recipient relationships and adapt the claims process to better serve the insurer’s needs. A plan describing options for the integration of claim reporting processes and call center services, data exchange processes for policy verification and coverage determination, and data interface processes should also be presented to the insurer. The claims management service provider should then determine to what extent independent adjusting firms, managed care services organizations, defense counsel and others should be brought to bear to produce a truly integrated solution. Any changes to the claims process must be evaluated by the insurer and their acceptance explicitly confirmed. Throughout the selection process, both parties should involve a team of stakeholders including executive management, claims operation leadership, information technology experts and others. Claims handling expectations should be memorialized in a formal set of claims handling guidelines. Contract terms should stress the strategic alignment of both organizations through service level requirements. Once the claims management service provider is selected, the focus shifts to transition.
Transition and the claims management service provider
The most successful outsourcers recognize transition as a critical part of the outsourcing lifecycle. Identifying an executive sponsor, managing change, defining governance measures and developing an exit strategy are critical to the success of the engagement. Experienced outsourcers realize the importance of identifying executive and general managers with the requisite skills to manage all phases of the outsourcing lifecycle. The most successful understand that “it is best if sponsorship includes the most senior leadership possible – one or more individuals at the c-suite level with a deep passion for the long-term goals of the arrangement.”
Some organizations deploy organizational structures dedicated to sourcing projects. These units are typically staffed with relationship managers, performance managers and contract managers. A relationship manager is responsible for developing and prioritizing requirements, managing issue escalation, monitoring performance, and acting as account-level liaison for the multiple service providers (to the extent they are engaged) and for business unit relationship managers. The performance manager is responsible for operations oversight, service integration, incident management and performance management. The contracts manager is responsible for the management of contract terms and conditions, the management of projects and bids, and the enforcement of contracts and schedules.
Transitioning to a service provider involves a great deal of change. The fundamentals of change management suggest that the change agent clarify the need for change, outline a vision for the future and provide a logical first step toward the desired outcome. Committing to a comprehensive change management plan that, through sound communication, reinforces the vision and quantifies the strategic objectives sought is a key success factor. Governance measures are applicable to all phases of the outsourcing engagement and include the “formal and informal structures a company and its service provider use to monitor, manage and mediate their collaborative effort.” Some service recipients establish formal sourcing units responsible for relationship, performance and contract management. Others rely on operational review committees, capability review committees, joint review boards and compliance committees. These formal committees meet monthly or quarterly. The informal daily contact and interaction create an environment of open and honest exchange and are not to be underestimated. As needs change, service recipients may move work from one service provider to another. Experts contend that it is “important to consider how the next transition will occur if and when the outsourcing relationship ends.” Such planning protects the organization in case a contract is terminated and provides a plan for the next transition.
In practice, claims management transitions are complex and time-consuming exercises. Participation expands beyond the claims function to include operations, finance, regulatory compliance, legal, underwriting, actuarial and information technology functions. Because claims is a function critical to customer perception and satisfaction, executive-level sponsorship by the chief executive officer, chief operating officer and/or chief claims officer (or equivalent) is crucial. The claims officer is perhaps best positioned to assume the executive sponsor role, articulate business objectives and internally champion the transition. The claims officer is also best suited to communicate the message that, by outsourcing the claims management function without discontinuing the existing operation, the insurance company expects to grow more rapidly and profitably and to develop and service new product lines. The claims officer will need to address the existing claims operation’s concerns about obsolescence by emphasizing the operation’s long-term strategic importance.
The claims officer should assemble an internal team and work to align it closely with the claims management service provider’s implementation team. A well-considered collaborative implementation plan will specify key objectives and mutually acceptable deadlines. To execute the plan, the claims management service provider should appoint an implementation manager responsible for leading the overall transition and any related function-specific projects. Related projects may include integration of call center services, adaptation of the claims process to address oversight requirements, coordination of independent adjusting, legal and salvage services, integration of medical cost containment processes, establishment of loss funding mechanisms, development of policy and claims data interfaces, and others. The implementation team should include members with experience in each of these areas. During transition, team members generally interact with their counterparts at the claims management service provider while the implementation manager steers the project. Periodic status meetings and other communications should be used to gauge progress, address open concerns and define next steps. The claims management service provider’s relationship manager supports the implementation manager during transition and takes the lead once the transition is complete.
Although the claims management service provider and the service recipient operate closely, process alignment is achievable without complete process integration and the alignment can be undone without catastrophic impact on either firm when the relationship no longer makes sense. Such dissolution of the relationship should be spelled out as an exit strategy in the services agreement between the parties.
Execution and the claims management service provider
Throughout the operational phase of an outsourcing engagement, service recipients often begin to realize significant performance improvements, cost reductions and capability gains. The primary focus of the engagement thereafter shifts to optimizing outsourced processes. According to Accenture, the most successful outsourcers “seek to build on their immediate gains by setting new standards and by seeking out new opportunities to capture value and raise performance.” These outsourcers produce real value by stressing continuous improvement and seeking broader use of process automation and performance improvement strategies. Tremendous value is created when the environment shifts from one in which the service recipient dictates requirements to the service provider to one in which the service recipient taps the service provider’s knowledge of best practices. Continuous improvement may be included as a contract feature by adding terms that require defined performance gains – based on key performance measures that are continuously monitored – on an annual basis.
An insurance company in growth mode could derive substantial benefits by engaging a claims management service provider, as support for independent adjusting, medical cost containment, defense, salvage and other critical processes would be immediately available and the claims function would be eminently scalable. Such scale and scope of a claims solution is rarely available on a “build-only” basis. The variable expense structure afforded by outsourcing arrangements frees capital for other purposes. Additionally, continuous performance gains often result from the claims service provider’s own pursuit of operational excellence.
Conclusion
Insurance executives can transform their organizations by successfully outsourcing processes within the insurance value chain. Planning for transformational change by closely aligning with a claims management service provider, and leveraging that provider’s unique qualities and capabilities, sets the stage for meeting or exceeding expectations.


Derek D’Onofrio is an Account Executive in the National Sales Division of Gallagher-Bassett.

Activity-based costing: Innovative methods to decrease costs and increase profitability

Challenges facing insurers today
Insurance carriers face a myriad of challenges as they drive their businesses toward profitability. Volatile conditions in the investment and real estate markets, typically bellwethers for insurance investments, are driving organizations to further examine their own costing structures and their methods for identifying and understanding the costs that make up their business model. Combine this with unprecedented customer churn, new entrants and consolidation in the market, and recent significant man-made and natural disasters, and insurers need to adopt new methods to ensure that underwriting is profitable.
Impact of costs and expenses
Typically, insurance organizations are divided into two distinct areas.  One is corporate (or staff), with functions such as finance or actuarial. The second is field operations, with functions such as regional, district or branch profit, underwriting, and claims or service centers. Both need access to reality-based costing information.
Each profit center must identify and model premium, loss-and-expense driven relationships by line of business, region and branch, and plan and budget for both direct and indirect channels.
For corporate functions, the primary challenges involve process control and slow response times. In field operations, the focus is on agent-, policyholder-, or claimant-facing activities, which bring with them the need to control resources efficiently, manage different revenue and cost areas within the business and meet a range of reporting requirements. Often hampered by a lack of financial analysis of true costs within their operations, field managers find it challenging to drive financial accountability. The primary cost categories, and their typical shares of the premium dollar, are:

  • Claims and loss adjustment costs, including both direct claims or benefits cost and loss adjustment expenses, represent up to 75% of the incoming premium dollar.
  • Business acquisition and underwriting costs, including commissions paid to agents and brokers, auto or property inspection expenses, credit and motor vehicle report fees, medical examination fees, and rating and underwriting expenses, represent 15-25% of the premium dollar, depending upon the type of business and insurer distribution model employed.
  • General and administrative costs, including taxes and general expenses other than underwriting/acquisition and loss adjustment expenses such as rent and utilities, information technology and labor, represent 5-10% of the premium dollar.

Once primary cost categories are identified and captured, the financial information must be translated into industry-specific operational key performance indicators (KPIs). Cost reduction and profitability across customers, products, channels, and markets typically play a predominant role in these KPIs.
Enter activity-based costing
As companies struggle to gain a more complete understanding of customer, product and channel profitability, they realize the importance of using activity-based costing (ABC) to correctly calculate costs. Activities can be defined at the macro level (such as direct premium payment) or at a more detailed level (such as payment at an agent’s office, recognizing that costs will vary by product and by delivery channel as well as geographically). One of the key challenges for insurance organizations is to understand how their infrastructure resources are consumed. This can only be reliably understood by using an ABC methodology to calculate how products, customers and channels consume activities and how activities consume resources and costs.
Activity-based costing defines and measures the cost and performance of objects, activities and resources. Cost objects consume activities, and activities consume resources. Resource costs are assigned to activities based on their use of those resources, and activity costs are reassigned to cost objects (outputs) based on the cost objects’ proportional use of those activities. ABC helps organizations to understand the activities and resources associated with cost objects. This allows for examination of a firm’s financial and operating data and enables the monitoring and analysis of performance metrics.
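To make the two-stage assignment concrete, the short Python sketch below pools resource costs onto activities and then spreads activity costs over cost objects using driver volumes. The figures and activity names are purely illustrative (they anticipate the departmental example used later in this article) and are not data from any particular insurer.

    # Minimal two-stage activity-based costing sketch (illustrative figures only).
    resource_costs = {"staff": 16_800, "supervision": 3_360, "indirect": 4_200}
    resource_to_activity_share = {
        "staff":       {"process applications": 2 / 3, "process claims": 1 / 3},
        "supervision": {"process applications": 0.60,  "process claims": 0.40},
        "indirect":    {"process applications": 0.30,  "process claims": 0.70},
    }

    # Stage 1: resources -> activities, based on each activity's share of the resource.
    activity_costs = {}
    for resource, cost in resource_costs.items():
        for activity, share in resource_to_activity_share[resource].items():
            activity_costs[activity] = activity_costs.get(activity, 0.0) + cost * share

    # Stage 2: activities -> cost objects, based on the driver volumes each consumes.
    driver_volumes = {"process applications": 5_000, "process claims": 1_000}
    unit_rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

    print(activity_costs)   # {'process applications': 14476.0, 'process claims': 9884.0}
    print(unit_rates)       # roughly $2.90 per application, $9.88 per claim

The same structure scales to any number of resources, activities and cost objects; only the share and volume tables grow.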
An example of ABC benefit
Net written premium is significantly impacted by the expenses incurred in processing and brokerage.
A large healthcare insurer in the Northeast worked with an assumed costing model for its claims process. Before ABC, it was assumed that the cost for processing a claim for one customer was the same as processing a claim for another, regardless of the methodology used for processing.
Using ABC, they started to examine critical points such as the number of claims received and the method by which they were processed. They reviewed the number of claims that were automatically processed and how many were received electronically versus paper claim submissions – critical drivers that impact the costs associated with each customer. Through the use of activity-based costing, they discovered that all claims processes do not result in the same cost structure and that some customer bands were costing significantly more than others.
Consequently, they arrived at a number of important discoveries regarding claims processing.  First, auto-adjudication (automatic processing of claims) proved to be an area of key cost differential. Accordingly, improving processes to increase the use of auto-adjudication and reduce human intervention resulted in significant savings.
They also focused on rework – those claims that had to be reprocessed as a result of being paid incorrectly. ABC provided a method to assign a value to claims handling and rework efforts. Once a dollar value was assigned to claims processing rework (and these values can run into the millions of dollars), employees and management began to pay attention. This type of analysis helped to focus improvement efforts.
This information had a significant ripple effect across the organization. As the company’s customer base expands, they can easily determine an appropriate allocation of costs for customers and products they are acquiring. It allows the insurer to understand levels of profitability associated with each customer band enabling them to negotiate contracts that drive more revenue or lower costs.
Activity-based costing methodologies
Activity-based costing involves the collection and collation of system and non-system data. This often involves interviewing staff and documenting activities and cycle times associated with those activities.
One of the main deterrents from implementing ABC has been the amount of time and cost involved in collecting and collating non-system data, often involving manual interventions that include interviewing multiple staff members and examining mountains of output from paper-based systems. This has led some to seek other methodologies for allocating resource costs to activities.
We can now turn to a review of the strengths and limitations of each of the methodologies used in ABC, including time-splits, time-capture and time-driven costing, and provide a working example of each.
The choice of methodology should be based upon characteristics of the specific activity being costed and the availability of reliable and robust data. In practice, this means implementations will rarely, if ever, be based on a single methodology. Organizations should ensure the software they select can easily support all three methodologies and includes the tools required to easily update their models.
Below is a simple insurance scenario that demonstrates how activities are costed using each of the methodologies reviewed:
Activities
This scenario is based on a department that carries out two activities: processing applications and handling claims.
Driver volumes
During the month studied, the department processes 5,000 applications and 1,000 claims.
Resources
There are four staff dedicated to the department, working seven hours for 20 working days per month, totaling 560 hours (33,600 minutes) of available capacity. In addition, a supervisor spends 60% of her time managing the department. The remainder of the supervisor’s time is spent managing another department.
Cost
The direct expense (salary, benefits, etc.) incurred for staff running the department during the month is $16,800. The supervisor’s cost adds a further $5,600, of which 60% is attributable to this department; of that departmental time, an estimated 60% is spent on processing applications and 40% on claims.
In addition, there are costs of $4,200 allocated to the department each month for indirect costs such as facilities, IT and HR. Indirect costs are split between the two activities based upon the resources they consume. In our example, the “claims processing” activity involves making extensive use of outbound telephone calls. Therefore, this activity receives a larger cost allocation (70%), while the “process applications” activity receives less (30%). We’ll see that all of this information is required for costing, regardless of the methodology used.
Methodology 1: Time-splits
Time-splits are the simplest ABC methodology to understand. Managers are simply surveyed to find out what proportion of working time is spent on various activities. This proportion is used to allocate expenses to activities.
Calculated example of costing using time-splits
The manager responsible for the department needs to provide only three numbers: the proportion of time spent processing applications, the proportion spent processing claims and a figure for any excess capacity.
For example, because the team processes each day’s applications until this activity is completed (generally early in the afternoon) and then processes claims until the end of the workday, the manager is confident that little or no excess capacity exists and that a fairly reliable split for the activities is 66.7% for processing applications and 33.3% for processing claims.
Calculation. The calculation has two stages: first we assign resource costs to activities, and then we assign activity costs to cost objects.
In Step 1, time-splits are used to assign resource costs to activities. In Step 2, volume drivers are used to calculate activity unit rates.
Step 1: Assigning resource costs to activities

                                   Process applications           Process claims                 Total
  Time-split                       66.7%                          33.3%                          100.0%
  Assignment of direct cost        ($16,800 x 66.7%) $11,200      ($16,800 x 33.3%) $5,600       $16,800
  Assignment of supervisor cost    ($5,600 x 60% x 60%) $2,016    ($5,600 x 60% x 40%) $1,344    $3,360
  Assignment of indirect cost      ($4,200 x 30%) $1,260          ($4,200 x 70%) $2,940          $4,200
  Total cost of activity           $14,476                        $9,884                         $24,360

Step 2: Calculating activity unit rates

                                                Process applications    Process claims      Total
  Total cost of activity                        $14,476                 $9,884              $24,360
  Volume driver                                 5,000                   1,000
  Calculation                                   ($14,476 / 5,000)       ($9,884 / 1,000)
  Activity unit rate assigned to cost object    $2.90                   $9.88
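For readers who prefer to see the arithmetic spelled out, here is a minimal Python sketch of the time-splits calculation above, using the scenario figures (direct cost $16,800, supervisor cost $5,600 with 60% attributable to the department, indirect cost $4,200 split 30/70). Variable names are illustrative only.

    # Time-splits: manager-estimated splits allocate resource costs to the two activities.
    direct, supervisor, indirect = 16_800, 5_600, 4_200
    volumes = {"process applications": 5_000, "process claims": 1_000}

    staff_split = {"process applications": 2 / 3, "process claims": 1 / 3}      # manager's estimate
    supervisor_split = {"process applications": 0.60, "process claims": 0.40}   # of her 60% dept share
    indirect_split = {"process applications": 0.30, "process claims": 0.70}

    for activity, volume in volumes.items():
        cost = (direct * staff_split[activity]
                + supervisor * 0.60 * supervisor_split[activity]
                + indirect * indirect_split[activity])
        print(activity, round(cost), round(cost / volume, 2))
    # process applications 14476 2.9
    # process claims 9884 9.88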

Strengths of time-splits
Ease of implementation
Costing using time-splits is very straightforward and only requires data found in the general ledger and data collected from each responsibility center. As such, it is frequently used for pilot studies, where early results guide the methodologies used for model refinements.
Implementing ABC based on time-splits involves working with each responsibility center to develop a dictionary of the activities they carry out and allowing them to routinely report the amount of time spent on each activity. This allows managers to directly participate in the project and review results with the knowledge that they contributed to them. Consequently, there is likely to be greater commitment to the success of the project.
Weaknesses of time-splits
Data collection and collation
Resurveying contributors every time a model is refreshed can be laborious. However, the advent of web-based ABC applications that allow data to be entered directly into the database, and the deployment of work management tools that expedite routine data collection, have eliminated many of these issues.
Failure to identify excess capacity
When asked to submit time-splits, few managers will willingly reveal large amounts of excess capacity and idle time. This means that substantial excess capacity is rarely revealed when time-splits are used.
Supposed lack of accuracy
Because of its simple empirical approach, time-splits is viewed as less accurate than the other methodologies. However, in responsibility centers where reliable data exist on how staff spend their time (e.g., customer contact centers), managers acknowledge the value of this information, and time-splits based on such data produce results that are no less reliable than those generated using other methodologies.
Methodology 2: Time-capture
Time-capture is a particularly useful method for ascertaining how staff split their time between projects and customers. Its value becomes evident when applied to functions such as research and development, IT, or professional service organizations, where activities are rarely repetitive. As a rule of thumb, wherever time-capture is already in use in an organization, it should be treated as the default method for ABC costing before any other is considered.
Calculated example of costing using time-capture
The amount of time staff  spend on each activity might be captured from the systems they are using, from a specific time-capture application or from time sheets submitted by staff. In the example below, the figures indicate that 336 hours were spent processing applications, 168 hours processing claims and 56 hours unaccounted for, which the manager records as excess capacity.
Calculation
In this example, the actual hours are used to assign resource costs to activities in Step 1. But in Step 2, volume drivers are used to calculate activity unit rates, just as in the first example.
Step 1: Assigning resource costs to activities

                                   Process applications          Process claims                Excess capacity            Total
  Time spent (hrs)                 336                           168                           56                         560
  Assignment of direct cost        ($16,800 x 336/560) $10,080   ($16,800 x 168/560) $5,040    ($16,800 x 56/560) $1,680  $16,800
  Assignment of supervisor cost    ($5,600 x 60% x 60%) $2,016   ($5,600 x 60% x 40%) $1,344   –                          $3,360
  Assignment of indirect cost      ($4,200 x 30%) $1,260         ($4,200 x 70%) $2,940         –                          $4,200
  Total cost of activity           $13,356                       $9,324                        $1,680                     $24,360

 
Step 2: Calculating activity unit rates

                                                Process applications    Process claims      Excess capacity    Total
  Total cost of activity                        $13,356                 $9,324              $1,680             $24,360
  Volume driver                                 5,000                   1,000
  Calculation                                   ($13,356 / 5,000)       ($9,324 / 1,000)
  Activity unit rate assigned to cost object    $2.67                   $9.32
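The same calculation can be sketched in a few lines of Python: recorded hours drive the staff-cost allocation, and the unrecorded 56 hours fall out as excess capacity. As before, the figures come from the worked example and the names are illustrative.

    # Time-capture: recorded hours allocate staff cost; unaccounted time becomes excess capacity.
    direct, supervisor, indirect = 16_800, 5_600, 4_200
    hours = {"process applications": 336, "process claims": 168, "excess capacity": 56}
    total_hours = sum(hours.values())                       # 560 available hours
    volumes = {"process applications": 5_000, "process claims": 1_000}

    supervisor_split = {"process applications": 0.60, "process claims": 0.40, "excess capacity": 0.0}
    indirect_split = {"process applications": 0.30, "process claims": 0.70, "excess capacity": 0.0}

    for activity, hrs in hours.items():
        cost = (direct * hrs / total_hours
                + supervisor * 0.60 * supervisor_split[activity]
                + indirect * indirect_split[activity])
        unit_rate = round(cost / volumes[activity], 2) if activity in volumes else None
        print(activity, round(cost), unit_rate)
    # process applications 13356 2.67
    # process claims 9324 9.32
    # excess capacity 1680 None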

Strengths of time-capture
Where blocks of time are dedicated to specific projects or customers, and where activities aren’t repetitive and time-capture is already in use, time-capture is the preferred methodology for allocating resource costs to activities.
Weaknesses of time-capture
Exposing excess capacity
Unless time-capture is completely automated and not dependent upon an individual triggering a recording of time spent working, it is unlikely to accurately expose excess capacity (although it is more likely to do so than the time-splits methodology).
Staff resistance
If a time-capture system is already in use for billing or cross charging, using the data for ABC costing is unlikely to generate dissent among staff. However, introducing a time-capture system where none previously existed requires strong change management skills.
Methodology 3: Time-driven ABC
Time-driven costing involves allocating costs based on the practical capacity of the resources supplied by measuring (or estimating) the amount of time taken to perform an activity. The volume of transactions is fundamental to the calculation of time-driven costing:

  • Transactional cost drivers are counts of the number of times an activity is performed. Examples include the number of purchase orders processed, the number of inbound telephone calls answered and the number of delivery drops made. By definition, a transactional driver is used whenever a repeatable activity is completed in a similar amount of time.
  • Duration drivers are measurements (or estimates) of the time required to perform a task or activity. Examples of duration drivers include the time taken to answer a telephone call or process an application. In certain responsibility centers, duration drivers may be reliably accessed (e.g., in most customer contact centers, the amount of time to handle a call is automatically recorded). Using logistics operations as another example, duration drivers may be captured from hand wands at the time of collection and delivery.

An early exponent of ABC, Dr Robert Kaplan, promotes time-driven costing as being “… simpler for estimating and maintaining an ABC model, and also more accurate.” While time-driven costing undoubtedly has a place in ABC and is the preferred methodology in certain situations, it has its limitations.
Calculated example of costing using time-driven ABC
Time-driven duration drivers for our two activity examples are system-generated. The processing system provides the average duration for application processing (4 minutes), while call accounting functions of the telephone system provide the average duration of time to handle a claim (10 minutes).
Calculation
In this example, the department’s resources consist of four staff members working 7 hours per day for 20 days. Allowing for holidays and sick days reduces the available time by 10%, so that the calculation reflects practical rather than theoretical capacity.
Step 1: Calculating the unit cost of available time

                             Total
  Direct cost                $16,800
  Time available (mins)      ((4 x 20 x 7 x 60) x 90%) 30,240
  Cost per minute            ($16,800 / 30,240) $0.555

Step 2: Calculating activity unit rates

                                        Process applications                  Process claims                        Total
  Volume driver                         5,000                                 1,000
  Cycle time (mins)                     4’00                                  10’00
  Total time used (mins)                (4 x 5,000) 20,000                    (10 x 1,000) 10,000                   30,000
  Cost of time used                     (20,000 x $0.555) $11,100             (10,000 x $0.555) $5,550              $16,650
  Assignment of supervisor’s costs†     ($5,600 x 60% x 60%) $2,016           ($5,600 x 60% x 40%) $1,344           $3,360
  Assignment of indirect cost†          ($4,200 x 30%) $1,260                 ($4,200 x 70%) $2,940                 $4,200
  Total activity cost                   ($11,100 + $2,016 + $1,260) $14,376   ($5,550 + $1,344 + $2,940) $9,834     $24,210
  Activity unit rate                    ($14,376 / 5,000) $2.88               ($9,834 / 1,000) $9.83

  Excess capacity (mins)                (30,240 – 30,000) 240
  Cost of excess capacity               (240 x $0.555) $133

† It should be noted that while time-driven ABC is effective, it should not be used in isolation. In this example, supervisory and indirect costs are not suited to time-driven costing; those costs are therefore assigned based on the resources consumed by each activity.
Completing the calculation reveals that $133 of resource cost, equivalent to 240 minutes (4 hours) of the available resource time, is attributable to excess capacity.
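The following Python sketch reproduces the time-driven calculation above. It uses the per-minute rate as shown in the table ($0.555, a truncation of the exact $16,800 / 30,240), and supervisory and indirect costs are still assigned on consumption shares, per the note above. Names and structure are illustrative only.

    # Time-driven ABC: practical capacity and cycle times drive the staff-cost allocation.
    direct, supervisor, indirect = 16_800, 5_600, 4_200

    practical_capacity_mins = 4 * 20 * 7 * 60 * 0.90    # 30,240 minutes after the 10% allowance
    cost_per_minute = 0.555                             # $16,800 / 30,240, truncated as in the table

    volumes = {"process applications": 5_000, "process claims": 1_000}
    cycle_mins = {"process applications": 4, "process claims": 10}
    supervisor_split = {"process applications": 0.60, "process claims": 0.40}
    indirect_split = {"process applications": 0.30, "process claims": 0.70}

    minutes_used = 0
    for activity, volume in volumes.items():
        used = volume * cycle_mins[activity]             # minutes of capacity consumed
        minutes_used += used
        cost = (used * cost_per_minute
                + supervisor * 0.60 * supervisor_split[activity]
                + indirect * indirect_split[activity])
        print(activity, round(cost), round(cost / volume, 2))
    # process applications 14376 2.88
    # process claims 9834 9.83

    excess_mins = practical_capacity_mins - minutes_used                 # 240 unused minutes
    print("excess capacity cost", round(excess_mins * cost_per_minute))  # 133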
Strengths of time-driven ABC
Surfacing excess capacity
When people estimate how much time they spend on a given list of activities, they invariably supply percentages that add up to 100%, as very few individuals will say that any of their time is unused or idle. As such, cost driver rates calculated from this process may incorrectly assume that resources are working at full capacity. Time-driven ABC effectively overcomes this problem and reveals differences between the total amount of time needed to carry out activities in a responsibility center and the actual amount of time available given its current resources. (Note that this can lead to time-driven ABC becoming closely associated with time and motion studies, which are viewed unfavorably by many workforces.)
Weaknesses of time-driven ABC
Availability of reliable and robust duration drivers
Unless the data are readily available, robust and reliable, time-driven ABC can generate as many problems as it purports to solve.
If the data come from reliable systems such as automated call handling systems, and are regularly updated, they will be accurate. However, if they are out of date or based on estimates, they could result in substantial errors; the difference between an estimate of four minutes and four minutes ten seconds to handle an inbound telesales call may not seem like much, but when one considers volumes of 100,000 calls or more it becomes quite substantial. Therefore, a time-driven methodology requires as much data collection as any other methodology if it is to be robust and reliable.
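As a rough, hypothetical illustration of how quickly a small duration error compounds, the snippet below assumes the $0.555-per-minute rate from the worked example and a volume of 100,000 calls; the figures are for illustration only.

    # A 10-second error on a 4-minute cycle-time estimate, across 100,000 calls (hypothetical figures).
    calls = 100_000
    estimated_mins, actual_mins = 4.0, 4.0 + 10 / 60
    cost_per_minute = 0.555                              # rate carried over from the worked example

    misallocated_minutes = (actual_mins - estimated_mins) * calls   # ~16,667 minutes (~278 hours)
    misallocated_cost = misallocated_minutes * cost_per_minute      # ~$9,250 assigned to the wrong objects
    print(round(misallocated_minutes), round(misallocated_cost))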
In any organization there will be responsibility centers, such as marketing, legal, research and areas of IT, where activities are far from homogeneous and repetitive and duration drivers are simply not available. In these instances, a different methodology must be used.
Understanding variances in duration drivers
Duration drivers can be used at the aggregated or individual level. Where duration drivers are available for each individual transaction, a time-driven methodology can be used to calculate a unique cost for each instance. For example, if the system logs that it takes an agent 8 minutes to handle an inbound telephone call, it would pick up twice as much cost as a more typical call that takes only 4 minutes to handle.
The cost is valid if this is a more complex call for a different type of service: the type that would be identified as a separate activity under any other ABC methodology. However, if the call took 8 minutes simply because it was taken by an inexperienced agent, then the charge is invalid and will provide erroneous results.
The above discussion is not intended to provide a definitive answer on the use of time-driven costing. Rather, it illustrates that even where hard data such as durations and cycle times are available, their use in calculating costs and profitability needs to be carefully considered if inappropriate allocations are to be avoided.
Data collection
It is frequently suggested that time-driven costing eliminates the need for surveys and data collection; this is not the case. Each time a model is refreshed and recalculated, duration drivers must be updated; even the most repetitive processes change. Contact center agents are frequently provided with new scripts in attempts to up-sell and cross-sell other products and services, and all such changes affect the length of a call. These changes need to be captured, either by extracting the data from a transactional system or by asking process owners to provide updates. This is easily achieved with web-based ABC applications and work management tools that expedite data collection.
Importantly, if reliable systems are not in place to capture cycle times, there may be a dependency upon surveys, and survey subjects are likely to alter their normal working patterns so as to appear to be more productive than they may actually be.
One also needs to consider what happens if the computation of driver volumes and activity cycle times suggests that a department is working above its theoretical capacity, as this would surely cast doubt on the reliability of any ABC model and lead managers to question the validity of the reports.
Volume of data
Costing individual transactions using a time-based methodology quickly generates enormous amounts of data, which require large databases and powerful analysis and reporting tools to derive meaningful reports.
Before going to this level of granularity (e.g., using a time-driven methodology to calculate the cost of every transaction for every customer), it is worthwhile understanding exactly how managers in the organization intend to use the information to inform their decision making. Other than for key accounts, the focus of most strategic and operational decisions is at the customer segment level, and so it may be more useful – and considerably less burdensome – to provide analysis at this higher level.
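As a simple illustration of what segment-level reporting involves, the sketch below rolls transaction-level costs up to customer segments. The records, field names and segment mapping are hypothetical; the point is that most decisions only require the aggregated view.

```python
# A sketch of rolling transaction-level ABC output up to customer segments.
# The records, field names and segment mapping are hypothetical.

from collections import defaultdict

transactions = [
    {"customer": "C001", "segment": "retail", "cost": 2.22},
    {"customer": "C002", "segment": "retail", "cost": 4.44},
    {"customer": "K100", "segment": "key_account", "cost": 9.83},
]

segment_cost = defaultdict(float)
for t in transactions:
    segment_cost[t["segment"]] += t["cost"]

for segment, cost in sorted(segment_cost.items()):
    print(f"{segment}: ${cost:.2f}")
```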
The hybrid model
While each of the methodologies discussed previously has its own particular strengths, none is perfect for every activity in every responsibility center. In practice, models will be hybrids, with different methodologies being used for different responsibility centers. Even then, it is unlikely that reliable data are available for every activity; in certain instances it may be necessary to resort to approximations using weightings.
Nevertheless, whichever methodology is chosen, it is essential to refresh non-system driver data each time a model is calculated. Web-based ABC applications make this remarkably easy and there is no reason why ABC data should not be produced every month as part of the traditional reporting package.
It is unlikely that a single methodology will be appropriate for all activities in a model, so it is essential that organizations choose an ABC application capable of supporting all the methodologies, together with the flexibility to incorporate any special requirements for unique situations. Excess capacity should be identified and costed, but it is also important that future periods where capacity may be exceeded are identified early enough for action to be taken; a simple check of this kind is sketched below.
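A minimal early-warning sketch follows. The forecast volumes and the four- and ten-minute cycle times are assumptions (chosen so that the first period reproduces the 30,000 minutes used in the earlier example); the 30,240-minute capacity figure comes from that example.

```python
# A minimal early-warning check for periods where forecast activity would
# exceed available capacity. Volumes and cycle times are illustrative.

AVAILABLE_MINUTES = 30_240

minutes_per_unit = {"activity_a": 4, "activity_b": 10}   # assumed cycle times

forecast = {   # period -> forecast driver volumes
    "Period 1": {"activity_a": 5_000, "activity_b": 1_000},
    "Period 2": {"activity_a": 5_600, "activity_b": 1_300},
}

for period, volumes in forecast.items():
    required = sum(volumes[a] * minutes_per_unit[a] for a in volumes)
    status = "OVER CAPACITY" if required > AVAILABLE_MINUTES else "within capacity"
    print(f"{period}: {required:,} of {AVAILABLE_MINUTES:,} minutes required ({status})")
```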
Moving forward with activity based costing
Activity-based costing offers a number of significant improvement opportunities within insurance organizations. Here are some best practices to consider when looking to undertake an activity-based costing initiative.

  • Pick a key area in your organization that would likely benefit from a better understanding of costs and devote meaningful time to implementing activity-based costing principles there. Once you’ve been successful in that area, use your success as a sales tool and a model to expand into other areas within the organization.
  • If you are new to activity-based costing, it’s worth the investment to bring in a consultant to help during the setup and establishment of ABC methods. Look for consultants with experience in activity-based costing as they can help move you through the process more effectively and help accelerate “time-to-results.”
  • Senior management buy-in is critical to ensure that you have the needed support to be effective.
  • Target areas that staff already know could use a good dose of improvement. When you are able to assign a real dollar amount to the cost of inefficiency, it tends to grab the attention of management.
  • To be reliable, profitability measurement must be based on an ABC methodology.
  • Analysis of only one dimension of profitability (e.g., customer profitability) will not provide adequate insight for decision-making.
  • Companies need to ensure that their ABC initiative fits seamlessly into their existing data schema. Frequently, this means costing individual transactions and individual customers.
  • Always use a costing methodology appropriate for a specific activity within a specific department.
  • Companies should ensure that their chosen ABC application uses a single-step, multi-dimensional allocation of activity costs to cost objects.

The real value from activity-based costing projects is the ability to manage costs and report actionable information. Such information creates a high level of awareness within an organization regarding cost drivers. Conscientious managers and staff are hungry for this type of information; it gets them engaged in mapping processes to understand the activities involved and their associated costs, which in turn helps them manage budgets with far greater reliability. The result is a more collaborative approach to the business – and its bottom line.


Richard Barrett is Director of Operations in Business Objects’ Center of Excellence. He started his career in pharmaceuticals, received an MBA in 1981 and became a Fellow of the UK Chartered Institute of Marketing in 1990. He has worked in consultancy, national and international positions in consumer marketing, insurance, and business-to-business marketing. He first became involved in planning and budgeting as Planning Manager for DHL Worldwide Express in Europe in the late ’80s and has continued his interest in the topic ever since. In 2000, he joined ALG Software, a leading provider of software for activity-based costing, which became part of Business Objects in 2006. He regularly speaks at performance management events and presents courses on customer profitability and driver-based budgeting for the Chartered Institute of Management Accountants (UK).

Spreadsheet services: An efficient approach to implementing business logic

Introduction
Modeling, managing and pricing risk are among the most important priorities for every insurance company. Sophisticated proprietary models are developed by actuarial, underwriting, and finance units to perform these tasks. From a technical standpoint, a common, flexible, and easy-to-use analytical platform is needed to build those models, and spreadsheets have emerged as the preferred platform for the vast majority of insurance professionals. Their visual nature and step-by-step auditing capabilities have separated spreadsheets from more traditional programming environments such as Visual Basic, Java and C++, and from mathematical programming tools like Matlab and Mathematica. Today, almost every insurance company uses spreadsheets to manage its risk in one way or another. However, as an increasing number of insurance companies streamline and automate their business processes (including those complex models), they must deal with a major downside of spreadsheet technology: spreadsheets are designed for single-user desktop environments and do not scale in an enterprise environment that must serve a large number of users concurrently. Facing this challenge, most insurance IT departments have attempted to rewrite those spreadsheets in a more scalable programming environment. Given the complexity of the models, this approach has often been very expensive and time consuming. In most cases, by the time IT departments complete the rewriting phase, business units have already modified their models to keep up with changes in the marketplace. This leads to never-ending projects that run vastly over budget and significantly reduces the agility of insurance organizations, which become less able to react to changes and opportunities in the marketplace.
This paper presents a technological alternative that enables insurance organizations to integrate their spreadsheet models with enterprise applications without having to rewrite and convert them to another platform. As a result, insurance organizations can experience substantial cost savings, react to changes in the marketplace more quickly, and take advantage of opportunities before their competitors do. It also encourages superior collaboration between business units and IT departments, enabling each to concentrate on their core functions.
Challenge
To stay competitive, insurance companies must constantly face the challenge of properly managing their risks. Managing risk requires a collective effort from all parts of the organization; in particular, collaboration among the actuarial, underwriting and finance departments is crucial. Sophisticated models are built to better understand and properly price exposure. “What if” scenarios are executed to understand the effect of model variables. Rules-based models are designed for underwriting. To illustrate this further, following is a partial list of complex models used in insurance organizations:

  • data validation and scrubbing;
  • actuarial pricing;
  • rating engines;
  • reserve calculations;
  • product selection rule engines;
  • predictive models; and
  • underwriting engines.

Highly capable analytical platforms are necessary to build, test, and execute risk models. Traditionally, spreadsheet software has been used by insurance carriers for this purpose. There are multiple reasons that support the notion that spreadsheets provide an ideal platform for analytics:

  • Almost every insurance professional knows how to use spreadsheet software.
  • Hundreds of built-in functions simplify developing sophisticated models.
  • The familiar grid interface and built-in auditing tools enable users to visually follow complex algorithms.
  • Simple import/export features allow data manipulation.
  • Easy debugging is possible using built-in tools.

While spreadsheets are extremely powerful analytical tools, the fact that they are designed for single user desktop environments is a major disadvantage. This becomes more evident and critical as insurance companies move to web-based platforms that require integration of complex business logic with calculations that currently exist in spreadsheet format.
Traditional approach
In its most simplified form, any enterprise insurance application has three major components (Figure 1): the data layer (database), the business layer (business rules and calculations), and the presentation layer (user interface).

[Figure 1: The data, business, and presentation layers of an enterprise insurance application]

The business layer is where complex spreadsheet models need to be integrated. In general, insurance companies have chosen to rewrite spreadsheet models using traditional programming languages. This is a long and expensive process (see Figure 2). It typically starts with business units writing a specification document describing in extreme detail how their algorithms work, a tedious exercise that insurance companies either handle internally or outsource to a consulting firm. Once finalized, the specification document is delivered to the IT department. Software developers then have to understand the algorithm and code it. Because most software developers are not equipped with the skills and experience to understand complex insurance calculations, this step is often protracted and error-prone. After the code is completed, it is delivered to QA teams for testing. Given the analytical nature of the code, business units are ideally involved in testing alongside the QA teams. During testing, the original spreadsheet models are used as reference points, and results obtained from the application are compared with those obtained from the spreadsheet models. A large number of test cases are typically used to ensure that every aspect of the insurance algorithm has been exercised. Inconsistencies between the spreadsheet models and the application are reported back to IT. At the risk of generalizing, these inconsistencies are often difficult for software developers to resolve, as their understanding of the algorithm tends to be limited. As a result, testing becomes a long, iterative process that consumes valuable resources from both the business units and the IT department. At the end of the process, after all inconsistencies are resolved, business units sign off on the application and it is – finally – ready to be rolled out.

[Figure 2: The traditional process of implementing business logic in an insurance application]
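The comparison step in this testing cycle amounts to a regression test against the original spreadsheet. A minimal sketch of such a check is shown below; the test-case structure, the two result-lookup callables and the tolerance are hypothetical stand-ins for whatever a given project actually uses.

```python
# Minimal sketch of regression-testing application output against the original
# spreadsheet model. get_app_result and get_spreadsheet_result are hypothetical
# stand-ins for however each value is obtained in practice.

TOLERANCE = 0.01   # acceptable difference between application and spreadsheet

def find_inconsistencies(test_cases, get_app_result, get_spreadsheet_result):
    """Return the test cases where the application and spreadsheet disagree."""
    failures = []
    for case in test_cases:
        app_value = get_app_result(case)
        reference = get_spreadsheet_result(case)
        if abs(app_value - reference) > TOLERANCE:
            failures.append((case, app_value, reference))
    return failures
```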

Unfortunately, this is only part of the process. Business units continue to adjust their algorithms to stay competitive in the marketplace, and each adjustment must then be implemented in the insurance application. A process similar to the one described above is repeated for every such change.
The traditional process of implementing business logic and calculations is therefore not only time consuming but very expensive, and it limits an insurance organization’s ability to roll out new products quickly.
An efficient new approach – spreadsheet services
Software products have recently become available that process spreadsheets in a server environment and integrate them with other enterprise applications. These products eliminate the need to rewrite spreadsheet models in traditional programming environments. Further, existing spreadsheets can be used “as-is” or with minimal modifications in order to integrate with other insurance applications.
Figure 3 below illustrates this new approach, which we dub “spreadsheet services.” The spreadsheet engine is the central component, essentially replacing the functionality of desktop spreadsheet software. The majority of insurance carriers use Microsoft Excel as their desktop spreadsheet software. However, using Excel in server environments is not recommended by Microsoft; unstable behavior and deadlocks are some of the problems that Excel can cause when run in server environments.[1] A spreadsheet engine can instead be used to process spreadsheet files in a server environment without depending on the spreadsheet software with which they were created.[2]
[Figure 3: The spreadsheet services approach]
Because web applications require concurrent access by a large number of users, a spreadsheet engine can be designed and optimized to handle a high volume of requests and perform well in multi-threaded environments.
The interface between the spreadsheet engine and software applications is another important component worthy of discussion. There are different ways to handle this interface. With recent developments in Service-Oriented Architecture (SOA), insurance organizations are moving to implement applications that support Web Services. A web service interface between the spreadsheet engine and the insurance application makes it easier for carriers to implement this new approach.
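As a rough illustration of what such an interface might look like, the sketch below wraps a spreadsheet engine behind a simple HTTP endpoint, assuming Flask for the service layer. The SpreadsheetEngine class is a stub standing in for whatever commercial engine is used; its load/calculate interface and the input and output names are hypothetical.

```python
# A sketch of a Web Service front end for a spreadsheet engine, assuming Flask
# for the HTTP layer. SpreadsheetEngine is a stub standing in for a real
# server-side engine; its interface and the field names are hypothetical.

from flask import Flask, jsonify, request

class SpreadsheetEngine:
    """Stub for a server-side spreadsheet engine."""
    def __init__(self, workbook_path):
        self.workbook_path = workbook_path   # the business unit's model file

    @classmethod
    def load(cls, workbook_path):
        return cls(workbook_path)

    def calculate(self, inputs):
        # A real engine would write the inputs to named cells, recalculate the
        # workbook and read back the output cells; here we simply echo a dummy
        # result so the sketch is self-contained.
        return {"Premium": 0.0, "inputs": inputs}

app = Flask(__name__)
engine = SpreadsheetEngine.load("rating_model.xlsx")   # hypothetical model file

@app.route("/rate", methods=["POST"])
def rate():
    inputs = request.get_json()        # e.g. {"Age": 42, "Territory": "NY"}
    return jsonify(engine.calculate(inputs))
```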


How do you select the right technology?
There are already several products on the market that allow spreadsheet models to be run in a server environment and be integrated with enterprise applications. While each has many features designed for different applications, it is important to identify those criteria that define the right technology for your insurance application:

  • Web services. Web Services have proven to be a valuable architectural approach for building enterprise applications in insurance organizations, so it is important to select a technology that can integrate with existing Web Services platforms. Beyond the technological advantages, identical spreadsheet models can be used by multiple applications, making it easier to build within an SOA environment. For example, one rating engine can be used by internal quoting and underwriting systems as well as by broker applications developed by external vendors. Having a Web Services-based rating engine that can be accessed both internally and externally makes it easier to maintain and eliminates rating inconsistencies between the two.
  • Platform independence. Many insurance companies utilize Linux and Unix servers for their back office operations. Accordingly, platform-independent solutions provide the best alternative from a maintenance and operational point of view.
  • Performance. Running complex spreadsheet models in a server environment is a performance-intensive process that consumes significant CPU resources and memory. Performance-optimized solutions will therefore meet the concurrency and response-time requirements of enterprise applications without needing to scale up with additional hardware capacity.
  • Maintain the integrity of spreadsheet files. There are products available that convert spreadsheet files into program code (e.g., Visual Basic, C++ or a proprietary file format). This approach requires the involvement of software developers to integrate the code with the overall application every time business units update their spreadsheets, which can slow down the rollout process and increase testing requirements. Converting spreadsheet files into proprietary formats also makes spreadsheet management more difficult as the number of files increases over time.
  • Small footprint. Processing spreadsheet models in a server environment is a back office operation that consumes significant server resources. Therefore, general purpose products offering spreadsheet processing as an additional feature will consume valuable server resources and leave limited CPU capacity and memory for executing spreadsheets. As a result, additional server capacity is often needed to meet performance requirements.
  • Grid computing. Insurance applications accessed by a large number of users typically require multiple servers to operate. Solutions that support grid computing will enable carriers to scale up their applications by simply adding new servers.

Benefits of the new approach

Short term
By adopting the spreadsheet-based approach, insurance organizations realize the benefits of accelerated application development and cost savings.
The spreadsheet services approach completely eliminates the time-consuming coding of insurance algorithms and their testing. Coding complex business logic and algorithms tends to be the most time consuming part of the development of any insurance application; eliminating the need for such tedium can have a profoundly positive impact on the project development cycle.
Traditionally, business units utilize business analysts to write specifications and test applications, while IT staff write the actual code and quality assurance teams perform extensive tests to validate the accuracy of the code. Using spreadsheet services virtually eliminates this process and substantially reduces project costs.
Another important benefit of the new approach is better collaboration between business units and IT; each can focus on its core business functions, improving efficiency throughout the enterprise.
Medium term
Maintaining applications by periodically adjusting business logic using the traditional approach requires heavy involvement from all parties, as the specification writing, coding, and testing processes have to be repeated each time business units update their models. With spreadsheet services, business units need only provide IT with updated spreadsheet models. New algorithms can be implemented with minimal system testing.
Insurance organizations also benefit from faster time to market, as updates to business logic and calculations are rolled out in days rather than the weeks or months required by the traditional approach.
Long term
In the long term, insurance organizations benefit from this architecture as spreadsheet services pervade the organization and an increasing number of business units adopt the approach. Because it is based on SOA, multiple enterprise applications can access common Web Services for similar calculations and rules. Algorithms are served from a single point, eliminating redundancy among the applications used within the enterprise.
Typical insurance applications
Spreadsheet services can be utilized wherever spreadsheets are used, or can possibly be used, to model complex business logic and calculations. Actuarial pricing, underwriting and product rules engines, broker commission calculations, reserve calculations, and predictive modeling are only a few of the critical insurance processes where the new approach adds value.
Rating engines
Rating is typically a self-contained process in the policy lifecycle. Rating engines are simply software programs that return results based on programmed logic for a given set of inputs. In some cases they require database connectivity; in others they stand alone.
An ideal insurance rating system may be characterized as follows:[3]

  • It supports all lines of businesses;
  • It easily handles algorithm changes;
  • It has strong decision-support capabilities;
  • It supports customization, including state- or company-specific deviations;
  • It easily integrates with existing systems (i.e., policy administration); and
  • It supports multi-line operations.

The spreadsheet services approach meets all of these characteristics. The modeling capabilities of spreadsheets, used in conjunction with their many built-in formulas, enable the development of rating algorithms for even the most complex lines of business, providing a single source for all rating regardless of complexity.
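To illustrate the single-source idea, the sketch below shows two consuming applications (an internal quoting system and an external broker portal) calling one shared rating Web Service; the endpoint URL and the payload fields are hypothetical.

```python
# A sketch of two applications reusing one rating Web Service, so both always
# receive identical rates. The endpoint URL and payload fields are hypothetical.

import requests

RATING_SERVICE_URL = "https://rating.example.com/rate"   # hypothetical endpoint

def get_premium(risk: dict) -> float:
    """Call the shared rating service and return the calculated premium."""
    response = requests.post(RATING_SERVICE_URL, json=risk, timeout=10)
    response.raise_for_status()
    return response.json()["Premium"]

risk = {"Age": 42, "Territory": "NY", "Limit": 1_000_000}

internal_quote = get_premium(risk)   # internal quoting/underwriting system
broker_quote = get_premium(risk)     # external broker application
assert internal_quote == broker_quote   # one engine, no rating inconsistencies
```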
To respond to the dynamism of the insurance industry, carriers need the ability to adjust their rates quickly. Insurers often allocate sizeable maintenance budgets to IT departments to handle ongoing rate changes. A spreadsheet-based approach significantly reduces the burden on IT departments, frees up budgets and enables carriers to adjust their rates faster.
Conclusion
Solid risk management principles are crucial for every insurance organization. Sophisticated proprietary models are developed by actuaries, underwriters, and financial professionals to properly manage and price risk. Most of this modeling is done in spreadsheet environments because of the familiarity, flexibility and features they provide. Traditionally, the business logic already built into spreadsheet models is rewritten when those models are integrated with enterprise applications – typically a long and expensive process.
The spreadsheet services approach completely eliminates the need to rewrite business logic and calculations, while enabling business units to maintain control of their models by keeping them in a familiar format.
The spreadsheet services approach significantly reduces the costs of developing applications that utilize business logic, while promoting a more collaborative relationship between business units and IT by allowing each to concentrate on its core competence. Business users remain in full control of the business logic, enabling faster time to market and greater profitability.
References


[1] Microsoft (2007). Considerations for server-side automation of Office. Retrieved from http://support.microsoft.com/kb/257757/en-us
[2] Microsoft (2007). Considerations for server-side automation of Office. Retrieved from http://support.microsoft.com/kb/257757/en-us
[3] Stephenson, S. (2004). Insurers need to rate their rating technology. National Underwriter, Property & Casualty, Issue 45.


Ugur Kadakal is Chief Executive Officer of Pagos, Inc., a software and IT consulting firm specializing in helping its clients integrate spreadsheet-intensive functions with enterprise applications. Insurance companies commonly use Pagos products to build web-based rating systems based on existing spreadsheet rating tools. Other applications include pricing, underwriting and reserving, where sophisticated spreadsheet models are used. Prior to co-founding Pagos in 2002, Ugur held positions at Air Worldwide, Inc., a leading catastrophe modeling company. Ugur holds a Ph.D. from Northeastern University.