Key Things to Know Before Getting Into the Insurance Business

Authors: Mark Nawrath, PMP, MBA, and Dean Ferdico
Today’s technology advancements have the potential to transform businesses across industries. Aging systems and increased demand for new and innovative products mean insurance is ripe for disruption, but new solutions are not always as easy to implement as they may seem. Insurance is both complex and highly regulated: a double hit for Insurtech or non-insurance companies looking to break into the space. That said, there are endless opportunities for your company to make major waves in the industry…if you take a careful approach.
Based on our decades of insurance consulting along with our experience helping numerous Insurtech startups over the last several years, here’s what you should know as you break into the insurance market.

The insurance industry is highly regulated

Many of today’s Insurtech companies emerge from the finance world, where modern technology has transformed everything from customer service to the nature of banking itself. But while U.S. banking can operate under a single federal charter, insurance products are subject to disparate rules in 51 jurisdictions, multiplied by 20 to 30 lines of business, each with its own individual coverages. The number of details required for each product filing can be staggering, and small errors have the potential to stall a filing on its path to market.
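To put that multiplication in concrete terms, here is a back-of-the-envelope count. The jurisdiction and line-of-business figures come from the paragraph above; treating every combination as a separate filing effort is a simplifying assumption:

```python
# Rough count of distinct product filings a countrywide, all-lines rollout
# could imply. Figures from the text: 51 jurisdictions, 20-30 lines of business.
jurisdictions = 51
lines_low, lines_high = 20, 30

low, high = jurisdictions * lines_low, jurisdictions * lines_high
print(f"Roughly {low:,} to {high:,} jurisdiction/line combinations to manage.")
# Roughly 1,020 to 1,530 jurisdiction/line combinations to manage.
```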
Partnering with a seasoned insurance technology consulting company with state filings experts gives you a clear roadmap of what to expect, the potential pitfalls, and the areas to consider before you get too far along in the product development process. Experienced partners will provide you with a realistic view of the playing field and help you draft a workable strategy for rolling out your product.

Add insurance executives to your team

State Departments of Insurance (DOIs) look favorably upon companies with proven histories in insurance. They have no time to teach inexperienced technology companies or non-insurance companies the ins and outs of creating a compliant filing. Bringing a seasoned insurance executive onto your team – and partnering with proven insurance consultants – helps you sidestep avoidable regulatory pitfalls and adds instant credibility to your organization in the eyes of regulators. The same applies when raising capital: venture capital firms feel more comfortable investing in firms with experienced in-house teams and insurance consulting experts on board.

Primary insurers are skittish

Around eight years ago, many primary insurance companies started issuing paper to unproven Insurtech companies – a move that ultimately damaged their standing with state DOIs. Since then, primary insurers (as well as reinsurance companies) have become more discerning about with whom they will do business. After all, their reputations and licenses are on the line. This is where working with a seasoned insurance technology consulting company with state filings experts pays off. Having insurance consultants on your team to thoroughly review and pressure-test your proof of concept will help you stand out to primary insurers and reinsurance carriers.

Insurance compliance is full of hurdles

Receiving approval from state DOIs and remaining compliant also means your policy, billing, and claims administration systems must all meet regulatory standards. These standards cover everything from how your products are priced to how you advertise to consumers to how data must be reported. Some requirement documents are thousands of pages long, making compliance a difficult task for teams short on insurance experience.
Whether you are implementing an Insurtech solution or offering ancillary insurance along with your primary service offerings, insurance product development is a tricky process. Even bureau-based products that lean heavily on Insurance Services Office (ISO) or National Council on Compensation Insurance (NCCI) content are extremely complicated to interpret and adopt in a compliant manner. Seasoned insurance consultants like the team at Perr&Knight know this content and the related regulatory requirements inside and out because we work with them daily.
We help new Insurtech and non-insurance companies understand how to consume the content to develop an insurance product, how to structure the content for systems development and testing, and how to implement a compliant operational process from the outset. Building compliant systems and communications from the ground up protects your company from speed-to-market issues and costly re-work while avoiding potential fines for your carrier partner.

Use professional “matchmakers”

Primary insurance companies and reinsurers have what Insurtech companies and non-insurance companies need: approved licenses from state DOIs and capacity. Insurtech/non-insurance businesses have what primary carriers are looking for: fresh ideas, technologies, and access to new markets. Both must vet one another, a daunting task if neither company can accurately verify the validity of the other party’s credentials.
Experienced insurance consultants like the team at Perr&Knight can provide an insurance-focused perspective to determine whether the partnership will be beneficial for both parties. Evaluations from unbiased insurance professionals can increase your confidence that your prospective partner can deliver.

The future is full of opportunity

Technology and consumer product development move with lightning speed. Insurance, on the other hand, is extremely sluggish. The merging of these complementary industries opens a plethora of opportunities for proactive companies, but success is never guaranteed. Re-framing your expectations, working with experts, and adopting a calculated approach to your new insurance offerings are the most effective ways to improve your position. Start exploring “what you know you don’t know” with seasoned insurance experts before you get too far down the road.

Considering launching a new insurance product? Talk to the team at Perr&Knight first.

Why IT Projects Fail…and How to Prevent Yours From Collapse

Authors: Rob Berg, SCPM, CSSBB, and Mark Nawrath, PMP, MBA
As expert insurance technology consultants with many years of experience under our belts, it feels like we keep seeing the same story over and over. It goes like this: a company needs an IT project, hires a vendor to develop software or processes, and the project seems to start off strong, but it drifts further and further off track until the timeline is blown, the budget has skyrocketed, and the company has invested major time and money—with no usable solution in sight.
Though each company has its own unique circumstances and different players involved, we’ve seen the same types of issues derail project after project. Here are some of the most common reasons IT projects fail and how to keep yours going strong.

Reason 1: Lack of involvement from leadership

As the initial (and often final) decision makers, the executive team holds a significant amount of power over a project’s success. As we’ve all heard before, “With great power comes great responsibility.” It’s up to the executive team not only to carefully consider their choices and motivations when committing to a project, but also to remain actively involved during every phase. The company’s leadership must thoroughly understand the project’s needs and goals and actively work to support that vision. New IT project implementation brings significant change throughout an organization. If leadership is not on board at the outset and throughout development, those in charge of managing the project will struggle and the initiative may ultimately fail.

Reason 2: Change is challenging

New ways of doing business are often met with some level of resistance. The insurance industry is notoriously slow-moving, and many of the people who work in this business have occupied their positions for decades. Their experience brings value to an organization, but those who have been accustomed to doing things the same way for the majority of their careers often don’t recognize the benefit of change. They may consciously—or unwittingly—undermine efforts to implement new systems.

Reason 3: Poor project requirements

As mentioned in a previous blog, ambiguous requirements are often both a cause and a symptom of a project that is destined to struggle. When project requirements are handled haphazardly, the people responsible for delivering the project spend too much time deciphering what is being asked of them, head down paths that seem right but don’t fit the greater scope of the project, and waste time sorting out the results of poor initial planning. When project requirements are clearly and correctly articulated from the outset, projects stand a much greater chance of successful completion.
Read more: How the Right Requirements Can Make or Break Your Next IT Project.
We were brought into one such stressful situation. One of our clients was struggling with a failing implementation project and about to head into an expensive arbitration. To assess the situation, we conducted a thorough review of contracts, requirements, and project management artifacts, which revealed that the problem originated with the contract itself: it was far too ambiguous. One of the deliverables stated in the vendor contract was to “implement an underwriting module.” But what exactly did that include? The answer was up in the air, and our client and the vendor could not agree on what was to be delivered. Furthermore, requirements were scattered across emails, there was no evidence of a project plan or charter, and there were no regular status updates beyond occasional ad hoc discussions.
Based on our written report, the client invited us to testify at the arbitration as an expert witness. After we were certified by opposing counsel and the arbitrators, defended the findings in our report under cross-examination, and delivered testimony that stood up under rigorous scrutiny, our client prevailed. However, the victory was a double-edged sword. The client recovered their funds from the contract but had to pay the arbitrators, experts, and a stenographer – in addition to their own time lost in travel and testimony. They were also back to square one on the project itself.

Reason 4: Sloppy project management practices

In insurance project management, keeping an eye on details is crucial. Projects should be overseen by experienced project managers or insurance technology consultants, not well-meaning junior staffers with an Excel spreadsheet and an Outlook calendar. During the course of development, many high-stakes pieces of information must be juggled, including status reports, baseline tracking, dependencies, and change requests. We’ve seen companies dive headfirst into costly IT projects with zero analysis of how long implementation will take, what it will ultimately cost, or how it will affect downstream activities. All of these crucial pieces of information must be identified at the outset, tracked throughout development, and evaluated at project completion to gain a true understanding of whether or not the project is a success.
Read more: Common Mistakes Carriers Make When Implementing New Systems
We were called in to rescue a project for a medium-sized regional carrier that had sunk millions into a policy administration system that was months past its deadline, with no end in sight. We discovered that not only was there no onsite project manager, there were also no formal documented requirements (just raw materials like rating worksheets and product filings). Organized status reporting was nearly non-existent: the Vice President of IT would hand-draw a set of pie charts on a piece of paper each week, roughly approximating the level of project completion—with no verifiable data to back it up.
We recommended that they put substance behind their completion tracking, so we created a formal project schedule for each component and installed an onsite project manager who took documentation seriously. A project that had been nearly at the point of litigation turned around quickly once all parties could assess progress and meet their distinct requirements. This intervention ultimately led to a successful implementation.

Reason 5: Ignoring key stakeholders

Leaving out the people who will use these systems every day is a grave error that will almost always cause problems down the line. If the ultimate end users are not invited to participate in system selection and configuration, not only is the project likely to face resistance, it could also lack key features that would help those individuals perform better. Involving stakeholders at all levels during all phases of project development gives each a sense of ownership over how the project proceeds. It also provides stronger justification for each person or department to support its eventual success.

How to tell where your project is struggling

We have been called in countless times to help companies that have succumbed to the traps listed above. To prevent such catastrophes, we conduct an in-depth readiness assessment – preferably in advance of the implementation effort – to determine the project’s weak areas, then develop a plan to address them. For starters, here’s what we look for:

  • Personnel: Is the project adequately staffed? Are those staff members adequately trained with appropriate competencies?
  • Internal processes: How does work flow through the project? What is the process for approvals? What is the process for procurements that take place outside the scope of the project’s main deliverable?
  • Supporting technology: How are status reports obtained and delivered? How are people logging their time against the project? How does this compare against the baseline? Are the appropriate environments (e.g., development, staging, testing, production) set up?
  • Metrics: What metrics are being monitored? How is project success being defined?
  • Overall governance: What project management methods are being utilized? Are they appropriate for the project’s scale and scope, and for the composition and distribution of the team?
  • Physical environment: Is the project effort being performed with the right facilities? Do people have enough space, equipment, and easy access to the resources they need?

Though every project has its own unique quirks and challenges, most implementation problems stem from one or more of the above factors. Only after a thorough discovery can you develop an appropriate plan to shore up the project. The truth is that there is no stock answer; that is why the failure rate is so high. However, thinking ahead and crafting a laser-focused plan, while maintaining the flexibility to absorb the inevitable changes and challenges, gives each project the best shot at success.

If your project is running over time or over budget, Perr&Knight can help. Contact us for a no-obligation readiness assessment to determine where your project might need further support.

How the Right Requirements Can Make or Break Your Next IT Project

Authors: Rob Berg, SCPM, CSSBB, and Mark Nawrath, PMP, MBA
According to the Standish Group’s Chaos Report, an alarming 19% of all IT projects fail outright, and a full 60% fail to meet expectations. In other words, tech projects fall short far more often than they fully succeed: only about one in five delivers as promised. For insurance companies, these numbers should be alarming. The amount of capital—both human and financial—invested in insurance IT projects means that missteps in implementation planning and execution translate directly into significant waste of both time and money.
We have found that one of the most impactful ways to shield tech projects from serious setbacks is to invest in establishing a clear set of requirements well before system configuration begins. If you’re not paying attention to the fact-finding and requirement-defining phases of your project, you may be unwittingly setting yourself up for failure.

How do you define “good” requirements?

In a single word: unambiguous. This means painstakingly translating information from subject matter experts into a language that can be easily consumed by developers or those configuring IT systems. Too often, subject matter experts get mired in their own vernacular. They forget that terms and definitions that seem obvious due to daily use and a shared language among colleagues are not likely to be fully understood by the developers—who also communicate through their own shorthand. Insurance technology consulting partners must not only record requirements, but they must also take the time to outline specifically what each requirement means and communicate how it fits into the greater context of the project.
Read more: The Importance of Unambiguous Product Requirements.
From there, consistency is key. When insurance companies take an ad hoc approach, writing their own requirements from scratch at the outset of each project, they often fail to capture and organize important information along the way. The formatting of the project outline plays an important role, even down to details like consistent sentence structure and naming conventions. When each project looks like an entirely new animal, developers and project managers are forced to spend more time absorbing the basics instead of capitalizing on their expertise to document an exhaustive set of requirements and spot areas of ambiguity that require clarification.
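To make the consistency point concrete, here is a minimal sketch of a single, reusable requirement format. The field names, ID convention, and sample wording are our own illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field

# One consistent requirement format, reused across projects, so developers
# spend their effort on content rather than decoding a new structure each time.
@dataclass
class Requirement:
    req_id: str                  # e.g., "UW-RATE-001": module, topic, sequence
    statement: str               # one testable, plain-English sentence
    acceptance_criteria: list[str] = field(default_factory=list)

r = Requirement(
    req_id="UW-RATE-001",
    statement=(
        "The system shall apply the state-specific rating algorithm "
        "to each quote before presenting premium to the underwriter."
    ),
    acceptance_criteria=[
        "Calculated premium matches filed rates for all test quotes.",
    ],
)
print(r.req_id, "-", r.statement)
```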

The cost of hazy requirements

We’ve all heard horror stories about IT projects at insurance firms that were drastically delayed or simply abandoned because the train got so far off track that it couldn’t be redirected. For example, we saw a company spend millions buying a new policy admin system, trusting the assurances of their software development vendor that the project would take six months to implement. After six months had passed and the project was still only about halfway to completion, Perr&Knight was called in to salvage the struggling project and stave off a lawsuit.
We discovered that the project was doomed from the start because even basic requirements weren’t clear. Obvious problems included project requirements outlined in disparate emails, multiple stakeholders weighing in at different stages, and actuaries who simply passed along a rating algorithm, assuming that programmers could use it as-is for a different state or new line of business. Lack of context was stifling each party’s ability to deliver their full level of expertise.

Far-reaching consequences

Poor product definition – the core requirements that document the use of insurance product rates, rules and forms – has the potential to become a quadruple whammy that can hurt the company on multiple fronts, not just the IT or actuarial department. Here are some of the ways badly defined insurance product requirements leak out across departments and damage the company as a whole.

  • Lost time – When product parameters require constant clarification from underwriting and regulatory staff, project phases extend to weeks or months instead of days, and the delay drains time that could be better spent elsewhere. This lack of efficiency leads to…
  • Frustration – Projects that proceed at a snail’s pace drain morale and lead to increasing personal frustration as teams struggle to deliver.
  • Skyrocketing budgets – Resuscitating shaky IT projects that are midway through development often requires throwing good money after bad. It’s the only way to justify the expenditure up to this point and salvage the project.
  • Regulatory implications – For projects that do reach completion, those with poorly written product requirements often fail to address the true standards required by state departments of insurance. Configuring incomplete or incorrect forms or rates leads to a significant regulatory risk that can take a toll on the company’s reputation and seriously impede their speed-to-market objectives.

The key to establishing clear requirements

In an industry that depends so heavily on specialized expertise, one of the smartest ways to ensure that your product definitions are clear is to approach each requirement through the eyes of a layperson. It sounds counterintuitive, but boiling down needs and specifications to their most basic, plain-English functions enables project managers to deliver the vital translation between insurance company and vendor described above. Additionally, it helps insurance companies gain a deep understanding of how much “out of the box” functionality will truly apply to the software products they purchase, versus how much customization will be required to achieve the final end product.
Though it seems like a safe bet to hire the big guys, we find that when insurance companies go directly to large technology consulting firms for development, they often end up with well-meaning tech developers who lack adequate insurance domain knowledge. That domain knowledge plays an important role in translating the needs of insurance companies into software that not only improves productivity and speed-to-market but also supports compliance in an increasingly burdensome regulatory environment.
Hiring an insurance technology consulting company to painstakingly outline and define your entire project scope may require a slightly higher initial outlay, but think of it this way: you’re in the insurance business. Taking the time to do it right the first time is your own insurance policy against expensive delays and demoralizing headaches.

If you have questions about the adequacy of your stated IT project requirements, our expert insurance software consultants can help.

6 Essentials Every Insurtech Company Must Know

The avalanche of data now available to insurance companies is rapidly changing the capabilities of an industry that has historically relied on manual processes. Insurtech entrepreneurs are discovering new advancements in data capture and analysis that allow insurance companies to do their jobs more efficiently and effectively.
However, it’s important not to leap before you look. It’s wise to prepare for the business realities and regulatory scrutiny that are inherent in the insurance industry. We at Perr&Knight have provided insurance technology consulting for many emerging tech companies to assist them in advancing their product in the complicated insurance industry.
Here are some essential guidelines to keep in mind as you proceed through development and rollout. Disregard them now, and you may end up paying dearly later.

Do your homework

In the heavily regulated insurance marketplace, product rollout is not nearly as fast as in other industries. Constraints including privacy rules, compliance requirements, and standards that vary by state can throw a wrench into the best-laid plans. We suggest partnering with insurance technology consulting experts who can help you navigate the tricky regulatory environment.

Think bigger

You might have developed a product to help with claims, but its functionality may also have the potential to improve accuracy in fraud detection or streamline marketing efforts. Qualified insurance consultants can evaluate your product and identify uses for your technology beyond its original application.

Test your tech on real data

Feed real data into your technology to demonstrate specific outcomes that you can share with potential clients or investors. Measurable results can also support your case when submitting to regulatory bodies, especially if your product is entirely new to the industry.

Prepare for regulator review

Complicated regulations apply to the collection, evaluation, and sharing of data in the insurance industry. It’s smart to prepare for the range of questions you will be asked by regulators (keeping in mind that rules not only vary by state but also by line of business) BEFORE you are ready to submit your product for approval. Preparing ahead of time enables you to respond to regulator inquiries quickly and completely.

Be ready to train

Even insurance companies that embrace new technologies will undergo a transition period while staff become comfortable using your product. The most successful Insurtech companies put support staff in place to train users, answer questions, and help companies integrate the new technology with their existing systems and processes.

Use expert evaluation to validate your product

Before investing in major upgrades, insurance executives want to be sure that new technologies will deliver results that justify the time and expense. Insurance technology consulting partners help secure executive buy-in by using data analytics to quantify and support the value of your innovation.
Your product might be fantastic and your user interface might be seamless, but those are only a few pieces of the Insurtech puzzle. Experienced insurance consultants can complete the picture by providing you with insight and preparation for the complexities of the insurance market as it relates to your product or service, saving you from headaches, hassle, and wasted resources.

For more information about how Perr&Knight supports Insurtech entrepreneurs, contact us at (888)201-5123 ext. 3.

Billing modernization: Strengthening customer satisfaction to build a competitive advantage

A Case for Change
Billing is a necessary function of the insurance transaction, and it is precisely that necessity that creates opportunity. A customer may go months or years without filing a claim, but every customer receives bills on a regular schedule. A customer may never read marketing materials insurers send—or even insurance policies themselves—but they will examine the bills they receive. Additionally, the majority of customer service calls a carrier receives are billing-related.
Therefore, billing represents a chance to build satisfaction and loyalty by delivering customers—and agents—flexibility, accuracy, and prompt resolution of discrepancies. In an Ernst & Young paper on billing transformation, authors David Connolly and Rick Raisinghani point out that, as an insurer’s first and most frequent touch point with its customers, billing presents an opportunity to create a positive experience and to build longstanding relationships.
Insurers do understand the direct link between billing and customer satisfaction. In 2008, Guidewire Software surveyed a wide range of Property and Casualty insurers in North America about the current state of their billing operations, how well current systems support their needs, and how they see their billing operations evolving in the future. In that survey, most carriers reported that billing is “important” or “very important” to customer satisfaction.
However, there is often a disconnect between carriers’ understanding of the importance of the billing function and their investment in technology to support the billing department. Carriers continue to run their billing operations on aging, legacy systems that simply cannot support emerging customer needs and expectations, let alone go beyond those expectations to provide competitive differentiation.
Guidewire’s survey found that few carriers believe that their current billing systems offer the flexibility required to support customer service excellence. Few systems can support multiple payment channels or payments by credit or debit card. Carriers may wish to correct these and other system shortcomings, but report that legacy billing platforms are simply so inflexible that functional enhancements are not feasible. Perhaps this is why so few of the carriers surveyed are confident that their systems will continue to support them when new demands inevitably arise in the future.
Understanding that billing is a customer service opportunity is an important first step. However, this step must be followed with a strategic investment in modern billing technologies that deliver process improvement, enhanced customer and agent service, and better control of and visibility into the billing operation. In fact, research firm Gartner says that for insurers, replacing legacy billing applications is a “strategic imperative.”[1]
A Legacy of Challenges
In the Guidewire survey, the overwhelming majority of companies use mainframe-based billing systems, including 84% of large companies (defined as over $1 billion in written premium). One-quarter of all respondents—and half of large companies—have billing systems that are more than 20 years old.
Part of the reason for the longevity of these systems is that carriers have worked to maintain, enhance, and modify them over the years to continue to meet business needs. However, legacy platforms tend to have several key architectural shortcomings:

  • They are typically hard-coded, often in archaic programming languages that are increasingly difficult to support.
  • They may not be a consolidated system but, instead, a collection of different applications purchased over time to perform different billing sub-processes and cobbled together with inflexible, point-to-point integration.
  • Business logic and workflow are embedded in years of coding, making them difficult to change and leading to manual workarounds to overcome system limitations.

More troubling than these architectural limitations, however, are the business challenges created by legacy billing systems. In fact, in Guidewire’s survey, only 23% of respondents said that current billing platforms met their needs “very well.” Dealing with inefficient legacy platforms creates a host of problems.
Poor Customer Service. Regardless of the type of insurance they provide or the distribution channel they use, every carrier has a common opportunity for contact with customers: the bill. In fact, the bill may well be the only piece of carrier correspondence an insured actually takes time to read and the only one they call to discuss. Therefore, billing is a vital opportunity to build customer relationships.
Carriers understand this, with 54% of all carriers reporting that billing is “very important” to customer satisfaction, and another 26% considering it “important.” They also understand that customer satisfaction is directly related to customer retention: a full 100% of large carriers surveyed believed that billing impacts retention (see graph below). However, over half of survey participants (56%) believe that their current billing systems and processes inhibit their ability to provide superior customer service.

[Graph: Do you think billing affects customer satisfaction?]

Inflexibility. Survey respondents were asked if the ability to offer flexible billing options and a variety of billing programs to customers would be a source of competitive advantage. In aggregate, 85% “agreed” or “strongly agreed.”
However, carriers reported that their current billing systems did not allow them the flexibility they needed to offer these options. Over a quarter of survey respondents (26%) reported that enhancements to their primary billing system are so difficult that they are no longer made. In addition:

  • 54% reported that their systems lacked support for credit card payments,
  • 69% reported that their systems could not handle debit card payments, and
  • 59% reported that they had difficulty administering new billing plans.

[Graph: Functionality of primary billing system]

In today’s business climate, these limitations put carriers at risk of customer attrition. Customers expect options and flexibility, including electronic bill presentment and payment (EBPP) and the ability to choose payment schedules that best meet their needs. They are accustomed to using a variety of payment methods. Insureds who hold multiple policies with a single carrier expect to pay one consolidated invoice each billing cycle. Legacy systems instead force customers to select from a limited set of options—or choose another carrier that can be more flexible.
Billing Leakage. Insurers commonly think of leakage in the context of claims. However, leakage also occurs in billing when profit is lost as a result of inefficiency or when a carrier fails to collect all that is owed in the form of premium payments.
Billing leakage includes “free” coverage provided because of faulty cancellation procedures, the inability to apply cash received quickly and automatically to the right accounts, high bad-debt reserves and write-offs, and inaccurate revenue reporting on earned premiums due to irreconcilable differences between different systems. The Ernst & Young paper also points out that billing errors, such as mistaken cancellations, lead to higher call volume and can have a direct negative impact on an insurer’s financial performance.[2]
The cause of all this leakage can be directly traced to legacy billing systems that rely on manual processes, contain only parts of the needed end-to-end billing functionality, and are difficult and costly to modify.
Inefficiency. Some carriers contend with a variety of billing systems, acquired to meet different needs over time. In fact, in Guidewire’s survey a third of large carriers reported using four or more billing systems throughout their organization. These systems are often nonintegrated, lacking a common user interface. Dealing with different systems makes it difficult for billing representatives to locate customer information quickly.
Legacy, “green screen” billing systems also lack the intuitive, web-based design with which today’s generation of users is most familiar. They also don’t support direct navigation, instead requiring users to page through many screens to retrieve the right information, which further diminishes efficiency. Legacy systems lack flexible workflows, leading to manual workarounds and “desk processes” created by billing center staff to solve common customer problems. Physical “sticky notes” to track tasks are an all-too-common sight at billing centers that contend with these platforms.
Increased System Maintenance Costs. Hard-coded, legacy billing systems require more IT resources than modern platforms in order to maintain the application and to modify it to accommodate new products. When those systems are written in arcane programming languages, with little or no documentation, the problem is intensified. In Guidewire’s survey, 75% of large carriers reported dedicating more than five full-time resources to maintaining their billing systems.
Poor Agent Service. Agents are a valuable business partner for insurers, and carriers have deployed agent portals for rating and policy underwriting. Insurers have also worked to integrate the systems supporting those portals to agency management platforms to make it easier for agents to do business with them. Today, in addition to these sales-focused capabilities, agents are also demanding additional and more flexible billing management options, including details on commission and incentive plans and information on scheduled commission payments.
However, this information is often locked in mainframe systems and is difficult to expose to external agents. Legacy billing systems offer little or no native integration with agent portals or agency management systems, preventing agents from taking full advantage of online business functionality. To access billing and commission information, agents must instead contact the carrier and request it, then work to resolve any discrepancies in a series of subsequent calls. Additionally, the calculation and payment of commissions to agents is seldom automated in legacy billing systems, leading to payment delays and calculation errors.
Manual processes related to agency billing management waste time that agents would rather spend on sales and service. Carriers that cannot meet agents’ expectations around billing management will ultimately find themselves at a competitive disadvantage as agents remarket their existing book to other companies and steer new business to carriers who can ensure that they are paid on a timely and accurate basis for all of the business they produce.
Lack of Visibility into Billing. Survey respondents were asked how difficult their billing systems are to balance, and more than half of small carriers (under $100 million in written premium) reported that this is a key shortcoming in current billing systems. In other words, legacy billing systems are failing at their most basic function: managing the receivables process and recording details of these financial transactions.
Legacy billing systems are also not designed to provide management with reporting about the billing process itself and its impact on overall business performance. In a compliance-focused environment, this limitation is becoming increasingly troublesome. The Ernst & Young paper notes that billing is an area where the impact of regulatory compliance is becoming a concern and that “some degree of transformation may be more a requirement than an optional pursuit.”[2] Insurance companies need to address regulatory compliance matters in their billing areas to avoid non-compliance penalties.
Benefits of Billing Modernization
Insurers are coming to realize that the billing function must be modernized to mirror operational improvements made in other areas of the enterprise, such as underwriting, rating, and claims. Modernized billing departments will be characterized by flexibility, efficiency, and visibility, and will be supported by a modern billing administration platform. Compared to legacy billing systems, modern billing platforms feature:

  • An open, standards-based architecture rather than proprietary systems hard-coded in languages that are increasingly difficult to support,
  • Web-based, yet enterprise-grade, designs that minimize the “footprint” on user desktops and feature intuitive navigation,
  • Automation and workflow modifiable via a configurable rules engine rather than locked in application logic, and
  • Web-service APIs that enable integration into a service-oriented architecture (SOA), seamless connection to other core systems such as policy and claims administration, and support for agency and customer portals.

In contrast to legacy platforms, modern billing systems are designed to make it easier for insurers to provide faster resolution of customer questions, better management of agent commissions, automation of the billing lifecycle, flexible designs of billing, payment and delinquency plans, and painless integration with external systems.
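To illustrate the integration style this implies, here is a minimal sketch of a policy administration system notifying a billing system of a newly bound policy over a web-service API. The endpoint, URL, and payload fields are hypothetical assumptions for illustration, not any particular vendor’s interface:

```python
import requests

# Hypothetical endpoint on a modern billing system. Real products differ;
# the point is the pattern: a web-service call into the billing system
# instead of a hard-coded, point-to-point mainframe interface.
BILLING_API = "https://billing.example.com/api/v1"

def notify_policy_bound(policy_number: str, insured_id: str, annual_premium: float) -> dict:
    """Tell the billing system to begin invoicing a newly bound policy."""
    response = requests.post(
        f"{BILLING_API}/policies",
        json={
            "policyNumber": policy_number,
            "insuredId": insured_id,
            "annualPremium": annual_premium,
        },
        timeout=10,
    )
    response.raise_for_status()  # surface integration failures immediately
    return response.json()
```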
Modern, web-based, enterprise-scale billing systems have proven to deliver insurers quantifiable business benefits in several key categories.
Enhanced Customer Service and Higher Retention. According to the Ernst & Young paper, when billing is properly managed, it can be a significant factor in preventing customers from switching insurance carriers. In contrast, when billing is poorly managed, an insurer could be placing its customer relationships at risk.
Customers understandably expect accurate statements and timely resolution of billing discrepancies. To resolve discrepancies faster, a modern, consolidated billing system serves as the “single source of the truth” for customer service representatives fielding billing-related calls. Once the customer record is located, customer service representatives can enter search parameters to jump to the precise information they seek, or they can navigate to that information using tabs or menu bars. Representatives can quickly and easily find the information they need to resolve a customer issue, reducing customer wait time and enhancing customer satisfaction with each interaction.
Modern billing systems also provide control surrounding customer interaction. Rather than handle exceptions outside the system with manual processes and sticky notes, modern systems support exception processing and provide automated dispute resolution to ensure that tasks are followed up on and completed. Visibility into the billing process enabled by modern systems also provides billing supervisors the information they need to intervene if necessary and resolve problems to customers’ satisfaction.
Increased Flexibility. Beyond accuracy and fast resolution of problems, customers expect flexibility in billing. They want many payment options designed to meet their individual needs and the ability to make payments using both their payment method and payment channel of choice.
Modern billing systems offer the ability to provide customers multiple bill plans and payment plans. These plans can be custom-tailored to meet the needs of individual customer segments, policy types, or regions. Plans can be configured to determine invoice timing, level of invoice detail, and assessment of fees. Invoices can be suppressed for amounts that fall below a configured threshold. Customers’ payment plans can be changed to accommodate a change in demand, and new billing and payment plans can be rapidly created and deployed at any time through system configuration, rather than requiring custom coding by IT. Modern systems are also designed to provide customers with self-service online bill review and payment.
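As a rough sketch of what “configuration rather than custom coding” might look like, the record below captures invoice timing, detail level, fees, and a suppression threshold as data; the field names and figures are invented for illustration, not any product’s schema:

```python
from dataclasses import dataclass

# A billing plan expressed as configuration data rather than custom code,
# so a new plan is a new record, not an IT release. Fields are illustrative.
@dataclass
class BillingPlan:
    name: str
    installments_per_year: int    # invoice timing
    itemized_detail: bool         # level of invoice detail
    installment_fee: float        # fee assessed per invoice
    suppression_threshold: float  # skip invoices below this amount

def should_invoice(plan: BillingPlan, amount_due: float) -> bool:
    """Suppress invoices that fall below the plan's configured threshold."""
    return amount_due >= plan.suppression_threshold

monthly_eft = BillingPlan("Monthly EFT", 12, True, 1.00, 5.00)
print(should_invoice(monthly_eft, 3.50))  # False: below the $5.00 threshold
```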
This flexibility benefits not just customers, but an insurer’s marketing efforts as well. For instance, insurers that have already invested in modernizing policy administration have seen the benefits of bringing new products to market quickly. However, it is not uncommon for those same carriers to discover that while their multi-million-dollar investment in a new policy administration system enables them to get products to market faster, that same level of flexibility and support for new product features does not extend to the billing system. When migrating to modern policy administration systems, insurers should also consider the new payment, invoice, and statement options required to support these new and innovative products.
Improved Efficiency. Carriers need a billing system that is not just easy to use and understand, but one designed to put the most important and current information at the fingertips of customer service staff and enable billing representatives to retrieve information quickly, unlike systems that lack search and “jump-to” navigation capabilities. Particularly for companies that replace multiple legacy platforms with a single billing system, having a “single source of the truth” for customer information doesn’t just enable billing representatives to provide better customer service, it also increases their speed and efficiency.
Business process management capabilities that are native to modern billing systems also increase staff efficiency. Systems include task-oriented features such as inboxes, to-do lists, and trouble tickets to ensure that service tasks don’t fall through the cracks. Additionally, rather than locking business process logic into hard-coded routines, modern systems extract this logic and provide rules-based workflow that can be modeled and modified easily to reflect changing business practices.
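A minimal sketch of that rules-based workflow idea, with business logic held as data (condition/action pairs) instead of being locked into application code. The event fields, thresholds, and action names are illustrative assumptions:

```python
# Workflow rules held as data, so business practices can change
# without rewriting application logic.
def is_delinquent(event: dict) -> bool:
    return event["days_past_due"] > 30

def large_balance(event: dict) -> bool:
    return event["amount_due"] > 10_000

RULES = [
    (is_delinquent, "create_collections_task"),
    (large_balance, "route_to_supervisor_inbox"),
]

def actions_for(event: dict) -> list[str]:
    """Return the workflow actions a billing event triggers."""
    return [action for condition, action in RULES if condition(event)]

print(actions_for({"days_past_due": 45, "amount_due": 12_500}))
# ['create_collections_task', 'route_to_supervisor_inbox']
```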
And when a billing system is intuitive and easy to use, internal staff proficiency becomes a much more achievable goal; some carriers note that training on their modern billing system required only four weeks, compared to a six-month training effort in the legacy environment.
Improved Agent Service. Providing superior service to agents is as important to a carrier’s long-term success as providing service to insureds. Whether carriers use captive or independent agents, modern billing systems enable them to significantly improve agent service levels. Unlike legacy platforms, modern systems are natively designed to present information about agent commission structures and payments through a web interface. They are built to perform within an SOA and incorporate web services integration technology to connect to agency portals and agency management systems and bridges.
Combined with automatic commission calculation and configurable business rules around these calculations, modern billing systems expedite payments to agents, thereby increasing agent satisfaction. The agency bill process can be further automated by the electronic transfer of statements between agent and carrier.
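To illustrate automatic commission calculation driven by configurable rules, here is a small sketch; the rates, business types, and bonus are invented for illustration, not actual plan figures:

```python
# Commission rates held as configuration, so changing an agent incentive
# plan is a data change, not a code release. All figures are invented.
COMMISSION_RATES = {
    "new_business": 0.15,
    "renewal": 0.10,
}
PREFERRED_AGENT_BONUS = 0.02  # hypothetical incentive uplift

def commission(premium: float, business_type: str, preferred: bool = False) -> float:
    """Calculate an agent's commission from configured rates."""
    rate = COMMISSION_RATES[business_type]
    if preferred:
        rate += PREFERRED_AGENT_BONUS
    return round(premium * rate, 2)

print(commission(2_400.00, "new_business", preferred=True))  # 408.0
```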
Easier Maintenance and Modification. The Ernst & Young paper points out that implementing a modern billing application offers insurers an opportunity to simplify their IT application architectures, and that architectures based on SOA principles provide an adaptable and scalable model for integrating the billing system into the existing environment.
Billing applications integrate with many other core systems, including the general ledger, policy administration system, and claims management system. A modernized billing application will allow a carrier to eliminate multiple, hard-coded interfaces with these systems.
Furthermore, when a carrier chooses a billing system built on the same platform as other administration systems, that carrier can then leverage a common set of skills and knowledge across its entire core systems portfolio. Business and IT analysts who are able to configure one core application can easily work with any other. Additionally, systems built on the same platform, being seamlessly integrated, reduce both overall implementation time and the ongoing cost of system maintenance and management.
Visibility into the Billing Process. Legacy billing platforms obscure the billing process by locking process logic into application code, by lacking system documentation around design, and by lacking sufficiently understood security and control mechanisms. These problems are exacerbated when there are multiple applications within a billing systems environment. Modern systems provide improved visibility that, in turn, leads to better service, reporting, and compliance.

  • Service. Customer service representatives no longer waste time searching for customer information that is difficult to locate or housed in different systems. Instead, they have clear insight into customer data from a single user interface. This information is also presented in natural-language format, rather than being abbreviated and codified because of legacy system data-field display constraints. As a result of this visibility, not only is customer service improved, but representatives’ job satisfaction is increased.
  • Reporting. Legacy systems make it difficult for companies to extract data and generate reports, particularly ad hoc reports. Modern billing systems provide prebuilt reporting capabilities and provide easier access to data, enabling insurers to mine customer information for business intelligence purposes that range from targeted marketing to overall operational improvement.
  • Compliance. The visibility afforded by modern billing systems into billing processes greatly simplifies insurers’ compliance efforts and, in today’s environment, is quickly becoming a business necessity. By providing clear insight into processes and controls around processes, modern billing systems help reduce an insurance carrier’s cost and time related to testing of internal controls. They improve a company’s ability to reconcile billing data with the policy administration system, financial ledger, and other systems, and provide flexibility to adapt to accounting standards changes.

Reduced Billing Leakage. Manual processes and workarounds required in a legacy environment introduce more opportunity for human error into the end-to-end billing process. These errors cost carriers in the form of free coverage provided to non-paying customers during the cancellation process, write-offs of amounts that do not reconcile, and premium calculation mistakes. When carriers must rely on the manual reconciliations typical of a legacy environment, the result is often excessive billing leakage.
Improved efficiency, reduced errors, and optimized collection activities minimize billing leakage. Modern billing systems automate many common tasks, increasing accuracy and allowing billing staff to focus on exception processing. These systems allow carriers to incorporate best practices into their billing systems and instill process consistency. Additionally, integration with both portals and other core administration platforms eliminates reentry of data, further reducing the chance of errors.
In modern billing systems, collections are also improved. First, providing a wide array of flexible billing options makes it more likely that customers will be able to find a plan that best matches their financial situation, thereby minimizing the chance of delinquency. Better visibility into the collection and payment processes also allows carriers to project cash flow more accurately based on current invoice data, rather than historical data, which is particularly important as economic conditions fluctuate. Receipts are predictable and manageable, and carriers are better able to manage collection activities.
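As a simple sketch of projecting cash flow from current invoice data rather than historical patterns, outstanding invoices can be grouped by due month. The invoice records below are illustrative, and a real projection would also weight each amount by an expected collection rate per plan or segment:

```python
from collections import defaultdict
from datetime import date

# Project expected receipts by month from current outstanding invoices.
invoices = [
    {"due": date(2025, 7, 15), "amount": 1_200.00},
    {"due": date(2025, 7, 28), "amount": 450.00},
    {"due": date(2025, 8, 15), "amount": 1_200.00},
]

projected = defaultdict(float)
for inv in invoices:
    projected[inv["due"].strftime("%Y-%m")] += inv["amount"]

print(dict(projected))  # {'2025-07': 1650.0, '2025-08': 1200.0}
```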
Increased Sales Opportunity. Finally, because the billing statement is a piece of correspondence that customers are likely to read, it is to an insurer’s advantage to maximize the value of this correspondence. However, legacy billing systems offer little support for customized invoice messaging, and customers will ignore marketing messages that are not targeted specifically to them.
Modern billing systems connect to document production systems through flexible, standards-based interfaces. This integration enables carriers to drill down into customers’ accounts and create customized marketing messages based on what they know about individual policyholders, the types of policies customers already have, and whether or not a customer is a desirable target for up-sell or cross-sell.
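A minimal sketch of this kind of targeted invoice messaging, driven by what the carrier knows a policyholder already holds; the products, rules, and message copy are invented for illustration:

```python
# Choose an invoice message from what is known about the policyholder.
def invoice_message(held_policies: set[str]) -> str | None:
    if "auto" in held_policies and "home" not in held_policies:
        return "Bundle home and auto coverage and you could save."
    if "home" in held_policies and "umbrella" not in held_policies:
        return "Ask your agent about umbrella liability coverage."
    return None  # no targeted message applies; suppress generic filler

print(invoice_message({"auto"}))  # homeowners cross-sell message
```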
Case in Point
A $2.5 billion specialty lines carrier contended with a decades-old billing system that constrained its ability to increase efficiency and improve customer service.
Problematic for the insurer’s customers, the system supported only two payment plan options and lacked the ability to process credit card or recurring ACH payments. For its billing staff, complex screens made it difficult to navigate the system, locate information, and answer questions in a timely manner. For company management, the system had limited reporting capabilities, lacked robust security provisioning, and required manual reconciliation with the general ledger. And finally, the aging platform was experiencing internal balancing issues and unexplainable system failures.
Replacing its legacy platform with a web-based, enterprise billing system delivered a host of business benefits:

  • Payment plan options were increased from two to twenty.
  • Customers can now pay with credit card or recurring ACH.
  • Billing representatives can provide rapid response to customer inquiries and faster dispute resolution.
  • The company’s agents can view commission information online and in real time.
  • Call volume from agents and internal business units has been reduced.
  • System training time for billing staff was trimmed from six months to four weeks.
  • The system provides automated journal entries, drillable reconciliation reports, ad hoc report generation, and a detailed audit trail for control and compliance.

Bringing Billing to Light
Most carriers understand that the impacts of billing processes are not limited to the back office. Billing affects customer satisfaction and customer retention, and a customer-focused billing strategy can create real competitive differentiation.
However, legacy billing systems are not compatible with delivering customer-oriented billing service. These systems lack the ability to put vital information at the fingertips of billing center staff. They cannot support flexible billing options and channels that customers expect. They cannot connect to agent portals or agent management systems. And the inflexible, often proprietary architecture of these systems makes them difficult to change in order to extend new capabilities to staff, customers, and agents.
Few carriers Guidewire surveyed believe that their legacy billing systems are a suitable platform for meeting emerging needs. This realization, combined with the proven benefits delivered by modern billing systems, is prompting more and more carriers to investigate and invest in new solutions.
The Ernst & Young paper points out that “custom-built billing solutions are a thing of the past.” When looking to modernize their billing systems, carriers have an array of solutions in the marketplace from which to choose, and it can be difficult to evaluate and select a billing platform that best meets their needs. Guidewire provides a free Billing Starter Kit, including a detailed Request for Information document, which carriers can use to guide their selection process. The kit is available at http://www.guidewire.com/our_solutions/billing_starter_kit.
In a competitive environment, carriers look for any edge that can make their service stand out from the competition. A modern, customer-focused billing solution provides that edge.
References


[1] Weiss, Juergen, “Replacing Legacy Billing Applications is a Strategic Imperative for Insurers,” Gartner, Inc., December 2008.
[2] Connolly, David, and Raisinghani, Ricky, “Building the Case for Insurance Billing Transformation,” Ernst & Young, February 2009.


Kimberly Morton brings over a decade of insurance expertise to her role as Global Product Marketing Director at Guidewire. She was instrumental in bringing PitneyBowes Insight (formerly MapInfo Corporation) into the Property & Casualty market and then spent a few years with the financial services analyst firm TowerGroup before joining Guidewire. She has been published in top insurance magazines and enjoys working closely with carriers and industry analysts to discuss industry trends and thought leadership topics.

SERFF liberation: The System for Electronic Rate and Form Filing needs competition

Introduction
The property and casualty insurance policies that most Americans buy depend on a system by which insurers file rates—the prices they charge for insurance policies—and forms—the standardized policy language used to describe those policies to consumers. All 50 states and the District of Columbia have separate laws concerning these rates and forms. Increasingly, these rates and forms flow through a computer program called the System for Electronic Rate and Form Filing (SERFF), which is owned and operated by the National Association of Insurance Commissioners (NAIC). Nineteen states require that all filings go through SERFF.
This article explains the System for Electronic Rate and Form Filing’s structure and raises questions regarding its usefulness. The article’s first section provides a broad overview of the “admitted” or “standard” insurance market, and describes why rate and form filing are essential to its continuation in its current form. The second section describes the history and function of SERFF. The third section discusses three major problems with SERFF. The fourth and final section proposes a series of solutions that would solve these problems. SERFF, as it currently exists, raises serious practical, equity, and legal questions—particularly relating to the delegation of taxing authority—and needs reform.
Rate and Form Filing: The Admitted Market Described
Most Americans buy insurance in the “admitted” or “standard” market. Two fundamental features distinguish this market from the “non-admitted” or “excess and surplus” (E&S) market: “utmost good faith” sales and a near-certain guarantee that claims will be paid. These two features imply a level of third-party oversight of rates and forms.
Utmost good faith refers to the circumstances under which nearly all insurance policies are sold. Essentially, it means that buyer and seller agree to disclose all pertinent information to each other in an honest and forthright fashion. Insurance consumers must disclose all pertinent risk information to their agents and agents must provide accurate, straightforward, common sense descriptions of the products they are selling. Agents do not have to perform detailed investigations of their customers’ lifestyles and risk factors and consumers do not have to understand every legal detail of the policy language. In other words, when a customer tells an agent that a roundtrip commute is 40 miles, the agent can simply assume that is true. When an agent tells a customer that a policy will cover theft from a car, the customer can rely on thefts, as they are commonly understood, being covered.
A regime of utmost good faith contracts in a common law system requires broad consensus on the meaning of specific contract terms. To facilitate standardization, a private, national organization called the Insurance Services Office (ISO) maintains standardized forms that serve as the basis for almost all insurance policies.1 All states have different laws governing insurance, so these general standard forms must be modified for every state. Different companies, furthermore, modify these forms to gain a competitive advantage or to serve their customer base. (For example, one auto insurer that began by serving government employees continues to provide special discounts for most people who work for the government, while another insurer that focuses on the military provides special coverage for military uniforms.)
These standard forms require state-level reviews to bring them into compliance with various state insurance laws. Without such reviews and a broad agreement on the meaning of policy language, any ambiguity or dispute would require significant legal wrangling. Maintaining both state-specific insurance regulation and an utmost good faith system requires that someone at the state level check forms for compliance with state laws and regulations, but that someone does not necessarily need to be the government. Form review and regulation can be handed over to private parties—some states, including California and Virginia, contract out some aspects of it.
The admitted market also provides a near-ironclad guarantee that insurers will pay all legitimate claims. It carries out this guarantee through solvency regulation and a system of state-level guarantee funds.
Solvency regulation, also known as actuarial adequacy regulation, is essentially a post facto effort to prevent fraud. It is a way of making sure that companies can actually pay the claims for the policies they write. Since insurance is mainly a promise to pay in the event that something unexpected and adverse happens, companies making those promises must have reasonable assurance that they can keep them. This, in turn, requires that someone oversee insurance company investments—insurers could not, for example, put all their money into penny stocks—and make sure that they charge rates high enough to pay the claims they can reasonably expect. In the excess and surplus market, contracts and detailed examination largely accomplish this. In the admitted market, solvency regulation does it.
Actuarial adequacy regulation requires that someone monitor the rates being charged. This does not mean that government has to approve them or has any authority to say that they are “too high”—in some states, including Illinois, Wyoming, and Vermont, government officials have little or no say over how high rates should go—but it does mean that someone must stop rates from going below the level needed to pay claims. Even states that do not require filing of rates still require that companies keep information to justify their rates open to inspection.
In addition, all 50 states maintain state guarantee funds. With the exception of New York’s fund, these guarantee funds function as industry-run associations.2 Insurance companies that want to operate in the admitted market must participate in the guarantee fund. When and if an insurer proves unable to pay its claims, the guarantee fund imposes a special tax, called an assessment, on all companies writing insurance policies in the admitted market. The system certainly implies some moral hazard, but given that insolvent companies face a severe penalty in that their assets are liquidated in full, the moral hazard from guaranteeing payment of their claims does not seem that severe. Guarantee funds do not always assure 100 percent payment of claims, and few cover very large claims from very wealthy individuals or businesses.3
For insurers and consumers who do not feel they need the assurance of the admitted market, it is almost always possible to do business with excess and surplus companies, which do not have to submit their forms or rates to any state authority.
The E&S market is not chaos. In fact, it can—and sometimes does—function a lot like the admitted market. Two parties in the excess and surplus market can swear they will deal with one another on an utmost good faith basis. All states, furthermore, have laws mandating that excess and surplus companies charge adequate rates. Although all excess and surplus lines policies are unique, some relatively common types of policies—coverage of collections of exotic cars, for example—function very much like policies in the admitted market and may even draw on the same ISO forms.4
SERFF and Its Owner
The System for Electronic Rate and Form Filing took on its current form in the mid-1990s. The system, says its owner, the National Association of Insurance Commissioners, “is designed to enable companies to send and states to receive, comment on, and approve or reject insurance industry rate and form filings.”5 It does this, but not very well.
NAIC is an unusual organization, with some aspects of a government entity and some of a private one. On the one hand, NAIC describes itself as a private organization, and it has some features of one. It is registered under section 501(c)(3) of the Internal Revenue Code, does not report directly to any particular government any more than any other nonprofit does, does not need to follow government hiring and purchasing rules, and is not covered by freedom-of-information laws. Like other associations, NAIC works to advance the interests of its members through model legislation and lobbying.6
On the other hand, NAIC has significant government-like features. First, all of its members are government officials – usually state insurance commissioners. Twelve are statewide elected officials and all of the others are reasonably important state-level officials. Second, it has some powers that verge on lawmaking, including its administration of large parts of the Interstate Life Insurance Compact, which harmonizes life insurance standards and practices around the country, and its technically voluntary “standards for accreditation,” to which almost all states adhere. NAIC therefore has enough power to deserve the same scrutiny that one might apply to a government, especially since it owns and manages SERFF.
How SERFF Works
The System for Electronic Rate and Form Filing is a paperwork flow management tool. SERFF creates a universal interface for dealing with correspondence between insurers and insurance regulators. It assigns a unique number to each filing and provides a standardized place to manage correspondence between rate examiners and insurance company employees.7
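To make the workflow concrete, the fragment below is a minimal sketch, in Python, of the kind of record such a tool maintains: a unique tracking number plus a correspondence thread between examiners and company employees. The class and field names are invented for illustration and are not SERFF’s actual data model.

    # A hypothetical filing record for a SERFF-style workflow tool.
    # Names are illustrative only, not SERFF's actual schema.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Correspondence:
        author: str      # e.g., a state rate examiner or an insurer's analyst
        sent: datetime
        body: str

    @dataclass
    class Filing:
        tracking_number: str        # unique ID assigned at submission
        state: str                  # reviewing jurisdiction
        line_of_business: str       # e.g., "Personal Auto"
        status: str = "SUBMITTED"
        thread: List[Correspondence] = field(default_factory=list)

        def add_objection(self, examiner: str, text: str) -> None:
            """Log an examiner's objection and flag the filing for a response."""
            self.thread.append(Correspondence(examiner, datetime.now(), text))
            self.status = "OBJECTION_PENDING"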
For more than a decade, SERFF has managed the paper flow for insurers and state insurance departments alike. The training manual that NAIC publishes for SERFF says that the system “promotes uniformity and has the added benefit of supporting the flexibility states need to accommodate their differing requirements and laws.”8 SERFF pursues its first goal by making use of standards – uniform forms and product codes – that NAIC and ISO have introduced and through its administration of the Interstate Life Insurance Compact.9 As noted, nearly everything – including some of the standardized forms – remains subject to state-level oversight and changes in order to conform to various states’ laws.
The NAIC’s management – which ultimately reports to state insurance commissioners – has total ownership of SERFF. Currently, a joint industry-government board of 13 members – seven from government and six from industry – oversees SERFF. The board requires a supermajority of 10 to make most decisions. However, NAIC has often acted without the board’s approval. In 2007, for example, NAIC introduced a premium tax filing companion to SERFF called OPTins without ever even mentioning it to the board.10 In 2008, NAIC culminated this trend when it announced plans to take away nearly all of the board’s power, demoting it to the status of an “advisory group.”11
NAIC remains the sole owner of all SERFF trademarks and intellectual property. The system has found widespread adoption. As of early 2009, 19 states mandated its use and all others used it in some respect.12 Every national insurer and every domestic insurer operating in those 19 states must use it and pay its filing fees.
SERFF’s Revenue
SERFF supports itself through fees paid by the industry; NAIC sets these fees on its own. SERFF charges a standard filing fee of $7 per filing and allows companies to buy “blocks” of filings at prices that can go as low as $6 each. State insurance regulators pay no actual fees to participate in SERFF.13 NAIC and SERFF’s board can vary these fees without any consent from state authorities. Because its use is mandatory in many states, SERFF makes a lot of money for NAIC. Business Insurance Magazine reports: “At a December 2007 SERFF board meeting, the NAIC provided financial data through Oct. 31, 2007, that showed nearly $2.46 million in SERFF revenues and nearly $1.29 million in operating expenses, resulting in a profit of about $1.17 million.”14
During 2007, NAIC’s best year ever financially, this comprised about 20 percent of the $5.5 million in surplus earned by NAIC – what a private company would call profit. For 2008, no hard data are available but it appears that NAIC’s surplus will total only about $120,000 according to industry data made available to the Competitive Enterprise Institute. According to NAIC, SERFF processed over 500,000 filings during 2008 and, charging a minimum of $6 per filing, this would have produced at least $3 million in revenue.15 However, since $6 is only a floor for fees charged, many transactions would have netted more than that.
Problems with SERFF
For as much money as SERFF makes for NAIC, the program does not accomplish its job particularly well. It has rarely been updated, its profits appear to constitute monopoly rents, and its structure may well violate several state constitutions. This section describes the problems.
SERFF is Out of Date. In essence, SERFF is a reasonably simple, customized database application. As a piece of software, it is neither complicated nor expensive to create. The interface appears to be something that someone familiar with the software could create in a few days with an off-the-shelf rapid development tool such as Oracle’s Application Express.16 (Building and coding queries, however, would take more time.)
SERFF does not fully automate the process of rate filing. Many otherwise standardized – or semi-standardized – forms and supporting data must be submitted as attached PDF documents rather than through a fully interactive interface.17 The software is not up to date. It uses Microsoft Internet Explorer 6—an eight-year-old Web browser—as its default client interface.18 Users are advised to use Adobe Acrobat 6, released in 2003, to handle documents submitted through SERFF. In short, as a computer program, SERFF provides nothing exceptional. SERFF announced no major upgrades to its software during 2008.
SERFF’s value comes from its standardization and the work that state insurance departments – and their industry clients – have put into making their forms available online. Given the software’s enormous profits, it is odd that NAIC has invested so little in it and failed to bring it up to date.
SERFF Is Unfair. The “profit” that SERFF earns is what economists term a “rent” – surplus revenue obtained due to a third party’s interference in an otherwise mutually beneficial bilateral exchange. As noted, nineteen states require that all filings go through SERFF and thus require insurers to pay NAIC’s fees. These fees would be called taxes were they to flow to state governments. Instead, NAIC collects the fees and spends the money on purposes that it never fully discloses to the payers. The excess profits can fairly be described as a tax for private purposes, since insurers in many states have no choice but to pay them. It is fundamentally unjust to mandate the payment of a tax to a private party. People and corporations deserve choices. The states themselves do not share in NAIC’s revenue from SERFF: the money goes to NAIC, not to the state insurance departments whose filing requirements insurers are paying to satisfy.
SERFF Ought to Raise Constitutional Questions. Several (though not all) states that mandate the use of SERFF have provisions in their constitutions that ought to raise questions about the legality of the system. Many state constitutions allow only the “state” or the “legislature” to collect taxes. Thus, a serious question exists as to whether SERFF’s fee amounts to an unauthorized “tax.” The fee, after all, is collected by a private party and set without direct control or oversight by any legislature. Insurers and others who pay SERFF fees may have grounds to launch a legal challenge to the system. Eight states that mandate SERFF filing have provisions that might be used to challenge it:19

  • Georgia: “Except as otherwise provided in this Constitution, the right of taxation shall always be under the complete control of the state.”20
  • South Dakota: “No tax or duty shall be imposed without the consent of the people or their representatives in the Legislature.”21
  • Rhode Island: “All taxes…shall be levied and collected under general laws passed by the General Assembly.”22
  • Minnesota: “The power of taxation shall never be surrendered, suspended or contracted away.”23
  • New Hampshire: “No subsidy, charge, tax, impost, or duty, shall be established, fixed, laid, or levied, under any pretext whatsoever, without the consent of the people, or their representatives in the legislature, or authority derived from that body.”24
  • Massachusetts: “No subsidy, charge, tax, impost, or duties, ought to be established, fixed, laid, or levied, under any pretext whatsoever, without the consent of the people or their representatives in the legislature.”25
  • Michigan: “The power of taxation shall never be surrendered, suspended or contracted away.”26
  • Oklahoma: “The power of taxation shall never be surrendered, suspended, or contracted away.”27

SERFF Does Not Perform Its Central Function Very Well. SERFF’s central function is to facilitate the exchange of information on insurance rates and forms across states, but in some instances, the data exchanged through SERFF seem scanty. For example, in addition to some check boxes, SERFF’s property and casualty rate filing Web forms require only eight discrete pieces of data – which essentially amount to “How much do you want to charge?” and “How many people will this impact?”28 That sort of data alone will satisfy few, if any, state regulators; all states have regulations beyond this.29 Nearly all states require additional data justifying the rates based on loss experience, impact on company solvency, fairness to various protected groups, and compliance with numerous other state laws.
Conclusion: A Proposed Solution
Rather than rely on these mandates, NAIC could best advance its own mission by opening SERFF to competition. In establishing a series of uniform standards for data exchange relating to rates and forms, NAIC has done the job most consistent with its nonprofit mission. However, in earning monopoly rents, failing to update its software, and maintaining a fee structure that may violate some state constitutions, NAIC behaves in a questionable manner. It should strive to improve SERFF for states and insurers alike by separating its functions and creating a flexible “open source” license for SERFF.
As long as NAIC acts like a government in many respects, it merits the same scrutiny and oversight as governments do. A reform process for SERFF would involve three actions:

  • Separation of SERFF’s intellectual property from its operations;
  • Creation of an “open source” license for SERFF; and
  • Allowing free competition between providers of “SERFF standard” software. Essentially, SERFF would become a standard rather than a specific application.

SERFF reform would require splitting SERFF into two entities – at least one of which should be independent of NAIC. The first entity would administer SERFF as it currently exists. It could be a wholly independent, investor-owned company, a for-profit subsidiary of NAIC, or some other private entity. As a private company, it would collect all fees owed for SERFF filings under the current system, set its own prices, and be able to do anything else that the law does not specifically prohibit.
Another entity, a nonprofit consortium independent of NAIC – perhaps controlled by a board of industry members and regulators – would own SERFF’s intellectual property. It would license the SERFF trademark, oversee a “standard” SERFF code base, and certify privately produced software as “SERFF compatible.”
This code base would be governed under an “open source” license.30 Like all open source licenses, it would grant programmers the right to modify, redistribute, and profit from the SERFF source code. Anybody who wanted to create a product and market it as SERFF compatible would have to subject it to a review process overseen by the consortium. (The consortium members could agree to use only products that passed this review process.) The process would provide assurance that various SERFF-compatible products could exchange data freely, work with one another, and share common filing tracking numbers. Review fees would fund the consortium’s operations. Such a process has worked for dozens of other Internet technologies: HTML/HTTP (for Web pages), MIME (for e-mail), and Rich Text Format (for word-processing documents) are all “open” standards maintained through consortia. Many parties market and distribute applications that use them, and for the most part the applications work well together. States and companies wishing to depart from the SERFF standard could do so.
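As a rough illustration of what consortium certification might involve, the sketch below checks whether a vendor’s filing export carries every field a hypothetical interchange standard requires – the sort of test that would let competing “SERFF compatible” products exchange data and share tracking numbers. The field list is invented for the example, not drawn from any actual SERFF specification.

    # A hypothetical conformance check a standards consortium might run
    # against a vendor's filing export. The required fields are invented.
    import json

    REQUIRED_FIELDS = {"tracking_number", "state", "line_of_business",
                       "filing_type", "submitted_date", "attachments"}

    def is_serff_compatible(export: str) -> bool:
        """Return True if a JSON filing export carries every field the
        (hypothetical) interchange standard requires."""
        try:
            doc = json.loads(export)
        except json.JSONDecodeError:
            return False
        return isinstance(doc, dict) and REQUIRED_FIELDS.issubset(doc)

    # A vendor's export missing any required field would fail certification.
    sample = json.dumps({"tracking_number": "ABCD-125000", "state": "GA",
                         "line_of_business": "Homeowners", "filing_type": "Form",
                         "submitted_date": "2009-03-01", "attachments": []})
    assert is_serff_compatible(sample)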
The opening of the SERFF source code would solve most of SERFF’s problems. Most importantly, the questions about delegation of tax responsibility would be resolved. SERFF would clearly be a private market product and no state or company would have any specific obligation to pay money to NAIC or to anybody else in particular.31
States and insurers satisfied with the NAIC’s current management of SERFF could continue using the same software they use now. On the other hand, those states and individuals who have problems with the system could choose from a variety of new products that would spring up in the wake of the opening of SERFF’s current business model. Some operators might simply license the product to insurers and allow unlimited use for a flat fee. Others might continue with NAIC’s pay-per-use filing system. Some might charge nothing for the product and make money from technical support, sales of related products, or even (as is the case with the Linux operating system) the notoriety gained through having developed the product. Since NAIC would no longer have a monopoly on the product, no constitutional questions would exist. As different developers create new applications that serve the same functions as SERFF, people dissatisfied with the old software’s progress could finally take their business elsewhere.
In addition, a more open version of SERFF would bring market forces to bear. Having a choice among multiple ways to file forms and make actuarial adequacy information available would make it easier to create new products within the admitted market.
SERFF as it exists does not work well, and a better system is worth considering. A competitive, open-source SERFF system would work better than the existing one and would increase freedom for insurers and consumers alike.
This article was originally published May 1, 2009 in issue no. 155 of the Competitive Enterprise Institute’s OnPoint series.
References


1 For example, nearly all homeowners’ insurance policies for single family detached houses get written on the basis of a form called the “HO 3” which covers 16 named perils and everything else that is not specifically excluded.
2 New York has a pre-funded guarantee fund managed by the state as an insurance company. Its functioning is, in many respects, similar to the Federal Deposit Insurance Corporation.
3 Florida’s insurance guarantee fund is typical. The fund covers claims up to $500,000 for homes and $300,000 for most other claims. See “About FIGA,” http://www.figafacts.com/faq.asp. For another example, New Jersey offers coverage up to $300,000. http://www.njguaranty.org/infoCenter/faq.asp.
4 For reasons that lie beyond the scope of this paper—probably related to the transaction costs implicit in duplicating the current features of the admitted insurance market without a governmental rate overseer or mandatory guarantee funds—very few individual consumers choose to buy policies in the excess and surplus lines markets. Most well-known insurers do not operate in the excess and surplus lines market, and those that do typically do so through subsidiaries that maintain distinctive, independent brand identities.
5 National Association of Insurance Commissioners/SERFF, “About SERFF,” 2008, http://www.serff.com/about.htm.
6 NAIC does much of its lobbying through its D.C. office. NAIC’s major policy positions include opposition to national regulatory modernization for insurance and support for global solvency standards.
7 Ibid., p. 15, pp. 169–224.
8 NAIC, SERFF Version 5: Industry Manual, 2007, p. 4.
9 Ibid.
10 Ibid.
11 Meg Fletcher, “Stoked to Carve SERFF: NAIC Proposal Called ‘Hostile Takeover,’” Business Insurance, August 11, 2008.
12 NAIC, “List of States that Mandate SERFF,” http://www.serff.org/index_state_mandates.htm.
13 SERFF rates are not published in any widely available source; industry sources reported the fees. State insurance departments do have some costs. They must have computers to handle SERFF filings and NAIC strongly recommends that they use Adobe Acrobat Professional. Acrobat Pro lists at $160 but is available for $140 on several websites.
14 Fletcher.
15 NAIC, “SERFF Surpasses 500,000 Transactions,” December 6, 2008, http://www.naic.org/Releases/2008_docs/serff_500000.htm.
16 In fairness, Web-based rapid application development frameworks did not exist when SERFF’s first version came online.
17 Ibid, p. 94.
18 Microsoft Corporation, “Windows History: Internet Explorer History,” 2007, http://www.microsoft.com/windows/WinHistoryIE.mspx. See NAIC (2007) for requirements.
19 In all of these states, “workarounds” exist that could make it possible for the current system to continue. In the four states that reserve the power of taxation to the legislature, the legislature could simply pass a statute mandating the payment of SERFF fees. However, states that forbid the surrender, suspension, or contracting of revenue collection could face more significant problems—state courts could consider the ability of NAIC to set fees on its own as an instance of “contracting away.”
20 Constitution of the State of Georgia, Article VII, Section 1(I).
21 Constitution of the State of South Dakota, Article VI, Section 17.
22 Constitution of the State of Rhode Island, Article VII, Section 1(I).
23 Constitution of the State of Minnesota, Article X, Section 1.
24 Constitution of the State of New Hampshire, Article 28.
25 Constitution of the Commonwealth of Massachusetts, Article XXIII.
26 Constitution of the State of Michigan, Article XI, Section 2.
27 Constitution of the State of Oklahoma, Article X, Section 5.
28 Ibid., pp. 88–89.
29 Regulators have not specifically complained about this because they typically work to enforce their own state laws.
30 NAIC would likely select a given license from the long list of licenses that have gone through the Open Source Initiative’s Review Process. Open Source Initiative, “Licenses by Name,” http://www.opensource.org/licenses/alphabetical.
31 By way of analogy, consider common law court requirements for the format of legal briefs. Since any decent desktop publishing software can produce the same brief, the requirement does not impose any specific “mandate” or “tax” even though it may impose a burden of sorts.


Eli Lehrer is a senior fellow at the Competitive Enterprise Institute, where he directs CEI’s Center for Risk, Regulation, and Markets. RRM, which operates in both Washington, D.C. and Florida, deals with issues relating to insurance, risk, and credit markets. Prior to joining CEI, Lehrer worked as a speechwriter to United States Senate Majority Leader Bill Frist (R-Tenn.). He has previously worked as a manager in the Unisys Corporation’s Homeland Security Practice, as Senior Editor of The American Enterprise magazine, and as a fellow at the Heritage Foundation. He has spoken at Yale and George Washington Universities. He holds a B.A. (cum laude) from Cornell University and an M.A. (with honors) from The Johns Hopkins University, where his master’s thesis focused on the Federal Emergency Management Agency and flood insurance. His work has appeared in the New York Times, Washington Post, USA Today, Washington Times, Weekly Standard, National Review, The Public Interest, Salon.com, and dozens of other publications. Lehrer lives in Oak Hill, Virginia with his wife Kari and son Andrew.

Harnessing network effects: A Web 2.0 primer for the insurance industry

 
Introduction
The ascent of man from simple hunter-gatherer to progenitor of the global economy can be directly attributed to our innate and profound ability to build profitable and self-sustaining networks. And as with all complex and dynamic systems, information is just as important a constituent of any man-made network as more tangible economic nodes such as buyers, sellers, goods and physical infrastructure.
To recap: ancient trade networks helped us survive harsh prehistoric times, and in turn contributed to the advancement of language. More complex networks then gave rise to nation-states, artisan crafts and the elite classes for whom information was both privilege and power. Even more complex networks eventually manifested through the industrial revolution to give us the enterprise, wherein information was monetized as intellectual property and the principles of mass production.
But within the past century, parallel developments in information theory and digital technology, along with massive increases in computing power, have sparked a new paradigm—the information revolution. In this new age, information is as critical to production as traditional commodities, and it contributes directly to the value of products and services.
The dawn of the Information Age “can be seen globally as the surreptitious replacement of citadels—which tend to restrict the flow of information—by less viscous environments, and the subsumption of information within capital.”[i] In few industries is this as evident as insurance, where information derived from voluminous amounts of data drives every key decision from the boardroom to the underwriter’s desk.
The information revolution—powered by instantaneous modes of communication—has justly prompted a major shift in the very fabric of capitalism, such that we are now largely operating within a network economy. Whereas ownership of physical property and ideas belonged solely to the enterprise during the industrial era, products and services are now created, and value is added, through large-scale social networks. Economies of scale stem from the size of networks instead of the enterprise, and the value of centralized decision making and expensive bureaucracies is greatly diminished.
Newer, more agile business models are supplanting formerly rigid power structures as more pervasive networks blur the line between a business and its environment.  Value is now intrinsically tied to connectivity and the openness of systems.
In the network economy, “Understanding how networks work will be the key to understanding how the economy works.”[ii] Such an undertaking is greatly simplified when one understands network effects and Web 2.0.
Network Effects
A network effect (sometimes also referred to as a network externality) is simply the effect that a user of a product or service has on the value of that product or service to other users.  A product or service displays positive network effects when more usage of the product by any user increases the product’s value for other users, and sometimes all users.
The importance of such effects in building enhanced and profitable economic networks was first recognized thanks to 19th and 20th century innovations in communications, which gave us the telephone, the Ethernet and the internet.
Bell Telephone employee N. Lytkins used the term “network externality” in a 1917 paper covering the importance of network effects in building the telephone industry. The paper explained how more users of the relatively new invention would increase the value of owning a telephone for all users.
Robert Metcalfe, inventor of the Ethernet, furthered the study of network effects through Metcalfe’s Law, which states that the value of a communications network is proportional to the square of the number of connected users of the system (n²).
Network scientist David Reed, however, postulates that there are even greater values to be exploited, as explained in Reed’s Law. According to Reed, the effects are more akin to 2ⁿ than to n², since benefits increase on the basis of the possible combinations among users and the total many-to-many possibilities made possible by the internet.
Metcalfe’s Law, according to Reed, accounts only for one-to-one possibilities. Under Reed’s Law, the utility of networks, and social networks in particular, can scale exponentially with the size of the network. The internet is thus the prime amplifier of network effects.
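A quick calculation makes the gap between the two laws concrete. The short Python sketch below counts Metcalfe-style pairwise links (n(n-1)/2 distinct pairs, which grows on the order of n²) against Reed-style subgroups (2ⁿ - n - 1 groups of two or more members):

    # Comparing the two scaling laws discussed above.
    def metcalfe_links(n: int) -> int:
        return n * (n - 1) // 2   # distinct one-to-one connections, ~n^2

    def reed_groups(n: int) -> int:
        return 2**n - n - 1       # possible subgroups of size >= 2

    for n in (10, 20, 30):
        print(n, metcalfe_links(n), reed_groups(n))

At n = 30, that is 435 pairwise links against more than a billion possible groups, which is why Reed argues that group-forming networks dominate.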
There are multiple types of network effects:

  • Direct network effects are the simplest type to recognize, wherein the value of a good or service increases as more people use it. The classic example of a direct network effect involves the telephone. As the network of people using telephones swells, so too does the value of owning a telephone, since there are more people available to call.
  • Indirect network effects are activated when the usage of a good spawns the production of complementary goods, which in turn adds value to the original product or service.  For instance, the addition and increasing quality of web-enabled software increases the value of the internet itself.
  • Cross-network effects are also referred to as two-sided network effects since increases in usage by one set of users increases the value of a complementary product to another divergent set of users.  Google exemplifies this effect since any increase in the number of users raises the value of placing advertisements on Google.  In turn, Google takes the money from advertisers and invests in additional services for the users.
  • Social network effects are also sometimes referred to as local network effects.  In this model, the value of products or services is not necessarily increased by the number of users.  Rather, each consumer is influenced by the decisions of a subset of other consumers connected through a social or business network.  The extent of network clustering and amount of information each customer possesses becomes relevant in this model.  Progressive’s MyRate program employs social network effects by enabling policyholders to compare their driving habits online to those of similar policyholders.

Such effects—especially when compounded—drastically improve the efficacy of n-sided markets, or those that connect two or more different groups of customers/users to sellers/partners.
The insurance industry is a prime example of an n-sided market.  Consider therein the multitude of networked mechanisms including insurance groups and companies, agencies, brokerage firms, risk retention groups, departments of insurance, technology providers, business consultants and policyholders – just to name a few.
Consider also the industry’s absolute reliance on data, the massive amount of potential information contained within that data, and the fact that information contributes to the overall value of goods, and therefore the collective system.  The amount of intrinsic information/value then in such a system is inherently vast, but that value can be further amplified and exploited by applying positive network effects.
And no other school of thought is enabling the application of positive network effects better than Web 2.0.
Web 2.0
The term “Web 2.0” refers to the current evolutionary stage of web principles and practices that amplify online collaboration and empower end-users to create valuable networks of shared information. Tim O’Reilly, a well-recognized Web 2.0 thought leader, further explains that:

Web 2.0 is the business revolution in the computer industry caused by the move to the internet as a platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them.[iii]

The emergence of Web 2.0 was not planned. Rather, its core conceptual and technological underpinnings were derived from closely examining internet companies that survived the dotcom bubble of the late 1990s and ultimately emerged as clear market leaders and innovators over the span of the last decade. But the recent codification of Web 2.0 principles and practices is enabling a bustling new era of user-centric, network-enabled software applications. This Web 2.0 systemization dictates strategic positioning of the web as a platform, user positioning wherein users control data, and a broad set of core competencies, which include:

  • Cloud Computing, which describes the provision of computing resources (software applications, networks, servers and data storage) as a service delivered through the internet. This stands in sharp contrast to more conventional and dated means of provisioning, wherein businesses manage their own networks, servers and data stores, and IT staff is required to install, update and troubleshoot software on individual devices. There are three basic service models for cloud computing:
    • Software as a Service (SaaS), wherein a consumer uses a service provider’s software applications on-demand, running on a cloud infrastructure. In this model, consumers do not manage the underlying infrastructure of networks, servers, data stores, operating systems or individual application capabilities. Users can, however, control software configuration settings and add modular software components. Google Apps is a key example of SaaS.
    • Platform as a Service (PaaS), wherein a consumer uses a service provider’s cloud infrastructure to deploy software applications. In this model, consumers do not manage the underlying infrastructure of networks, servers, data stores or operating systems. Consumers do retain control over the deployed software applications and hosting environment configurations. To this end, SalesForce.com enables developers to create new applications that can either add to existing SalesForce.com functionality, or create new functionality.
    • Infrastructure as a Service (IaaS), wherein a consumer utilizes the fundamental computing resources of a service provider, including data storage and network capabilities. In this model, the consumer can deploy and run any software of its choosing, including operating systems and applications. The consumer has little control, however, over the infrastructure itself, except with respect to select networking components such as firewalls. Many companies, such as Google, provide this infrastructure in tandem with use of their software. Other providers simply provide this service via data centers. This model can be likened to renting physical warehouse space, wherein the consumer has complete control over physical goods, as well as itemization and inventory techniques and shipping mechanisms. The consumer has very little say though as to how the warehouse is operated by its owner.
  • Software above the level of a single device, which postulates that applications that are limited to a single device, such as a personal computer, are far less valuable than applications that integrate services across any device that provides internet access. Software that serves multiple platforms displays positive network effects.
  • Architecture of participation, which describes the nature of systems that are designed for user contribution. One of the fundamental tenets of Web 2.0 is that users create value by contributing information to systems as a side-effect of ordinary usage. End-users contribute by creating hyperlinks to connect disparate information sources, and by adding to online information bases and SaaS feature-sets. The architecture of participation also enables programmers to contribute to cheaper and more agile open-source code and software standards.
  • Harnessing collective intelligence, which involves the systematic collection, categorization and analysis of broad sets of usage patterns and user contributions to create actionable intelligence and increase value for all users. Collective intelligence systems tap the expertise of a group rather than an individual for decision-making purposes. For example, PredictWallStreet.com focuses one million unique monthly visitors on predicting whether a stock will close up or down. The resulting algorithms are able to outperform the market, which individual analysts typically cannot do. Diversity of opinion, independence, decentralization and aggregation are required to effectively harness the wisdom of crowds.
  • The importance of data, which describes the increasing significance of proprietary data and associated databases as a core competitive advantage, as opposed to storage and transfer technologies. Such technologies are becoming cheaper, more agile and more ubiquitous by the day, enabling companies to produce more accessible and more participatory data sources that can be quickly and continuously augmented to increase their value.
  • Rich User Experience, which makes clear that web applications must be able to provide a user interface and base functionality that perform just as well as – or better than – more traditional, device-dependent software. Recent advances in mainly open-source technologies such as AJAX are enabling developers to build web applications that accomplish this directive. And by using the web as their platform, Web 2.0 systems are able to provide an enhanced set of network-enabled, value-generating features not typically found in non-web-native software, including:
    • Blogs, or Web logs, which are online journals or diaries hosted on a Web site and often distributed to other sites or readers using RSS, or syndicated feeds. Blogs may be of a personal nature, or intended for a business audience. When used for business purposes, blogs are prime, easy to use enablers of thought leadership. Blogging software, such as WordPress.com and Blogspot.com, is often free, and enables blog subscribers or readers to post comments in an open environment for further discussion.
    • Mash-ups combine content from existing online sources to create new services. For example, a mash-up might retrieve policyholder data from a networked database and display the policyholders’ locations on a web-enabled Google map (see the sketch following this list).
    • Podcasts are a multimedia form of a blog, typically containing audio or video content.  Podcasts are a method of broadcasting that does not depend on scheduled broadcasting times.  Rather, podcasts can be streamed or downloaded and played on demand.  iTunes is the most popular aggregator of podcasts. Because iTunes and the iPod were early enablers, the term “podcast” is a mash-up of the terms “broadcast” and “iPod.”
    • RSS (Really Simple Syndication) allows internet users to subscribe to online distributions of news, blogs, podcasts, or other information. Aggregators such as iGoogle and MyMSN combine RSS feeds from multiple sources to provide personalized access from a single portal.
    • Social networking refers to sites such as LinkedIn, which allow members to communicate, form groups, and access other members’ personal information, skills, talents, knowledge or preferences. Such sites have experienced explosive growth within the past few years, and collectively boast membership in the hundreds of millions. Social networking concepts can also be applied to other types of web applications.
    • Web services enable communication between disparate systems in order to automatically pass information and conduct transactions. For instance, an insurer and an insurance agent might use web services to communicate over the internet and update each other’s systems without the need for multiple, manual updates. Web services also enable service-oriented architecture (SOA), which builds interoperable services around business processes.
    • Wikis are systems for collaborative publishing, which allow many authors to contribute to an online document or discussion. The foremost example of a popular wiki is Wikipedia, which greatly exemplifies the principles of architecture of participation and harnessing collective intelligence.
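As promised above, here is a minimal sketch of the mash-up and web-service ideas from this list: it pulls policyholder records from a hypothetical web service and reshapes them into markers a mapping API could plot. The URL and field names are placeholders, not real endpoints.

    # A hypothetical mash-up: fetch policyholder records from one service
    # and prepare their locations for a mapping widget. The endpoint and
    # field names are invented for illustration.
    import json
    import urllib.request

    def fetch_policyholders(url="https://example.com/api/policyholders"):
        """Retrieve policyholder records as a list of dicts."""
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def to_map_markers(policyholders):
        """Reshape records into the marker format a map API expects."""
        return [{"lat": p["latitude"], "lng": p["longitude"], "label": p["name"]}
                for p in policyholders]

A front end would then hand to_map_markers(fetch_policyholders()) to, say, the Google Maps JavaScript API to render the book of business geographically.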

Also fundamental to Web 2.0 is the ability for users to create hyperlinks and post comments. To this end, Web 1.0 can be considered “read-only” from the end-users’ perspective. In the former model, web masters prescribed static hyperlinks to connect disparate web sites, or to navigate to different pages within the same website. Additionally, end-users were merely provided access to site content, lacking the ability to provide open and transparent feedback by way of comments and replies. This model ultimately reflected the limited role of consumers during the industrial and mass-media dominated eras.
Conversely, Web 2.0 can be considered “read-write” from the end-users’ perspective. In this model, users are afforded the ability – and indeed imbued with the obligation – to create hyperlinks. Or as the aforementioned Tim O’Reilly so elegantly writes:

As users add new content, and new sites, it is bound to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition and intensity, the web of connections grows organically as an output of the collective activity of all web users.[iv]

Web 2.0 end-users are also provided dialectically transparent feedback mechanisms packaged with site content, typically in the form of commenting functionality. Most Web 2.0 site content, and almost all blogs, includes the ability to post comments, thus removing the barriers between author and reader and between all readers. As such, content consumers can initiate further, elucidated discussion, and keep authors and content providers honest. Further to this, many web applications that enable users to share hyperlinks also provide mechanisms for commenting on the hyperlinked content.
Deriving Value from Web 2.0 Enabled Network Effects
Much has been written (and even lamented) about the insurance industry’s apparent sluggishness in adopting and implementing new technologies, particularly in the realm of Web 2.0. In fairness, the insurance industry represents a massive and necessarily risk-averse n-sided market, subject to more rigorous standards and complexities than most other industries.
But we have reached a tipping point where much of the risk involving Web 2.0 has already been assumed by leaders such as Google and Amazon.com. Web 2.0 companies are thus beginning to target the insurance industry with new technologies and methodologies at breakneck speed, and it is widely believed (for a variety of reasons) that companies that do not make the switch to Web 2.0 will ultimately suffer decreased competitive positioning. So let us discuss some of the ways in which insurers can derive value from Web 2.0-enabled network effects:

  • Embrace the cloud. Cloud computing represents a sea change in modern business operations, and one which the insurance industry must embrace sooner rather than later. As it is, modern businesses must concentrate, first and foremost, on their core competencies. And no other facet of insurance operations is more distracting than maintaining dedicated, internal resources for software maintenance, network and server architecture, and data storage. Cloud computing provides software and operating platforms locked in states of perpetual beta, in which improvements are constantly made and rolled out without interruption to the end-user. Additionally, the storage of data off-site negates the need, almost entirely, for companies to employ and manage network administrators, and can keep data far more secure than most insurance companies could achieve on their own. The development of cloud computing also means that business objectives are no longer limited by IT objections based on the availability of limited internal technology or IT competencies. In past decades, such restrictions enabled IT staff to dictate the ultimate reach of many business decisions. But cloud-based systems are malleable, built and customized directly in support of business operations, not IT proficiency. Lastly, and perhaps most importantly, insurers in today’s economy must operate leaner and meaner than in more fiscally liberal times. Resources must be dedicated to business imperatives, not to unnecessary software, server and data storage licensing costs and expenditures. Cloud computing ultimately presents companies with enormous potential for cost savings. (To this end, the City of Los Angeles will save $13 million in software licensing and manpower costs over the next five years simply by adopting Google Apps hosted solutions.)
  • Mobilize your workforce. Web 2.0 software that serves multiple devices transcends geographic limitations, drastically increasing productivity and improving collaborative business processes. So make sure that your workforce is capable of performing any base function from anywhere (reasonably speaking) in the world. For instance, mobile claims adjusters who can process claims in real time and on-site represent drastic improvements in efficiency. Furthermore, any member of a workforce should be able to access and edit information or internal documents directly from a database through the web. This stands in stark contrast to the former models of e-mailing documents back and forth between onsite and offsite workers, or downloading documents from the company’s server for use on a different device, which effectively creates a new version of the document for each instance of use.
  • Establish an architecture of participation. Again, one of the most beneficial aspects of Web 2.0 is user-generated content, given that users create value. So implement systems that support and employ this effect. Encourage all nodes in your network, including consumers, to contribute to your total information base through wikis and other forms of online discussion. Give your end-users the ability to comment on and create hyperlinks to all pertinent web pages and information bases, and watch the value of your systems and their inherent information grow exponentially.
  • Harness collective intelligence. Collective intelligence is market intelligence, so make sure that your systems are capable of collecting and analyzing usage patterns, user feedback and other data created through use of your systems. A simple example to consider is the ease with which online surveys can be created, conducted and analyzed entirely online (see the sketch following this list). Savvy companies are relying increasingly on such efficient mechanisms to gather user feedback and turn the wisdom of crowds into actionable business intelligence.
  • Focus on creating unique, hard-to-duplicate sources of data. Again, the increasing ease and ubiquity of data collection, storage and analysis is enabling companies to instead focus on the production of richer, deeper data sets as a competitive advantage. And in addition to using such data sets for improved rating and underwriting techniques, insurance organizations can leverage valuable data sets to sell to other organizations. Although this activity centers mainly on service providers, the potential spans all organizational types in an industry that views data as one of its most important and valuable assets.
  • Provide a rich user experience. Consumers are relying more and more on the internet to locate coverage options, receive and compare quotes, and manage their policies online in real time. Further to this, consumers are looking for quicker, easier means of self-service, which Web 2.0 enables far better than past methods. So a rich user experience, when done right, can contribute both to new customer acquisition and to customer retention. Additionally, Web 2.0 is proving to be the most effective tool available to an organization’s branding and marketing wings. Contemporary, successful marketing and branding smartly focuses its attention on blogs, podcasts and other forms of interactive social media to reach target audiences faster, more efficiently and more cheaply than ever before. And by making the branding and marketing processes two-way, consumers feel more comfortable and provide more direct feedback in a climate where they otherwise cynically view non-inclusive marketing techniques as indifferent to their actual wants and needs. Furthermore, Web 2.0-enabled marketing techniques allow real-time analysis of the ultimate success of marketing efforts. And finally, with Web 2.0 web applications providing user interfaces that rival those of traditional software, replacing legacy systems becomes a much easier sell.
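To ground the collective-intelligence item above, here is a toy sketch that aggregates many independent policyholder ratings into a single signal and flags products falling below a service threshold. The data and the threshold are invented for the example.

    # A toy "wisdom of crowds" aggregation: average independent ratings
    # per product and flag anything below a service threshold.
    from collections import defaultdict
    from statistics import mean

    survey_responses = [            # (product, rating) pairs from a survey
        ("auto", 4), ("auto", 5), ("auto", 3),
        ("home", 2), ("home", 3), ("home", 2),
    ]

    def aggregate(responses, threshold=3.0):
        by_product = defaultdict(list)
        for product, rating in responses:
            by_product[product].append(rating)
        return {p: (mean(r), mean(r) < threshold)
                for p, r in by_product.items()}

    print(aggregate(survey_responses))
    # 'auto' averages 4.0; 'home' averages about 2.3 and gets flagged

Real systems would of course weight for response volume and sample bias, but the principle – many independent inputs distilled into one actionable signal – is the same.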

Conclusion
This article by no means attempts to provide a final, definitive resource for network effects and Web 2.0, nor does it claim to provide an exhaustive list of all pertinent elements and methodologies. It does aim, however, to highlight the benefits of Web 2.0-enabled network effects for an industry that is primed and ready for major innovation.
The author urges the reader to use this information as a starting point – either for debate, or for new considerations for operational excellence. Readers who wish to pursue this topic further would be well served by reading Amy Shuen’s Web 2.0: A Strategy Guide.
References


[i] Hookway, Branden (1999). Pandemonium: The Rise of Predatory Locales in the Postwar World.  New York: Princeton Architectural Press.
[ii] Kelly, Kevin (1998). New Rules for the New Economy: 10 Radical Strategies for a Connected World.  New York: The Viking Press.
[iii] “Web 2.0 Compact Definition: Trying Again” (2005, September 30). California: O’Reilly Media Inc. Retrieved on October 15, 2009 from the World Wide Web: http://radar.oreilly.com/archives/2006/12/web_20_compact.html.
[iv] “What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software” (2005, September 30). California: O’Reilly Media Inc. Retrieved on October 15, 2009 from the World Wide Web: http://oreilly.com/lpt/a/6228.


Josh Struve is the Digital Marketing Manager at Perr&Knight, as well as the Managing Editor of the Journal of Insurance Operations.

P&C underwriting automation: It’s time to optimize and modernize

Background
The property & casualty insurance industry continues to face challenging market conditions. Premium rates continue to drop while the economic slump reduces exposure bases. In the face of this premium shrinkage, carriers are trying to hold the line on expenses even as they strive for higher submission and policy counts to keep premium revenue up. Agents, who face their own revenue pressures, are now shopping more risks around and demand greater ease of doing business from their carriers. At the same time, underwriters are under constant pressure to improve underwriting quality and discipline. Through it all, internal processes are cumbersome, key systems are inflexible, and any changes involve major commitments of people, time and money with uncertain results.
Challenging times indeed! I’m not fond of “perfect storm” analogies, but if you feel like George Clooney trying to get his fishing boat up over that wave, or Mark Wahlberg at the end, stretched out in his survival suit one hundred miles from land, we need to talk!
It is time to modernize and optimize your underwriting processes, even in the face of challenging times.  There are technologies and methods emerging that can do all kinds of interesting things, but before selecting the technology, we have to figure out what our new process should be.  So let us consider what carriers really want their underwriting process to be, and then look at what technologies can get us there.
The carriers speak out
We conducted three separate research studies in which we surveyed commercial carrier CEOs and senior management for their input on pain points, emerging technologies and underwriting management systems. Let us share with you some of our key findings.
1. Strategic Technology Investments to Combat the Soft Market – A Survey of Commercial Insurance Executives (Conducted by The Ward Group)
Meeting technology expectations of agents and employees is significant and often overlooked.  Beyond the profit and loss improvements that technology investments are expected to deliver, there is a growing expectation among agents and employees, especially among younger professionals, that technology should be easy to use, friendly, and cutting-edge.
In this survey conducted by The Ward Group, commercial carrier CEOs were asked ten questions about technology implementation, how technology helps them compete against other insurance companies, and the use of technology for underwriting activities.
The findings clearly show that technology is recognized as a powerful competitive weapon.  Eighty-five percent of executives polled indicated that technology can play a “significant role” or a “more than average role” in their companies’ ability to compete against other carriers.
Additional benefits that these executives expected from technology investments and a modernized underwriting system were:

  • Improved underwriting productivity and reduced underwriting expense
  • Reduced loss ratio
  • Ease of doing business
  • Better individual risk selection and pricing
  • Better understanding of the entire book of business
  • Streamlined processes and reduction of expenses
  • Meeting expectations of agents and employees

The survey participants also provided, in their own words, what they believe are the most important ways to implement new systems or to invest in new technologies that will help in a soft market:

  • “Technology is key to accomplishing underwriting and processing more efficiently….”
  • “Make it quick and easy for the agent to do business and they are more apt to use your products in a soft market.”
  • “If agents have to rekey to do business with us, they will place the business elsewhere.”
  • “Automating underwriting rules will speed up policy processing and shorten turnaround time.”
  • “Quickly understand at what price level a risk can be written and still make a profit.”
  • “New systems and technologies…allow underwriters more time to review the risk and make more qualified underwriting decisions.”
  • “Improved efficiencies give underwriters the opportunity to review more submissions.”
  • “Technology can differentiate a company from competitors.”

2. Mid-Tier Carrier CEO Study (conducted by Phelon Group)
When surveyed about pain points, the predominant concern for mid-market P&C carriers (48%) is how to get profitable business on the books in the softening market.  Executives recognize that their current underwriting processes are grossly inefficient, partly due to processes based on outdated legacy systems.  However, they believe that their intellectual capital lies in their existing systems and analytics, and they are unwilling to walk away from that competitive advantage.  Carriers are looking for ways to leverage this asset and to further codify their knowledge to get profitable business on the books.
Regarding underwriting challenges, executives chose the following priorities:

  • Improving ease of doing business with agents (33%)
  • Automation of underwriting (30%)
  • Straight-through processing (30%)
  • Integration with predictive analytics systems
  • Management visibility
  • Sharing of best practices

The participants shared with us some of their perspectives:

  • “Getting the business on the books and pricing properly with respect to risk is my main concern. Profitability is key.”
  • “Our legacy systems create huge inefficiencies and the bodies we need to process the underwriting are too heavy.”
  • “It is hard to establish true and profitable pricing in a softening market…We need better tools to analyze trends and create pricing that accurately reflects the market.”
  • “We looked to the market for a 3rd party solution, but we could not find one that met our customization requirements. Whatever we would choose would have to integrate with our system to leverage the investments we have already made in customizing our policies and pricing.”
  • “We operate in a highly competitive market and need to make it easier to work with our agents.”

3. Magic Wand Survey (Conducted at NAMIC Commercial Lines UW Seminar)
Earlier this year, we asked senior and underwriting managers, “If you had a magic wand, what top benefits would you want from an underwriting automation system?”

  • The overwhelming winner was increased efficiency and productivity. It received the most votes overall and the most #1 votes.
  • Tied for second place were both speed & agility and ease of use (for underwriters). Managers are looking for user-friendly, intuitive systems that will make it easy to do their jobs without adding complexity or requiring extensive training. At the same time, they are looking for agility, the ability to change their rules, data, and processes quickly to respond to changing market conditions.
  • The fourth most popular response was ease of doing business with agents.

These four responses all address the need for better workflow and systems in the underwriting process. Additionally, the surveyor shared with us some insightful comments from the participants.

  • “We want increased premium capacity with the same number of staff, for profitable growth.”
  • “I want to reduce the number of people handling a submission and cut down on the back-and-forth questions between underwriters and agents.”
  • “We want to improve the customer experience.”
  • “The ideal system would allow our customer – the agents – to interact with our associates and view the system together.  This would allow us to provide better service.”

The remaining responses included: integration of disparate systems into a unified underwriting desktop, management visibility, discipline & consistency, scaling the business, and predictive modeling & analytics. (It is interesting to note that all of these items received some first- and second-place votes.)
How much we’ve spent, how little we’ve changed!
A few months ago I was involved in a discussion about the challenges of tracking submission activity and turnaround times. It reminded me of how little we have accomplished over the last 25 years. The question had to do with what to use as the received date/time for a submission – when it was received in the mailroom/imaging station, when the underwriting assistant got it, or when the underwriter got it. I realized that I had had that same discussion with my business users 25 years ago. While certain steps have been automated in and of themselves, we still have basically the same processes, the same steps, the same people!
With today’s capabilities, a submission could be received through upload or agent portal entry (including supplemental data and attachments). Any necessary web services could already have been run in accordance with carrier rules (e.g., address scrubbing, geo-coding, financials, etc.) and attached to the account.  It could immediately appear on the assistant’s or underwriter’s work queue.  Submission tracking from that point could auto-magically be done by the system and available in real-time through a dashboard.
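To make the contrast concrete, here is a minimal Python sketch of that intake flow – submission in, carrier-defined services run automatically, work queued, and every step timestamped by the system. The service names and data are hypothetical stand-ins, not any particular vendor's API.

    # Minimal sketch of automated submission intake (hypothetical names/values).
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Submission:
        account: str
        data: dict
        history: list = field(default_factory=list)

        def log(self, step: str) -> None:
            # The system stamps every step once -- no more debating which
            # "received" date/time counts.
            self.history.append((step, datetime.now(timezone.utc)))

    def scrub_address(sub): sub.data["address_ok"] = True       # stand-in service
    def geo_code(sub):      sub.data["geo"] = (42.44, -71.23)   # stand-in service

    INTAKE_SERVICES = [scrub_address, geo_code]  # ordered per carrier rules

    def receive(sub: Submission, work_queue: list) -> None:
        sub.log("received")
        for service in INTAKE_SERVICES:          # enrichment before any human touch
            service(sub)
            sub.log(service.__name__)
        work_queue.append(sub)                   # appears immediately on the queue
        sub.log("queued")

    queue: list = []
    receive(Submission("Acme Manufacturing", {"state": "MA"}), queue)
    print(queue[0].history)                      # real-time, dashboard-ready tracking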
But that is not where most carriers are today. Generally, we have automated various individual steps, but the overall workflow is still a manually-controlled one, performed by the mailroom, imaging, clerical, and underwriting staff.
For example, we’ve spent millions of dollars to go paperless, but in many companies underwriters are still pulling up electronic images and re-typing data into another system, just as we used to do with mailed-in or faxed-in paper. This is wonderful document management and forwarding, but it is still the same old workflow. In fact, underwriting team members may be re-typing information into their rating engine and/or quoting system. They are probably re-typing into multiple web services like D&B, ChoicePoint, geo-coding, engineering survey, loss control vendors, etc. And maybe they are still typing to get loss history, customer IDs, submission file labels, and who knows what else. (Take a little test: How often does your entire staff enter, type, or write the insured name, whether it is in a system, on a letter or form, on a label, in a web service, etc.? Once, twice, four times, seven times, ten times, more?)
How many of us still pass paper from one person to the other – underwriter to assistant, rater, or referral underwriter? How many of us still take hours or days to acknowledge receipt of a submission, to collect the supplemental data needed to underwrite it, to generate a quote, to get the agent’s feedback? How long does it normally take to resolve a 30-second issue between the underwriter and the agent or the underwriter and his/her supervisor? How long does it take to send, receive, research, make a decision, and reply to a referral?  On the other hand, how long would it take if all the information were presented to the underwriter and manager in context, a click or two away, and the transmission was instant?
And that’s just what we do to ourselves. How do agents feel about how we help them provide service to their customers?
Our business processes are constrained by our old systems and our old patterns. Our systems treat underwriting as a data entry process for policy administration instead of a unique workflow with its own set of players, sources of information, processes, and rules.  And our ideas tend to be limited to this view of what is possible.
We need to break free of this mindset – to be able to see what is possible. Let’s start with a list of workflow “don’ts”, things that underwriters and agents shouldn’t have to do or use anymore:

  • Tracking sheets
  • Typing in data from a paper or from one system to the other
  • Waiting for a paper file to be pulled or received
  • Having to close one submission in the system to be able to access another
  • Re-typing data into another system to get a loss control, loss history, credit report, MVR, VIN validation, etc.
  • Searching through the underwriting manual or old emails to find that company directive on writing xxx LOB in yyy territory
  • Agent entering 5 screens of data only to find out you don’t write that class at that size in that state
  • Waiting two hours for an agent/underwriter to get back in the office to check their files on something
  • Losing the account because someone misplaced the submission paperwork
  • Finding out six months later that an agent/underwriter shouldn’t have quoted that account because it was outside your appetite or their authority level
  • Collecting information, printouts, separate documents, and the underwriter’s notes to pass it on to the referral underwriter
  • Reconciling your agent portal quote with your backend rating quote

Now let’s review what types of emerging technologies are ready for prime-time and then think about how we can do things better.
Emerging technologies
Over the last several years, there have been many exciting advances in technology and what you can do within the context of business operations. By and large, most of these are still on the wish-list for carriers and, for that matter, for insurance systems vendors. But these emerging technologies provide the new foundation to break free of the older system/technology constraints that have kept us stuck in our old workflows.
Service-Oriented Architecture
One of the most basic innovations is Service-Oriented Architecture (SOA). SOA breaks application systems into separate “services” that can receive input parameters, run, and return their result set to whoever invoked them. Each service acts like a building block that can be used and re-used in various contexts, like a Lego block. This allows applications to be assembled from appropriate services: You’d like to check the customer’s financial status? Just plug in a Dun & Bradstreet report. SOA provides a more flexible and more sustainable way to set up your enterprise applications.
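As a rough illustration of the building-block idea (the service names and return values below are hypothetical, not any vendor's API), here is a minimal Python sketch: each service takes input parameters and returns a result set, and the application is simply a composition of services.

    # Minimal sketch of SOA-style composition (hypothetical services).
    def dnb_financials(company: str) -> dict:
        # Stand-in for a Dun & Bradstreet web service call.
        return {"company": company, "credit_score": 72}

    def geo_code(address: str) -> dict:
        # Stand-in for a geo-coding service.
        return {"address": address, "lat": 40.71, "lon": -74.01}

    def assemble_account_view(company: str, address: str) -> dict:
        # The application is assembled from reusable services, Lego-style.
        return {**dnb_financials(company), **geo_code(address)}

    print(assemble_account_view("Acme Manufacturing", "1 Main St, Anytown"))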
Note that retrofitting existing applications into a service-oriented architecture can be challenging. Some major subsystems (e.g., rating, policy issuance) can sometimes be broken out into services so that the legacy system can play with newer SOA applications, but a full rework of legacy applications is rarely practical.
However, SOA is clearly the best practice now, and all new applications, whether built in-house or acquired from solution providers, should follow a service-oriented architecture.
Web 2.0 & rich internet applications
Web 2.0 and Rich Internet Applications are generic terms that refer to the use of technologies and methods to bring new levels of interactivity and real-time behavior into browser-based applications. Examples include blogs, wikis, chat, social networking, photo/video, and voice.
What’s new is not so much the technical capabilities themselves, but the new forms of mass use that have sprung up as internet access has expanded past critical mass. People have been sharing files and chatting over the internet for decades. But now it is so common and standard that internet applications are being built around these capabilities, with documents, streaming video, and chat as part of the application interaction. Witness Facebook, dating services, and even the NBC Olympics website.
Similarly, Web 2.0 offers new options for how we do business in the insurance space. We can incorporate chat, real-time notes, flexible file/video attachments wherever they can improve the quality and/or speed of the process.
Configurability
Configurability refers to the ability to specify or change details of a system without having to touch the underlying base code of the system. The concept is not new – vendors have talked about being configurable for a couple of decades. But both the breadth and the ease of configuration have improved dramatically in the last several years.
In the past, configurability usually referred to the ability to redefine the values of a few fields to fit a carrier’s specific data requirements, or a control table that would direct processing between a few pre-defined paths. But now you have the ability to truly define or redefine any and all of the data elements, values, supplemental data, screens and screenflow, edits, risk selection and appetite rules, underwriting guidelines and best practices, straight-thru processing, assignments, referrals, users, permissions, letters of authority, and the internal and external services you want to perform. Before, you could tweak your hard-coded process with a few variations. Now you can configure virtually your whole process for each line of business, geography, distribution channel, and even each individual.
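A minimal sketch of configuration-over-code follows; the line-of-business names and thresholds are hypothetical. The point is that appetite rules and screenflow live in data an analyst can edit, so tightening the appetite is a data change rather than a base-code change.

    # Minimal sketch: the carrier's appetite and screenflow as editable data.
    CONFIG = {
        "lines": {
            "workers_comp": {
                "states": ["MA", "NY"],
                "max_payroll": 5_000_000,
                "screens": ["account", "class_codes", "loss_history"],
            }
        }
    }

    def in_appetite(line: str, state: str, payroll: int) -> bool:
        rules = CONFIG["lines"][line]
        return state in rules["states"] and payroll <= rules["max_payroll"]

    # Tightening appetite is a configuration change, not a code change:
    CONFIG["lines"]["workers_comp"]["max_payroll"] = 3_000_000
    print(in_appetite("workers_comp", "MA", 4_000_000))   # now False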
Configuration has gotten more powerful and much easier.  In the past, the “configuration” was done by the vendor’s programmers, either in native programming code or through a proprietary pseudo-code.  Today, the advanced solutions in the marketplace offer point-and-click configuration tools that allow business system analysts or developers to specify what should happen.
Configurability is another best practice that carriers should insist on as they look at new solutions.  (But make sure you get to see and try it – everyone says they are configurable, but what they actually offer varies widely.)
Rules & workflow engines
Rules and workflow engines allow the definition of specific business rules and/or process workflows separate from the system’s data and screen handling.
This segregation of the rules and/or process steps allows for easier modification of the rules and/or process without having to change the underlying base code. For instance, if the carrier decides to tighten their underwriting rules, change their assignment rules, or tweak their scheduled credit ranges for a specific territory and class, the change may be made to the appropriate rule or workflow, and the application will automatically absorb that change every place the application uses that rule or workflow.
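Here is a minimal sketch of that separation, with hypothetical territory, class, and credit-range values: the rule is named and data-driven, and every part of the application that evaluates it absorbs a change immediately.

    # Minimal sketch of a rule kept apart from application code.
    RULES = {
        # (territory, class) -> allowed scheduled-credit range
        ("TX", "contractors"): (-0.10, 0.15),
    }

    def credit_in_range(territory: str, class_code: str, credit: float) -> bool:
        low, high = RULES.get((territory, class_code), (-0.25, 0.25))
        return low <= credit <= high

    print(credit_in_range("TX", "contractors", 0.20))   # False

    # The carrier tweaks the range for one territory/class; no base code changes,
    # and quoting, endorsement, and renewal all pick up the new rule at once.
    RULES[("TX", "contractors")] = (-0.10, 0.25)
    print(credit_in_range("TX", "contractors", 0.20))   # True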
In addition, these engines permit separate and more effective management of business rules and workflows and facilitate their re-use across the carrier’s entire business operation. (Rules engines and workflow engines are distinct from each other, though in some installations they overlap; they are similar in how they relate to the business application.)
Separate external rules and workflow engines have been available for many years. But, in reality, their effectiveness in insurance applications has often been limited. Traditionally they have been toolkits with little or no applicable insurance content out-of-the-box.  As a result, you would have to build a new application from scratch, or you would have to integrate the external rules/workflow into your existing legacy systems. Either approach involves significant cost and time. In addition, often you would find that you can’t efficiently invoke the rule/workflow engine everywhere you would like without prohibitive performance overhead (e.g., invoking a rules engine at the field level).
In recent years, however, modern configurable solutions have increasingly emerged with embedded rules and workflow capabilities. These products offer the necessary level of rule and workflow management while also providing standard insurance rules and workflows out-of-the-box, allowing configuration of company-specific rules and practices, and performing efficiently at any level in the application (e.g., pre-screening, field-level, screen-level, assignment, quote, referral, etc.). This enables the carrier to implement a modern solution with configurable, embedded rules and workflow in a much more reasonable time and at a much more reasonable cost.
Underwriting 2.0 – the platform of the future
Okay, so that’s the technology with all of its marketing glory. But let’s be real. What can these emerging technologies do for our process?  Can they bring all our islands of automation into a coherent, efficient underwriting process? How will they really improve productivity and quality for the underwriter and the agent? What can the modern process be like?
Above we reviewed some of the “don’ts” that have plagued our workflows for the last few decades. Let’s start looking forward and defining some “do’s” as principles for our future underwriting process.

  • Everything you need to see in one place (not in different systems, email, the fax room, your in-basket, the document management system, etc.).
  • Everything in 1-3 clicks – everything! (account, submission/policy header, application, correspondence, attached files, web service reports, external system data (loss history, loss control, payment history), underwriting worksheets, rating worksheets, predictive analytics model results, rating, rating factors, quotes, ….)
  • Everything is accessed and updated real-time.
  • Everyone is notified of everything relevant, immediately.
  • Underwriters and/or agents can work with each other, not at each other, in one process (notes, chat, shared view/update, instant update and communication).
  • Multiple accounts open at once (a click away).
  • Straight-thru processing for the clear winners and losers and, for the rest, everything set up in one desktop for the underwriter.
  • Automatic advice and reminders for the underwriter based on account characteristics or activity.
  • Intuitive, easy-to-learn, easy-to-use  (insurance terminology, no Save buttons, even a configurator designed for real insurance people and processes).
  • No re-entry – ever.
  • Configurability to keep the system current with the business needs and opportunities.

These are all possible today. The technologies are available now, and people are using them to do exactly these kinds of things (though not always in insurance). And they don’t require tens of millions of dollars and years of waiting. The first step is realizing that this is the business process you want.
What you need is a single platform, an integrated desktop, a control station for the underwriter and the agent that has all the necessary steps and resources right there. Tasks that don’t require human intervention happen automatically ahead of time. Tasks that require professional judgment or decision are automatically queued up for the underwriter and agent – with all the appropriate research, background information, and pre-analysis needed to make the best possible decision available at the click of a mouse. This new desktop and process is integrated with and leverages the carrier’s existing systems, data, rating, forms, models, and knowledge resources. Communication and collaboration with others is instantaneous and part of the account record. The platform and the process are intuitive for underwriters and agents. And all aspects of the desktop and the process can be adjusted, added to, or redirected as fast as the market changes.
Let’s now explore in more detail how this type of platform works and what it delivers.
Agent productivity & ease of doing business
Underwriting 2.0:  The agent can upload a submission from their agency management system or can easily enter a submission from scratch. The entry process and screens are intuitive, so agent training and errors are minimal. The agent desktop provides quick pre-qualification and risk-appetite feedback so the agent doesn’t waste time submitting risks that the carrier is not interested in. Supplemental data is prompted for at entry, while the submission is still in front of the agent. Electronic documents, photographs, loss runs, and notes can quickly be added to the submission as part of entry. When the agent submits the risk, it goes directly to the appropriate assistant’s or underwriter’s desktop, and receipt is confirmed to the agent instantaneously. The entire process of submitting a risk, including supplemental data and attachments, takes only 5-15 minutes from the agent’s desktop to the underwriter’s desktop.
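A minimal sketch of that pre-qualification step follows; the appetite values are hypothetical. The portal checks class, state, and size on the first screen, before the agent invests time in full entry – no more entering five screens of data only to learn the risk is out of appetite.

    # Minimal sketch of instant appetite feedback at submission entry.
    APPETITE = {"restaurant": {"states": {"CA", "NV"}, "max_tiv": 2_000_000}}

    def prequalify(class_code: str, state: str, tiv: int) -> str:
        rules = APPETITE.get(class_code)
        if rules is None:
            return "Declined: class not written"
        if state not in rules["states"]:
            return f"Declined: {class_code} not written in {state}"
        if tiv > rules["max_tiv"]:
            return "Declined: exceeds size appetite"
        return "In appetite -- continue entry"

    print(prequalify("restaurant", "CA", 1_500_000))   # immediate green light
    print(prequalify("restaurant", "OR", 1_500_000))   # immediate, polite decline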
Quotes (including multiple quotes and quote options), agent responses, re-quotes, bind requests, and binders are prepared and delivered in real-time. Now the agent and underwriter can work through a rush quote much more efficiently and accurately, collaborating and communicating together on the same system.
For example, the new platform utilizes immediate alerts and notifications, notes, live chat (like Instant Messenger), email correspondence, and shared viewing and updating of the account. This enables the agent and underwriter to resolve questions and move the account along as fast as possible, without time-wasting email, fax, and voicemail delays and constant account pick-ups/put-downs and handoffs.
The Result:  The agent wants to bring business to you because he/she can get a confirmation, quote, and binder from you faster and more efficiently than with any other carrier. Both sides benefit and help each other succeed.
Underwriter productivity
Underwriting 2.0:  The system automatically prepares the risk for the underwriter’s consideration. Leveraging its SOA foundation, the platform can pre-assemble carrier system data (e.g., loss history, loss control, payment history), web service data (e.g., MVR, Xmod, financial, geo-code, etc.), and predictive analytics results, or it can allow the underwriter to select what information is appropriate for this risk. In addition, the desktop analyzes the submission to either highlight risk conditions or characteristics for the underwriter’s attention or to require a referral based on the carrier’s underwriting best practices, knowledge base, and the underwriter’s letter of authority. Given all of this information about the account, and using its embedded rules capability, the desktop advises the underwriter or automatically drives the appropriate processing for the risk. The platform can also screen out clear winners and losers for straight-through processing before the underwriter has spent any time on the risk, present the risk to the underwriter with best-practice advice, or flag the account to require a referral.
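To make the routing concrete, here is a minimal sketch of that triage logic, with hypothetical scores and thresholds: clear winners and losers go straight through, risks beyond the underwriter's letter of authority are referred, and everything else lands on the desktop with best-practice advice.

    # Minimal sketch of rules-driven triage (hypothetical thresholds).
    def triage(risk_score: float, premium: float, authority_limit: float) -> str:
        if risk_score >= 0.90:
            return "STP: auto-quote"               # clear winner
        if risk_score <= 0.20:
            return "STP: auto-decline"             # clear loser
        if premium > authority_limit:
            return "Referral: exceeds letter of authority"
        return "Desktop: present with best-practice advice"

    print(triage(0.95, 40_000, 100_000))    # STP: auto-quote
    print(triage(0.55, 250_000, 100_000))   # Referral: exceeds letter of authority
    print(triage(0.55, 40_000, 100_000))    # Desktop: present with advice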
These features allow the underwriter to spend more of his/her time underwriting, concentrating on the risk characteristics and the appropriate price. Everything the underwriter needs is on the desktop, just clicks away –  the complete application, attachments, notes and chat, external web reports, underwriting guidelines and best practice checklists, rating and pricing, quote and bind capabilities, issuance, endorsements, cancellations, renewals, and dashboard visibility.
For example, the underwriter prepares any worksheet items that have not been prefilled and generates one or more quote options and proposals in real-time.  If a referral is required, the full account and all of the backup information can instantly be placed in the referral underwriter’s queue for their review and decision.
Once the quote is released to the agent, an alert pops up on the agent’s desktop and an email is sent to the agent with the quote attached to notify them immediately.  The agent and underwriter can now collaborate through chat or notes and can share views and updates of the risk.  This helps the underwriter to instantly respond and modify the quote if appropriate, lets the agent accept the right quote, and lets the underwriter close the business in real-time.
Finally, when the underwriting process is completed, all the policy information and documents are passed to the carrier’s existing systems of record so the existing processes and systems are not disrupted. Throughout the entire underwriting process, all information and actions are saved in a detailed audit trail for reference by the underwriter, the referral underwriter, a claims adjuster, loss control, billing, and auditors.
The Result: The underwriter spends more time underwriting, handles more quotes, and writes more business. Setup activity is automatic, incorporation of web data and carrier knowledge happens in real-time, communication is instantaneous, and the agent gets their response as quickly as possible. Ultimately, agents bring you more business because you get them an answer first.
Underwriting quality/discipline
Underwriting 2.0: The underwriting desktop needs to enforce quality as well as productivity. Quality underwriting is the key to an insurance carrier’s profitability. This platform will use its embedded rules engine and the external data from web services and the carrier’s systems to guide and enforce best practices throughout the underwriting process.  Every step of the process is assisted by contextual business rules that advise the underwriter and/or drive the process – the initial screening of the risk, the analysis of the risk characteristics, the knowledge-based reminders, assigning appropriate tiering/rating/pricing factors, checking electronic letters of authority, and automatic referral flags.
The Result: Quality is built right into the process. Underwriters are advised and directed in accordance with the carrier’s guidelines and best practices every step of the way.  Rather than relying on the underwriter to find and use paper- or email-based directives and after-the-fact audits, or forcing all risks through a referral process to ensure senior underwriters’ review, the desktop will lead every underwriter through the carrier’s approved risk analysis and pricing regimen. The carrier’s book will be accurate, consistent, and auditable.
Incorporating predictive analytics
Underwriting 2.0: Predictive analytics brings sophisticated analysis into the underwriting process, but only if it is used. Rather than modeling being a separate activity that involves additional work, the new platform will incorporate predictive analytics. The underwriting desktop can then directly apply model results to screen risks out, qualify them for straight-through processing, alert the underwriter to the key risk characteristics, pre-fill rating and pricing factors, and/or mandate referral processing. Having the best information and analysis available lets your underwriters assign the best price – aggressive pricing for the winning accounts, and defensive pricing for the marginal accounts.
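A minimal sketch of one way a model result can drive the process (the score bands and factors are hypothetical): the score pre-fills a pricing factor and sets the pricing posture, and poor risks are routed to referral.

    # Minimal sketch: model score pre-fills pricing (hypothetical bands/factors).
    def pricing_from_model(score: float, manual_premium: float):
        if score >= 0.80:                  # winning account: price to win it
            factor, posture = 0.90, "aggressive"
        elif score >= 0.40:                # marginal account: price defensively
            factor, posture = 1.10, "defensive"
        else:                              # poor risk: route to referral
            return None, "refer"
        return round(manual_premium * factor, 2), posture

    print(pricing_from_model(0.85, 50_000))   # (45000.0, 'aggressive')
    print(pricing_from_model(0.55, 50_000))   # (55000.0, 'defensive')
    print(pricing_from_model(0.20, 50_000))   # (None, 'refer')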
The Result:  Incorporating predictive analytics into the underwriting process helps the underwriter write better business at the best price.  Precision pricing on top of informed risk selection and underwriting quality will produce the most profitable book of business.
Actionable knowledge management
Underwriting 2.0:  So often, a carrier’s underwriting knowledge and experience is locked up in senior underwriters’ heads or buried in underwriting manuals and email archives. The new platform leverages this intellectual capital within the underwriting process. By capturing and presenting knowledge items within the context of specific risk criteria, the platform makes them actionable – suggesting attention to specific characteristics, requiring specific action, enforcing a referral, or performing an automated function. Every underwriter receives the benefit of the carrier’s best underwriters’ guidance and best practices as they underwrite an account.
The Result:  Retaining the knowledge of our senior underwriters and training our junior underwriters is one of the major challenges in our industry today. Capturing and presenting underwriting knowledge through the underwriting desktop protects and leverages this most valuable asset, giving your junior underwriters the benefit of your best underwriters’ wisdom and experience where it matters most, right within the underwriting of the account. Actionable knowledge management will improve the quality of the book of business, preserve the carrier’s knowledge assets, and enable easier training of junior underwriters.
Visibility
Underwriting 2.0:  In today’s insurance world, everyone needs to know how they are doing against their goals. The new platform will track and display everything that has been processed through a real-time dashboard. Both individual underwriters and underwriting management have detailed, easy-to-read, and configurable displays of key metrics such as item and premium counts, ratios, and turnaround time. Further drill-down into those metrics is also available with a few clicks of the mouse.
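A minimal sketch of how such metrics fall out of the system's own timestamps (the data here is hypothetical):

    # Minimal sketch of dashboard metrics from system timestamps.
    from datetime import datetime

    submissions = [
        {"received": datetime(2008, 6, 2, 9, 0),
         "quoted": datetime(2008, 6, 2, 11, 30), "premium": 42_000},
        {"received": datetime(2008, 6, 2, 10, 0),
         "quoted": datetime(2008, 6, 3, 10, 0), "premium": 18_500},
    ]

    count = len(submissions)
    premium = sum(s["premium"] for s in submissions)
    avg_hrs = sum((s["quoted"] - s["received"]).total_seconds() / 3600
                  for s in submissions) / count

    print(f"items: {count}, premium: ${premium:,}, avg turnaround: {avg_hrs:.1f} hrs")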
The Result: Underwriters and managers now have real-time statistics that reflect what is being processed and written, enabling them to recognize and respond to their own progress as well as market changes and opportunities.
Configurability
Underwriting 2.0: Even while the new streamlined process is being laid out, changes are inevitable. As such, the new platform can’t be a rigid solution that requires costly and time-consuming intervention to manage any such changes. It needs to be able to incorporate new information, new rules, new knowledge, and new services with ease – through simple configuration – in order to keep the underwriting process current.
A truly configurable system enables changes to data, screens, edits, rules, documents, and screenflow to be implemented quickly and accurately by business analysts with only modest technical skills. When the market changes, the carrier’s appetite or capacity shifts, or new opportunities arise, the underwriting desktop can change on the fly with them.
The Result:  The ability to respond quickly to market changes and to position your products and underwriting attention toward new opportunities before your competition provides a clear competitive advantage.
Modernize, optimize, transform – start now
Can you underwrite business as efficiently and effectively as you think you should be able to?  Or, are you constrained by your existing processes and systems?
Are your underwriters spending most of their time underwriting?  Or are they chasing information and doing an hour of setup and data entry for every half-hour of true underwriting?
Do your agents consider you their carrier-of-choice because you make their job easier and help them succeed?  Or do they think you are hard to do business with, so that you have to constantly press them for quality submissions?
Are you leveraging your underwriting knowledge and best practices to write the best business at the best price?  Or are you just doing pretty well with what you have to work with? Do you even really know?
Modernizing and optimizing your process can transform your business.

  • Because you help agents to be more productive in getting answers to their customers, more business will come in.
  • Underwriters will be able to focus on underwriting and handle more submissions in less time with better quality. Yes, underwriters will be able to write more business – and better business – at the best price.
  • Managers will finally be able to see across all lines of business, react in real-time, and deploy a true enterprise underwriting strategy.

These platforms are all within our reach today, but only if we are willing to transform how we process our business.
Stop looking at the underwriting process as just data entry for the policy administration system – it is a unique business process with a unique set of demands and goals.
Stop investing your energy and resources in small enhancements to the same constraining workflow – tantamount to “paving the cowpaths” – and start thinking differently about how you would process if you had that magic wand.
The best time to modernize and optimize is when it helps you lead, not when you are trying to catch up. The possibilities are here, now. And if you don’t seize them, your competition will. The first step is to define the business process you want. So get started – thinking, talking, planning, and acting.


Edward Gray is the Director of Customer Solutions for FirstBest Systems in Lexington, MA, where he works with customers to develop a shared vision for how an underwriting management system can bring real-world productivity and quality benefits to the carrier’s internal and agency operations. Ed has more than twenty-five years of insurance expertise in Information Technology and Business Operations with carriers and brokers, including roles as CIO, COO, and Senior Vice President of Operations. He has extensive hands-on experience in system and business process architecture and re-engineering in policy administration, claims, billing, reinsurance, accounting, and management reporting areas, so he has seen what does (and doesn’t) deliver real value to the insurance organization. Ed would be happy to hear your thoughts on the underwriting process.

Complemented Core Capabilities: How small insurers can adapt and thrive

Our products are services
Insurers are service businesses. Although insurers use the same vernacular as manufacturers and refer to their “products,” they do not create a tangible product. Instead, insurers agree to provide services to their clients when certain events occur. The original insurance service, in its simplest terms, was fulfilling a promise to pay. This simple promise has expanded over the years into a broad array of services. To deliver these services, many insurers developed internal personnel and technology infrastructures that were substantial and complex. Whether large or small, successful or struggling, established or start-up, almost every insurer operates within this proprietary service business model: to provide services they build, own, and control the infrastructure and resources that provide the services.


This proprietary business model has been a competitive advantage for organizations large enough to create the needed service delivery infrastructure, and a barrier to entry for start-ups and for insurers seeking to open new lines of business, develop product variations, or expand geographically. But this past advantage has now been turned on its head. Technology and market forces have converged in recent years to offer small insurers an affordable opportunity to control their service infrastructure – without having to build and own the resources. Small insurance businesses can now effectively deploy a different business model: they can staff their core functions internally and use technology and insurance service providers as a key strategic factor, at a variable cost, to complement and extend their core capabilities.
Technology and insurance services can be dedicated to the insurer as though they are part of its internal infrastructure, but matched just to the extent of the insurer’s needs rather than drawing resources as embedded overhead. This Complemented Core Capabilities approach enables smaller insurers not only to manage infrastructure costs effectively but also to compete, grow, and thrive in ways that were previously beyond their grasp. This business model would have significant strategic and structural cost advantages even in the older, quieter insurance business of decades past. In the current competitive business environment, which includes market forces such as rapidly changing technology, increasing difficulties in recruiting and retaining insurance talent, and tightening regulatory restrictions, the model becomes even more compelling.


The competitive environment
The property and casualty insurance industry faces a deepening and potentially long-term soft market. The soft market appears to be firmly entrenched across all lines of business.[i] Some analysts characterize the current market cycle as “painful and destructive.”[ii] Moreover, the market may stay this way until 2015 or 2016, inevitably producing impairments in insurers that are less able to compete.
In addition to this soft market, costs are increasing on several fronts. These include, for example, the effects of regulatory changes such as the Gramm-Leach-Bliley Financial Services Modernization Act and Sarbanes-Oxley requirements, disaster-planning regulations after Hurricane Katrina, changes in accounting standards, and more – all of which have added layers of regulatory and market-conduct burden. The possibility also looms of a federal regulatory role increasing the industry’s already expensive and cumbersome regulatory environment.[iii] These changes and others present challenges for insurer staff and their technology. Unfortunately, both the ability to add expert staff and the readiness of legacy technology are problematic.
The insurance industry faces a rapidly and significantly shrinking employee base, and the competition for talent has become acute.[iv] The numbers are sobering. Deloitte Consulting notes the following:
[T]here is an impending shortage of “critical talent” in the insurance industry – the talent that drives a disproportionate share in a company’s business performance. Depending on an insurer’s business strategy and model, these can be the underwriters, claims adjusters, sales professionals, actuaries, and others who can make the difference between 10 percent and 20 percent annual growth – or between underwriting profit and loss. The looming talent crisis is about to become much worse due to two emerging trends: the retirement of Baby Boomers, who begin turning 62 in 2008, and a growing skills gap.[v]
In 2006, 80% of the chartered property and casualty underwriters and 70% of property and casualty claim adjusters were over 40. And replacements aren’t arriving in large enough numbers. By 2014, Deloitte predicts the industry will face a talent gap of 23,000 underwriters and 85,000 claim adjusters. This would be a crisis in any business environment, but the current soft market means that an insurer’s ability to compete will depend on finding flexible sources of talent and expertise. The shortage of talent is occurring at all levels, including executive and middle management. This crisis affects all insurers, but smaller businesses with fewer resources to compete on salaries and other incentives will have an acute disadvantage.


In addition to the increasing shortage of insurance expertise, insurers face a technology bind between new and legacy technologies. The average policy administration, claim, and billing system is 24 years old.[vi] Like investors increasing holdings in a stock that has dropped in value, companies have continued to add enhancements and modifications to their legacy platforms rather than making a stop-loss move to Web services systems with a clearly brighter upside. Insurance executives do see the problem. KPMG’s 2007 survey indicated that improving technology is second only to strategic acquisitions as a target for capital deployment and a major factor affecting their capabilities for future growth.[vii]
Small mutual insurers have great difficulty allocating the threshold amount of capital to join the “club” that benefits from rapid changes in technology. The National Association of Mutual Insurance Companies identified this capital deficit as a developing crisis for small insurers a few years ago,[viii] and if anything the situation has worsened. Adding insult to injury, customers expect more. IBM recently surveyed more than three thousand property and casualty policyholders and noted that insurers must change their traditional business models and technology to reach increasingly Internet-savvy customers who have rising expectations for instantaneous transactions and information.[ix] Web-based product distribution presents both a significant competitive challenge to insurers who lack access to this customer channel and a significant cost, deployment, and maintenance challenge to insurer technology infrastructures and staff.[x] The event horizon of the new market created by these forces is rapidly moving closer. According to the Gartner Group, just five years from now, only insurers that overcome the challenges of increasing regulation, an aging talent base, and inflexible systems will remain competitive.[xi] Without a solution that enables them to remain competitive, small insurers will simply slip further behind their larger competitors. The solution path for small insurer survivors and “thrivers” lies in a Complemented Core Capabilities approach that leverages technological opportunities, internal core insurance strengths, and the external availability of variable-cost insurance expertise.
Complemented Core Capabilities
A Complemented Core Capabilities approach builds on shifting business process outsourcing, or BPO (retaining another company to perform distinct business activities for you), from a tactical to a strategic component of your business model. BPO has proven to be a successful strategy in financial service industries other than insurance.[xii] In deploying a Complemented Core Capabilities strategy, an insurer focuses its limited resources and talents on its core competencies while complementing those capabilities with leading technology and insurance services from outside the company. This is a strategic response to the market forces described in the first part of this article. Insurers have looked outside their internal resources for help on individual service issues for decades. The claim process in particular has been an area where insurers have utilized outsourced services; independent adjusters and appraisers have long been a staple of the property and casualty industry. But although the insurance industry has outsourced some services on a tactical basis, it has lagged far behind other industries in deploying BPO for strategic advantage. Smaller insurers facing today’s fierce competitive pressures now have an opportunity to leverage a Complemented Core Capabilities strategy to transform their capabilities to compete and succeed.
Technology capabilities
Transformational opportunities have emerged for insurers over the past five years due to two fundamental changes in property and casualty insurance technology: (1) the maturation of Web services architecture and (2) business process management tools.[xiii] Indeed, 80% of 2008’s technology development projects may be focused on these technologies.[xiv] With so many businesses now investing in these technologies, the changes they bring are inevitable. They are particularly suited to enable insurers facing resource challenges to extend their core capabilities and to maintain and grow their market position. Small insurers, however, are traditionally cautious and change-averse. We’ve all heard decade after decade of hype about the great changes the newest new technology will bring. But a cynical approach is strongly contradicted by the facts on the ground – the performance of the technology in real work settings – and is especially counterproductive in this environment, for inaction produces the very real risk of an insurer’s obsolescence and inability to compete as more and more competitors adapt.[xv] These technologies have demonstrated capabilities, robust performance, and cost savings in crucial areas such as implementing and changing business rules. The question is no longer whether these technologies will dominate the market but rather which insurers will embrace the opportunity.
Web services technology takes what used to be complex business operations represented by millions of lines of code and breaks them down into reusable building blocks. Previously, to change the business process, add a product, enter a state, or implement any significant change, the insurer had to invest substantial time and resources changing those lines of code and testing those changes. Web services architecture allows insurers to capitalize on the fact that the business of insurance consists of patterns that repeat throughout business requirements across the various departments and functions of the insurer. This architecture provides generic, reusable building components that produce a simple, easily configured environment that focuses on the business operations and facilitates rapid and flexible product development. Where business processes previously were forced to match system capabilities, now systems are easily configured and changed to match the business process.
Business process management technology automatically tracks and coordinates transactions and processes, allows and triggers manual intervention as required, extracts data and transfers it to appropriate users, executes transactions across multiple systems, and facilitates straight-through processing of transactions without human intervention under defined criteria. The heart of business process management solutions is process and workflow automation in accordance with rules, maintained in a rules engine, that define the sequences of tasks, responsibilities, conditions controlling the processes, and process outputs, among other aspects of the processes.
Business process rules engines have even transformed the cumbersome process of configuring and maintaining rating engines.[xvi] These systems streamline processes by limiting human involvement to just those aspects of transactions that require exception decisions and actions, automatically handling the transfer and execution of process tasks in accordance with defined conditions. By reducing the time and resources required to complete processes, the system reduces cost. Moreover, reducing manual touches reduces transaction time, improves service, and reduces errors.[xvii]
Straight-through processing (STP) is just one of many areas where the combination of Web services architecture and business process management solutions is making an impact on the property and casualty insurance industry. STP means the end-to-end execution of a business process, such as policy rating, quoting, and issuance, with little or no human interaction.
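As a rough sketch of STP under rule-driven control (the steps, rate, and exception condition below are hypothetical), the process runs end-to-end untouched when the defined conditions hold and pauses for a human only on exceptions:

    # Minimal sketch of straight-through processing with a rule-defined exception.
    def rate(policy):  policy["premium"] = policy["exposure"] * 0.012; return policy
    def quote(policy): policy["status"] = "quoted"; return policy
    def issue(policy): policy["status"] = "issued"; return policy

    STP_STEPS = [rate, quote, issue]

    def process(policy: dict) -> dict:
        for step in STP_STEPS:
            policy = step(policy)
            # Rule-defined exception: large accounts pause for a human decision.
            if policy["premium"] > 25_000 and policy.get("status") != "issued":
                policy["status"] = "exception: manual review"
                return policy
        return policy

    print(process({"exposure": 1_000_000}))   # premium 12,000 -> issued untouched
    print(process({"exposure": 5_000_000}))   # premium 60,000 -> manual review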
According to John Del Santo of Accenture:
[T]op performing carriers are now turning STP into reality and profiting handsomely along the way. These carriers are implementing rules-driven platforms that enable STP across the entire insurance policy life-cycle – from sales illustration to policy administration. The scalable technology with which these platforms are built is enabling these carriers to drive major transformational initiatives – a feat that their competitors are racing to repeat.[xviii]
Access to business process management technology is the obvious essential first step in winning that race.
Core capabilities
Increasing the focus on core competencies to increase business value is not a new concept to anyone who has ridden in an elevator with a newly minted MBA. The Complemented Core Capabilities approach builds on the basic core-competency argument that capabilities reflected in skills and knowledge sets define the unique elements of a business and its competitive position in the marketplace. Non-core competencies, on the other hand, when performed to expectations, do not offer an opportunity for significant differentiation from your toughest competitors.
Non-core does not, however, mean unimportant or unnecessary. Poor execution of non-core functions can, of course, impair competitive position, but there is no business advantage in doing them well yourself if someone else can do them as well or better for you as their own core competency. This risk of non-core work done poorly has led a naturally risk-averse insurance industry to sequester non-core capabilities in house. That, in this market, is a strategic mistake, and it is based on the fallacy that only ownership delivers control. A model that delivers control without ownership, such as the Complemented Core Capabilities approach, can increase focus on core competencies with controllable risk. The risk, indeed, lies elsewhere. In an environment where talent in both core and non-core functions is becoming harder to find and more expensive to acquire and retain, insurers become particularly vulnerable; one commentator articulated this as follows:
Given the limited corporate resources and executive attention, if you focus on core competencies, who focuses on the other non-core but necessary elements of the value chain?[xix]
Stated differently, if an insurer’s success depends upon its core capabilities, should it divert its resources and energy from those core competencies to maintain capabilities in non-core functions? Complemented Core Capabilities as strategy is more than tactically shifting staff time from non-insurance tasks. For example, automation of tasks and transfer of knowledge through the creation of system rules in business process management systems may alleviate some pressure on an insurer’s staff.[xx] Nevertheless, increasing competition, rapid technology changes, an evolving regulatory environment, and demand for ever more innovative insurance products will still challenge the capabilities of the insurers’ employees.[xxi] Insurers who are willing to realistically assess their inadequacies and needs, and turn to experts outside their organization to complement their core capabilities, will be in a better position to survive and prosper than those that continue to stretch their executives and staff to cover broader and broader areas of responsibilities and execute processes and tasks beyond their core skills.[xxii] In addition, by looking outside the company for resources to enhance expertise and capabilities, insurers can access talent on an as-needed, variable-cost basis rather than adding to overhead or, alternatively, proceeding without the expertise because a full-time resource cannot be justified financially or acquired competitively.
An opportunity: Reduced-risk transformational change
The maturation of Web service and business process management technologies, the increasing availability of outsourced insurance talent and services, and the convergence of those capabilities into variable cost options for insurers present small insurers the opportunity to transform themselves into formidable competitive platforms. Michael Sutcliffe of Accenture characterizes the challenge and opportunity as follows:
High-performance businesses revisit and adapt their operating models as required to sustain competitive advantages over time. Outsourcing can allow companies to build new business capabilities rapidly, expand into new geographic markets and change internal systems and processes to support new business models. It reduces the risk associated with implementing transformational change.[xxiii]
Market forces will inevitably transform small insurers over the next five years. The crucial question for each company is whether it executes that transformation purposefully and strategically, or shifts reactively to wherever the market forces drive it. Each insurer has an opportunity to rethink the way it does business and find ways to extend its current capabilities and develop new ones. Some are embracing that opportunity.
Case study: Unity Life[xxiv]
Unity Life of Canada has embraced a business model that enabled the Toronto-based mutual life insurer to grow from $2 million to $50 million in settled premium in only four years. The company has been transformed from a struggling life insurance company into an innovative provider of unique insurance products through unique distribution channels. It accomplished its transformation by focusing its staff exclusively on core competencies while enhancing its capabilities in all processing and non-core functions by establishing strategic relationships with outside experts.
“About five years ago, we decided if we were going to survive and prosper in an environment where the larger mutuals had de-mutualized and there were mergers and acquisitions going on, we had to do something significantly different,” says Tony Poole, senior vice president of sales and marketing at Unity Life. So Unity Life management decided to create a virtual insurance company, spinning off its back-office operations into a separate company, now called Genisys.
“We recognized this was a completely different business model,” Poole says. “It would free us up to really do what we do best – our core competency – which is the manufacturing and distribution of products.” The idea was to transform Unity Life’s back office from an expense-driven operating division of a life insurance company into a revenue generator, Poole says.
With Unity Life as its original customer, Genisys – an end-to-end business process outsourcer (BPO) serving the life insurance industry – has since attracted several more customers, including CIBC, BMO Life, Gerber Life Insurance Co., and Manulife Financial. Unity Life recently completed its transformation by divesting itself of Genisys to better execute its new business model. Freed from day-to-day back-office operations, Unity Life also outsourced human resources and legal, valuation, and actuarial services. “We said, ‘What is our core expertise? It’s manufacturing, marketing and distribution of products,’” Poole says. By outsourcing the valuation and actuarial functions, Poole noted, Unity Life obtained best-of-breed talent that it otherwise couldn’t afford as a small insurer.
Indeed, Unity Life now functions very successfully with a core group of executives and employees who focus on developing profitable new business and retaining profitable current accounts, while complementing those core capabilities with technology and insurance services from expert providers. Unity Life has successfully executed a Complemented Core Capabilities strategy to transform itself from a small, struggling insurer into a thriving competitor.
The strategy seems particularly suited to markets characterized by commodity products, tight margins, and industry consolidation, according to Mike McGuin, senior marketing specialist at Toronto-based Genisys. He notes, “When you look at the landscape, insurance companies need to redefine their core competencies to continue to be viable down the road. That’s why outsourcing is an option they should look at – to reduce expenses, redefine their processes, and leverage best-of-breed technology that they would not be able to afford otherwise.”
Case study: SureProducts Insurance
SureProducts Insurance Agency, a Monterey, California, property and casualty program manager, has employed an aggressive Complemented Core Capabilities strategy to profitably underwrite and service approximately $10 million of California-based property and casualty insurance. Utilizing a rule-based platform provided by its sister company, ISCS, Inc., SureProducts manages and services the business with just four employees: a senior executive with deep underwriting expertise, a senior executive with substantial claim expertise, a field underwriter, and an office manager. All other functions are performed on an outsourced basis.
SureProducts has integrated its business rules into the ISCS system rules engine to facilitate a modified straight-through processing approach. “Our system enables us to function almost completely on an exception basis,” says Ernie Weilenmann, vice president, Underwriting. “We spend our time making decisions to assure that we write good business, not processing policies or managing a backend infrastructure.” Steve Broom, vice president, Claims, manages the claim function in a similar fashion. He notes, “We rely heavily on outside adjusters to perform claim tasks, but I can make the key decisions on our claims and be confident that our service requirements will be met.”
Managing a $10 million book of business with just four employees may seem like a fantasy when companies of similar size require 20+ employees, but SureProducts’ track record proves the model works. It has consistently written to a combined ratio of less than 65% over the last five years. Moreover, in the critical area of operating expenses, its total cost for the company infrastructure and outsourced services is 10%.
Getting there
Conceptual barriers can keep an insurer from “getting there.” Although the reasons vary from company to company, major factors are fear of losing control and internal cultural resistance to outsourcing.[xxv] These barriers can seem insurmountable until they are forcibly shattered by market forces and it is too late to adapt. But research shows that fears of outsourcing are most often not realized: far from placing their businesses at risk by seeking expertise from outside sources, a large majority of companies report that their processes and capabilities improved, according to research by Accenture.[xxvi] The researchers also observed that “Outsourcing provides the opportunity to reach beyond a company’s typical boundaries with internal staff and leverage new thinking and alternative ways of effective change.” Moreover, outsourcing unquestionably introduces a rapid infusion of advanced technology and ongoing access to enhancements. These are positive views of specific changes and increased competitive positioning. Still, it is change, and cultural resistance to change can be overcome only by effective insurer executives who are committed to assuring that their carriers can compete in the future and who lead their companies through that change.[xxvii] Anxiety over loss of control should diminish considerably once the insurer understands the capabilities provided by business process rules engines. As Andy Scurto, President of ISCS, Inc., relates,
Insurers are just beginning to grasp the potential of Web services and rules-based technology. Many executives think that because they now have a Web-enabled front end to their system that they have all the functionality they need. But if they deploy a Web services and rules-based platform, they can have every person who services their business, whether an employee, agent, outsourced service provider, or vendor, work on the insurer’s platform through a Web portal and then assure through business rules in the rules engine not only that those individuals meet service standards but also that exceptions to those standards are immediately escalated to company management. That is a more reliable and controlled environment than they have now.
It may seem counterintuitive to those still wedded to the owning-is-control model, but the reality is this: small insurers that are laboring to compete with client-server technology, even those that have deployed a Web front end, while asking staff to stretch themselves across diverse functional areas, have less control over their businesses than those moving to a Complemented Core Capabilities strategy built on advanced Web services and business process management technology. We must remember that while we call what we create “products,” we are more accurately executing business processes that deliver services according to designed rules. With that framework in mind, we can better adapt to the changing market. The small insurers that thrive in the immediate future will be those that get beyond their cultural barriers, adapt to new business models, and embrace the transformational opportunities now available to them. The reward is a competitive business delivering value to its customers, gainful employment for staff, and a significant return on investment for its owners. That’s worth striving for.
References


[i] See, “All Signs Pointing To Firmly Entrenched Soft Market,” National Underwriter Property & Casualty, October 29, 2007, p. 8.
[ii] See, “P-C Industry First-Half Profits Way Up, But Flat Premium Growth Raises Concern,” National Underwriter Property & Casualty, October 1, 2007, p. 8.
[iii] See, “Regulation of Property/Casualty Insurance: The Road to Reform,” Public Policy Paper, National Association of Mutual Insurance Companies.
[iv] See, “How Insurance Companies Can Beat The Talent Crisis,” Deloitte Development LLC, 2006.
[v] See, “Waging a War for Industry Talent,” Insurance Journal, September 3, 2007.
[vi] “3 Reasons to Replace Legacy Systems,” Best’s Review, May 2007, p. 82.
[vii] See, “KPMG’s Annual Insurance Industry Survey,” KPMG LLP, September 11, 2007.
[viii] See, “Focus On The Future: Options For The Mutual Insurance Company,” National Association of Mutual Insurance Companies, January 1, 1999. One option that NAMIC suggested was for small mutual insurers to move from a business model where they owned their service platforms to an environment that enabled them to share services with other insurers.
[ix] See, “Climate Change,” Insurance Journal, September 3, 2007. See also, “The Steady Evolution of Online Service,” Insurance Networking News, November 1, 2007.
[x] See, “Outsourcing to Play Larger Role Among Insurance Companies,” Outsourcing Center, January 2003.
[xi] See, “Staying Competitive,” TechDecisions, November 2007, p. 30.
[xii] See, “The great transformation: Business process outsourcing as the next step in the evolution of Financial Services,” The Point (Accenture), Volume Three, Issue 6, 2003.
[xiii] See, “Tipping Points in Insurance Automation,” ISCS, Inc., 2006.
[xiv] See, “Unlocking the Power of SOA with Business Process Modeling,” CGI Group, Inc., 2006, citing predictions by the Gartner Group.
[xv] See, “3 Reasons to Replace Legacy Systems,” Best’s Review, May 2007.
[xvi] See, “Rating Systems Move Out Into the Open,” Insurance Networking News, October 16, 2007.
[xvii] See, “A User’s Guide to BPM,” Doculabs, 2003.
[xviii] “Welcome to STP,” Best’s Review, October 2007, p. 121.
[xix] “Outsourcing Helps Firms to Focus on Core Competencies,” International Association of Outsourcing Professionals, 2006.
[xx] See, “3 Reasons to Replace Legacy Systems,” Best’s Review, May 2007, p. 83.
[xxi] “BPO in Insurance Sector: Pains and Prescription,” Wipro Technologies, 2002, p. 5.
[xxii] See, “Outsourcing to Play Larger Role Among Insurance Companies,” Outsourcing Center, January 2003.
[xxiii] See, “Creating an operating model for high performance: The role of outsourcing,” Outlook (Accenture), May 2004.
[xxiv] The Unity Life of Canada case study is excerpted from “Industry Moves toward Global Sourcing,” Insurance Networking News, February 1, 2005.
[xxv] “BPO in Insurance Sector: Pains and Prescription,” Wipro Technologies, 2002, p. 6.
[xxvi] “Driving High Performance Through Outsourcing: Achieving Process Excellence,” Outlook (Accenture), 2005, p. 5.
[xxvii] “Driving High Performance Through Outsourcing: Achieving Process Excellence,” Outlook (Accenture), 2005, p. 2.

Tom Trezise is president of Convergent Insurance Services. Tom possesses extraordinary depth of experience in the property-casualty business, including the operational, technology, financial, and contractual issues facing insurers, reinsurers, third-party administrators, and intermediaries. From start-up insurers to large international insurers, Tom has served the insurance industry for over 28 years as an insurance executive, general counsel, and trial attorney. He has led organizations with more than 1,000 employees and managed multimillion-dollar budgets. His roles have included VP Liability Claims with USF&G and VP Commercial Claims with St. Paul Companies/USF&G. With XL Vianet, Tom was a member of the senior management team that launched an Internet-based commercial insurance start-up, from business plan development through business process design, technology platform decisions, Web-tool design, and business operations.

Spreadsheet services: An efficient approach to implementing business logic

Introduction
Modeling, managing, and pricing risk are among the most important priorities for every insurance company. Actuarial, underwriting, and finance units develop sophisticated proprietary models to perform these tasks, and doing so requires a common, flexible, and easy-to-use analytical platform. As a result, spreadsheets have emerged as the platform preferred by the vast majority of insurance professionals: their visual nature and step-by-step auditing capabilities set them apart from traditional programming environments such as Visual Basic, Java, or C++ and from mathematical tools like Matlab and Mathematica. Today, almost every insurance company uses spreadsheets to manage risk in one way or another. However, as more insurance companies streamline and automate their business processes (including these complex models), they must confront a major downside of spreadsheet technology: spreadsheets are designed for single-user desktop environments and do not scale in an enterprise environment that must serve a large number of users concurrently. Facing this challenge, most insurance IT departments have attempted to rewrite those spreadsheets in a more scalable programming environment. Given the complexity of the models, this approach is often very expensive and time consuming. In most cases, by the time the IT department completes the rewrite, business units have already modified their models to keep up with changes in the marketplace. The result is never-ending projects that run vastly over budget and significantly reduce the agility of insurance organizations, leaving them less able to react to changes and opportunities in the marketplace.
This paper presents a technological alternative that enables insurance organizations to integrate their spreadsheet models with enterprise applications without having to rewrite and convert them to another platform. As a result, insurance organizations can experience substantial cost savings, react to changes in the marketplace more quickly, and take advantage of opportunities before their competitors do. It also encourages superior collaboration between business units and IT departments, enabling each to concentrate on their core functions.
Challenge
To stay competitive, insurance companies must constantly meet the challenge of properly managing their risks. Managing risk requires a collective effort from all parts of the organization; in particular, collaboration among the actuarial, underwriting, and finance departments is crucial. Insurers build sophisticated models to better understand and properly price their exposure, run “what if” scenarios to understand the effect of model variables, and design rules-based models for underwriting. To illustrate, the following is a partial list of the complex models used in insurance organizations:

  • data validation and scrubbing;
  • actuarial pricing;
  • rating engines;
  • reserve calculations;
  • product selection rule engines;
  • predictive models; and
  • underwriting engines.

Highly capable analytical platforms are necessary to build, test, and execute risk models, and insurance carriers have traditionally used spreadsheet software for this purpose. Several factors make spreadsheets an ideal platform for analytics:

  • Almost every insurance professional knows how to use spreadsheet software.
  • Hundreds of built-in functions simplify developing sophisticated models.
  • The familiar grid interface and built-in auditing tools enable users to visually follow complex algorithms.
  • Simple import/export features allow easy data manipulation.
  • Easy debugging is possible using built-in tools.

While spreadsheets are extremely powerful analytical tools, the fact that they are designed for single-user desktop environments is a major disadvantage. This becomes more evident, and more critical, as insurance companies move to web-based platforms that must integrate complex business logic and calculations that currently exist only in spreadsheet form.
Traditional approach
In its most simplified form, any enterprise insurance application has three major components (Figure 1): the data layer (database), the business layer (business rules and calculations), and the presentation layer (user interface).

[Figure 1: The three layers of an enterprise insurance application: data, business, and presentation]

The business layer is where complex spreadsheet models need to be integrated. In general, insurance companies have chosen to rewrite spreadsheet models in traditional programming languages. This is a long and expensive process (see Figure 2). It typically starts with business units writing specification documents that describe in extreme detail how their algorithms work, a tedious task that insurance companies either handle internally or outsource to a consulting firm. Once finalized, the specification document is delivered to the IT department, where software developers must understand the algorithm and code it. Because most software developers lack the skills and experience to understand complex insurance calculations, this step is often protracted and error-prone. After the code is completed, it is delivered to QA teams for testing; given the analytical nature of the code, business units are ideally involved alongside QA. During testing, the original spreadsheet models serve as reference points: results obtained from the application are compared with those obtained from the spreadsheet models, and a large number of test cases is typically used to ensure that every aspect of the insurance algorithm has been exercised. Inconsistencies between the spreadsheet models and the application are reported back to IT. At the risk of generalizing, these inconsistencies are often difficult for software developers to resolve because their understanding of the algorithm tends to be limited. As a result, testing becomes a long, iterative process that consumes valuable resources from both the business units and the IT department. At the end of the process, after all inconsistencies are resolved, business units sign off on the application and it is, finally, ready to be rolled out.

[Figure 2: The traditional rewriting process: specification, coding, testing, and sign-off]

Unfortunately, this is only part of the process. Business units continue to adjust their algorithms to stay competitive in the marketplace, and each adjustment must then be implemented in the insurance application. A process similar to the one described above is repeated for every such change.
The traditional process of implementing business logic and calculations is not only time consuming but very expensive, and it undermines an insurance organization’s ability to roll out new products quickly.
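To make the translation step concrete, the sketch below shows, with an entirely hypothetical rating formula, the kind of spreadsheet logic that developers must re-express in code and then verify, cell by cell, against the original workbook:

    # Hypothetical example: a premium formula that lives in a spreadsheet as
    #   Premium = BaseRate * TerritoryFactor * (1 - GoodDriverDiscount)
    # must be re-expressed by a developer under the traditional approach.
    # Every name and number here is illustrative, not an actual rating plan.
    TERRITORY_FACTORS = {"urban": 1.25, "suburban": 1.00, "rural": 0.90}
    GOOD_DRIVER_DISCOUNT = 0.10

    def rate_policy(base_rate: float, territory: str, good_driver: bool) -> float:
        """Hand-coded translation of the spreadsheet's rating cells."""
        factor = TERRITORY_FACTORS[territory]
        discount = GOOD_DRIVER_DISCOUNT if good_driver else 0.0
        return round(base_rate * factor * (1 - discount), 2)

    # QA must compare outputs like this one against the workbook's results
    # across a large battery of test cases: 800.00 * 1.25 * 0.90 == 900.00.
    assert rate_policy(800.00, "urban", True) == 900.00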
An efficient new approach – spreadsheet services
Software products have recently become available that process spreadsheets in a server environment and integrate them with other enterprise applications. These products eliminate the need to rewrite spreadsheet models in traditional programming environments. Further, existing spreadsheets can be used “as-is” or with minimal modifications in order to integrate with other insurance applications.
Figure 3 below illustrates this new approach, which we dub “spreadsheet services.” The spreadsheet engine is the central component, essentially replacing the functionality of desktop spreadsheet software. The majority of insurance carriers use Microsoft Excel as their desktop spreadsheet software. However, using Excel in server environments is not recommended by Microsoft; unstable behavior and deadlocks are some of the problems that Excel can cause when run in server environments.[1] A spreadsheet engine can be used to process spreadsheet files in a server environment without depending on the spreadsheet software with which they were created.[2]
[Figure 3: The spreadsheet services architecture, with the spreadsheet engine as its central component]
Because web applications require concurrent access by a large number of users, a spreadsheet engine must be designed and optimized to handle a high volume of requests and to perform reliably in multi-threaded environments.
The interface between the spreadsheet engine and software applications is another important component worthy of discussion. There are different ways to handle this interface. With recent developments in Service-Oriented Architecture (SOA), insurance organizations are moving to implement applications that support Web Services. A web service interface between the spreadsheet engine and the insurance application makes it easier for carriers to implement this new approach.
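As a minimal sketch of how these pieces might fit together, the following example wraps a server-side spreadsheet engine in a web service. The SpreadsheetEngine class here is only a tiny stand-in for a real commercial engine, and the cell names, formula, and endpoint are all illustrative assumptions:

    from flask import Flask, request, jsonify

    class SpreadsheetEngine:
        """Stand-in for a server-side engine that evaluates workbook formulas."""
        TERRITORY_FACTORS = {"01": 1.25, "02": 1.00, "03": 0.90}

        def __init__(self, workbook_path: str):
            self.workbook_path = workbook_path  # model maintained by the business unit
            self.cells = {}

        def set_value(self, cell: str, value) -> None:
            self.cells[cell] = value

        def get_value(self, cell: str) -> float:
            # A real engine would recalculate the workbook's dependency graph;
            # here one illustrative formula is hard-coded.
            if cell == "Outputs!AnnualPremium":
                factor = self.TERRITORY_FACTORS[self.cells["Inputs!TerritoryCode"]]
                return round(self.cells["Inputs!VehicleValue"] * 0.03 * factor, 2)
            raise KeyError(cell)

    app = Flask(__name__)

    @app.route("/rate", methods=["POST"])
    def rate():
        inputs = request.get_json()
        # One engine instance per request: no shared mutable spreadsheet state
        # under concurrent, multi-threaded load.
        engine = SpreadsheetEngine("rating_model.xlsx")
        engine.set_value("Inputs!TerritoryCode", inputs["territory"])
        engine.set_value("Inputs!VehicleValue", inputs["vehicle_value"])
        return jsonify({"premium": engine.get_value("Outputs!AnnualPremium")})

    if __name__ == "__main__":
        app.run()

A production deployment would parse and cache the workbook once and recalculate per request; the point of the sketch is the interface, not the engine internals.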


How do you select the right technology?
There are already several products on the market that allow spreadsheet models to be run in a server environment and be integrated with enterprise applications. While each has many features designed for different applications, it is important to identify those criteria that define the right technology for your insurance application:

  • Web services. Architectures based on Web Services have proven valuable for building enterprise applications in insurance organizations, so it is important to select a technology that integrates with existing Web Services platforms. Aside from the technological advantages, identical spreadsheet models can then be used by multiple applications, making it easier to build within an SOA environment. For example, one rating engine can serve internal quoting and underwriting systems as well as broker applications developed by external vendors; a Web Services-based rating engine that can be accessed internally and externally is easier to maintain and eliminates rating inconsistencies between the two (see the client sketch following this list).
  • Platform independence. Many insurance companies utilize Linux and Unix servers for their back office operations. Accordingly, platform-independent solutions provide the best alternative from a maintenance and operational point of view.
  • Performance. Running complex spreadsheet models in a server environment is a performance-intensive process that consumes significant CPU resources and memory. Performance-optimized solutions will therefore meet concurrency and response-time requirements of enterprise applications, without needing to scale up with additional hardware capacity.
  • Spreadsheet file integrity. Some available products convert spreadsheet files into program code (e.g., Visual Basic, C++, or a proprietary file format). This approach requires software developers to re-integrate the generated code with the overall application every time business units update their spreadsheets, which slows the rollout process and increases testing requirements. Converting spreadsheet files into proprietary formats also makes spreadsheet management more difficult as the number of files grows over time.
  • Small footprint. Processing spreadsheet models in a server environment is a back office operation that consumes significant server resources. General-purpose products that offer spreadsheet processing as an add-on feature will therefore consume valuable server resources and leave limited CPU capacity and memory for executing spreadsheets, so additional server capacity is often needed to meet performance requirements.
  • Grid computing. Insurance applications accessed by a large number of users typically require multiple servers to operate. Solutions that support grid computing will enable carriers to scale up their applications by simply adding new servers.

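To illustrate the reuse point made in the Web services criterion above, here is a short, hypothetical client sketch: an internal quoting screen and an external broker portal both call the same rating service, so neither embeds its own copy of the rating logic (the URL and payload are assumptions):

    import requests

    # Hypothetical internal endpoint exposed by the spreadsheet services layer.
    RATING_SERVICE_URL = "http://rating.example.internal/rate"

    def get_premium(territory: str, vehicle_value: float) -> float:
        """Fetch a premium from the shared rating service."""
        response = requests.post(
            RATING_SERVICE_URL,
            json={"territory": territory, "vehicle_value": vehicle_value},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["premium"]

    # An internal quoting screen and an external broker portal would both call
    # get_premium(); neither embeds its own copy of the rating algorithm.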
Benefits of the new approach

Short term
By adopting the spreadsheet-based approach, insurance organizations realize the benefits of accelerated application development and cost savings.
The spreadsheet services approach completely eliminates the time-consuming coding and testing of insurance algorithms. Coding complex business logic and algorithms tends to be the most time-consuming part of developing any insurance application; eliminating that tedium can have a profoundly positive impact on the project development cycle.
Traditionally, business units utilize business analysts to write specifications and test applications, while IT staff write the actual code and quality assurance teams perform extensive tests to validate the accuracy of the code. Using spreadsheet services virtually eliminates this process and substantially reduces project costs.
Another important benefit of the new approach is a better collaboration between business units and IT; each unit can focus on their core business functions, improving efficiency throughout the enterprise.
Medium term
Under the traditional approach, maintaining applications by periodically adjusting business logic requires heavy involvement from all parties, because the specification writing, coding, and testing processes must be repeated each time business units update their models. With spreadsheet services, business units need only provide IT with updated spreadsheet models, and new algorithms can be implemented with minimal system testing.
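Under this approach, deploying a revised model can be as simple as publishing a new workbook file rather than making a code release; the following sketch assumes a hypothetical directory layout and file-swap convention:

    import shutil
    from pathlib import Path

    MODELS_DIR = Path("spreadsheet-models")  # hypothetical directory read by the engine

    def publish_model(new_workbook: Path, model_name: str) -> None:
        """Replace the live workbook; the engine uses it on the next request."""
        MODELS_DIR.mkdir(parents=True, exist_ok=True)
        target = MODELS_DIR / f"{model_name}.xlsx"
        if target.exists():
            # Keep the prior version so a bad model can be rolled back quickly.
            shutil.copy2(target, MODELS_DIR / f"{model_name}.previous.xlsx")
        shutil.copy2(new_workbook, target)

    # Usage: the business unit hands IT the updated workbook; after smoke
    # testing, IT runs publish_model(Path("rating_model_v2.xlsx"), "rating_model").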
Insurance organizations also benefit from faster time to market, as updates to business logic and calculations are rolled out in days, rather than the weeks or months required by the traditional approach.
Long term
In the long term, insurance organizations benefit from this architecture as spreadsheet services pervade the organization and an increasing number of business units adopt the approach. Because it is based on SOA, multiple enterprise applications can access common Web Services for shared calculations and rules, so algorithms are served from a single point, eliminating redundancy among the applications used within the enterprise.
Typical insurance applications
Spreadsheet services can be utilized wherever spreadsheets are used, or can possibly be used, to model complex business logic and calculations. Actuarial pricing, underwriting and product rules engines, broker commission calculations, reserve calculations, and predictive modeling are only a few of the critical insurance processes where the new approach adds value.
Rating engines
Rating is typically a self-contained process in the policy lifecycle. Rating engines are simply software programs that return results based on programmed logic for a given set of inputs. In some cases they require database connectivity; in others they stand alone.
An ideal insurance rating system may be characterized as follows:[3]

  • It supports all lines of businesses;
  • It easily handles algorithm changes;
  • It has strong decision-support capabilities;
  • It supports customization, including state- or company-specific deviations;
  • It easily integrates with existing systems (e.g., policy administration); and
  • It supports multi-line operations.

The spreadsheet services approach meets all of these characteristics. The modeling capabilities of spreadsheets, used in conjunction with their many built-in formulas, enable the development of rating algorithms for even the most complex lines of business, providing a single source for all rating regardless of complexity.
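One way the state- and company-specific deviations listed above might be organized is a workbook per state that falls back to a countrywide model; the file naming and layout below are purely hypothetical:

    from pathlib import Path

    MODELS_DIR = Path("spreadsheet-models")  # hypothetical model repository

    def workbook_for_state(line_of_business: str, state: str) -> Path:
        """Select the state deviation if one exists, else the countrywide model."""
        deviation = MODELS_DIR / f"{line_of_business}_{state}.xlsx"
        countrywide = MODELS_DIR / f"{line_of_business}_CW.xlsx"
        return deviation if deviation.exists() else countrywide

    # e.g. workbook_for_state("personal_auto", "TX") resolves to the Texas
    # deviation when present, otherwise to the countrywide workbook.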
To respond to the dynamism of the insurance industry, carriers need the ability to quickly adjust their rates. Insurers often allocate sizeable maintenance budgets in IT departments to handle ongoing rate changes. A spreadsheet-based approach significantly reduces the burden on IT departments, frees up budgets and enables carriers to adjust their rates faster.
Conclusion
Solid risk management principles are crucial for every insurance organization. Actuaries, underwriters, and financial professionals develop sophisticated proprietary models to properly manage and price risk, and most of this modeling is done in spreadsheet environments because of their familiarity, flexibility, and features. Traditionally, the business logic already built into spreadsheet models has been rewritten when integrating those models with enterprise applications, a long and expensive process.
The spreadsheet services approach completely eliminates the need to rewrite business logic and calculations, while enabling business units to maintain control of their models by keeping them in a familiar format.
The spreadsheet services approach significantly reduces the cost of developing applications that rely on complex business logic, and it promotes a more collaborative relationship between business units and IT by allowing each to concentrate on its core competence. Business users remain in full control of the business logic, enabling faster time to market and greater profitability.
References


[1] Microsoft (2007). Considerations for server-side automation of Office. Retrieved from http://support.microsoft.com/kb/257757/en-us
[2] Microsoft (2007). Considerations for server-side automation of Office. Retrieved from http://support.microsoft.com/kb/257757/en-us
[3] Stephenson, S. (2004). Insurers need to rate their rating technology. National Underwriter, Property & Casualty, Issue 45.


Ugur Kadakal is Chief Executive Officer of Pagos, Inc., a software and IT consulting firm specializing in helping its clients integrate spreadsheet-intensive functions with enterprise applications. Insurance companies commonly use Pagos products to build web-based rating systems from existing spreadsheet rating tools; other applications include pricing, underwriting, and reserving, wherever sophisticated spreadsheet models are used. Prior to co-founding Pagos in 2002, Ugur held positions at AIR Worldwide, Inc., a leading catastrophe modeling company. Ugur holds a Ph.D. from Northeastern University.