Incentive-Based Primary Care: Cost and Utilization Analysis


Marcus J Hollander, MA, MSc, PhD; Helena Kadlec, MA, PhD

Perm J 2015 Fall; 19(4):46-56


Context: In its fee-for-service funding model for primary care, British Columbia, Canada, introduced incentive payments to general practitioners as pay for performance for providing enhanced, guidelines-based care to patients with chronic conditions. Evaluation of the program was conducted at the health care system level.
Objective: To examine the impact of the incentive payments on annual health care costs and hospital utilization patterns in British Columbia.
Design: The study used Ministry of Health administrative data for Fiscal Year 2010-2011 for patients with diabetes, congestive heart failure, chronic obstructive pulmonary disease, and/or hypertension. In each disease group, cost and utilization were compared across patients who did, and did not, receive incentive-based care.
Main Outcome Measures: Health care costs (eg, primary care, hospital) and utilization measures (eg, hospital days, readmissions).
Results: After controlling for patients' age, sex, service needs level, and continuity of care (defined as attachment to a general practice), the incentives reduced the net annual health care costs, in Canadian dollars, for patients with hypertension (by approximately Can$308 per patient), chronic obstructive pulmonary disease (by Can$496), and congestive heart failure (by Can$96), but not diabetes (incentives cost about Can$148 more per patient). The incentives were also associated with fewer hospital days, fewer admissions and readmissions, and shorter lengths of hospital stays for all 4 groups.
Conclusion: Although the available literature on pay for performance shows mixed results, we found that British Columbia's incentive-payment funding model for primary care may reduce health care costs and hospital utilization.


We present the findings from an evaluation of a new funding model for providing primary care services in British Columbia (BC), Canada. The funding model is based on incentive payments in the fee-for-service payment system for general practitioners (GPs). We compared the costs and utilization of services for patients with several chronic conditions for which additional incentive payments are available. Our analyses show that the incentive payments can reduce overall costs to the health care system, but the amount of cost avoidance depends on a number of factors, including the type of chronic condition. We conducted the analyses in two different ways, each reflecting a different approach, and found that both approaches led to the same conclusions. Thus, the purpose of this article is twofold. First, we report the main findings about the effectiveness of incentive payments. Second, we compare the two analytic approaches: one based on interactive action research, which is more familiar to health policy makers, and the other based on the statistically more rigorous propensity score analysis, which is more familiar to analysts and economists.

Funding Model for Primary Care in British Columbia

To set the context for the incentive payments and the funding model in BC, we begin with a brief description of some background and history. (For more details about how primary care is delivered in BC, see the article by MacCarthy and Hollander.1) In accordance with the Canadian Constitution, the provision of health care services is a provincial responsibility. The federal government collects both federal and provincial taxes. It then transfers funds to the provinces to pay for certain services such as health and education. Under the Canada Health Act,2 medical and hospital services are provided to Canadians without a charge or user fee.

In BC, Canada's most western province, primary care and drugs are provincially insured services in which providers bill provincial government insurance programs directly. The majority of medical services are billed to the Medical Services Plan (MSP) on a fee-for-service basis. Eligible payments for drugs and pharmacy services are billed to the Pharmacare Plan. Hospital services and all other health services are provided by Regional Health Authorities (RHAs). Lump sum payments for these services are made by the Ministry of Health directly to the RHAs. There is a complex set of rules regarding copayments for other services, such as drugs, long-term care, and allied health services.1

In Fiscal Year 2010-2011, 49.8% of GPs in BC worked in solo practices and 34.8% worked in small group practices of 2 to 4 GPs. Conceptually, a general practice is very similar to the US patient-centered medical home.1 Typically, one service is provided during one visit to a GP, although if a GP provides a service and a procedure during a single visit, s/he can bill for two activities. This allows GPs to care for a range of patients, including those needing complex care. On the basis of claims made to BC's MSP, and excluding part-time GPs (defined by Doctors of BC as those making less than Can$82,000 in the 2011-2012 fiscal year), the average annual payment to a GP was Can$255,522 in Fiscal Year 2011-2012 (April 1, 2011, to March 31, 2012).

The impetus for introducing incentive payments in the fee-for-service model came from discussions with GPs about future directions in the late 1990s and early 2000s, when BC was experiencing a decline in the number of full-service family practitioners. In response to government proposals, a GP representative responded with the question "Why don't you just pay us for what you want us to do?"1 This led to the formation of the General Practice Services Committee (GPSC), a joint committee of the BC Ministry of Health, the BC Medical Association (now Doctors of BC), and the Society of General Practitioners of BC. (Representatives from BC's Regional Health Authorities also attend as guests.) Since its formation in 2002, this committee has developed and implemented a number of initiatives to promote enhanced family practice, including the Full Service Family Practice Incentive Program, which we refer to as the Incentive Program here.

The approach taken in BC at that time was rather unique in several ways. First, it involved joint efforts on the part of the provincial government and the two professional associations. Second, rather than seeking structural solutions (eg, community clinics, large group practices) to an operational problem (GPs leaving their practices), as was more typical at that time, BC responded with an operational solution, namely providing GPs with additional payments in the fee-for-service system. This solution is what the GPs believed they actually needed to provide enhanced care to their patients (eg, developing care plans and taking more time), particularly to those patients with chronic or complex conditions.1 The incentive payments were initially implemented for GPs to provide enhanced, guidelines-based care to patients with diabetes and congestive heart failure (CHF) in 2003. Over time, other chronic conditions were added. In this article, we report on four main conditions that capture most of the patients who are high users of the health care system: diabetes, CHF, chronic obstructive pulmonary disease (COPD), and hypertension.

Evaluation of Incentive Payments and the Incentive Program

Do incentive payments, as a form of pay for performance, enhance primary care? The literature suggests that incentive payments have led to mixed results across various jurisdictions,3-6 including the Quality and Outcomes Framework in England,7-11 and in Canada.12,13 Furthermore, the term pay for performance can have a range of meanings. In its true sense, pay for performance refers to payments for specific outcomes that improve the health of patients, populations, or both. In actual practice, however, pay for performance often refers to payments for conducting certain process-related activities or achieving "measures," such as performing immunizations or ordering certain tests (eg, for diabetes). The latter would be better labeled as pay for activity, not performance.14 With this distinction in mind, the incentive payments currently offered in BC are also a form of pay for activity, similar to those in other jurisdictions.

How then should the Incentive Program be evaluated as pay for performance for enhancing health outcomes? Several factors were considered in setting up the evaluation framework for the GPSC's Incentive Program. First, in recognition that there would be methodologic and other shortcomings in evaluating the performance of the program at the level of individual GPs, the GPSC decided to look at the performance of the program at the system level; for example, Is overall medical care improving? Has value for money increased? Second, because each incentive was introduced on a provincewide basis, it was not possible to conduct a formal evaluation (eg, using randomized controlled trials). Third—and importantly for the ongoing work and evolution of the GPSC—to be useful, evaluation results were needed reasonably quickly so that the GPSC could review its policy and program development on the basis of new knowledge specifically targeted to its needs, and make evidence-based course corrections as needed.

With these factors in mind, we chose an evaluation approach that was methodologically rigorous while being transparent and understandable to program developers and policy makers as well as to researchers. We refer to the approach as Applied Rapid Response Research (R3). It falls in the frame of reference of action research15; it is applied, rather than technical or basic research, and it has a major knowledge transfer and translation component. Like action research,15 R3 is rigorous, is aimed directly at key questions and decision points for policy makers, and is interactive between policy and program development and evaluation. It provides results that are clear and understandable to program developers and policy makers.

Working with the GPSC, we used the Applied R3 approach to provide quick results regarding the Incentive Program. The results have generated considerable knowledge transfer/translation across Canada and internationally. The findings from our analyses of the Incentive Program have generated empirical evidence for cost avoidance that is associated with increased continuity of care.1,16,17 Similarly, findings from our evaluation of the various learning modules of the Practice Support Program, a GPSC-funded continuing education program for physicians,18 have led to contributions to the primary care literature.19-21

To evaluate the Incentive Program at the system level, we adapted the R3 approach to analyses of BC Ministry of Health's administrative databases, with the goal of estimating and comparing the relative health care cost and utilization patterns of incentive-based care. We needed a rapid, rigorous approach that would allow nonresearchers to see the patterns of relationships for themselves in an easy, transparent manner. The basic idea behind the analyses was to create and to compare two groups of patients: one group who received incentive-based care and the comparison group who received standard (ie, nonincentive-based) care.

Because our main outcome variable was the total cost of care, the incentive-based and nonincentive patient groups needed to be similar to each other in terms of variables related to cost (see more on this in the next paragraph). In working with the GPSC members, many of whom were not researchers, we developed and used an analytic adjustment procedure in which we equated, or adjusted, the two groups on these cost-related variables so that the GPSC members could see how the costs related to the incentives changed in the context of these variables. This approach is the same as indirect standardization in epidemiology, except that rather than estimating outcome variables related to the incidence and/or prevalence of disease, our outcome variables were cost and service utilization patterns.

The specific cost-related variables on which the groups were adjusted were age, sex, level of service need, and continuity of care. Continuity of care was operationally defined as a patient's attachment to practice and was found to be strongly and inversely related to health care costs.16,17 Attachment level is defined as the percentage of all primary care services provided by 1 practice. The rationale is provided by the following example. Suppose a patient has 12 services in a year, and 9 of those services are from 1 practice. The patient's attachment level would be 75% (9 of 12 services provided by the main practice). However, if the main GP in the practice provides only 6 services and 3 other services are provided by locum tenens or colleagues in the GP's practice (ie, the billings go through the same payee number), the attachment level for patients seen by the main GP would be 50% (6 of 12 services), whereas the attachment level for the overall practice would be 75%. Given that the 3 other services in the practice are provided on behalf of the main GP and not by other separate practices (eg, drop-in clinics or GPs working in Emergency Departments), it was our view that the more appropriate indicator of continuity of care is attachment to the practice of the main GP.
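The attachment-level calculation described above can be sketched in a few lines. This is an illustrative sketch only, not the authors' code; the practice labels are hypothetical.

```python
# Hypothetical sketch of the attachment-level calculation described above.
# `services` maps a practice (payee) identifier to the count of primary
# care services that practice billed for one patient in a year.

def attachment_level(services: dict[str, int]) -> float:
    """Percentage of all primary care services provided by the patient's
    main practice (the practice that billed the most services)."""
    total = sum(services.values())
    if total == 0:
        return 0.0
    return 100.0 * max(services.values()) / total

# The worked example from the text: 9 of 12 services from one practice.
attachment_level({"practice_A": 9, "practice_B": 2, "walk_in_clinic": 1})  # 75.0
```

Note that, consistent with the authors' reasoning, services billed through the same payee number (locums, practice colleagues) would be counted under one practice rather than split by individual GP.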

For level of service need, we used as a matching variable the patients' Resource Utilization Band (RUB) designation, which is available in the BC Ministry of Health administrative databases. The RUB designation is a classification system developed by Johns Hopkins University, Baltimore, MD.22 The main groupings are categorized into Adjusted Clinical Groups, which are clinical groupings that incorporate age, sex, and the number and type of different diagnostic conditions the patient has. These can then be rolled up into 6 broader RUB categories ranging from 0 to 5, with 5 indicating very high care needs. (The interested reader is referred to the Johns Hopkins Web site for more details.22) This system is in wide use not only in the US but also internationally.23-26

For presenting results to a scientific audience, we also analyzed the administrative data using propensity score analyses, which are increasingly used in health services research to assess treatment effectiveness in observational studies when randomized control trials are not possible. For example, propensity score analysis has been used to assess the quality of diabetes care,13 COPD maintenance therapies,27 the costs and lengths of stay of total hip replacement,28 the cost-effectiveness of open laparoscopic appendectomies,29 and the cost-effectiveness of drug-eluting stents in patients with acute myocardial infarction.30

In summary, we were interested in exploring the impact of the Incentive Program on health care costs while controlling for several key cost-related variables. Having access to the BC Ministry of Health administrative databases, which consist of a series of registries that contain the records of people with chronic conditions, we were able to examine and compare costs and hospital utilization patterns for patients on the registries for diabetes, CHF, COPD, and hypertension.

Methods

Patient Selection

The BC Ministry of Health administrative database contains a series of registries of people with chronic conditions. To place patients on a registry, the Ministry uses a complex formula based on diagnostic codes from hospital and primary care visits and common drugs used to treat the given condition.

For our analyses, we extracted all patients in Fiscal Year 2010-2011 who did and did not receive incentive-based care and who were on the registries for each of the following four chronic conditions: diabetes, CHF, COPD, and hypertension. Patients in a given registry, such as diabetes, may have diabetes alone or may have diabetes plus other chronic conditions, and may thus appear on more than one registry. It should also be stressed that we were not dealing with samples in our analyses. Rather, we were dealing with a subset of the population, which included all BC patients who met our selection criteria.

Patient selection was made with the following additional considerations. For each chronic condition, we excluded people who died and who were estimated to be in a long-term care facility during Fiscal Year 2010-2011 because we wanted to include only patients who resided mostly in the community for an entire year. We also excluded patients with hospital costs greater than Can$100,000, the rationale being that if the average hospital cost is Can$1000 per day, an annual cost of Can$100,000 would imply the patient stayed in hospital for 100 days. Our focus in these analyses is on primary care; thus, we wanted to select patients who spent most of their time living in the community. The number of patients excluded on the basis of this criterion ranged from 7 (fewer than 0.01% of diabetes) to 29 (0.06% of COPD) patients per registry at RUB Level 4, and 261 (0.76% hypertension) to 296 (1.30% COPD) patients at RUB Level 5. For other analyses,16,17 patients with billings made by more than 25 different payees or service providers were also excluded because this would make them atypical users ("outliers") of the health care system. However, no patients were eliminated on the basis of this criterion in the current study.

As most incentives were developed for patients receiving care for a chronic condition, we selected people with somewhat higher care needs and those who saw their GP on at least a moderately regular basis. First, we selected people in RUB Levels 3 through 5. Second, we selected patients who had at least 5 GP services in a given year; these could be, but were not required to be, visits related to the particular chronic condition. Relatively few patients in RUB Levels 3 to 5 had fewer than 5 GP services in Fiscal Year 2010-2011 (10.5% of diabetes, 14.8% of hypertension, 7.3% of CHF, and 8.9% of COPD, with most of these at RUB Level 3).

Access to data for our analyses was obtained through a BC Ministry of Health Privacy Impact Assessment in conformance with the Freedom of Information and Protection of Privacy Act (Privacy Act).31 Approval of the Privacy Impact Assessment ensures that any collection, use, and disclosure of information conforms to all existing legislation, including the Privacy Act. The requirements for conducting research under the Privacy Impact Assessment agreement are similar to those imposed by ethics review boards.

Outcome Variables

Our outcome variables were the total annual (Fiscal Year 2010-2011) costs of health care and a number of indicators of hospital utilization. Specifically, cost variables included costs to the government from the provincial MSP (ie, GP costs, specialist costs, and diagnostic facility costs), hospital costs, pharmacy costs, and total costs (the sum of all cost categories). The utilization variables included were the number of hospital days per 1000 patients, net number of admissions, readmission rates, and average length of stay.
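The utilization indicators named above are straightforward to compute from admission records. The sketch below is illustrative only (the record layout and data are made up, not drawn from the Ministry of Health databases).

```python
# Illustrative calculation of the hospital utilization indicators:
# hospital days per 1000 patients, number of admissions, readmission
# rate, and average length of stay. Data layout is hypothetical.

def utilization(admissions, n_patients):
    """admissions: list of (patient_id, length_of_stay_days, is_readmission)
    for one patient group; n_patients: total patients in that group."""
    total_days = sum(a[1] for a in admissions)
    n_adm = len(admissions)
    return {
        "hospital_days_per_1000": 1000.0 * total_days / n_patients,
        "admissions": n_adm,
        "readmission_rate": sum(1 for a in admissions if a[2]) / n_adm,
        "avg_length_of_stay": total_days / n_adm,
    }
```

Computing these per group (incentive vs nonincentive) and differencing gives comparisons of the kind reported in Table 8.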

Analytic Adjustment Procedure

Many readers will have been trained in a health-related discipline and will be familiar with concepts from epidemiology such as age and sex standardization. Many social science disciplines also adjust data to control for confounders based on differential age and sex distributions (and distributions for other key variables). Thus, epidemiologic standardization is actually a subset of a broader concept of adjustment, which "encompasses both standardization and other procedures for removing the effects of factors that distort or confound comparison."32

The 2 groups, those who did and those who did not receive incentive-based care, were adjusted on 4 key cost-related variables. Those variables were as follows: 1) age, categorized into 5 groups or strata: 0 to 44 years, 45 to 59 years, 60 to 69 years, 70 to 79 years, and 80 years and older; 2) sex: male or female; 3) RUB: Levels 3 through 5; and 4) attachment to practice, defined as the percentage of all services provided by the primary care practice that provided the most services to the patient,16,17 with categories or strata of 0% to 39% attachment, 40% to 59% attachment, 60% to 79% attachment, 80% to 89% attachment, and 90% to 100% attachment.

Because of the highly variable nature of the costs of the different incentives and services associated with the different comorbidities, the use of constructed variables such as number of comorbidities as a matching variable was not considered to be appropriate in our cost analyses.
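The adjustment procedure described in this section is, as noted, the same logic as indirect standardization: stratum-specific mean costs in one group are reweighted by the other group's stratum distribution so that both groups are compared on the same age/sex/RUB/attachment mix. The following is a minimal sketch of that idea, not the authors' actual code; strata and costs are hypothetical.

```python
# Minimal sketch of the analytic adjustment (indirect standardization):
# reweight the comparison group's stratum-specific mean costs by the
# incentive group's stratum distribution, yielding the expected mean cost
# if the comparison group had the incentive group's case mix.
from collections import defaultdict

def adjusted_mean_cost(comparison, incentive_strata_counts):
    """comparison: list of (stratum, annual_cost) for nonincentive patients,
    where stratum is any hashable key, eg (age_group, sex, rub, attachment).
    incentive_strata_counts: {stratum: n_patients} for the incentive group."""
    sums, counts = defaultdict(float), defaultdict(int)
    for stratum, cost in comparison:
        sums[stratum] += cost
        counts[stratum] += 1
    total_n = sum(incentive_strata_counts.values())
    # Weight each stratum's mean cost by the incentive group's stratum size
    return sum((sums[s] / counts[s]) * n
               for s, n in incentive_strata_counts.items()
               if counts[s] > 0) / total_n
```

Comparing this expected cost with the incentive group's observed mean cost gives an adjusted cost difference of the kind shown on the right side of Table 2.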

Propensity Score Analyses

The method for propensity score analysis is basically a 2-stage analysis. In Stage 1, each patient in the "treatment" group (corresponding to our incentive-based care group) is first matched with a patient in the comparison group who matches him/her on each of the other "matching" variables (eg, age, sex). The matching is done by computing a propensity score for each patient (a linear combination score of the matching variables) and matching the patients in the 2 groups on these propensity scores. In Stage 2 of the analysis, the 2 groups of matched patients are compared on the outcome variables of primary interest. For further discussion of this method and some of the key issues that must be considered, see Arbogast et al,33 Austin,34 Baser,35 Manca and Austin,36 Schneeweiss et al,37 Wilde and Hollister,38 Rosenbaum and Rubin,39 and Dunn et al.40

Estimates of patients' propensity scores were obtained using probit regression.40 For matching patients on the propensity scores, we used one-to-one nearest neighbor matching without replacement; because of the size of our data sets, there were large and similar numbers of patients in each of the two groups (incentive and nonincentive) with identical propensity scores. The quality of the matching was assessed for each analysis. Once matched, the average costs for patients who received incentive-based care were compared with their matched counterparts who did not receive incentive-based care, using paired samples t tests.38
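Stage 2's one-to-one nearest neighbor matching without replacement can be sketched as follows. This is a simplified illustration, not the study's implementation: the propensity scores are assumed to come from Stage 1 (the authors estimated theirs with probit regression), and all identifiers and values are hypothetical.

```python
# Sketch of 1:1 nearest neighbor matching without replacement on
# precomputed propensity scores (Stage 2 of the analysis described above).
# Each treated (incentive) patient is paired with the closest remaining
# comparison (nonincentive) patient; matched pairs can then be compared
# on cost with a paired samples t test.

def nearest_neighbor_match(treated, comparison):
    """treated / comparison: lists of (patient_id, propensity_score).
    Returns a list of (treated_id, comparison_id) matched pairs."""
    pool = list(comparison)  # candidates still available for matching
    pairs = []
    for tid, score in sorted(treated, key=lambda p: p[1]):
        if not pool:
            break
        # Closest remaining comparison patient by propensity score;
        # "without replacement" means each candidate is used at most once.
        j = min(range(len(pool)), key=lambda k: abs(pool[k][1] - score))
        pairs.append((tid, pool.pop(j)[0]))
    return pairs
```

In practice the matching is done on the smaller group, as the authors note for the diabetes and hypertension analyses, so every member of that group receives a match.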

As was the case for the adjustment procedure, the matching was done on age group, sex, RUB level, and attachment level. Matching patients on comorbidities in the propensity analyses was not feasible. Some groupings of all possible combinations of the main comorbidities resulted in very small numbers of patients who could not be matched adequately. Furthermore, as noted earlier, using constructed variables such as simple counts of comorbidities as a matching variable was not appropriate in our cost analyses because the incentive and other costs associated with different comorbidities are highly variable. Thus, for these analyses, we selected patients who had only diabetes (ie, they did not have CHF, COPD, hypertension, or other comorbidities for which an incentive could be billed) and, similarly, patients who had only hypertension. We chose diabetes and hypertension because these two conditions had the largest numbers of patients of the four conditions without any comorbidities.

Finally, because we needed to be able to communicate the findings from our analyses to diverse audiences, we employed two different analytic methods in our analyses of the provincial data. Use of these methods enabled us to compare the findings across methods and across chronic conditions.

Results

Table 1 provides the basic demographic description of the patients in our analyses. A scan of the percentages indicates that there were some differences between the incentive and nonincentive groups across sex, age, RUB level, and level of attachment; thus, adjusting for these variables was warranted.

We present the results of the cost analyses in four sections. First, we present the cost estimates, in terms of cost avoidance, for incentive-based care obtained by the analytic adjustment method, and compare these with unadjusted, or raw, costs for each of the four chronic conditions. Second, we present and compare the cost estimates for diabetes and hypertension obtained using the propensity score analyses. Third, we report estimates of the overall cost avoidance that include the cost of the incentives themselves for each of the four chronic conditions, to provide an overall financial picture. Finally, using the adjustment method we report the impact of the Incentive Program on hospital utilization patterns.

Impact of Incentives on Costs Based on Analytic Adjustment Method

Table 2 presents the cost estimates for the 4 major chronic conditions using the analytic adjustment method. It shows the raw costs (ie, simple comparisons of costs for patients who did and did not receive incentive-based care, without adjustment) as well as the costs adjusted for age, sex, and RUB and attachment levels. Raw, unadjusted cost estimates indicate that patients who received incentive-based care cost, on average, less than those who did not, with raw annual cost differences ranging from Can$353 for diabetes to Can$812 for COPD (see "Raw Cost" columns in Table 2).

Costs adjusted for age, sex, RUB level, and attachment level (see right side of Table 2), however, led to different results. Specifically for diabetes, on the basis of the adjusted costs, patients who received incentive-based care actually cost, on average, Can$148 (2.99%) more than patients who did not. After adjusting for only age, sex, and RUB level (not shown here), the costs were similar for patients who received incentive-based care (Can$4993) and those who did not (Can$5059). Although the difference of Can$66 was small, these estimates were in the same direction as the raw total cost findings. However, with the additional adjustment for attachment level, cost estimates for patients who received incentive-based care were higher than for those who did not receive incentive-based care. For the other 3 chronic conditions, incentive-based care adjusted for age, sex, RUB level, and attachment level led to lower costs, on average, with the differentials ranging from Can$96 for CHF to Can$496 for COPD.

It is also of interest to compare the impact of the incentives on the different cost categories—GP, specialist, hospital costs, and so on—across the disease categories. For example, for all groups of patients, those who received incentive-based care incurred higher GP costs (primarily because of the costs of the incentive payments) and lower specialist costs (whether we consider raw or adjusted estimates) compared with those who did not receive incentive-based care. When the costs were adjusted for age, sex, RUB level, and attachment level, hospital costs were considerably lower for patients with incentive-based care (in all disease groups). However, pharmacy costs were higher for patients with diabetes and COPD (but not CHF and hypertension).

The impact of adjusting for comorbidities was examined for diabetes and hypertension. Findings are presented in Table 3 for patients who appeared only on the diabetes registry (and no other registries) or only on the hypertension registry. Compared with the estimates in Table 2, we noted that even though the overall total costs were lower for both incentive and nonincentive groups (by about Can$1700 for diabetes and Can$500 for hypertension), the adjusted cost differences were comparable, including the change in the direction for diabetic patients. For patients with only diabetes who received incentive-based care, the average annual total cost per patient was Can$3390, which was Can$147 higher than the average cost for diabetic patients who did not receive incentive-based care. For hypertension-only patients, those who received incentive-based care had average annual costs of Can$2674, which was Can$269 less per patient than those who did not receive incentive-based care. This provides some evidence that the pattern of results was similar for patients with only 1 chronic condition and those with multiple chronic conditions.

Impact of Incentives on Costs Based on Propensity Score Analyses

The effectiveness of the matching of patients, using propensity scores based on the 4 variables, is displayed in Table 4, which shows the group means on the 4 variables before and after propensity score matching. The numeric values of the means are not meaningful in and of themselves (because these are categorical variables with the numbers indicating category "labels"), but they do show how the 2 groups became more similar after the matching procedure. For the diabetes-only patients, the overall reduction in bias ranged from 16% (for sex) to 96% reduction for RUB level (Table 4, last column). For the hypertension-only patients, the reduction was at least 50% on each of the covariates except sex (for which the bias was already very small before the matching procedure, at 1.6%). Note that the matching was always done for the N of the smaller of the 2 groups. For diabetes, the nonincentive group was smaller (N = 60,535), which means all nonincentive group members were matched with a patient from the larger incentive group, and their means remained the same after matching, but those of the incentive group changed. The converse can be seen for the hypertensive patients.

The main results of this analysis were the paired samples t tests conducted on the 2 matched groups of patients for each cost variable (Table 5). After matching on age group, sex, RUB level, and attachment level, diabetes-only patients who received incentive-based care cost an overall Can$97 more than those who did not receive incentive-based care. It is also interesting to look at the different cost categories; when looking only at hospital costs, patients who received incentive-based care cost Can$156 less than those without incentive-based care, after matching on the 4 covariates.

For hypertension-only patients, incentive-based care made a larger difference in the impact on the costs. For each type of cost, patients who received incentive-based care had lower costs. Overall, the incentive-based care recipients with hypertension only had total health care costs of Can$375 less than patients without incentive-based care (Table 5).

These results can be compared with those obtained on the same subpopulation of patients using the adjustment method shown in Table 3. The two sets of results are shown in Table 6. The findings were very similar, providing support for the adjustment method. The adjustment method provided a more conservative estimate of cost avoidance than did the propensity score analysis.

Additional Analyses: Overall Costs of Incentive Payment Program

The cost estimates presented in Table 2 are only for patients who were selected for analysis using our inclusion and exclusion criteria, which allowed us to compare costs across the two groups for high-care-needs patients. However, to obtain a more complete picture of the cost (or cost avoidance) of the incentive payment program for the entire health care system, we estimated the overall cost, including the cost of the incentives, for all patients in each of the four chronic conditions (ie, including patients at all RUB levels and those with fewer than five services).

Table 7 presents the overall cost avoidance of the Incentive Program for the four chronic conditions. Where the cost reductions from the incentives were less than the costs of the incentives themselves (see negative net values in Table 7), the incentives constituted a net additional cost to the health care system; where those cost reductions were nonetheless positive, they still represented a partial return on investment. The overall cost estimates presented in Table 7 show that the Incentive Program resulted in cost avoidance for patients with CHF, COPD, and hypertension, but not diabetes. However, it should be noted that the costs presented in Table 7 are not additive across chronic conditions because the subpopulations of patients in these analyses overlap.

Impact of Incentives on Service Utilization

The Incentive Program also had an impact on hospital utilization outcome measures. Using the adjustment method, we compared hospital utilization for patients who received incentive-based care with those who did not. Across all four chronic conditions, patients who received incentive-based care had fewer admissions, fewer days in the hospital, fewer readmissions, and shorter lengths of stay (Table 8).

Discussion

Our analyses yielded interesting results. Incentive payments can and do avoid costs for the health care system, although the amount depends on which costs and which chronic conditions are examined, and in general they reduce patients' use of more costly hospital services. The incentives were associated with cost avoidance for patients with CHF, COPD, and especially hypertension, but not for patients with diabetes. This difference appears to arise because the incentive-related avoidance of hospital costs was lower, in both the raw and adjusted costs, for the diabetes group. For example, Table 2 shows that the adjusted hospital cost differential for the diabetes group was Can$187 (Can$2318-Can$2131), compared with Can$248 for hypertension, Can$629 for COPD, and Can$397 for CHF. In addition, although all four conditions showed a decrease in the measures of hospital utilization in the incentive group, the decrease was consistently smaller for the diabetes group. For example, Table 8 shows that the difference in the number of hospital days per 1000 patients (236 fewer days in the incentive group) was smaller for diabetes than for each of the other chronic conditions (257 fewer days for hypertension, 536 for COPD, and 419 for CHF). Similarly, the difference in the net number of admissions per 1000 patients was smallest for diabetes (12.5 admissions), compared with hypertension (22.1 admissions), COPD (32.5 admissions), and CHF (21.2 admissions).
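As a quick arithmetic check, the figures quoted above from Tables 2 and 8 can be tabulated directly to confirm that the diabetes group shows the smallest incentive-related decreases:

```python
# Adjusted hospital cost differential for diabetes (Table 2), in Can$.
diabetes_hospital_diff = 2318 - 2131

# Differences per 1000 patients between incentive and non-incentive
# groups, as quoted in the text from Table 8.
fewer_hospital_days = {"diabetes": 236, "hypertension": 257, "COPD": 536, "CHF": 419}
fewer_admissions = {"diabetes": 12.5, "hypertension": 22.1, "COPD": 32.5, "CHF": 21.2}

smallest_days = min(fewer_hospital_days, key=fewer_hospital_days.get)
smallest_admissions = min(fewer_admissions, key=fewer_admissions.get)
print(diabetes_hospital_diff, smallest_days, smallest_admissions)  # → 187 diabetes diabetes
```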

Regarding the two analytic methods, we showed that the adjustment method used to present the cost and utilization findings to health care policy makers was sound. Because each analysis was conducted separately for each chronic condition, there was some overlap in the patient populations selected for the analyses reported here. As shown in the Results section, however, the findings were consistent between analyses of patients with only one chronic condition and those with several chronic conditions. Thus, comorbidities did not present an issue in our interpretation of the Incentive Program.

We also validated the adjustment method by comparing its estimates with those obtained from the propensity score matching analyses. The two methods provided similar results regarding the impact of the incentives on health care costs when adjusting or matching (ie, controlling) for four patient cost-related variables, and the similarity of the cost estimates across the two methods strengthens the conclusions drawn from both analyses. Thus, the overall results indicate both cost avoidance and reduced hospital utilization for patients who received guidelines-based care supported by incentive payments.

One limitation of our study is that even though we matched the patients in the two groups on age, sex, RUB level, and attachment level, we do not fully know how similar the two groups were on other potentially relevant variables. However, previously published studies of cost analyses that used a wider range of independent variables (ie, patient's median household income, and physician's sex, age, and place of graduation) indicated that these other variables had a comparatively much smaller impact on costs in BC.15 In this regard, a second limitation is that the results of this study are specific to the BC context and thus are not directly generalizable to other contexts. We hope that the findings reported here spur further research in other jurisdictions and are of broader interest, particularly to health care policy makers and funders.

Our analyses examined the cost-effectiveness of the incentive-based model introduced in BC. The findings do not, however, address the bigger issue of whether the operational solution introduced to resolve an operational problem (ie, the Incentive Program) is more cost-effective than a structural solution.1 It would be informative and interesting to compare the BC approach to other innovations in primary care delivery and funding models across Canada, such as community clinics and/or large group practices, and to assess the benefits and shortcomings of each approach using the same outcome measures. Such a broader study would provide health care policy makers with information for evidence-based funding and delivery decisions in primary care.

Conclusion

Although the available literature on pay for performance shows mixed results, we showed that the funding model in BC using incentive payments for primary care might, on balance, reduce health care costs and hospital utilization.

Disclosure Statement

Hollander Analytical Services Ltd has a contract to evaluate General Practice Services Committee (GPSC) activities. To ensure the independence and objectivity of evaluations conducted by Hollander Analytical Services Ltd, which are funded by the GPSC, the BC Ministry of Health and the Doctors of BC (formerly the BC Medical Association) have signed an agreement, on behalf of the GPSC, that guarantees the integrity, objectivity, and independence of any evaluations conducted for the GPSC by Hollander Analytical Services Ltd. There are no competing interests.

Acknowledgments

We would like to acknowledge the funding provided for this paper by the General Practice Services Committee (GPSC), a partnership between the British Columbia Ministry of Health and the Doctors of BC (formerly the BC Medical Association). We would also like to acknowledge Angela Tessaro for her exceptional programming skills and ability to work with large administrative data sets, and for setting up the data sets for our analyses. We thank Nicole Littlejohn for her assistance with preparing and submitting this manuscript for publication. Finally, we thank the anonymous reviewers for their helpful comments and suggestions.

Kathleen Louden, ELS, of Louden Health Communications provided editorial assistance.

References

1. MacCarthy D, Hollander MJ. RISQy business (relationships, incentives, supports, and quality): evolution of the British Columbia model of primary care (patient-centered medical home). Perm J 2014 Spring;18(2):43-8.
2. Canada Health Act. R.S.C., 1985, c. C-6 [Internet]. Justice Laws Website; 2015 [cited 2015 May 11].
3. Bremer RW, Scholle SH, Keyser D, Houtsinger JV, Pincus HA. Pay for performance in behavioral health. Psychiatr Serv 2008 Dec;59(12):1419-29.
4. Kolozsvári LR, Rurik I. [Quality improvement in primary care. Financial incentives related to quality indicators in Europe]. [Article in Hungarian]. Orv Hetil 2013 Jul 14;154(28):1096-101.
5. Locke RG, Srinivasan M. Attitudes toward pay-for-performance initiatives among primary care osteopathic physicians in small group practices. J Am Osteopath Assoc 2008 Jan;108(1):21-4.
6. McDonald R, Roland M. Pay for performance in primary care in England and California: comparison of unintended consequences. Ann Fam Med 2009 Mar-Apr;7(2):121-7.
7. Alshamsan R, Millett C, Majeed A, Khunti K. Has pay for performance improved the management of diabetes in the United Kingdom? Prim Care Diabetes 2010 Jul;4(2):73-8.
8. Bottle A, Gnani S, Saxena S, Aylin P, Mainous AG 3rd, Majeed A. Association between quality of primary care and hospitalization for coronary heart disease in England: national cross-sectional study. J Gen Intern Med 2008 Feb;23(2):135-41.
9. Campbell SM, Reeves D, Kontopantelis E, Sibbald B, Roland M. Effects of pay for performance on the quality of primary care in England. N Engl J Med 2009 Jul 23;361(4):368-78.
10. Gillam SJ, Siriwardena AN, Steel N. Pay-for-performance in the United Kingdom: impact of the quality and outcomes framework: a systematic review. Ann Fam Med 2012 Sep-Oct;10(5):461-8.
11. Weyer SM, Bobiak S, Stange KC. Possible unintended consequences of a focus on performance: insights over time from the research association of practices network. Qual Manag Health Care 2008 Jan-Mar;17(1):47-52.
12. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance-based remuneration for individual health care practitioners affect patient care? A systematic review. Ann Intern Med 2012 Dec 18;157(12):889-99.
13. Kiran T, Victor JC, Kopp A, Shah BR, Glazier RH. The relationship between financial incentives and quality of diabetes care in Ontario, Canada. Diabetes Care 2012 May;35(5):1038-46.
14. Goldfarb D. Family doctor incentives: getting closer to the sweet spot [Internet]. Ottawa, ON: The Conference Board of Canada; 2014 May 21 [cited 2015 Apr 10].
15. Koshy V. Action research for improving educational practice. 2nd ed. London, England: Sage Publications Ltd; 2010.
16. Hollander MJ, Kadlec H, Hamdi R, Tessaro A. Increasing value for money in the Canadian healthcare system: new findings on the contribution of primary care services. Healthc Q 2009;12(4):32-44.
17. Hollander MJ, Kadlec H. Financial implications of the continuity of primary care. Perm J 2015 Winter;19(1):4-10.
18. Practice support program [Internet]. Vancouver, British Columbia, Canada: General Practice Services Committee; 2009 [cited 2015 Apr 9].
19. Garcia-Ortega I, Kadlec H, Kutcher S, Hollander M, Kallstrom L, Mazowita G. Program evaluation of a child and youth mental health training program for family physicians in British Columbia. J Can Acad Child Adolesc Psychiatry 2013 Nov;22(4):296-302.
20. MacCarthy D, Kallstrom L, Kadlec H, Hollander M. Improving primary care in British Columbia, Canada: evaluation of a peer-to-peer continuing education program for family physicians. BMC Med Educ 2012 Nov 9;12:110.
21. Weinerman R, Campbell H, Miller M, et al. Improving mental healthcare by primary care physicians in British Columbia. Healthc Q 2011;14(1):36-8.
22. The Johns Hopkins ACG System [Internet]. Baltimore, MD: Johns Hopkins University; 2013 [cited 2015 Apr 9].
23. Halling A, Fridh G, Ovhed I. Validating the Johns Hopkins ACG Case-Mix System of the elderly in Swedish primary health care. BMC Public Health 2006 Jun 28;6:171.
24. Lee WC. Quantifying morbidities by Adjusted Clinical Group system for a Taiwan population: a nationwide analysis. BMC Health Serv Res 2008 Jul 21;8:153.
25. Reid R, MacWilliam L, Roos NP, Bogdanovic B, Black C. Measuring morbidity in populations: performance of the Johns Hopkins Adjusted Clinical Group (ACG) Case-Mix Adjustment System in Manitoba. Winnipeg, Manitoba, Canada: Manitoba Centre for Health Policy and Evaluation; 1999 Jun.
26. Sicras-Mainer A, Serrat-Tarrés J, Navarro-Artieda R, Llausi-Sellés R, Ruano-Ruano I, González-Ares JA. Adjusted Clinical Groups use as a measure of the referrals efficiency from primary care to specialized in Spain. Eur J Public Health 2007 Dec;17(6):657-63.
27. Roberts MH, Dalal AA. Clinical and economic outcomes in an observational study of COPD maintenance therapies: multivariable regression versus propensity score matching. Int J Chron Obstruct Pulmon Dis 2012;7:221-33.
28. Röttger J, Scheller-Kreinsen D, Busse R. Patient-level hospital costs and length of stay after conventional versus minimally invasive total hip replacement: a propensity-matched analysis. Value Health 2012 Dec;15(8):999-1004.
29. Hass L, Stargardt T, Schreyoegg J. Cost-effectiveness of open versus laparoscopic appendectomy: a multilevel approach with propensity score matching. Eur J Health Econ 2012 Oct;13(5):549-60.
30. Bäumler M, Stargardt T, Schreyögg J, Busse R. Cost-effectiveness of drug-eluting stents in acute myocardial infarction patients in Germany: results from administrative data using a propensity score-matching approach. Appl Health Econ Health Policy 2012 Jul 1;10(4):235-48.
31. Freedom of Information and Protection of Privacy Act [RSBC 1996] Chapter 165 [Internet]. BC Laws; 2015 [cited 2015 May 11].
32. Schoenbach VJ, Rosamond WD. Understanding the fundamentals of epidemiology: an evolving text. Chapel Hill, NC: University of North Carolina at Chapel Hill; 2000. p 131.
33. Arbogast PG, Seeger JD; DEcIDE Methods Center Summary Variable Working Group. Summary variables in observational research: propensity scores and disease risk scores. Effective health care program research report number 33. Rockville, MD: Agency for Healthcare Research and Quality; 2012 May.
34. Austin PC. A critical appraisal of propensity-score matching in the medical literature between 1996 and 2003. Stat Med 2008 May 30;27(12):2037-49.
35. Baser O. Propensity score matching with multi-level categories: an application [Internet]. Lawrenceville, NJ: International Society for Pharmacoeconomics and Outcomes Research; 2008 [cited 2014 Jun 2].
36. Manca A, Austin PC. Using propensity score methods to analyse individual patient-level cost-effectiveness data from observational studies [Internet]. York, United Kingdom: Health, Econometrics and Data Group, The University of York; 2008 Jul [cited 2015 Apr 22].
37. Schneeweiss S, Rassen JA, Glynn RJ, Avorn J, Mogun H, Brookhart MA. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology 2009 Jul;20(4):512-22.
38. Wilde ET, Hollister R. How close is close enough? Evaluating propensity score matching using data from a class size reduction experiment. J Policy Anal Manage 2007 Summer;26(3):455-77.
39. Rosenbaum PR, Rubin DB. Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. Am Stat 1985 Feb;39(1):33-8.
40. Dunn G, Mirandola M, Amaddeo F, Tansella M. Describing, explaining or predicting mental health care costs: a guide to regression models. Methodological review. Br J Psychiatry 2003 Nov;183:398-404.


Click here to join the eTOC list or text ETOC to 22828. You will receive an email notice with the Table of Contents of The Permanente Journal.


2 million page views of TPJ articles in PubMed from a broad international readership.


Indexed in MEDLINE, PubMed Central, EMBASE, EBSCO Academic Search Complete, and CrossRef.




ISSN 1552-5775 Copyright © 2021

All Rights Reserved