Big Data, Miniregistries: A Rapid-Turnaround Solution to Get Quality Improvement Data into the Hands of Medical Specialists


Lisa J Herrinton, PhD; Liyan Liu, MD, MS; Andrea Altschuler, PhD; Richard Dell, MD;
Violeta Rabrenovich, MHA; Amy L Compton-Phillips, MD

Perm J 2015 Spring; 19(2):15-21


Context: Disease registries enable priority setting and batching of clinical tasks, such as reaching out to patients who have missed a routine laboratory test. Building disease registries requires collaboration among professionals in medicine, population science, and information technology. Specialty care addresses many complex, uncommon conditions, and these conditions are diverse. The cost to build and maintain traditional registries for many diverse, complex, low-frequency conditions is prohibitive.
Objective: To develop and to test the Specialty Miniregistries platform, a collaborative interface designed to streamline the medical specialist's contributions to the science and management of population health.
Design: We used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing expected utilization, and to influence clinicians and others to change practices to improve care. The platform was composed of staff, technology, and structured collaborations, organized into a workflow. The platform was tested in five medical specialty departments.
Main Outcome Measure: Proof of concept.
Results: The platform enabled medical specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Their knowledge was used to build and to deploy the miniregistries. Each miniregistry required 1 to 2 hours of collaboration by a medical specialist. Turnaround was 1 to 14 days.
Conclusions: The Specialty Miniregistries platform is useful for the low-volume questions that often arise in specialty care, and it requires little investment. The efficient organization of information workers to support accountable care is an emerging question.


A disease registry is a tool for tracking clinical care and outcomes at the level of a population or patient panel.1 Registries can be used to focus attention on the most urgent patients, to tailor clinical decision support to the patient's specific medical needs, to improve care coordination, to identify opportunities to improve care2,3 or steward medical resources, to implement quality improvement,4 and to build reports needed for accreditation and public reporting,5 among other uses. However, many registries are disease specific, and because they are costly, they tend to be developed for high-volume conditions and settings. Yet medical specialists want these tools to manage lower-volume conditions as well.

Nearly every health care worker who contributes to patient care also contributes to the patient's electronic medical record (EMR), and these systems contain a wealth of information. Data in the EMR can be processed and displayed to provide a disease registry.

Big data is an emerging management concept6,7 that treats data acquisition as an automated part of the business process. "Big data" brings together ideas about multiple varieties of data creators and demands for rapid access to data by decentralized data customers, together with exponential increases in data volume and relatively low-cost computing. These trends create a strong demand for analytics to apply new knowledge to the local context for the purpose of process improvement. Crowdsourcing, or co-creation, is the reorganization and decentralization of labor to increase efficiency, such that those who are immediately affected or have direct accountability are supported by platforms (or collaborative interfaces, combining technology, staff, and workflows) to contribute to problem solving. Crowdsourcing is particularly efficient for sharing specialized knowledge.4,8-10

This project was designed to deliver miniregistries, which are small datasets that can be routinely updated, to Chiefs of medical specialties by enabling them to collaborate rapidly with experts in population management and data analysis. Whether these miniregistries improved care was beyond the scope of the project because improvement requires separate, additional resources.11-13

This project was designed as an innovation, not as a traditional research project. Traditionally, research is organized into projects, with one project creating knowledge and a separate project implementing that knowledge through an intervention. The innovation framework is different; it uses iterative cycles to create knowledge and to test prototypes.14 Thus, from one cycle to the next, the results are used to refine implementation. Because implementation is a key endpoint, proof of concept (ie, does it work?) is considered an important outcome. We worked with Kaiser Permanente's (KP's) Innovation and Advanced Technology Group15,16 to develop and to iteratively test and adapt the miniregistries platform, tinkering with staffing arrangements, technology, and workflow until the combination worked.


Methods

The project was approved by the Kaiser Foundation Research Institute's institutional review board.

KP is located in 7 regions, with each operating its own Epic-based EMR (Epic Systems Corp, Verona, WI). Although the present project engaged stakeholders across the national program, data processing was restricted to KP Northern California, which comprises 3.3 million members, 7000 physicians, and 21 Medical Centers. In Northern California, care is capitated, with members receiving all or nearly all their care from KP clinicians in owned facilities.

Overview of Platform

A conceptual model of the Specialty Miniregistries platform is shown in Figure 1. Clinical questions are generated at the level of the specialty department, and actionable metrics are returned to the specialty department. Collaboration among experts in medicine, population management, and information technology is structured and streamlined using the Specialty Miniregistries platform (circles in Figure 1). The Specialty Miniregistries platform is a combination of staff, technology, and workflows that make use of data in the EMR. The staff, technology, and workflows were specifically designed to meet the needs of the medical specialist whose experience with data and analytics may be limited. In addition, the platform was designed for efficiency, by structuring and minimizing the time required for collaboration and data processing. Maximizing the efficiency of the platform increases the number of clinical questions that can be addressed.

We tested the prototype and piloted the platform in two phases.

Phase 1

Phase 1 included creating an initial conception; engaging stakeholders and asking them to react to a description of the initial conception; clarifying how the platform was different from other tools and what kinds of problems the platform could best address; and finally, formalizing its purpose and general design in a set of business requirements.

Initial Conception

KP's medical specialties departments have long aspired to build disease registries. However, the few that were built were too costly to generalize.17 Therefore, the initial concept was to quickly get relevant, actionable data into the hands of Specialty Chiefs. Existing registries were expensive because nonphysician staff who supported the registries were trained to support a few high-volume clinical areas. This staffing model was not feasible for supporting a larger number of low-volume clinical areas. Therefore, we chose a crowdsourcing platform, one in which the medical specialist would be expected to contribute labor by specifying diagnostic codes, operational definitions of tests and therapies, and needed information about time frames. This information was used to write algorithms in SQL (structured query language) and statistical analysis software as needed to extract and process the data. We further expected the medical specialists to interpret the data.
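The division of labor described above can be sketched in miniature: the specialist contributes diagnosis codes and a time frame, and the analyst turns them into a parameterized SQL patient-list query. The sketch below uses Python's sqlite3 with an invented table, column names, and example codes; it is illustrative only, not KP's actual EMR schema or algorithms.

```python
import sqlite3
from datetime import date, timedelta

# What the medical specialist contributes via the structured query
# (codes and lookback window are illustrative values).
SPECIALIST_INPUT = {
    "diagnosis_codes": ("E85.4", "E85.9"),  # amyloidosis-style ICD codes (example)
    "lookback_days": 365,
}

# Stand-in for the EMR extract: a tiny in-memory diagnosis table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE diagnosis (patient_id TEXT, dx_code TEXT, dx_date TEXT)")
conn.executemany(
    "INSERT INTO diagnosis VALUES (?, ?, ?)",
    [("p1", "E85.4", "2014-06-01"),
     ("p2", "I10",   "2014-07-15"),
     ("p3", "E85.9", "2014-08-20")],
)

as_of = date(2015, 1, 1)  # fixed "report date" so the example is reproducible
cutoff = (as_of - timedelta(days=SPECIALIST_INPUT["lookback_days"])).isoformat()

# The analyst's patient-list algorithm: every patient carrying one of the
# specified codes within the lookback window.
placeholders = ",".join("?" * len(SPECIALIST_INPUT["diagnosis_codes"]))
rows = conn.execute(
    f"SELECT DISTINCT patient_id FROM diagnosis "
    f"WHERE dx_code IN ({placeholders}) AND dx_date >= ? "
    f"ORDER BY patient_id",
    (*SPECIALIST_INPUT["diagnosis_codes"], cutoff),
).fetchall()
patient_list = [r[0] for r in rows]
print(patient_list)  # ['p1', 'p3']
```

The point of the structure is that only the dictionary at the top requires clinical knowledge; everything below it is reusable analyst work.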

Stakeholder Engagement

To understand operational barriers and facilitators, we interviewed Medical Directors in clinical informatics and decision support, quality and risk management, technology, patient panel management, and care delivery, as well as Chiefs of medical specialties. These stakeholders identified key barriers and facilitators, enabling us to align the work with existing accountabilities and priorities (Table 1).

Needs Assessment and Differentiation

We next interviewed Chiefs of medical specialties in Endocrinology, Pulmonology, Pediatrics, General Surgery, Vascular Surgery, Cardiology, Oncology, and Neurology. The interviews elicited information about each specialty department's leadership and organization, specific clinical questions and data needs, past efforts to address those questions, trust in the validity of the EMR data, the presence of data-savvy physicians in the department, readiness to act on new information, and opinions about data governance.

Every Chief readily formulated and articulated questions for which they wanted data, but most thought these data could not be obtained. Regarding trusting the validity of the data in the EMR, the general sentiment was "it's a starting point." Queried about the ability to translate knowledge into action, the clinical leaders explained that without objective metrics, they could only make guesses about needed improvements and had little influence. The respondents were unanimous that Chiefs of medical specialties were best able to govern priorities and implementation. Each of the clinical leaders could name physicians in their department who "liked computers" or "liked research" and could help define algorithms and validate data.


The clinical questions were grouped into templates. A template is a category of tools and approaches that can be used to address a set of clinical questions that are common across diseases. The clinical questions posed by the clinical leaders fell into seven crosscutting templates: 1) Patient List, 2) Comparison with Benchmark, 3) Variation in Care, 4) Text Search, 5) Standardized Physician Assessments, 6) Patient-Reported Outcomes, and 7) Comparative Effectiveness Studies.

Given the pilot nature of this project, we decided to build capability for the first three templates as proof of concept. The Patient List template allowed identification of patients with suspected surgical site infections, elevated laboratory values, need for discontinuation of medications, approaching care transitions, and so forth. The Comparison with Benchmark template computed overall rates, proportions, and other statistics for comparison over time or against the benchmark. The Variation in Care template was similar but presented statistics by clinician and facility.
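The Comparison with Benchmark and Variation in Care templates reduce to the same computation: a rate per clinician displayed alongside the overall rate. A minimal sketch, with invented clinician names and event data:

```python
from collections import defaultdict

# Each event records whether a clinician's patient met the care target
# (e.g., received a monitoring test); data are invented for illustration.
events = [
    ("dr_a", True), ("dr_a", True), ("dr_a", False),
    ("dr_b", True), ("dr_b", False), ("dr_b", False),
]

counts = defaultdict(lambda: [0, 0])  # clinician -> [numerator, denominator]
for clinician, met in events:
    counts[clinician][0] += int(met)
    counts[clinician][1] += 1

# Overall rate serves as the benchmark (Comparison with Benchmark template);
# per-clinician rates show spread around it (Variation in Care template).
benchmark = sum(m for m, _ in counts.values()) / sum(t for _, t in counts.values())
by_clinician = {c: round(m / t, 2) for c, (m, t) in counts.items()}
print(by_clinician, round(benchmark, 2))  # {'dr_a': 0.67, 'dr_b': 0.33} 0.5
```

The same tabulation grouped by facility instead of clinician yields the facility-level view the Variation in Care template presented.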

The fourth template, Text Search, will identify search terms in narrative reports, such as laboratory, pathology, imaging, and operative reports; for this we opted to develop capability in natural language processing, and that work is ongoing. The fifth and sixth templates, Standardized Physician Assessments and Patient-Reported Outcomes, require engagement of operational staff to enhance the EMR and patient portal; we deferred these templates to gain experience before engaging EMR staff, who have other critical priorities. The seventh template, Comparative Effectiveness Studies, was judged to be out of scope because such studies require greater investment of expertise.

Specification of Business Requirements

At the completion of Phase 1, we specified the business requirements (Table 2).

Phase 2

Phase 2 included iteratively prototyping and piloting the platform, and then performing an evaluation.

Creating the Prototype and Piloting

During Phase 2, we created the prototype platform one clinical question at a time and iteratively adapted staff, technology, and workflows to optimize performance.

Staffing for the platform included a Senior Epidemiologist, who performed the intake interview, drafted algorithms, and designed the final report, and a Senior Data Consultant, who extracted data from the EMR and used statistical analysis software to process the data. The work also required effort by the Chief of the medical specialty, who sometimes engaged physicians in his/her department who had an understanding of codes, algorithms, epidemiology, or spreadsheets, or simply availability and interest; different Chiefs used somewhat different approaches. In addition, the Senior Data Consultant sought advice from an extract, transform, and load (ETL) programmer housed in the Data Warehousing Department. Although the ETL programmer did not extract data directly, his/her advice on the location of data was helpful: the EMR comprises thousands of data tables, and the ETL programmer knew where variables were stored. On average, this individual consulted for approximately four hours per specialty. Because each specialty's EMR tables are located together, the first report provided to a specialty was more costly than the second and subsequent reports to that specialty.

We simplified the technology over the course of the innovation, abandoning aspirations to use Web services and business intelligence software because of barriers related to staffing and collaboration. Ultimately, we realized that user-centered design was a key to success. User-centered design recognizes, as a first principle, the preferences, capabilities, and limitations of the end user (ie, the physician), with those factors driving each stage of the design process. We therefore used highly accessible technology. SurveyMonkey (Palo Alto, CA) served as the user-query tool to elicit, for each clinical question, the detailed clinical knowledge of diagnoses, procedures, tests, results, and therapies that would be operationalized into algorithms written into computer programs. We used a spreadsheet to report the data and electronic mail to deliver the report.

The workflow comprised five steps: 1) intake interview and selection of the template; 2) completion of the user query; 3) initial programming, using SQL and statistical analysis software, to extract data from the EMR, build reports, and output results into a spreadsheet; 4) data validation; and 5) final coding and delivery.

During the intake interview, we explained the resource and used a script to define actionable questions: "Let's imagine we generate the perfect report and the data show exactly what you suspect. To whom will you show the data? What do you expect them to do after they see the data? Have you talked to them? Should we instead focus on opportunities for which you have greater control?"

For each template (eg, Patient List), we developed a user query in SurveyMonkey to be completed by the Chief or a clinical delegate. The user query was written as a series of approximately 30 questions to elicit components of needed algorithms, such as diagnostic codes, tests, therapies, and timeframes. To validate the data, we provided simple cross-tabulations. We then completed the programming and e-mailed the final spreadsheet. After the Chief's approval of the report, we used cron, a job scheduler, to automate SQL and the statistical analysis software to extract, process, and output the data into a spreadsheet, and to send the spreadsheet to the physician through e-mail.
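The scheduled refresh step can be sketched as follows. Assumptions are flagged in the comments: the cron entry, script name, and gout-style column names are hypothetical, the EMR extract is stubbed, and the "spreadsheet" is an in-memory CSV rather than a mailed file.

```python
import csv
import io

# In production, a cron entry such as "0 6 * * 1 python refresh_report.py"
# (hypothetical: 6 am every Monday) would launch a script like this, which
# would rerun the SQL extract, rebuild the spreadsheet, and e-mail it.
def build_report(rows):
    """Render the refreshed miniregistry rows as CSV (stand-in for the spreadsheet)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["patient_id", "last_uric_acid", "months_since_test"])
    writer.writerows(rows)
    return buf.getvalue()

# Stubbed extract; in production these rows would come from the SQL step.
report = build_report([("p1", 9.2, 14), ("p2", 8.7, 11)])
print(report.splitlines()[0])  # patient_id,last_uric_acid,months_since_test
```

Because the specialist-approved algorithm is frozen into the scheduled job, each weekly run costs no additional collaboration time.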


We assessed leadership engagement by the Medical Specialty Chiefs, logistical feasibility, hours of effort, overall turnaround time, and satisfaction using a brief satisfaction survey. We also noted translation toward improved clinical processes and outcomes, although this was not a primary goal of the project.





Results

The Specialty Miniregistries platform was successful in engaging Chiefs of medical specialties and producing accessible and authoritative reports. The required meeting time was 1 to 2 hours; analytic effort, 8 to 32 hours; and overall turnaround time, no longer than 2 weeks. Data alone may be insufficient for population health management, and translation was not an immediate goal of the innovation; nonetheless, several reports were implemented into simple workflows.

Medical specialists expressed satisfaction that the platform was easy to use and effective. However, the evaluation revealed that additional work is needed to clarify data governance, particularly the need for institutional review board approval to use data for research that would be publicly disseminated at national specialty meetings.

Experiences with the specialty groups are detailed below.


Oncology

The Oncology Department implemented a miniregistry to improve consultation among oncologists in the department for amyloidosis, a rare disease. The oncologists sought to improve quality and reduce variation in care by rapidly identifying patients with amyloidosis who had been newly diagnosed by a less specialized oncologist and reporting those cases to oncologists who specialized in amyloidosis. They therefore requested lists of cancer patients by cancer site. The Chief of Oncology first tested the Specialty Miniregistries platform through an initial request for a list of patients with amyloidosis. The amyloidosis list was automated and implemented into clinical care. A dedicated amyloidosis team of three oncologists proactively reached out to guide diagnosing physicians whose volumes of patients with amyloidosis were lower.

Medical Genetics

The Medical Genetics Department was concerned that genetic tests that are difficult to interpret were ordered by other departments without appropriate consultation. Focusing on ataxia, we performed a text search to identify who was ordering genetic tests. The search confirmed that physicians across a range of departments were ordering ataxia genetic tests without first consulting the Medical Genetics Department. Our work was one of several efforts organized by the Medical Genetics Department to address utilization of genetic testing, and together these efforts resulted in system-level changes.


Rheumatology

Rheumatology was concerned that patients with gout were missing important laboratory tests needed to monitor their uric acid levels. They wanted to influence other departments, such as Pharmacy, to assume panel management of gout. We created a list of patients with uric acid levels in the top percentile, showing that laboratory orders and procedures were missing and identifying preventable utilization. Discussions are under way about possible system-level changes.


Ophthalmology

The Ophthalmology Department Chief requested surveillance of surgical site infection after cataract surgery to enable a failure analysis. The tool was implemented, with each infection being investigated and discussed and "zero infection" reports being celebrated.

Pediatric Subspecialties

The pediatric oncologists wanted to improve the transition from pediatric to adult cancer care and therefore requested a list of teenaged patients scheduled for transition in the next year. The list was one of several tools used to understand and improve the care transition, with system-level changes now being discussed.

The pediatric hematologists wanted a better understanding of comanagement of sickle cell disease. They therefore requested a list of patients identifying each patient's care providers together with the patient's "last touch" with the Health Plan and with pediatric hematology.

Pediatric cardiology referred neonates needing surgery to an external provider facility, where there had been staff changes and apparent increases in the rate of late complications. Postdischarge utilization data were analyzed to assess complication rates before and after the staff changes. This latter experience provided an example of using data about outside providers to measure quality.


Discussion

Numerous changes are needed to transform health care. Among these is the use of "precise and clinically relevant"18 data and metrics to identify care gaps, such as missing tests and medication dispensings, and inappropriate variations among providers, and to enable performance improvement activities in the clinical microenvironment. The Specialty Miniregistries platform was successful in using the EMR to provide these data and metrics. Keys to success included the design of the infrastructure and collaborative interfaces, and enabling the medical specialists to formulate the clinical questions.

Infrastructure and Interprofessional Interfaces

With respect to registries, primary care differs from specialty care in that physicians treat a larger number of high-volume conditions. Thus, it is efficient for the population health scientists and technology workers who build and maintain primary care registries to specialize in clinical topics; the clinical training they receive on the job is well used.

Because specialty care encompasses a greater diversity of topics, knowledge transfer from clinicians to nonclinicians is more resource intensive; thus, collaboration and infrastructure development is more costly. To obtain the greatest return on investment, infrastructure for building registries in specialty care should be cross-cutting.

Crowdsourcing knowledge-intensive tasks to specialists, and building interfaces to support their collaboration, greatly reduced the time and complexity of knowledge transfer. Collaboration was narrowly focused on actionability, programming algorithms, data validity, and governance, with minimal discussion of clinical concepts. Use of medical specialists to develop algorithms, manipulate spreadsheets, and interpret the data was less costly than directing this work to analytic staff because of the specialists' greater knowledge and more rapid cognitive processing of the clinical topic and clinical workflows. In addition, giving this task to the specialist enabled deeper insights from the data and gave the specialists an appreciation for needed improvements in medical documentation. It also increased their skill in using data.

We used a flexible, low-investment strategy to build the collaborative platform. This made it easier for the physician to use the platform, and it reduced the level of effort needed by scarce technology workers. Although physicians are well trained to formulate questions at the level of the individual patient, they generally are not trained to conceptualize questions at the level of a population. Nor do they have experience using EMR data or building algorithms. For this reason, the intake interview was an essential component of the platform. However, physicians know how to define clinical concepts, and they were successful in completing the user query to communicate operational definitions to the data consultant.

When we began our work, we aspired to discover a reduced set of quality metrics that might "roll up" from the clinical microenvironment to the Health Plan and beyond. The logical consequence of that aspiration is an infrastructure that is top-down, expensive, and difficult to adapt. We now believe that top-down approaches may not be adequately user-centered for the various stakeholders who need data, will be poor at addressing questions at the local clinical level, and may lose relevance as clinical processes evolve.

Asking the Right Clinical Questions

We anticipated that locating control of the clinical question with those at the frontline might be effective in improving care, because Medical Specialty Chiefs have direct accountability to their patients and physicians, knowledge of care delivery and data creation, and insight into improvement opportunities. Indeed, we observed that the process of formulating questions focused attention on modifiable units of change. In addition, data increased the Chief's authority to set priorities, intervene with staff, and advocate for system-level change. We further noted that the priorities expressed by the Medical Specialty Chiefs were at times exquisitely local. Chiefs sorted their priorities using intimate knowledge of past attempts to create change, the impact of information on key individuals and teams, and their ability to leverage influence over highly specific problems.

Many national specialty societies are developing metrics for general use across care delivery settings; some are intended for public reporting, others for local quality improvement. These metrics contain implicit clinical questions that may or may not correspond to local priorities and opportunities. Regardless of whether a metric is formulated at a national or local level, effort is needed to write algorithms, access data, compute metrics, and deliver reports. Even when specialty societies require participation in a centralized registry operated by the society, effort is needed to create and test data flows from the health care system to the national registry. The platform described here can be simplified and used for that purpose.


Compared with primary care, specialty care is composed of smaller organizations of clinicians, and this smaller scale creates opportunities for collaboration, influence, and change that differ from those in primary care. Thus, discussion of the reports at Medical Specialty Chiefs meetings and use of the data to influence others may be the first step to translation. Nonetheless, even in specialty care, developing and testing new clinical workflows may require a burdensome initial investment that exceeds local capacities. When improvement activities require investments from other departments, such as laboratory or pharmacy, alignment becomes essential.13-15 Although data and metrics alone may not lead to change, well-designed metrics that are appropriately targeted, objective, and transparent can "start the conversation" and build alignment. However, engaging stakeholders in other departments to participate in formulating questions may result in greater clinical change.


The EMR data used by the Specialty Miniregistries platform are the same data used to create claims and for external reporting to meet accreditation and meaningful use criteria.19 We are confident that the platform could be implemented in health care delivery systems that are less integrated than KP, although the range of questions that can be answered may be narrower. For example, the data available in less integrated systems may not capture late clinical outcomes, but they would capture early outcomes. They would also provide detailed process information, including practice variation. Furthermore, as Health Information Exchanges are rolled out under the Affordable Care Act,20 possibilities for linking data across settings will increase. The innovation we present here is not the use of data; rather, it is the structuring of a collaborative interface to support the specialist's contributions to the science and management of population health.

Conclusions and Future Directions

The Specialty Miniregistries platform provides one solution for increasing access to valid, objective, and actionable knowledge about local care processes. The platform is useful where there are low-volume clinical questions that cut across knowledge domains, as is often the case in specialty care, although cross-cutting questions occur in primary care as well. As EMR vendors increase the usability and analytic capabilities of their products, the need for solutions such as the Specialty Miniregistries platform may diminish, although this is uncertain. Organization and management of information systems to support accountable care is a rapidly evolving topic.

In the meantime, the many and diverse professionals responsible for using and stewarding information to improve care delivery ask: What is a sensible set of investments? How do we train and support data creators to become data customers? How do scale and context drive the cost-effectiveness of infrastructure built for clinical measurement? What is the right family of platforms, and can we be discerning about the features of stakeholders and their needs that fit with any particular platform? As specialty societies and other national stakeholders become increasingly involved in creating public accountability through metrics, how will care delivery systems staff and organize their information workforce to support those expectations? These are the emerging questions that drive future research.

Disclosure Statement

Lisa Herrinton, PhD, has had unrelated research contracts with Medimmune (2013 to present), P & G (2006 to 2012), and Genentech (2008 to 2012). The author(s) have no conflicts of interest to disclose.


Acknowledgments

This research was funded by the Innovation Fund for Technology of Kaiser Permanente Information Technology. The fund's staff provided advice about implementing an innovation project, but had no other role in the study. We wish to thank the many individuals across Kaiser Permanente's regional and national program who provided perspective and advice on this project, including Scott Adelman, MD; David Baer, MD; Matthew Carnahan, MD; William J Chang, MD; Kavin Desai, MD; Robert Goldfien, MD; Bradley B Hill, MD; Stacy Month, MD; Charles Meltzer, MD; and Scott Peak, MD; from The Permanente Medical Group; John Brookey, MD; Robert W Chang, MD; Marc Klau, MD; and Ronald K Loo, MD; from the Southern California Permanente Medical Group; John Merenich, MD, FACP; from the Colorado Permanente Medical Group; and Michael S Alberts, PhD, MD; and Robert Unitan, MD; from Northwest Permanente, PC. The anonymous reviewers were instrumental in improving the clarity of the report. We also thank the US National Cancer Institute's Training Institute for Dissemination and Implementation Research in Health, Bethesda, MD.

Kathleen Louden, ELS, of Louden Health Communications provided editorial assistance.

Author Contributions

Study design (Lisa J Herrinton, PhD; Liyan Liu, MD, MS; Richard Dell, MD; Amy L Compton-Phillips, MD), data collection (Liyan Liu, MD, MS; Andrea Altschuler, PhD), data analysis (Lisa J Herrinton, PhD; Liyan Liu, MD, MS; Andrea Altschuler, PhD), and manuscript preparation (Lisa J Herrinton, PhD; Liyan Liu, MD, MS; Andrea Altschuler, PhD; Richard Dell, MD; Violeta Rabrenovich, MHA; Amy L Compton-Phillips, MD).

1. Gliklich RE, Dreyer NA, editors. Registries for evaluating patient outcomes: a user's guide. 2nd ed. AHRQ Publication No. 10-EHC049. Rockville, MD: Agency for Healthcare Research and Quality; 2010 Sep.
2. Young AS, Chaney E, Shoai R, et al. Information technology to support improved care for chronic illness. J Gen Intern Med 2007 Dec;22 Suppl 3:425-30.
3. Goyal A, Bornstein WA. Health system-wide quality programs to improve blood pressure control. JAMA 2013 Aug 21;310(7):695-6.
4. Margolis PA, Peterson LE, Seid M. Collaborative Chronic Care Networks (C3Ns) to transform chronic illness care. Pediatrics 2013 Jun;131 Suppl 4:S219-23.
5. Halpin HA, McMenamin SB, Simon LP, et al. Impact of participation in the California Healthcare-Associated Infection Prevention Initiative on adoption and implementation of evidence-based practices for patient safety and health care-associated infection rates in a cohort of acute care general hospitals. Am J Infect Control 2013 Apr;41(4):307-11.
6. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA 2013 Apr 3;309(13):1351-2.
7. Krall MA, Gundlapalli AV, Samore MH. Big data and population-based decision support. In: Greenes RA, editor. Clinical decision support: the road to broad adoption. 2nd ed. London, UK: Academic Press; 2014. p 363-81.
8. Aitamurto T, Leiponen A, Tee R. The promise of idea crowdsourcing: benefits, contexts, limitations [white paper; Internet]. Los Angeles, CA: Crowdsourcing, LLC; 2011 Jun [cited 2013 Jun 27].
9. Nielsen M. Reinventing discovery: the new era of networked science. Princeton, NJ: Princeton University Press; 2011 Oct 23.
10. Clancy CM, Margolis PA, Miller M. Collaborative networks for both improvement and research. Pediatrics 2013 Jun;131 Suppl 4:S210-4.
11. Schilling L, Chase A, Kehrli S, Liu AY, Stiefel M, Brentari R. Kaiser Permanente's performance improvement system, part 1: from benchmarking to executing on strategic priorities. Jt Comm J Qual Patient Saf 2010 Nov;36(11):484-98.
12. Schilling L, Dearing JW, Staley P, Harvey P, Fahey L, Kuruppu F. Kaiser Permanente's performance improvement system, part 4: creating a learning organization. Jt Comm J Qual Patient Saf 2011 Dec;37(12):532-43.
13. Whippy A, Skeath M, Crawford B, et al. Kaiser Permanente's performance improvement system, part 3: multisite improvements in care for patients with sepsis. Jt Comm J Qual Patient Saf 2011 Nov;37(11):483-93.
14. Zuckerman B, Margolis PA, Mate KS. Health services innovation: the time is now. JAMA 2013 Mar 20;309(11):1113-4.
15. Berkowitz L, McCarthy C, editors. Innovation with information technologies in healthcare. London, UK: Springer-Verlag; 2013.
16. McCreary L. Kaiser Permanente's innovation on the front lines. Harv Bus Rev 2010 Sep;88(9):92, 94-7, 126.
17. Paxton EW, Kiley ML, Love R, Barber TC, Funahashi TT, Inacio MC. Kaiser Permanente implant registries benefit patient safety, quality improvement, cost-effectiveness. Jt Comm J Qual Patient Saf 2013 Jun;39(6):246-52.
18. Daschle T, Domenici P, Frist W, Rivlin A. Prescription for patient-centered care and cost containment. N Engl J Med 2013 Aug 1;369(5):471-4.
19. 2014 definition stage 1 of meaningful use [Internet]. Baltimore, MD: Centers for Medicare and Medicaid Services; last modified 2014 Oct 5 [cited 2014 Nov 24].
20. The Patient Protection and Affordable Care Act of 2010. Public Law 111-148, 111th Congress, 124 Stat 119, HR 3590, enacted 2010 Mar 23.

