Psychometric Properties of the Problem-Oriented Patient Experience—Primary Care (POPE-PC) Survey



 

Ali Rafik Shukor, M Biotech, MSc1,2

Perm J 2020;24:19.191

https://doi.org/10.7812/TPP/19.191
E-pub: 04/21/2020

ABSTRACT

Introduction: Measuring the experiences of patients regarding delivery and receipt of person-oriented primary care is of increasing policy and research interest and is a core component of the Institute for Healthcare Improvement’s Quadruple Aim.

Objective: To describe the Problem-Oriented Patient Experience—Primary Care (POPE-PC) survey, a novel instrument designed to measure patients’ experiences of primary care, and to assess the instrument’s psychometric properties.

Methods: Psychometric testing was performed using data from a Canadian urgent primary care center, derived from March 2019 to September 2019. Patients automatically received the 9-question survey by email after leaving the clinic. Exploratory factor analysis (EFA) on all questions and the entire dataset was performed using parallel analysis and scree plot for factor extraction. Internal consistency was assessed by calculating Cronbach α. A split-half cross-validation of the ensuing factor structure was conducted. A correlation analysis helped explore associations between the survey’s questions.

Results: Results from the initial EFA indicate that the POPE-PC has a conceptually sound 2-factor structure, with good internal consistency. A split-half validation yielded the same findings, reaffirming that the 2-factor model has good psychometric properties. The correlation analysis indicated that the concept of respect is strongly associated with clinical functions related to problem recognition.

Discussion: Problem recognition, despite being the cornerstone of person-oriented primary care, remains largely overlooked in health services research. The POPE-PC’s validity and problem orientation render it potentially useful in rigorously assessing patient experiences of problem-oriented primary care.

Conclusion: The survey’s conceptual underpinning and psychometric properties, coupled with its simple and parsimonious design, enable its application in primary care settings that strive to provide person-oriented care.

INTRODUCTION

Measuring the experiences of patients in relation to the delivery and receipt of person-oriented primary care is of increasing policy and research interest and is a core component of the Institute for Healthcare Improvement’s (IHI’s) Quadruple Aim.1-3 Patient experiences pertaining to the delivery and receipt of clinical primary care can be measured and assessed using systematic and validated survey instruments.4-6 There has therefore been increasing health services research attention and resources dedicated to the design and testing of survey instruments to measure primary care patients’ experiences.7,8 Existing instruments vary in design, content, and function and are underpinned by different conceptual frameworks because they are often adapted and validated for specific organizational settings, patient population profiles, and purposes.8

This study describes a novel instrument designed to measure patient experiences relating to the care delivery and receipt functions of person-oriented primary care, and assesses the instrument’s psychometric properties. The survey, named the Problem-Oriented Patient Experience-Primary Care (POPE-PC), was designed by a team of medical directors and senior administrators at Vancouver Coastal Health Authority in Vancouver, British Columbia, Canada.

Before development of the survey, a scoping literature review was performed to identify potentially suitable English-language patient experience and satisfaction surveys for consideration. The following tools were identified and reviewed because most had undergone at least some processes of validation: The Johns Hopkins Primary Care Assessment Tool,4,6 the Canadian Institute for Health Information Measuring Patient Experiences in Primary Health Care Survey,8 the General Practice Assessment Questionnaire,9 the Relational Communication Scale,10 the CollaboRATE survey,11 the Primary Care Assessment Survey,12 the European Task Force on Patient Evaluation of General Practice Care (EUROPEP),13 the Components of Primary Care Index,14 the Interpersonal Processes of Care Survey,15 the Saanich Peninsula Patient Experience Survey,16 the Veterans Affairs National Outpatient Customer Satisfaction Survey,17 the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Patient-Centered Medical Home survey,18 the Care Coordination Quality Measure for Primary Care survey,19 and the RAND Patient Satisfaction Questionnaire.20

Although these surveys have interesting and useful content for primary care in various contexts, none were deemed to properly fit the specific needs or context in this case. The main reasons pertained to factors such as the length of surveys, the perceived complexity of wording, a lack of specific focus on experiences of clinical interactions at the interface of care delivery and receipt, and general perceptions of survey design and content not fitting the needs of the particular organizational or regional context.

Therefore, the survey development team decided to conceptualize and develop a new survey (described in the Methods section) with a fit-for-purpose design using the following key criteria: conceptual rigor, system orientation, problem orientation, parsimony, simplicity, consistency, relevance, and practicality. The team decided to conceptually underpin the survey using Dr. Starfield’s21,22 model for health services research, because its content directly pertains to system-oriented functions of problem-oriented primary care delivery and receipt (specifically the performance domains related to Provision of care and Receipt of care; Figure 1). The model’s key functions are defined and summarized in Starfield’s4 seminal book titled Primary Care: Balancing Health Needs, Services, and Technology. The most poorly recognized and understood function by health services researchers is that of “problem recognition,” which Starfield4(p28) defines as follows:

The providers first must recognize the needs existing both in the community and in individual patients. This feature is known as problem (or needs) recognition and is a particularly important consideration for primary care. The problem may be a symptom, a sign, an abnormal laboratory test, a previous but relevant item in the history of the patient or of the community, or a need for an indicated preventive procedure. Problem recognition implies being aware of the existence of situations requiring attention in a health context. After recognizing the problem, the health professional generally formulates a diagnosis or an understanding of the problem when no diagnosis is possible.

METHODS

Survey Development and Implementation

The team worked collaboratively to operationalize the conceptual model into survey questions. The content of the POPE-PC survey was refined by the team using an iterative content validation approach, whereby the list of drafted questions was reviewed several times for relevance, completeness, and essentiality until consensus was reached. The team members paid special attention to ensure that the survey content and design were simple and concise, considering that their public community health centers’ patients present with differing levels of distress, limited literacy, high burdens of illness and disease, complex psychosocial needs, limited resources and abilities, and weak motivational profiles.3

Team members operationalized the model’s domains into a series of 6 questions, with the first 2 regarding provision-of-care functions related to problem recognition and the other 4 questions regarding receipt-of-care functions related to acceptance and satisfaction, understanding, and concordance.4,21 Two additional cross-cutting questions were added: 1 relating to the temporal nature of the experience at the interface of care delivery and receipt, and 1 relating to the theme of respect. A final question was added to measure patient satisfaction, using Reichheld’s Net Promoter Score (NPS) question.23 The ensuing 9-question POPE-PC survey is found in Table 1.

Survey questions 1-8 (Q1-Q8) are answered on a 5-point Likert scale (not at all; very little; somewhat; yes, for the most part; yes, definitely). The NPS question (Q9) is answered on a simple 3-point scale (not at all; maybe; yes, definitely) rather than the traditional 10-item NPS scale, which was perceived to be potentially confusing for patients. Furthermore, the 3-point scale conceptually aligns with the 3 assessment categories used by the NPS instrument (ie, “detractors,” “passives,” and “promoters”). The POPE-PC survey is free to use by anyone and requires no licensing or special permission for use.
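The NPS logic for the 3-point Q9 scale can be sketched as follows. This is an illustrative Python reconstruction, not the authors’ code; the function name and the mapping of the 3 response options onto the NPS “promoter”/“passive”/“detractor” categories are assumptions based on the scale description above.

```python
def net_promoter_score(responses):
    """NPS = % promoters minus % detractors, on a -100 to 100 scale.

    On the assumed 3-point mapping: "yes, definitely" = promoter,
    "maybe" = passive, "not at all" = detractor.
    """
    n = len(responses)
    promoters = sum(r == "yes, definitely" for r in responses)
    detractors = sum(r == "not at all" for r in responses)
    return 100.0 * (promoters - detractors) / n

# Hypothetical sample of 10 responses: 7 promoters, 2 passives, 1 detractor.
sample = ["yes, definitely"] * 7 + ["maybe"] * 2 + ["not at all"]
net_promoter_score(sample)  # → 60.0
```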

The finalized survey questions specifically pertain to experiences at the interface of care delivery and receipt, thereby providing insights on the performance of functions that are directly in health care practitioners’ actual sphere of influence and that are amenable to change and improvement. This approach renders the survey tool fit for purpose and useful for quality improvement efforts. The survey’s design explicitly focuses on the practitioner-patient relationship and on patient engagement in the context of problem-oriented care settings, both of which are critical aspects of person-oriented primary care.21,24,25 The POPE-PC survey’s content and problem orientation therefore support assessment of the IHI’s and the National Patient Safety Foundation’s “Ask Me 3” guidance activities, which focus on enabling patients to ask the following 3 questions during care encounters: 1) “What is my main problem?,” 2) “What do I need to do?,” and 3) “Why is it important for me to do this?”26

The POPE-PC survey was implemented at a public community-based urgent primary care setting, and patients automatically receive the survey by email on leaving the clinic. The survey has enabled performance assessment, evaluation, multidisciplinary team-based quality improvement initiatives, and accountability to the regional Health Authority and provincial Ministry of Health. There is also substantial organizational and policy interest in potentially leveraging the POPE-PC as a regional and provincial standard. Validation of the instrument’s psychometric properties is therefore essential and is the key purpose of this study.

Data Processing and Preliminary Analysis

Psychometric testing was performed using POPE-PC survey data from City Centre Urgent Primary Care Centre, Vancouver, Canada, derived from the 6-month period of March 2019 to September 2019. The original dataset is not linked to any patient identifiers and is completely anonymous. Research was conducted according to the ethical principles of the Declaration of Helsinki.

Data were randomized using a spreadsheet’s random function (Microsoft Excel RAND function, Microsoft Corp, Redmond, WA). Outliers were identified by calculating z-scores (standard scores) and were removed if found to be at least 2.99 standard deviations from the mean. Assumptions (ie, additivity, normality, homogeneity, homoscedasticity) were tested in Excel by running the correlation table, histogram, normal probability plot, and residual plot.
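The z-score screening rule can be illustrated with a short Python sketch. This is not the authors’ code (the study used Excel); the function name is hypothetical, and the cutoff mirrors the 2.99-standard-deviation rule described above.

```python
import numpy as np

def remove_outliers(values, cutoff=2.99):
    """Drop observations whose absolute z-score meets or exceeds the cutoff."""
    x = np.asarray(values, dtype=float)
    z = (x - x.mean()) / x.std()  # population z-scores
    return x[np.abs(z) < cutoff]

# A hypothetical example: one extreme value (z = 3.0) is removed.
clean = remove_outliers([0] * 9 + [100])
```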

Excel data were saved as a CSV (comma separated values) file. The file was then imported into an open-source statistics program (JASP v0.11, University of Amsterdam, Amsterdam, Netherlands) to conduct descriptive statistics (ie, means, standard deviations, minimum/maximum ranges, frequency tables, distribution plots, and box plots).

Exploratory Factor Analysis and Internal Consistency

JASP software was used to conduct testing of psychometric properties.27 Exploratory factor analysis (EFA) was performed on the entire dataset and all 9 questions to assess the factor structure of the study data, using parallel analysis and scree plot with Promax oblique rotation (JASP Team [2019], JASP Version 0.11.1, Amsterdam, The Netherlands) for factor extraction. Factor loadings of 0.4 or greater were considered significant, and any factor cross-loadings below 0.4 were considered acceptable.28 With use of these criteria, problematic items were gradually eliminated until EFA resulted in a satisfactory factor structure. Goodness of fit was tested using the Non-Normed Fit Index (NNFI, also called the Tucker-Lewis Index), with values above 0.90 considered acceptable. Residual statistics were tested using the root mean square error of approximation (RMSEA), with values less than 0.08 considered acceptable. Internal consistency was assessed by calculating Cronbach α, with a score of 0.7 or higher being considered satisfactory.28
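The two key computations named above can be sketched in simplified Python form. The study itself used JASP; this sketch approximates Horn’s parallel analysis by comparing observed eigenvalues against mean eigenvalues from random data of the same shape, and omits rotation and fit indices. Function names are illustrative, not JASP’s.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Retain factors whose observed correlation-matrix eigenvalues exceed
    the mean eigenvalues from same-shaped random normal data (Horn's method)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(k)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, k))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sims
    return int(np.sum(obs > rand))  # number of factors to retain

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```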

Split-Half Cross-validation

A split-half cross-validation of the ensuing factor structure (from the aforementioned EFA of the entire dataset) was then conducted, by randomly splitting the dataset in halves and running EFA on each half.28 Parallel analysis and scree plot (Promax oblique rotation, JASP Version 0.11.1) were used for factor extraction, with factor loadings of 0.4 or greater considered significant. Goodness of fit was tested using the NNFI, and values above 0.90 were considered acceptable. Residual statistics were tested using the RMSEA, with values below 0.08 considered acceptable. Internal consistency was assessed by calculating Cronbach α, with a score of 0.7 or higher being considered satisfactory.28 Cronbach α values were calculated for all factor models emerging from the respective EFA.
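The random split itself is straightforward and can be sketched as follows (an illustrative Python fragment; the study performed the split and subsequent EFAs in Excel/JASP, and the function name is ours):

```python
import numpy as np

def split_half(data, seed=0):
    """Randomly partition respondents into two equal halves for
    split-half cross-validation of a factor structure."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    half = len(data) // 2
    return data[idx[:half]], data[idx[half:]]

# Each half would then be submitted to the same EFA procedure.
```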

Exploratory Correlation Analysis

The correlation table (generated during assumptions testing for additivity) was used to explore associations between questions that did not fit the factor structure and those that did.
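Such an exploration amounts to ranking questions by their Pearson correlation with a question outside the factor structure. A minimal Python sketch (illustrative only; the labels and function name are hypothetical):

```python
import numpy as np

def strongest_associations(data, labels, target, top=3):
    """Rank the other questions by Pearson correlation with a target question
    (e.g., Q7-Q9 against the items of the 2-factor model)."""
    r = np.corrcoef(data, rowvar=False)
    i = labels.index(target)
    pairs = [(labels[j], float(r[i, j])) for j in range(len(labels)) if j != i]
    return sorted(pairs, key=lambda p: -p[1])[:top]
```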

[Table 1]

[Figure 1]

[Table 2]

RESULTS

Assumptions Testing and Descriptive Statistics

The dataset was composed of 1152 complete survey responses collected between March 2019 and September 2019. Of the total dataset, 2.95% (34 responses) were deemed to be outliers (ie, z-score ≥ 2.99) and were therefore excluded from further analysis. The ensuing dataset was tested and found to meet the assumptions relating to additivity, normality, homogeneity, and homoscedasticity. Basic descriptive statistics were run on the ensuing dataset (N = 1118), as shown in Table 2. The descriptive statistics indicated a high level of clinic performance in relation to patient experiences across all 9 survey questions.

Exploratory Factor Analysis

EFA on the entire dataset (N = 1118) and all 9 questions was conducted using parallel analysis and scree plot (Promax oblique rotation, JASP Version 0.11.1) for factor extraction. Factor loadings equal to or greater than 0.4 were considered significant; no factor cross-loadings were found. After removal of single-item factors and items with loadings below 0.4, questions 7 to 9 were eliminated, resulting in a 2-factor model comprising 6 questions (Table 3). Factor 2 (questions 1 and 2) conceptually aligns with the “Provision of Care” performance domain’s problem recognition function, whereas factor 1 (questions 3-6) conceptually aligns with the “Receipt of Care” performance domain.

Goodness of fit was found to be satisfactory with an NNFI value of 0.993 and residual statistics with an RMSEA value of 0.033. Reliability analysis yielded a Cronbach α of 0.810 for factor 1 and a Cronbach α of 0.760 for factor 2, indicating satisfactory reliability.

Split-Half Cross-validation of Two-Factor Structure

A split-half cross-validation of the 6-question, 2-factor structure was conducted by randomly splitting the dataset in halves (set 1 and set 2, each consisting of 576 survey responses) and running EFA on each half. Parallel analysis and scree plot (Promax oblique rotation JASP Version 0.11.1) were used for factor extraction. For both sets, single-item factors and factors that had loadings less than 0.4 were removed, resulting in 2-factor models composed of 6 questions (Tables 4 and 5).

For set 1, goodness of fit was found to be satisfactory with an NNFI value of 0.965 and residual statistics with an RMSEA value of 0.075. Reliability analysis yielded a Cronbach α of 0.823 for factor 1 and a Cronbach α of 0.784 for factor 2, indicating satisfactory reliability.

For set 2, goodness of fit was found to be satisfactory with an NNFI value of 0.967 and residual statistics with an RMSEA value of 0.065. Reliability analysis yielded a Cronbach α of 0.795 for factor 1 and a Cronbach α of 0.723 for factor 2, indicating satisfactory reliability.

Exploratory Correlation Analysis

The correlation matrix (Table 6) indicates that Q7 was most strongly associated with Q4 (r = 0.737), Q5 (r = 0.672), Q2 (r = 0.665), and Q1 (r = 0.663). Q8 was most strongly associated with Q2 (r = 0.694), Q7 (r = 0.629), Q4 (r = 0.613), and Q1 (r = 0.613). Q9 was most strongly associated with Q8 (r = 0.665) and Q3 (r = 0.609).

DISCUSSION

This study aimed to assess the psychometric properties of the POPE-PC survey. Results from the initial EFA indicate that the POPE-PC survey has a 2-factor structure, with good internal consistency. A split-half validation yielded the same findings, which reaffirms that the 2-factor model has good psychometric properties.

The 2-factor structure aligns with the survey tool’s underpinning conceptual framework, which reaffirms the value of the expert input and their perceptions pertaining to the tool’s face validity.21,22 Factor 2 (Q1 and Q2) operationalizes the “Provision of Care” performance domain’s problem recognition function, whereas factor 1 (Q3-Q6) operationalizes the “Receipt of Care” performance domain’s functions related to acceptance and satisfaction, understanding, and concordance.21 The study’s findings therefore suggest that the survey is conceptually sound, which has important implications regarding the instrument’s validity for the measurement and assessment of patient experiences related to delivery and receipt of person-oriented primary care.

As expected from the conceptual framework, questions 7 to 9 did not fit the factor structure. The correlation matrix (Table 6) highlighted hypothetically plausible and interesting associations between the 3 questions and the questions that fit the 2-factor model, providing interesting insights for future potential assessments of concurrent validity. Performance on Q7 (a temporal question relating to patients being given enough time to discuss problems or concerns) is strongly associated with patients being given the opportunity to ask questions, patients being given the opportunity to discuss decisions regarding their care, whether staff listened to what patients had to say, and whether patients were given the opportunity to describe their problems or concerns. Performance on Q8 (respect) is most strongly associated with whether staff listened to what patients had to say, whether patients were given enough time to discuss their problems or concerns, whether patients were given the opportunity to ask questions, and whether patients were given a chance to describe their problems or concerns. Performance on Q9 (NPS question) is most strongly associated with whether patients were treated with respect and whether patients perceived that they received useful help for their problems or concerns.

The dynamics of the aforementioned associations warrant further exploration by health services researchers to better understand the complex mechanisms by which different primary care functions affect patient perceptions relating to respect and satisfaction. Specifically, the strength of the relationships between respect (Q8) and factor 2’s questions (Q1 and Q2) relating to problem recognition is particularly interesting in light of research findings indicating that respect is strongly associated with recognition of problems and attention to needs.29 Such findings, along with this study’s correlation analysis results, indicate that Q8 (respect) can potentially be used as a proxy to assess concurrent validity for factor 2’s questions.

It is also important to highlight the survey’s parsimonious design philosophy, which likely enabled a high response rate that peaked at 42% in one of the study’s months. The survey was created by a team of Health Authority Medical Directors, senior administrators, and researchers working with public community health centers and was therefore designed to enable responses by patients presenting for care who exhibit high levels of distress, relatively low literacy rates, weak motivational profiles, high burdens of illness and disease, and complex biopsychosocial profiles.

The survey may be potentially well suited for application in various primary care settings that strive to provide problem-oriented care, regardless of the patient population profile. This is reaffirmed by the context and setting of the study’s clinical site, an urgent primary care center with a multidisciplinary team providing care for patients from diverse demographic and socioeconomic backgrounds, exhibiting various levels of urgency (of needs) and biopsychosocial complexity. It would be of value to test and to continue to validate the POPE-PC survey at different health care settings (ie, different organization contexts serving various patient population profiles) that strive to provide problem-oriented primary care, including virtual care or telehealth interactions.

Accurate measurement of the core functions of primary care enables meaningful performance assessment and the development of evaluation and quality improvement initiatives.3,30 At the study’s clinical site, monthly clinic- and practitioner-level patient experience reports are produced, the results of which are actively used by the clinic’s Medical Director and management team to monitor performance and enable multidisciplinary team-led quality improvement activities. These performance assessments and quality improvement activities are executed with a spirit of enabling learning, continuous improvement, and professional development, rather than being punitive. The clinic’s leadership, clinical, and administrative teams have reported the patient survey to be of high value, particularly in relation to quality improvement and enabling staff motivation. The leadership team plans to design trending studies that enable assessment of the impact of quality improvement activities on patient experience over time. The clinic also sends monthly patient experience performance reports to the Vancouver Coastal Health Authority and the British Columbia Ministry of Health, for accountability purposes.

By enabling the measurement of patient experiences of problem-oriented primary care, the POPE-PC survey aims to make a major contribution to health services research, with a focus on the field of primary care. Problem recognition, despite being a critical function and the cornerstone of person-oriented primary care, remains largely overlooked by health services research.4,21,24,25 Survey instruments that incorporate elements of measuring performance relating to the function of problem recognition include The Johns Hopkins Primary Care Assessment Tool, the Relational Communication Scale, and the CollaboRATE survey.6,11,31-33 The POPE-PC survey, by explicitly focusing on problem orientation in primary care, can potentially help assess performance of organizations participating in the IHI’s and National Patient Safety Foundation’s Ask Me 3 educational program.26

Basic standards for problem recognition and problem orientation of care, originally developed in the 1960s by Lawrence L Weed, MD,34 father of the problem-oriented medical record, remain largely absent in contemporary primary care systems. Problem lists in contemporary electronic medical records often contain diagnostic hypotheses rather than verifiable statements of the presenting problem or chief concern.35 Ensuing care plans are therefore potentially formulated for the wrong diagnoses.36 The subsequent inappropriate care and outcomes are not accurately captured, since the frame of reference (ie, the patient’s actual problem) is effectively distorted. This compromises the meaning of quality of care evaluations by health services researchers. Without problem-oriented standards, primary care activities are often untethered from the realities of patients and are therefore of uncertain value.37,38 Starfield’s21 research therefore strongly affirmed that the performance of primary care is largely contingent on systems enabling problem recognition.

Starfield21 underscored this point in her own words:

Trained both in clinical medicine (Pediatrics) and Public Health, I have devoted my entire professional career to improving the effectiveness and equity of health services. Early in my career, I developed a conceptual scheme—published in the New England Journal of Medicine—that captured all the health systems [sic] characteristics related to providing health services. One key feature was identified as “health needs and problem recognition” by health professionals, a feature of care neglected by all approaches to measuring and assuring quality of care. My subsequent work expanded on the notion that recognition of needs is a salient contributor to improvements in individual and population health.

It is therefore hypothetically plausible that the relatively high level of performance of City Centre Urgent Primary Care Centre on the POPE-PC survey can be attributed to the fact that its multidisciplinary team systematically elicits and documents patients’ presenting problems and concerns, which are coded in the electronic medical record using Presenting Complaint codes derived from the Canadian Emergency Department Information System Presenting Complaint List.39 Most community-based primary care clinics in Canada, which solely use International Classification of Diseases, Ninth Revision codes in their electronic medical records and do not systematically elicit and code presenting problems or chief concerns, would likely manifest lower levels of performance on the POPE-PC survey.35,40,41 Use of the POPE-PC tool across primary care settings could reaffirm the importance of problem recognition and promote positive changes to classification and care standards, which ultimately contribute toward achievement of the Quadruple Aim.1,4

It is important to highlight that a formal assessment of concurrent validity—although essential as part of the survey’s ongoing validation process—was not performed within the scope of this study. Concurrent validity will be the subject of future studies that will involve linkage of the POPE-PC survey dataset to other datasets that are deemed to contain suitable variables that enable analyses of concurrent validity. In the absence of a formal assessment of concurrent validity, the study’s exploratory correlation analysis (Table 6) provided interesting insights requiring further research, particularly in relation to potentially leveraging questions outside the POPE-PC’s 2-factor structure (ie, Q7-Q9) for assessment of concurrent validity.

[Tables 3, 4, and 5]

[Table 6]

CONCLUSION

The POPE-PC survey was designed to enable the conceptually sound measurement of patient experiences of problem-oriented primary care. This study indicates that the instrument has satisfactory psychometric properties and is unique in that it rigorously enables the assessment of initiatives promoting problem orientation in primary care, such as the IHI’s and National Patient Safety Foundation’s Ask Me 3 activities.26 The POPE-PC survey’s problem orientation reaffirms Starfield’s21 assertion that “recognition of needs is a salient contributor to improvements in individual and population health” and thereby aims to make a positive contribution to operationalization of the IHI’s Quadruple Aim.

Disclosure Statement

The author has no conflicts of interest to disclose.

Acknowledgments

The author would like to acknowledge that the Problem-Oriented Patient Experience-Primary Care (POPE-PC) Survey was developed in collaboration with the following team: Dean Brown, MD; Andrew Day, MSc; Rachael McKendry, MA; Michael Norbury, MD; and Nardia Strydom, MD, ChB. The author notes that the content of the published article does not necessarily reflect their respective individual views or perspectives.

Kathleen Louden, ELS, of Louden Health Communications performed a primary copy edit.

Author Affiliations

1 Seymour Health Centre, Inc. Vancouver, British Columbia, Canada

2 Department of Public Health, Amsterdam University Medical Center, Amsterdam, The Netherlands

Corresponding Author

Ali Rafik Shukor, M Biotech, MSc

How to Cite this Article

Shukor AR. Psychometric properties of the Problem-Oriented Patient Experience—Primary Care (POPE-PC) survey. Perm J 2020;24:19.191. DOI: https://doi.org/10.7812/TPP/19.191

References
1. Bodenheimer T, Sinsky C. From triple to quadruple aim: Care of the patient requires care of the provider. Ann Fam Med 2014 Nov-Dec;12(6):573-6. DOI: https://doi.org/10.1370/afm.1713 PMID:25384822
2. Gelmon S, Sandberg B, Merrthew N, Bally R. Refining reporting mechanisms in Oregon’s patient-centered primary care home program to improve performance. Perm J 2016 Sum;20(3):15-115. DOI: https://doi.org/10.7812/TPP/15-115 PMID:27213488
3. Shukor AR, Edelman S, Brown D, Rivard C. Developing community-based primary health care for complex and vulnerable populations in the Vancouver Coastal Health Region: HealthConnection Clinic. Perm J 2018;22:18-010. DOI: https://doi.org/10.7812/TPP/18-010 PMID:30227907
4. Starfield B. Primary care: Balancing health needs, services, and technology. New York, NY: Oxford University Press; 1998
5. Malouin RA, Starfield B, Sepulveda MJ. Evaluating the tools used to assess the medical home. Manag Care 2009 Jun;18(6):44-8. PMID:19569570
6. Cassady CE, Starfield B, Hurtado MP, Berk RA, Nanda JP, Friedenberg LA. Measuring consumer experiences with primary care. Pediatrics 2000 Apr;105(4 Pt 2):998-1003. PMID:10742362
7. Haggerty JL, Pineault R, Beaulieu M-D, et al. Practice features associated with patient-reported accessibility, continuity, and coordination of primary health care. Ann Fam Med 2008 Mar-Apr;6(2):116-23. DOI: https://doi.org/10.1370/afm.802 PMID:18332403
8. Wong ST, Haggerty JL. Measuring patient experiences in primary health care: A review and classification of items and scales used in publicly-available questionnaires [Internet]. Vancouver, BC: University of British Columbia Centre for Health Services and Policy Research; 2013 May [cited 2019 Jan 7]. Available from: https://open.library.ubc.ca/cIRcle/collections/facultyresearchandpublications/52383/items/1.0048528
9. Mead N, Bower P, Roland M. The General Practice Assessment Questionnaire (GPAQ): Development and psychometric characteristics. BMC Fam Pract 2008 Feb 20;9(1):13. DOI: https://doi.org/10.1186/1471-2296-9-13 PMID:18289385
10. Gallagher TJ, Hartung PJ, Gregory SW. Assessment of a measure of relational communication for doctor-patient interactions. Patient Educ Couns 2001 Dec 1;45(3):211-8. DOI: https://doi.org/10.1016/s0738-3991(01)00126-4 PMID:11722857
11. Forcino RC, Barr PJ, O’Malley AJ, et al. Using CollaboRATE, a brief patient-reported measure of shared decision making: Results from three clinical settings in the United States. Health Expect 2018 Feb;21(1):82-9. DOI: https://doi.org/10.1111/hex.12588 PMID:28678426
12. Safran DG, Kosinski M, Tarlov AR, et al. The Primary Care Assessment Survey: Tests of data quality and measurement performance. Med Care 1998 May;36(5):728-39. DOI: https://doi.org/10.1097/00005650-199805000-00012 PMID:9596063
13. Wensing M, Mainz J, Grol R. A standardised instrument for patient evaluations of general practice care in Europe. Eur J Gen Pract 2000 Jan 1;6(3):82-7. DOI: https://doi.org/10.3109/13814780009069953
14. Flocke SA. Measuring attributes of primary care: Development of a new instrument. J Fam Pract 1997 Jul;45(1):64-74. PMID:9228916
15. Stewart AL, Nápoles-Springer AM, Gregorich SE, Santoyo-Olsson J. Interpersonal processes of care survey: Patient-reported measures for diverse groups. Health Serv Res 2007 Jun;42(3 Pt 1):1235-56. DOI: https://doi.org/10.1111/j.1475-6773.2006.00637.x PMID:17489912
16. Saanich Peninsula Patient Experience Survey. Community Health and Care Evaluation Program. Victoria, British Columbia, Canada: Vancouver Island Health Authority.
17. Borowsky SJ, Nelson DB, Fortney JC, Hedeen AN, Bradley JL, Chapko MK. VA community-based outpatient clinics: Performance measures based on patient perceptions of care. Med Care 2002 Jul;40(7):578-86. DOI: https://doi.org/10.1097/00005650-200207000-00004 PMID:12142773
18. Hays RD, Berman LJ, Kanter MH, et al. Evaluating the psychometric properties of the CAHPS Patient-centered Medical Home survey. Clin Ther 2014 May;36(5):689-696.e1. DOI: https://doi.org/10.1016/j.clinthera.2014.04.004 PMID:24811752
19. Care Coordination Quality Measure for Primary Care (CCQM-PC) [Internet]. Rockville, MD: Agency for Healthcare Research and Quality; 2016 Jul [cited 2019 Dec 11]. Available from: www.ahrq.gov/ncepcr/care/coordination/quality/index.html
20. Thayaparan AJ, Mahdi E. The Patient Satisfaction Questionnaire Short Form (PSQ-18) as an adaptable, reliable, and validated tool for use in various settings. Med Educ Online 2013 Jul 23;18(1):21747. DOI: https://doi.org/10.3402/meo.v18i0.21747 PMID:23883565
21. Starfield B. Primary care and equity in health: The importance to effectiveness and equity of responsiveness to peoples’ needs. Humanity Soc 2009 Feb 1;33(1-2):56-73. DOI: https://doi.org/10.1177/016059760903300105
22. Starfield B. Health services research: A working model. N Engl J Med 1973 Jul 19;289(3):132-6. DOI: https://doi.org/10.1056/NEJM197307192890305 PMID:4711342
    23.    Krol MW, de Boer D, Delnoij DM, Rademakers JJ. The Net Promoter Score—An asset to patient experience surveys? Health Expect 2015 Dec;18(6):3099-109. DOI: https://doi.org/10.1111/hex.12297 PMID:25345554
    24.    Starfield B. Is patient-centered care the same as person-focused care? Perm J 2011 Spring;15(2):63-9. DOI: https://doi.org/10.7812/TPP/10-148 PMID:21841928
    25.    Shukor AR. An alternative paradigm for evidence-based medicine: Revisiting Lawrence Weed, MD’s system approach. Perm J 2017;21:16-147. DOI: https://doi.org/10.7812/TPP/16-147 PMID:28488985
    26.    Ask Me 3: Good questions for your good health [Internet]. Boston, MA: Institute for Healthcare Improvement [cited 2019 Oct 21]. Available from: www.ihi.org/resources/Pages/Tools/Ask-Me-3-Good-Questions-for-Your-Good-Health.aspx
    27.    Quintana DS, Williams DR. Bayesian alternatives for common null-hypothesis significance tests in psychiatry: A non-technical guide using JASP. BMC Psychiatry 2018 Jun 7;18(1):178. DOI: https://doi.org/10.1186/s12888-018-1761-4 PMID:29879931
    28.    Gambashidze N, Hammer A, Brösterhaus M, Manser T; WorkSafeMed Consortium. Evaluation of psychometric properties of the German Hospital Survey on Patient Safety Culture and its potential for cross-cultural comparisons: A cross-sectional study. BMJ Open 2017 Nov 9;7(11):e018366. DOI: https://doi.org/10.1136/bmjopen-2017-018366 PMID:29127231
    29.    Dickert NW, Kass NE. Understanding respect: Learning from patients. J Med Ethics 2009 Jul;35(7):419-23. DOI: https://doi.org/10.1136/jme.2008.027235 PMID:19567690
    30.    Bodenheimer T, Ghorob A, Willard-Grace R, Grumbach K. The 10 building blocks of high-performing primary care. Ann Fam Med 2014 Mar-Apr;12(2):166-71. DOI: https://doi.org/10.1370/afm.1616 PMID:24615313
    31.    Tai-Seale M, Foo PK, Stults CD. Patients with mental health needs are engaged in asking questions, but physicians’ responses vary. Health Aff (Millwood) 2013 Feb;32(2):259-67. DOI: https://doi.org/10.1377/hlthaff.2012.0962 PMID:23381518
    32.    Tai-Seale M, Elwyn G, Wilson CJ, et al. Enhancing shared decision making through carefully designed interventions that target patient and provider behavior. Health Aff (Millwood) 2016 Apr;35(4):605-12. DOI: https://doi.org/10.1377/hlthaff.2015.1398 PMID:27044959
    33.    Barr PJ, Forcino RC, Thompson R, et al. Evaluating CollaboRATE in a clinical setting: Analysis of mode effects on scores, response rates and costs of data collection. BMJ Open 2017 Mar 24;7(3):e014681. DOI: https://doi.org/10.1136/bmjopen-2016-014681 PMID:28341691
    34.    Weed LL. Medical records that guide and teach. N Engl J Med 1968 Mar 21;278(12):652-7. DOI: https://doi.org/10.1056/NEJM196803212781204 PMID:5637250
    35.    Hofmans-Okkes IM, Lamberts H. The International Classification of Primary Care (ICPC): New applications in research and computer-based patient records in family practice. Fam Pract 1996 Jun;13(3):294-302. DOI: https://doi.org/10.1093/fampra/13.3.294 PMID:8671139
    36.    Weed LL, Weed L. Diagnosing diagnostic failure. Diagnosis (Berl) 2014 Jan 1;1(1):13-7. DOI: https://doi.org/10.1515/dx-2013-0020 PMID:29539981
    37.    Weed LL, Weed L. Medicine in denial. Scotts Valley, CA: CreateSpace Independent Publishing Platform; 2011
    38.    Weed LL, Weed L. Opening the black box of clinical judgment—an overview. Interview by Abi Berger. BMJ 1999 Nov 13;319(7220):1279. DOI: https://doi.org/10.1136/bmj.319.7220.1279 PMID:10559033
    39.    Grafstein E, Bullard MJ, Warren D, Unger B; CTAS National Working Group. Revision of the Canadian Emergency Department Information System (CEDIS) Presenting Complaint List version 1.1. CJEM 2008 Mar;10(2):151-73. DOI: https://doi.org/10.1017/S1481803500009878 PMID:18371253
    40.    Verbeke M, Schrans D, Deroose S, De Maeseneer J. The International Classification of Primary Care (ICPC-2): An essential tool in the EPR of the GP. Stud Health Technol Inform 2006;124:809-14. PMID:17108613
    41.    Soler J-K, Okkes I, Wood M, Lamberts H. The coming of age of ICPC: Celebrating the 21st birthday of the International Classification of Primary Care. Fam Pract 2008 Aug;25(4):312-7. DOI: https://doi.org/10.1093/fampra/cmn028 PMID:18562335

Keywords: patient experience, primary care, problem orientation, psychometric properties, quadruple aim, quality improvement, validation

ISSN 1552-5767 Copyright © 2020 thepermanentejournal.org.

All Rights Reserved.