Differences in Perceived Difficulty in Print and Online Patient Education Materials


Michael Farnsworth, MA

Perm J 2014 Fall;18(4):45-50

https://doi.org/10.7812/TPP/14-008

Abstract

Context: Written patient education materials frequently exceed the reading ability of the general public. Patients are often intimidated by the task of reading patient education materials, perceiving the materials' difficulty levels as prohibitive, even when they do not exceed the patients' reading abilities. It is unclear how the delivery mechanism—print or a computer screen—affects a patient's reading experience through his/her perception of its difficulty.

Objective: To determine whether first-year college students perceived online or print-based patient education materials as more difficult to read.

Design: Convenience sampling of first-year college students.

Results: Some first-year college students perceived online patient education materials to be more difficult to read than print-based ones—even when the reading level of the patient education materials was similar. Demographic information about this sample's high levels of digital literacy suggests that other populations might also perceive online patient education materials as more difficult to read than print-based equivalents. Patients' perceptions of the difficulty of patient education materials influenced their ability to effectively learn from those materials.

Conclusion: This article concludes with a call for more research into patients' perceptions of difficulty of patient education materials in print vs on a screen.

Introduction

Effective patient education is a continuing objective in health care, and patient education materials provided in both print-based and online formats play important roles in this aim. Written patient education materials (both print-based and online) frequently exceed the reading ability of the general public.1,2 Perhaps more importantly, though, patients are often intimidated by the task of reading patient education materials, perceiving patient education materials' difficulty levels as prohibitive, even in cases where the patient education materials are not written in excessively technical language and do not exceed the patients' reading abilities.3

Research projects with a focus on patients' perceptions of the readability levels of patient education materials may assist patient educators in the development of these educational materials. The purpose of this study was to explore readers' understanding of health information in print vs on a computer screen by determining whether a convenience sample of first-year college students perceived online or print-based patient education materials as more difficult to read. The central concern of this article, then, is not a matter of reading levels or penetrability of the text, but of how the delivery mechanism interferes with or enhances a person's reading experience through his/her perception of its difficulty.

To my knowledge, no published studies have compared levels of perceived difficulty between online and print-based patient education materials. Most research on patient education materials has focused on readability levels in print media2 or online media4,5 but has not yielded comparative analyses across the two formats. The measure of perceived difficulty has received comparatively little attention. An exception is the work of Leroy et al,6 who have launched promising investigations of perceived difficulty, although not through a comparison of print and online formats. The following section outlines the limitations of evaluating patient education materials with readability measures alone; those limitations suggest the promise of perceived difficulty as a complementary way to evaluate patient education materials presented in both print-based and online media.

Readability-Based Improvements to Patient Education Materials

Historically, creators of patient education materials sought to lower levels of readability, where readability was measured by the years of education necessary to comprehend a text. Levels of readability can be determined with a number of formulas, including the Simple Measure of Gobbledygook (SMOG), the Gunning Fog Index, and the Flesch-Kincaid grade-level formula, each of which is recommended by the Health Literacy Advisor (an interactive health literacy software tool from Health Literacy Innovations, Bethesda, MD). SMOG is also recommended by the US Centers for Medicare and Medicaid Services. These formulas are useful as basic guides for pairing patient education materials with appropriate audiences and for tracking attempts to improve the content of patient education materials. Understanding the limitations of readability measures, in turn, identifies where alternative approaches to improving patient education materials, such as measures of perceived difficulty, are needed.
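To make the mechanics of such formulas concrete, the following is a minimal sketch of the SMOG calculation in Python, using McLaughlin's published constants (1.0430 and 3.1291). The vowel-group syllable counter is a simplifying assumption of this sketch; production readability tools rely on pronunciation dictionaries, so scores produced here are approximations only.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels.
    Production tools use pronunciation dictionaries instead."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """McLaughlin's SMOG formula: count polysyllabic words
    (3+ syllables), normalized to a 30-sentence sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * (30 / len(sentences))) + 3.1291

sample = ("Conjunctivitis is an inflammation of the conjunctiva. "
          "It often improves without medication. "
          "See a clinician if symptoms persist.")
print(f"SMOG grade level: {smog_grade(sample):.1f}")
```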

Both print-based and online patient education materials are written at reading grade levels that exceed the reading ability of most patients. A recent study of the readability of online health literature found a mean reading grade level of 12.30 in a sample of 352 Web sites, using the SMOG, Gunning Fog, and Flesch-Kincaid readability tests.5 A similar study focused on the readability of source material for patient education materials provided by private electronic health record vendors, as well as by the National Library of Medicine.1 That study found that these vendors' patient education materials exceeded the 5th- through 6th-grade reading levels recommended by the European Commission and the Health Literacy Advisor in their codes of conduct for the readability of health information.1 The American Medical Association and the National Institutes of Health also recommend that readability levels not exceed the 6th-grade level, and the Maine Center for Disease Control and Prevention recommends that "consent forms be written at approximately the 6th-8th grade reading level, and preferably closer to the 6th grade level."7 These studies demonstrate that many patient education materials are largely inaccessible to general audiences because they are written at reading grade levels higher than those recommended.

Complicating the readability landscape, the results of the various available readability formulas often vary greatly. Wang et al2 found that readability varies by up to five reading grade levels, depending on which readability test is applied. The SMOG formula has a standard error of approximately one and one-half grade levels, whereas the Flesch-Kincaid has a standard error of up to two and one-half grade levels. Effectively, SMOG varies by up to three grade levels, or twice the standard error, whereas the Flesch-Kincaid varies by up to five grade levels. For this reason, researchers writing in the Journal of the Royal College of Physicians of Edinburgh stated, "SMOG should be the preferred measure of readability when evaluating consumer-orientated healthcare material."4 These findings demonstrate the complexities involved in applying readability formulas to patient education materials. Either formula can underestimate or overestimate the reading grade level of patient education materials, but SMOG produces the more accurate approximations.

A related issue that can lead to variation in reported levels of readability is formatting. Readability tests often fail to account for overall passage length, individual paragraph length, margin use, and other formatting issues; however, these factors may play a major role in a reader's comprehension of a document. Readability formulas are especially difficult to apply to patient education materials written in outline formats: outlines, which often rely on sentence fragments, do not clearly reflect sentence length, a primary factor in readability calculations.

Readability tests have entered many domains beyond those for which they were originally created. In these ill-suited contexts, they potentially fail to clearly represent the reading grade level or actual difficulty of health information. However, readability tests justifiably remain a popular tool for evaluating health information because they can rapidly provide gross approximations for establishing patient education materials' difficulty, as measured through an estimation of reading grade level.

Applying Perceived Difficulty Measurements to Patient Education Materials

Several conceptual frameworks have been designed to explain why patients engage in, or fail to engage in, a variety of health-related behaviors; these frameworks attempt to account for why some patients are compliant and others are not.8-10 They examine possible impediments to the successful completion of health-related behaviors. One such barrier is "perceived difficulty," which impedes patients from engaging in health-related behaviors through the belief that the difficulty of doing so is prohibitive. Leroy et al6 state: "In the context of consumer education, perceived difficulty of the text is a barrier encountered by many consumers who are expected to read text and educate themselves." Both the perceived and the actual difficulty of patient education materials, then, might act as barriers to patient education by impeding patients from obtaining knowledge about their medical conditions.

Levels of perceived difficulty can be altered through manipulations of surface-level grammar and term familiarity.6 Surface-level grammar manipulations include changes to sentence structure, noun phrase complexity, and function word density. Sentence structure manipulations include constructing a sentence in either the active or the passive voice. Overall sentence structure can also be changed by writing the sentence with an extraposed subject rather than a sentential subject. Complex sentences often have sentential subjects, which embed the elements of whole sentences as subject terms; for example, a sentence with a sentential subject might read, "the symptoms that were observed during intake were cough and fever." Extraposed subjects, on the other hand, use "placeholders," such as "it" or "they," in place of more complex terms or descriptions. For instance, consider a sentence with a complex subject: "ACE inhibitors used to lower blood pressure can cause fatigue." The subject of the preceding sentence could be extraposed to read, "They can cause fatigue." This latter form may lower levels of perceived difficulty.6

Function words, such as in, why, be, or the, also affect sentence structure and, in turn, perceived difficulty. Noun phrase complexity increases as the number of function words decreases. Finally, intuitive ease of reading decreases as the number of function words in a sentence decreases. Consequently, a liberal use of function words may lower levels of perceived difficulty. Each of the three methods described requires time commitments and writer expertise, and thus may prove prohibitive for many attempts to improve patient education materials.
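To make the function-word idea concrete, the brief sketch below computes function-word density for two phrasings of the same clinical observation. The function-word list here is a small illustrative subset, not a complete inventory, so the exact densities are indicative only.

```python
import re

# Small illustrative subset of English function words (not exhaustive).
FUNCTION_WORDS = {"in", "why", "be", "the", "a", "an", "and", "or",
                  "of", "to", "that", "it", "is", "are", "was", "were",
                  "during", "for"}

def function_word_density(sentence: str) -> float:
    """Fraction of tokens that are function words; lower density
    generally means denser noun phrases and harder reading."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    hits = sum(1 for t in tokens if t in FUNCTION_WORDS)
    return hits / len(tokens) if tokens else 0.0

dense = "Observed intake symptoms included cough and fever."
looser = "The symptoms that were observed during intake were a cough and a fever."
print(function_word_density(dense))   # lower density: denser phrasing
print(function_word_density(looser))  # higher density: intuitively easier
```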

Term familiarity is defined by the frequency of a term in the Google Web corpus, a database of more than a trillion words. The measure of term familiarity helps explain why words with fewer syllables (ie, more "readable" words) are sometimes more difficult to comprehend.3 For example, the corpus helps identify why certain shorter words, such as apnea, are actually more difficult for most readers than longer words like obesity. Term familiarity presents a hopeful direction for improvement of patient education materials because, similar to readability, term familiarity can be assigned by computational means with the use of algorithms.
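The Google Web corpus itself is not freely queryable, but the comparison can be sketched with the open-source wordfreq Python package, used here as a stand-in corpus; this substitution, and the helper function below, are illustrative assumptions rather than the measure used in the cited work.

```python
# pip install wordfreq -- an open-source frequency list, used here as a
# stand-in for the Google Web corpus described above.
from wordfreq import word_frequency

def more_familiar(term_a: str, term_b: str) -> str:
    """Return whichever term occurs more frequently in the corpus."""
    if word_frequency(term_a, "en") >= word_frequency(term_b, "en"):
        return term_a
    return term_b

# "apnea" has fewer syllables than "obesity" (so readability formulas
# score it as easier) but is the rarer, less familiar term.
print(word_frequency("apnea", "en"))
print(word_frequency("obesity", "en"))
print(more_familiar("apnea", "obesity"))  # expected: "obesity"
```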

The current study adds to this area of inquiry by evaluating whether the perceived difficulty of patient education materials is also a function of the presentation medium (eg, online or print). As a starting point for future research of greater scope, the findings that follow suggest that patients may perceive online patient education materials to be more difficult than commensurate print-based patient education materials.

Methods

The purpose of this research project was to determine whether a convenience sample of first-year college students perceived online or print-based patient education materials as more difficult.a The study additionally sought to measure the students' perceived difficulty level of each patient education material, using a Likert-type scale.

The research data were collected at James Madison University (JMU) in Harrisonburg, VA, during November 2012. This study was approved by the university's institutional review board (IRB Protocol 13-0141, approved on November 8, 2012). The sampling method was convenience: participants came from 4 course sections of General Writing, Rhetoric, and Technical Communication (GWRTC) 103, Critical Reading and Writing. Forty-one students participated in the research, which took place in JMU computer laboratories. Each laboratory had 21 computers running the Windows 7 operating system (Microsoft, Redmond, WA) available for student use. All students participated voluntarily; none refused to participate.

Most JMU students take GWRTC 103 during their first year of college, meaning that they are probably members of the Class of 2016. The Class of 2016 at JMU comprises 4632 enrolled students, most of whom are members of the Millennial Generation, also referred to as Generation Y. Barring a specific petition for exemption, all students entering JMU are required to take GWRTC 103, which means that each group of students included a mix of academic majors from across the university. Thus, this sample should be generally representative of the university's first-year class. Survey data from the JMU Office of Institutional Research show that 83% of the Class of 2016 was 21 or younger at the time of the study.11 These students were therefore born roughly 10 years after the earliest members of the "digital native" generation, as stipulated by Prensky,9 placing them well within that classification. Additionally, 87% of the Class of 2016 graduated in the top third of their high school class, and 65% came from a background with an estimated family income of $100,000 or more annually.11

Each student received patient education materials about 2 of 4 possible topics that are familiar in student health contexts. Data were collected about 81 pairs of patient education materials. The topics included the following: conjunctivitis ("pink eye"); mononucleosis ("mono"); self-care for cuts, scrapes, and burns; and back exercises. Topics were paired in all possible combinations, resulting in 6 survey forms, A through F. The survey forms were as follows:

   A.  "Pink Eye" and "Mono"

   B.  "Pink Eye" and "Cuts, Scrapes, and Burns"

   C.  "Pink Eye" and "Back Exercises"

   D.  "Mono" and "Cuts, Scrapes, and Burns"

   E.  "Mono" and "Back Exercises"

   F.  "Cuts, Scrapes, and Burns" and "Back Exercises."

Survey forms were evenly distributed across participants. Each topic was presented in both online and print-based formats, so participants received four total readings: two print readings and two online readings. For example, a student in Survey Group C received a print patient education material on pink eye, an online patient education material on pink eye, a print patient education material on back exercises, and an online patient education material on back exercises. All patient education materials were drawn from actual practice and were available either at a health center or on a health education Web site.

The online readings were selected from popular search results on Google.com; each selection appeared on the first page of Google search results. These patient education materials were available at Web pages that the students accessed directly. The print-based readings were physical copies provided by the JMU Student Health Center (see Sidebar: Patient Education Materials). The SMOG test was used to match the patient education materials in each set (eg, the online and print back exercises patient education materials) to approximately equivalent reading grade levels; materials in each set varied by no more than approximately two reading grade levels. For examples of the text, see "So, You Have Mono: Taking the Next Step"12 from the American College Health Association and "Mononucleosis"13 from WebMD.

The online text from WebMD on mononucleosis was 2.3 grade levels lower than the printed brochure according to SMOG and 1.4 grades lower according to the Flesch-Kincaid measurement. On the basis of the expected standard error for these readability measures (the SMOG formula has a standard error of approximately 1.5 grade levels, and Flesch-Kincaid has a standard error of up to 2.5 grade levels), this sort of variation means that the texts may actually be at almost identical reading grade levels or may vary by up to approximately 3.8 grade levels according to SMOG and approximately 3.9 grade levels according to Flesch-Kincaid. The WebMD example scored 4.7 for SMOG and 5.4 for Flesch-Kincaid, whereas the American College Health Association brochure scored 7.0 for SMOG and 6.8 for Flesch-Kincaid.
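The uncertainty arithmetic above can be reproduced directly. The following minimal sketch combines each measured difference with the corresponding formula's standard error to bound the plausible grade-level gap between the two mono texts:

```python
# Bounding the plausible grade-level gap between the two mono texts:
# measured difference plus or minus the formula's standard error.
def gap_bounds(measured_diff: float, std_error: float) -> tuple:
    low = max(0.0, measured_diff - std_error)  # gaps cannot be negative
    high = measured_diff + std_error
    return (low, high)

smog_diff, smog_se = 7.0 - 4.7, 1.5   # brochure minus WebMD text, SMOG
fk_diff, fk_se = 6.8 - 5.4, 2.5       # same pair, Flesch-Kincaid

print(gap_bounds(smog_diff, smog_se))  # approximately (0.8, 3.8)
print(gap_bounds(fk_diff, fk_se))      # approximately (0.0, 3.9)
```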

The survey was available to the participants at the same time they viewed the patient education materials, so that they could refer back to the readings when assigning levels of difficulty. All surveys were collected in Qualtrics Research Suite survey software (Qualtrics, Salt Lake City, UT). The survey questions asked the students to provide two kinds of difficulty rankings of the patient education materials. The first question asked the participant to decide whether the online or the print-based education material on the same subject matter (eg, "pink eye") was more difficult. This question requested an ordinal ranking from the student, for example: "Which was easier to read: the online material on conjunctivitis ('pink eye') or the paper material on conjunctivitis?" Analogous questions covered the other three subject matters in the respective patient education materials. The results for each subject matter (eg, mono, pink eye, back exercises) were combined to find an overall ranking for print patient education materials and an overall ranking for online patient education materials. The generalized, two-tailed hypothesis stated the following: the format (online or print) will produce a statistically significant difference in the resulting rankings.

The second survey question asked the participant to rank the difficulty of each type of patient education material, online and print-based, for both subject matters. These cardinal difficulty rankings were recorded on a seven-value Likert scale from "very difficult" to "very easy." In this case, the generalized, two-tailed hypothesis stated: students will report significantly different rankings for online vs print-based patient education materials. The statistical tests were computed in the statistics program SPSS version 21.0 (IBM SPSS, Armonk, NY).

Results

The first hypothesis was analyzed with a χ2 test, and the second hypothesis was analyzed with a t test. The first test did not reflect a statistically significant difference, whereas the second did.

In the test of Hypothesis 1, participants tended to rank the print-based patient education materials as less difficult than the online patient education materials. Across 80 difficulty rankings, participants ranked print-based materials as less difficult in 43 cases and more difficult in 37 cases, a difference that was not statistically significant (p = 0.45; Table 1).
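For readers who wish to replicate this test, the sketch below runs a plain chi-square goodness-of-fit test on the reported 43-vs-37 split against an even null split. This reconstruction is an assumption: the article does not record the exact test options, and this plain version yields p ≈ 0.50 rather than the reported 0.45, so the original analysis evidently differed in some detail.

```python
from scipy.stats import chisquare

# Observed rankings: print less difficult in 43 of 80 comparisons,
# more difficult in 37. Null hypothesis: an even 40/40 split.
observed = [43, 37]
chi2, p = chisquare(observed)

# chi2 = 0.45, p ~= 0.50 for this plain goodness-of-fit version; the
# published p of 0.45 suggests slightly different test options.
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
```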

In the test of the second hypothesis, participants reported an average ranking of 6.03, or approximately "easy," for the print patient education materials, whereas they reported an average ranking of 5.48, between "somewhat easy" and "easy," for the online patient education materials; this difference was statistically significant (p = 0.000015; Tables 2a and b). On the Likert scale, "very easy" translated to a value of 7, "easy" to a value of 6, and so on.
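A parallel sketch for this second test follows. The article does not specify the t-test variant, so a paired comparison of each participant-topic's print and online ratings is assumed here, and the short rating arrays are placeholders for the actual survey responses, not real data:

```python
from scipy.stats import ttest_rel

# Placeholder arrays standing in for the real survey responses: one
# Likert rating (1 = very difficult ... 7 = very easy) per
# participant-topic, for the print and online versions respectively.
print_ratings = [6, 7, 6, 5, 7, 6, 6]    # hypothetical values only
online_ratings = [5, 6, 5, 5, 6, 5, 6]   # hypothetical values only

t_stat, p_value = ttest_rel(print_ratings, online_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```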


Discussion

This study is possibly the first published research to compare levels of perceived difficulty between online and print-based patient education materials. The findings indicate that first-year students at JMU perceived print-based patient education materials as less difficult than online patient education materials. This result is all the more striking because the online materials were written at lower reading grade levels, as demonstrated by the SMOG and Flesch-Kincaid measurements described earlier.

Precisely why online patient education materials might be perceived as more difficult is beyond the scope of the current project. However, hypotheses include distractions in online environments (eg, advertisements or other applications), the cognitive difficulties associated with reading on a backlit screen, and the processes associated with searching for and opening Web pages.

Growing consensus suggests a positive correlation between digital literacy and a number of demographic and psychosocial factors, including being born in the early 1980s or later, having at least middle-class socioeconomic status, and having high levels of general literacy.9,10,14 As discussed in the "Methods" section, the students in this study were born well after the early 1980s, had at least middle-class socioeconomic status (indicated by household income), and had high general levels of literacy (indicated by their class standing in high school). These characteristics suggest that the students likely had higher-than-average levels of digital literacy.

It is therefore reasonable to hypothesize that populations with demonstrably lower levels of digital literacy may also perceive online patient education materials to be more difficult than print-based patient education materials. This reasoning motivates further inquiry into differences in perceived difficulty between print and online patient education materials among other populations, perhaps while tentatively maintaining the hypothesis that most user groups will perceive online patient education materials to be more difficult than print-based ones.

If future studies confirm that most populations perceive online patient education materials as more difficult, health educators may wish to direct patients toward print-based materials before they consult online ones, and they might recommend approaching online patient education materials with caution despite their growing availability.

The current study did have some limitations. It dealt with a limited population: first-year students at JMU. The sample size was also small. This study offers starting points and directions for future research and does not provide immediately generalizable knowledge.

However, despite the shortcomings of the convenience sample, the population's potentially high levels of digital literacy suggest that populations with lower levels of digital literacy may also perceive online patient education materials as more difficult than print equivalents. Larger-scale studies of perceived difficulty rankings of patient education materials among additional demographics or in more randomized settings will help to produce more generalizable information about the differences between print-based and online patient education materials.

The decision to group four popular student health topics together may have affected the results, in that there may be important differences between the topics. For example, the online or print format may have led to a larger divide in reported perceived difficulty concerning an individual topic than is reflected by the pooled information that was analyzed in this study. Furthermore, student health topics, such as those examined here, may not be representative of other sorts of patient education materials. Subsequent work may wish to examine a wide range of health topics individually and with relevant populations to better understand, in each case, whether patient perception of difficulty is influenced by presentation media.

Finally, it may be argued that because mononucleosis may sometimes be associated with promiscuity—a potentially charged topic—health information seekers may experience additional difficulties when learning about this topic. Conversely, a topic that does not invoke similar emotional responses, such as back exercises designed to help stave off back pain, may not include similar impediments to learning.

Conclusion

This research presents a starting point for future work on the influence of the delivery medium on the perceived difficulty of patient education materials. Larger-scale studies with more randomized samples may more conclusively demonstrate that online patient education materials are perceived as more difficult to comprehend. This study underscores the topic's importance and offers a relatively easy-to-follow protocol: other researchers might select randomized samples from relevant populations, choose patient education materials covering topics relevant to the studied population, and use Qualtrics or other survey software to compile and analyze information about the examined materials. Clinicians could also conduct similar small-scale inquiries to learn more about their patients' dispositions toward patient education materials in various media.

As health care systems move toward a preventive focus and patient-centered care, patient education may receive increased attention, and the effectiveness of delivery of patient education materials may become an increasingly pressing concern. Although knowing that the delivery medium affects patients' perceptions is important, knowledge of how the delivery medium affects patient understanding may also help patient educators better create and distribute patient education materials. In particular, investigators might attempt to understand why online patient education materials are perceived as more difficult.

Additionally, a content analysis of current online patient education materials' use of best practices in Web writing and Web design may highlight important differences between writing designed for online and print-based contexts. These and other possible factors deserve attention to better understand why online patient education materials are perceived as more difficult, should that tentative conclusion receive further confirmation.

a The author had access to this population while in pursuit of a master's degree at James Madison University in the Writing, Rhetoric, and Technical Communication Department, where he focused on medical writing, communication, and rhetoric.

Disclosure Statement

The author has no conflicts of interest to disclose.

Acknowledgment

The author wishes to thank Cathryn Molloy, PhD, for her continual support during this project.

Kathleen Louden, ELS, of Louden Health Communications provided editorial assistance.

References

   1.  Stossel LM, Segar N, Gliatto P, Fallar R, Karani R. Readability of patient education materials available at the point of care. J Gen Intern Med 2012 Sep;27(9):1165-70. DOI: https://doi.org/10.1007/s11606-012-2046-0.

   2.  Wang LW, Miller MJ, Schmitt MR, Wen FK. Assessing readability formula differences with written health information materials: application, results, and recommendations. Res Social Adm Pharm 2013 Sep-Oct;9(5):503-16. DOI: https://doi.org/10.1016/j.sapharm.2012.05.009.

   3.  Fitzsimmons PR, Michael BD, Hulley JL, Scott GO. A readability assessment of online Parkinson's disease information. J R Coll Physicians Edinb 2010 Dec;40(4):292-6. DOI: https://doi.org/10.4997/JRCPE.2010.401.

   4.  McInnes N, Haglund BJ. Readability of online health information: implications for health literacy. Inform Health Soc Care 2011 Dec;36(4):173-89. DOI: https://doi.org/10.3109/17538157.2010.542529.

   5.  Janz NK, Becker MH. The Health Belief Model: a decade later. Health Educ Q 1984 Spring;11(1):1-47. DOI: https://doi.org/10.1177/109019818401100101.

   6.  Leroy G, Helmreich S, Cowie JR. The influence of text characteristics on perceived and actual difficulty of health information. Int J Med Inform 2010 Jun;79(6):438-49. DOI: https://doi.org/10.1016/j.ijmedinf.2010.02.002.

   7.  Suggestions on improving the readability of a consent form [Internet]. Augusta, ME: Maine Center for Disease Control and Prevention; updated 2014 Aug 11 [cited 2014 Aug 11]. Available from: www.maine.gov/dhhs/mecdc/irb/irb08.htm.

   8.  Leu DJ. The new literacies of online reading comprehension: expanding the literacy and learning curriculum. Journal of Adolescent and Adult Literacy 2011 Sep;55(1):5-14. DOI: https://doi.org/10.1598/JAAL.55.1.1.

   9.  Prensky M. Digital natives, digital immigrants. On the Horizon 2001 Oct;9(5):1-6.

10.  Selber SA. Multiliteracies for a digital age. Carbondale, IL: SIU Press; 2004.

11.  James Madison University, Student Affairs and University Planning. First-year survey [Internet]. Student Development News 2012 Oct [cited 2014 Apr 14];35(1):1. Available from: www.jmu.edu/ie/Surveys/FirstYear2012.pdf.

12.  So you have mono: taking the next step [brochure HS21]. Hanover, MD: American College Health Association; 2012.

13.  Mononucleosis (mono) [Internet]. New York, NY: WebMD; updated 2011 Jul 28 [cited 2014 Apr 16]. Available from: www.webmd.com/a-to-z-guides/infectious-mononucleosis-topic-overview.

14.  Hayles NK. How we think: digital media and contemporary technogenesis. Chicago, IL: University of Chicago Press; 2012.
