Abstract
Introduction: The Implementing Universal Lynch Syndrome Screening (IMPULSS) study explained institutional variation in universal tumor screening (UTS) with the goal of identifying ways to aid organizational decision-makers in implementing and optimizing Lynch syndrome UTS programs. Methods: After applying the Consolidated Framework for Implementation Research (CFIR 1.0) to analyze interviews with 66 stakeholders across 9 healthcare systems to develop a toolkit for implementation, we adapted the International Patient Decision Aid Standards (IPDAS) to assess the toolkit's potential to aid decision-making consistent with organizational values. We then conducted user testing with two experienced and four non-experienced implementers of UTS to improve the content and functionality of the toolkit and to assess its acceptability and appropriateness. Results: Toolkit components were organized to address findings related to the CFIR 1.0 constructs of evidence strength and quality, relative advantage, cost, engaging, planning, executing, and reflecting and evaluating. A home page was added to direct users to different sections based on whether they are deciding to implement UTS, planning for implementation, improving an existing UTS program, or considering a different approach to identify patients with Lynch syndrome. Upon initial evaluation, 31 of 64 IPDAS criteria were met by the original toolkit. All users rated the toolkit as acceptable and appropriate for assisting organizational decision-making and identified multiple areas for improvement. Numerous iterative changes were made to the toolkit, resulting in meeting 17 of the previously unmet IPDAS criteria. Conclusion: We demonstrate the rigorous development of a toolkit guided by the CFIR and show how user testing helped improve the toolkit to ensure it is acceptable, appropriate, and meets most IPDAS criteria relevant to organizational values-based decision-making.
Introduction
Lynch syndrome (LS) is the most common cause of hereditary colorectal cancer (CRC) and endometrial cancer (EC). Approximately 1 out of every 35 individuals with CRC has LS, yet it is estimated that only about 2% of individuals with LS are aware they have it [1, 2]. Individuals with LS are born with a pathogenic germline mutation in one of four mismatch repair (MMR) genes (MLH1, MSH2, MSH6, or PMS2) or, in rare cases, in EPCAM. Dysfunction of any of these genes predisposes individuals to CRC, EC, and several additional cancer types, including cancer of the ovaries, stomach, small bowel, pancreas, prostate, urothelial tract, and biliary tract [3]. Individuals with LS have up to a 61% lifetime risk of developing CRC and a 57% lifetime risk of developing EC, which are significantly higher than the general population risks of approximately 4% and 3% for CRC and EC, respectively [3‒7]. The age of cancer onset is also significantly younger in individuals with LS than in the general population [3].
Identifying individuals with LS can reduce cancer-related morbidity and mortality in numerous ways. Affected individuals can benefit from targeted therapies to better treat existing cancers. Additionally, risk-reduction strategies, including screening and prophylactic surgery, can decrease the likelihood of future cancer development or increase the likelihood of detecting cancer at an earlier, more treatable stage [8‒11]. Identification of individuals with LS also allows for cascade testing and detection of at-risk family members who can also benefit from these risk-reduction and cancer screening strategies [1].
Fortunately, numerous diagnostic advances have been made to increase the detection of individuals with LS, most notably the use of universal tumor screening (UTS). UTS consists of screening the tumors of all newly diagnosed individuals with CRC or EC using immunohistochemistry (IHC) or microsatellite instability (MSI) testing to look for evidence of high MSI or mismatch repair deficiency (dMMR), which are characteristic of LS [12]. MSI-high or dMMR tumors then undergo further screening via MLH1 promoter hypermethylation analysis (for CRC or EC) and/or somatic BRAF V600E mutation analysis (for CRC) to rule out additional individuals who are very unlikely to have LS. Individuals whose tumors show neither MLH1 promoter hypermethylation nor BRAF V600E are referred to genetic counseling for germline genetic testing to confirm or rule out an LS diagnosis [13]. This systematic, "universal" approach allows for the identification of LS cases that would be missed without this screening [1, 14‒16]. However, organizational implementation of this "universal" approach requires multiple decisions, each of which depends on local resources and context.
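The reflex-testing cascade described above can be sketched as a simple triage function. This is a minimal illustrative sketch, not part of the IMPULSS toolkit: the function name and return labels are hypothetical, and the logic deliberately simplifies clinical practice (e.g., it does not distinguish which MMR protein is lost on IHC).

```python
def uts_triage(tumor_type, screen_abnormal,
               mlh1_hypermethylated=None, braf_v600e=None):
    """Illustrative sketch of the UTS reflex cascade for CRC/EC tumors.

    screen_abnormal: True if IHC shows dMMR or MSI testing shows high MSI.
    Returns a suggested next step (labels are hypothetical, not clinical advice).
    """
    if not screen_abnormal:
        # Tumor is MMR-proficient / microsatellite stable: not suggestive of LS
        return "no further screening"
    # Reflex testing to rule out likely sporadic dMMR/MSI-high tumors:
    # MLH1 promoter hypermethylation (CRC or EC) and/or BRAF V600E (CRC only)
    if mlh1_hypermethylated or (tumor_type == "CRC" and braf_v600e):
        return "likely sporadic; no referral"
    # Neither sporadic marker present: refer for germline confirmation
    return "refer for genetic counseling and germline testing"
```

For example, an MSI-high CRC without hypermethylation or BRAF V600E would be routed to genetic counseling, whereas an MLH1-hypermethylated EC tumor would not.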
Although many institutions have implemented UTS for LS, the method is underutilized, leaving LS patients and their family members undiagnosed and without access to risk-reduction and treatment opportunities [17, 18]. Even at institutions where UTS programs exist, program structure and outcomes vary widely [14, 17, 19‒25]. This variability is likely due to the complexity of implementing UTS for LS and the need for coordination by multiple individuals across multiple clinical departments and service lines [26‒28]. Additional variability in implementation may stem from the multiple decisions that must be made to implement UTS for LS in an organization, and from the fact that these decisions depend on available resources, individual and group understanding of the evidence, willingness to implement, organizational structure, and the individuals available to complete different parts of the process.
Efforts have been made over the years to facilitate successful UTS implementation. The Lynch Syndrome Screening Network (LSSN) was formed in 2011 to promote and facilitate LS UTS implementation through the provision of resources, protocols, and data on the LSSN website (https://www.lynchscreening.net/). The Implementing Universal Lynch Syndrome Screening (IMPULSS) research study, funded by the National Cancer Institute (R01CA211723), was created to describe, compare, and explain institutional variation in UTS with the aim of developing a toolkit to aid in the successful implementation and optimization of LS UTS programs at the organizational level [27]. The IMPULSS study was guided by the Consolidated Framework for Implementation Research (CFIR) version 1.0 [29]. The CFIR combines multiple theories, models, and frameworks used to promote implementation, organized into constructs across 5 domains that may influence what works where and why. The CFIR constructs may have a positive or negative impact on implementation, making this a practical framework for understanding the complex, multilevel, real-world experience of organizations with LS UTS program implementation in the IMPULSS study. Information on LS UTS implementation was obtained through interviews with stakeholders from multiple healthcare systems at varied stages of LS UTS program implementation [27, 30]. A primary outcome of the CFIR analysis was identifying, at the organizational level, the variability and difference-makers between not implementing a UTS program, implementing a non-optimized program, and optimizing implementation [27]; these results are reported elsewhere [31].
Here, we describe the use of this information for the systematic development of a toolkit to promote organizational decision-making regarding implementation and optimization of LS UTS programs and describe our iterative process of evaluating and revising the toolkit based on subsequent usability interviews and surveys conducted with additional stakeholders.
Methods
Toolkit Content and Organization
Toolkit content was largely derived from existing information on the LSSN website created by author DC and other members of the LSSN. The website had been previously evaluated through user-testing interviews as part of an NCI-funded study on UTS (5R01CA140377, Goddard), leading to revisions in website design and content that were implemented prior to the IMPULSS study. Information on the website and additional feedback from the prior evaluation became the foundation for the IMPULSS toolkit. Toolkit content was further supplemented and revised based on IMPULSS study data from 66 organizational stakeholder interviews across 9 healthcare systems. These stakeholders included genetic counselors, pathologists, organizational leaders, and other medical practitioners associated with LS UTS at the different organizations. Data from these interviews had been coded according to CFIR 1.0 constructs and those constructs that were salient in the interviews or that were identified as key difference makers in UTS program implementation and optimization were addressed as part of the toolkit [30, 31]. The toolkit was initially created using Microsoft PowerPoint for mockup and user testing purposes, then converted into a web-based interactive tool using iSpring 10 to allow for easier navigation with a table of contents and integration within the LSSN website. Given that navigational challenges and ease of finding content to guide program implementation were key concerns in the prior evaluation of the LSSN website, the IMPULSS toolkit consolidated the informational content from the website and included a table of contents and multiple links throughout that were designed to make it easier to find and interact with content.
Evaluation of Toolkit Decision-Making
Implementing UTS requires multiple decisions (e.g., which tumors to screen; which screening test to use; which reflex test to use; who receives and discloses screening results, when, and how), each of which must be addressed by different organizational decision-makers. To evaluate the potential of the IMPULSS toolkit to guide this organizational decision-making, we adapted the International Patient Decision Aid Standards (IPDAS) Checklist (usually used for individual, patient-level decision-making) to assess and improve the toolkit [32]. The IPDAS Checklist consists of 64 criteria for evaluating the quality of patient decision aids, which, when adapted, could be relevant to organizational decision-making. The IPDAS criteria fall into three domains: "Content" (28 criteria), "Development Process" (29 criteria), and "Effectiveness" (7 criteria). Several criteria were slightly modified (e.g., changing the word "patients" to "users"), and 3 criteria were substantially modified (i.e., changing, adding, or removing multiple words) to ensure relevance to organization-level rather than patient-level decision-making.
Authors T.W. and D.C. independently reviewed the toolkit and classified IPDAS criteria into one of four categories: “Met,” “Planning to Meet,” “N/A,” “Not Feasible/Salient.” Criteria classified into the “Met” category included those that were met by the version of the toolkit reviewed during the IPDAS evaluation. Criteria classified into the “N/A” category were deemed not applicable to organization-level decision-making. Criteria classified into the “Not Feasible/Salient” category required revisions that were not possible due to technical and/or time limitations or that required revisions the larger IMPULSS team agreed would not add significant value to the IMPULSS toolkit. Criteria classified into the “Planning to Meet” category consisted of issues that could be revised or met through the subsequent user feedback evaluation process. Discrepancies between classifications were discussed until consensus was achieved, with AKR as the tie-breaker when necessary.
User Testing
After the IPDAS evaluation, healthcare professionals who are, or could be, involved in UTS for LS were invited to participate in user-testing of the IMPULSS toolkit prototype. This convenience sample of individuals with varying levels of UTS knowledge and experience was recruited via email invitations sent through the LSSN listserv and to individuals at the authors’ institutions. Recruitment continued throughout the iterative evaluation and revision process, to obtain feedback from both expert and potential-user perspectives, taking care to ensure that no more than one-third of the sample were experienced implementers. Each evaluation participant provided feedback through a usability interview [33] followed by an online survey.
Usability interviews included semi-structured questions prompting participants to share positive and constructive feedback on the content, usability, and visual appeal of the toolkit and to evaluate the extent to which the toolkit addressed specific CFIR 1.0 constructs and IPDAS criteria. All interviews were arranged and conducted by TW using Microsoft Teams to record both toolkit navigation and discussion. Using a think-aloud process [34, 35], participants were asked to navigate through the toolkit on their own while verbalizing their feedback and thoughts. At the end of the interview, participants could be asked to navigate to sections they had not yet viewed to obtain additional feedback, ensuring that all participants viewed all sections of the toolkit.
For descriptive analysis and when categorizing feedback as positive or constructive, respondents were divided into two groups based on their experience with implementing LS UTS programs: “Experienced Implementers” who had played a primary role in implementing UTS previously and had experience making the decisions which the toolkit is intended to address, and “Non-Experienced Implementers” who had not been responsible for implementing UTS programs but could be in the future. Comments from the user testing think-aloud process and interview questions were classified as either “Positive” or “Constructive.” Constructive feedback was further divided into three categories based on whether it pertained to “Content,” “Usability,” or “Visual Appeal.”
At the conclusion of the usability interviews, each participant was sent a link to the Qualtrics survey via the Microsoft Teams chat to complete anonymously. The survey evaluated the acceptability and appropriateness of the toolkit using the acceptability of intervention measure (AIM) and intervention appropriateness measure (IAM) [36]. Acceptability was defined as the perception that the IMPULSS toolkit is agreeable, palatable, or satisfactory. Appropriateness was defined as the perceived fit, relevance, or compatibility of the IMPULSS toolkit to address the needs of implementing UTS programs [37]. Each measure consists of four items with a Likert-style response scale ranging from “1 = Completely disagree” to “5 = Completely agree.” For each measure, participant ratings from the 4 questions are averaged. Although cut-off scores for interpretation are not available, higher scores indicate greater acceptability and appropriateness [36, 38, 39]. AIM and IAM measures were then averaged across all participants to obtain overall average AIM and IAM scores ranging from 1 to 5.
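The AIM/IAM scoring described above (averaging each participant's four item ratings, then averaging those participant means across all participants) can be sketched as follows. The response data here are hypothetical, for illustration only, and are not the study's survey responses.

```python
def measure_score(responses):
    """Score an AIM or IAM measure from per-participant Likert responses.

    responses: list of lists, one inner list of four 1-5 item ratings
    per participant. Returns the mean of participant means (range 1-5).
    """
    participant_means = [sum(items) / len(items) for items in responses]
    return sum(participant_means) / len(participant_means)

# Hypothetical example: three participants' four AIM item ratings each
aim_responses = [[5, 4, 4, 5], [4, 4, 4, 4], [5, 5, 5, 4]]
score = measure_score(aim_responses)  # overall mean on the 1-5 scale
```

Higher scores indicate greater perceived acceptability (AIM) or appropriateness (IAM); no validated cut-offs exist, so scores are interpreted relatively.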
Results
Results are presented in sequential order according to the three methods sections detailed above: (1) toolkit content and organization, followed by (2) IPDAS evaluation for decision-making, and finally (3) user testing results from interviews and surveys.
Toolkit Content and Organization
TW and DC, in consultation with other IMPULSS researchers, developed the interactive, web-based toolkit to assist institutions in decision-making, planning, and optimization of LS UTS programs. Based on findings from the IMPULSS study that showed decision-making progressed differently depending on whether the organization had no program, had implemented a program but was not optimized, or was considering changes to a program based on new evidence/guidelines [30, 31], a home page for the toolkit was created to guide users to first select which of these descriptions best fit their organization’s current decision-making needs (shown in Fig. 1).
CFIR 1.0 constructs [40‒42] that guided toolkit development included evidence strength and quality, relative advantage of implementing a UTS program, cost of a UTS program, engaging, planning, executing, and reflecting and evaluating. These constructs were identified through the original IMPULSS interviews and analyses as important factors in implementation and/or as factors that consistently distinguished sites with non-optimized programs from sites with optimized UTS programs [30, 31]. Table 1 details how information about these constructs was utilized to add or organize content within the IMPULSS toolkit.
| CFIR construct | Construct definition | Examples of how constructs influenced toolkit content and organization |
|---|---|---|
| Evidence strength and quality | Stakeholders' perceptions of the quality and validity of evidence supporting the belief that the intervention will have desired outcomes [38, 39] | Included evidence of cost-effectiveness, the ability of UTS to identify missed cases, benefits to patients, etc., with hyperlinks to reference articles |
| Relative advantage | Stakeholders' perception of the advantage of implementing the intervention versus an alternative solution [37] | Included comparison of advantages/disadvantages of multiple LS identification approaches; organized approaches for UTS into an interactive table of digested information (shown in Fig. 2) |
| Cost | Costs of the intervention and costs associated with implementing the intervention, including investment, supply, and opportunity costs [29] | Included link to the user-friendly, customizable cost modeling tool that calculates program costs and efficiency using different cost parameters input by users |
| Engaging | Attracting and involving appropriate individuals in the implementation and use of the intervention through a combined strategy of social marketing, education, role modeling, training, or other similar activities [29] | Included strategies and resources for educating and engaging team members (e.g., discussing LS at tumor board meetings, holding conferences, etc.); matched strategies to specific implementation stages on the home page |
| Planning | The degree to which a scheme or method of behavior and tasks for implementing an intervention are developed in advance, and the quality of those schemes or methods [29] | Included detailed planning guides with steps to think through before implementation, along with evidence-based suggestions anticipated to optimize implementation based on IMPULSS data and other prior research [14, 20, 30, 31]; included printable handout to document the plan (including key decisions necessary when planning) |
| Executing | Carrying out or accomplishing the implementation according to plan [29] | Included strategy for a champion or designated person who checks to ensure the plan is being executed with consistency |
| Reflecting and evaluating | Quantitative and qualitative feedback about the progress and quality of implementation, accompanied by regular personal and team debriefing about progress and experience [29] | Included a detailed optimization planning guide with steps for quality assurance, a sample spreadsheet to track expected case numbers, and other tools specific to creating an optimized program |
IPDAS Evaluation
As shown in Table 2, 31 of the 64 IPDAS checklist criteria were met by the IMPULSS toolkit prior to user testing. The toolkit met criteria in the content domain by describing positive and negative features of options; comparing probabilities using the same denominator, time period, and scale; using visual diagrams; and allowing users to view probabilities based on their own situation. It met criteria within the development process domain by providing references to the scientific evidence and guidelines used, by being written at a level understandable by the majority of intended users (i.e., physicians, pathologists, genetic counselors), and by describing the quality of the scientific evidence, including lack of evidence, when applicable. The effectiveness domain criteria met by the toolkit included helping users recognize that there was a decision to be made, listing the options and the features of each option, and describing how organizational values may affect decisions (shown in Fig. 2).
| IPDAS domain | Criteria met | N/A | Not feasible/salient | Planning to meet | Examples of revisions made prior to user testing |
|---|---|---|---|---|---|
| Content | 19 | 2 | 3 | 4 | Prompt users to consider what matters most to their institution; add suggestions for sharing information about UTS with other potential stakeholders; add information about how to contact a LSSN expert; include printable PDF planning guides to help discuss decisions with others |
| Development process | 9 | 1 | 6 | 13 | Show positive and negative features of approaches in more equal detail; include developer credentials; add additional evidence; create a technical document; create specific questions for user testing |
| Effectiveness | 3 | 3 | 1 | 0 | None |
| Total | 31 | 6 | 10 | 17 | |
LSSN, Lynch syndrome screening network; UTS, universal tumor screening.
Six IPDAS criteria were deemed "not applicable" at the organization level, including providing security for personal health information entered into the decision aid, helping patients become involved in preferred ways, and using stories that represent a range of positive and negative experiences. Ten criteria were considered "not feasible/salient," including reporting how often the decision aid is updated (although we do list the date it was last updated) and showing that the decision aid improves the match between the chosen option and the features that matter most to the informed user. Plans were made to meet 17 additional criteria through further toolkit revisions that could be tested and optimized through user-testing interviews. For example, in areas within the toolkit where we made a recommendation related to UTS program design and implementation, rather than including multiple options (with pros and cons), we added references to provide evidence for the recommendation. Our intent was to distinguish this content (a scientific recommendation/guideline) from decisions where institutional resources, preferences, or values should be considered.
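As a consistency check on the category counts reported above and in Table 2, the four categories within each domain sum to that domain's criterion count, and the domains together sum to the 64-item checklist. This minimal sketch simply encodes the counts from Table 2 and verifies the arithmetic:

```python
# IPDAS criteria counts per domain, taken from Table 2:
# (met, N/A, not feasible/salient, planning to meet)
ipdas_counts = {
    "Content": (19, 2, 3, 4),              # 28 content criteria
    "Development process": (9, 1, 6, 13),  # 29 development process criteria
    "Effectiveness": (3, 3, 1, 0),         # 7 effectiveness criteria
}

# Total criteria per domain and across the whole checklist
domain_totals = {domain: sum(counts) for domain, counts in ipdas_counts.items()}
grand_total = sum(domain_totals.values())          # 64 checklist criteria
met_initially = sum(c[0] for c in ipdas_counts.values())   # 31 met pre-revision
planned = sum(c[3] for c in ipdas_counts.values())         # 17 planned to meet
```

Meeting all 17 "planning to meet" criteria, as reported in the Abstract, would bring the toolkit to 48 of 64 criteria met.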
User Testing Results
A total of six participants took part in user testing: two genetic counseling graduate students and four genetic counselors. All participants were female and either had experience implementing UTS programs or could be a member of a team that would implement UTS. Two participants were classified as "Experienced Implementers" and four as "Non-Experienced Implementers." Interviews lasted between 60 and 120 min, inclusive of completing the survey at the end. The average survey score for toolkit acceptability (AIM) was 4.33 out of 5, and the average score for appropriateness (IAM) [36] was 4.25 out of 5. No participant completely disagreed with any of the AIM or IAM items.
Overall, most usability feedback was constructive, and the majority of positive feedback came from "Non-Experienced Implementers." Positive feedback included the ability to view the advantages/disadvantages (pros and cons) of the three most common UTS protocols side by side, with information designed to help compare these UTS protocol options along with the additional options of identifying LS through either "direct to germline" testing (skipping tumor screening) or paired tumor/germline sequencing (shown in Fig. 3). Comparisons of cost and detection rates for all five options are made using updated economic modeling conducted by the IMPULSS study team [43], and the toolkit includes a button to access the user-friendly, Excel-based modeling tool, which allows individuals to input site-specific information and compare different program implementation scenarios in more depth using local information and parameters. Participants also appreciated the inclusion of UTS procedure flowcharts for different protocol options, which could be reached through a labeled button within the planning guide page of the toolkit (shown in Fig. 4) and through the toolkit navigation menu. Multiple participants also spoke positively about the quantity and quality of evidence presented throughout the toolkit and its overall visual appeal.
The majority of constructive feedback from “Experienced Implementers” related to content, whereas “Non-Experienced Implementers” provided relatively equal amounts of constructive feedback regarding content, visual appeal, and usability. Feedback from “Experienced Implementers” included suggestions for the addition of information (e.g., adding more detailed information on IHC and MSI procedures and results), while “Non-Experienced Implementers” suggested where and how to clarify information (e.g., adding roll-overs to describe various terms). All constructive feedback was synthesized into themes from both user types and included suggestions such as changing the language to avoid confusion, reducing the amount of text on pages, and simplifying navigation (Table 3). Although Table 3 represents a synthesis of all constructive feedback received, many additional alterations, including substantial changes (e.g., content, organization) and minor edits (e.g., wording), were made to the toolkit to address nearly all constructive comments made by users throughout the iterative user testing process.
| Feedback theme | Constructive feedback^a | Toolkit revision |
|---|---|---|
| Change language to avoid confusion | Using "positive" and "negative" in reference to IHC results may be confusing because "positive" could be misinterpreted as MMR proteins being present | "Positive" and "negative" were replaced with "abnormal" and "normal" when referencing IHC results throughout the toolkit |
| | The term "reflex testing," referring to testing performed immediately after an abnormal IHC or MSI result, may be confusing because users may not be familiar with this term | "Reflex testing" was replaced with "additional testing" throughout the toolkit, and a rollover (pop-up text with definition) was added to explain that this additional testing is done immediately following an abnormal IHC or MSI result before referring for genetic counseling/testing |
| | The term "hypermethylation" in reference to MLH1 promoter hypermethylation may be confusing, as there are numerous tests that detect hypermethylation | "Hypermethylation" was replaced with "MLH1 promoter hypermethylation" throughout the toolkit |
| | Statements that direct to germline testing is "not recommended" may be confusing because it is recommended for some individuals with CRC and EC | Wording was changed from "not recommended" to "not recommended for all patients with CRC and EC" to make clear that the toolkit refers to a universal approach |
| Reduce the amount of text on pages | A summary list of each professional recommendation supporting UTS was requested | Page was changed to include a single summary of recommendations with icons hyperlinked to each individual recommendation |
| | Amount of text on planning guide pages was overwhelming | Planning guides were changed to include questions only, requiring users to click on questions to view pages with answers/more information; PDF versions of planning guides with both questions and answers were added to the "printable documents" section |
| Simplify navigation | Users found it confusing to use the "close" button on each planning guide question page to return to the main page after finishing a question; it was also hard to remember which questions they had already reviewed | "Close" buttons were replaced with "next" buttons that direct users to the next planning guide question rather than taking them back to the main planning guide page |
| | Difficult to locate and navigate back to the home page while exploring the toolkit | Home icons were added to the corner of the main toolkit pages and an instruction was added to make users aware of this feature |
| | Confusion because the outline/menu to navigate the toolkit did not match the order of the toolkit pages | Direct to germline and tumor sequencing planning guides were moved from the "planning" to the "considering new approaches" section of the outline/menu to align with toolkit pages |
| Feedback theme | Constructive feedback^a | Toolkit revision |
| --- | --- | --- |
| Change language to avoid confusion | Using “positive” and “negative” in reference to IHC results may be confusing because “positive” could be misinterpreted as MMR proteins being present | “Positive” and “negative” were replaced with “abnormal” and “normal” when referencing IHC results throughout the toolkit |
| | The term “reflex testing” to refer to testing that is performed immediately after an abnormal IHC or MSI result may be confusing because users may not be familiar with this term | “Reflex testing” was replaced with “additional testing” throughout the toolkit, and a rollover (pop-up text with definition) was added to explain that this additional testing is done immediately following an abnormal IHC or MSI result before referring for genetic counseling/testing |
| | The term “hypermethylation” in reference to MLH1 promoter hypermethylation may be confusing as there are numerous tests that detect hypermethylation | “Hypermethylation” was replaced with “MLH1 promoter hypermethylation” throughout the toolkit |
| | Statements that direct to germline testing is “not recommended” may be confusing because it is recommended for some individuals with CRC and EC | Wording was changed from “not recommended” to “not recommended for all patients with CRC and EC” to make clear that the toolkit refers to a universal approach |
| Reduce the amount of text on pages | A summary list of each professional recommendation supporting UTS was requested | Page was changed to include a single summary of recommendations with icons hyperlinked to each individual recommendation |
| | The amount of text on planning guide pages was overwhelming | Planning guides were changed to include questions only, requiring users to click on questions to view pages with answers/more information. PDF versions of planning guides with both questions and answers were added to the “printable documents” section |
| Simplify navigation | Users found it confusing to use the “close” button located on each planning guide question page to return to the main page each time they finished with a question. It was also hard to remember which questions they had already reviewed | “Close” buttons were replaced with “next” buttons that direct users to the next planning guide question rather than take them back to the main planning guide page |
| | It was difficult to locate and navigate back to the home page while exploring the toolkit | Home icons were added to the corner of the main toolkit pages and an instruction was added to make users aware of this feature |
| | The outline/menu used to navigate the toolkit did not match the order of the toolkit pages, causing confusion | Direct to germline and tumor sequencing planning guides were moved from the “planning” to the “considering new approaches” section of the outline/menu to align with toolkit pages |

^a Feedback in this column may represent a synthesis of similar feedback from multiple users.
Discussion
We used data from the larger IMPULSS study to create a novel toolkit (the IMPULSS toolkit) to aid organizational decision-makers in the implementation and optimization of LS UTS programs. We then evaluated the toolkit for adherence to IPDAS standards for decision aids and recruited both experienced and non-experienced implementers to participate in toolkit user testing. Consistent with other studies, we found that useful feedback was obtained from both subject-matter experts and novices [44, 45] and used this to improve toolkit content and usability.
The revised toolkit, which is available on the LSSN website at www.lynchscreening.net, can be used to guide organizational decision-makers who may be at different stages in UTS implementation (e.g., trying to implement for the first time in their organization), as well as those who are experienced but wish to consider ways to either improve/optimize their UTS program or change to a different approach for identifying patients with LS (e.g., going straight to germline testing without screening the tumor). All sections of the toolkit can work independently or sequentially to meet the needs of the organizational decision-makers and can be revisited over time as needed.
Because the LSSN has been in existence for over a decade [46, 47], guidance for implementing UTS programs was already a main feature of the LSSN website. However, the website’s last major update was in 2014, based on user testing with LSSN members, and the site continued to be hampered by navigation challenges. Specifically, website information was displayed in a dense, static text format; provided information primarily for implementing a program where none existed; and offered little to no decision-making guidance for program optimization and evolution in the face of changing evidence. Addressing these challenges through a toolkit with organizational-level guidance and decision-making assistance was a goal of the IMPULSS study, along with identifying the variability across programs and the difference-makers between organizations that had not implemented a UTS program, those that had optimized implementation, and those that had implemented but not optimized their UTS program [27, 31].
The toolkit resulting from the development process described above focuses on the key CFIR 1.0 constructs found to make a difference in whether a UTS program was implemented at all, and in whether implementation was optimal or suboptimal [31]. Specifically, organizations without a program lacked an understanding of the relative advantages of and evidence favoring UTS, or lacked knowledge about UTS in general. Thus, the “Deciding whether to implement UTS” button leads to content discussing evidence to assist in making the case for doing UTS. This section also includes advice for identifying, planning, and engaging with all key stakeholders, as this was another critical component identified as minimally necessary for program implementation [31]. Once an organization is ready to implement, the “Planning how to implement” button of the toolkit educates users about the different ways UTS can be done and provides information about key decisions that need to be made when planning for successful and optimal implementation of UTS programs. In the Planning section and the “Improving an existing UTS program” section (i.e., optimizing), we stress the importance of one or more implementation champions, as this is needed to start a UTS program. We also discuss how a maintenance champion is needed to help ensure UTS programs are functioning optimally and that patients are truly benefiting from UTS. The “Improving an existing program” section includes additional ways to optimize programs through a positive inner setting (characterized by strong networks and communication and support from leadership), because a positive inner setting was a consistent feature among organizations with optimized UTS programs. Also included in the “Improving an existing program” section is guidance to ensure all stakeholders continue to hold a positive attitude and understand the advantages of and evidence favoring UTS, as this was consistently found only among sites with ongoing planning and engaging [31].
During our work to identify these difference-makers, we also discovered that the changing guidelines and testing options available to systematically identify patients with LS created a need among organizational stakeholders for decision guidance around whether or when to change an existing program, regardless of whether that program was optimized. This provided the impetus for the “Considering different options” section. Beyond that section, other novel additions to the toolkit include economic modeling data comparing alternative options (direct to germline testing, without screening a tumor, and paired tumor-germline testing) with the three commonly used protocols for implementing UTS, as well as navigational capabilities to help users find the information needed or desired to aid this type of organizational decision-making.
Integrating information from the prior user testing of the LSSN website and the research findings from the larger IMPULSS study during toolkit development provides rigor and is expected to improve the toolkit’s effectiveness. Additional rigor comes from our novel process of adapting IPDAS criteria to evaluate the toolkit design for its potential to facilitate organizational decision-making (a goal of the toolkit). Although the purpose of the IPDAS is to guide the development of decision aids that assist patients in making quality informed decisions based on evidence and individual values and goals, we found the IPDAS valuable for improving the IMPULSS toolkit. Based on our IPDAS evaluation, we added printable planning guides, more information on the positive and negative features of the different approaches under consideration, and messages guiding users to consider what matters most to their organization. To our knowledge, IPDAS has not been used in this way before, and some IPDAS criteria required adaptation while others had to be removed to ensure all criteria were applicable to organizational decision-making. We believe these changes help improve the validity of this instrument when applying it to organizational decision-making, but the adaptations should be further validated with other organizational decision-making tools developed to facilitate program implementation.
User testing guided further iterative modification of the toolkit. Overall, testers were positive about the toolkit and found it both acceptable and appropriate based on their AIM and IAM scores [36]. These findings are important, as lack of acceptability has been reported as a challenge to implementation, and lack of appropriateness may indicate “pushback” to implementation [37, 48]. Most positive feedback came from “Non-Experienced Implementers,” which aligns with previous reports that subject-matter experts provide significantly fewer praise comments than non-experts [43]. One reason may be that individuals are more critical when they have greater knowledge of the subject matter [47]. One positive element noted across all testers was appreciation of the quantity and quality of evidence provided throughout the toolkit, specifically the inclusion of professional guideline summaries. Not only does this align with IPDAS, but it also addresses the lack of awareness of guidelines and lack of guideline clarity that have been reported previously and were identified in IMPULSS data as barriers to successful UTS implementation [26, 28, 30, 31].
Strengths of this study include the rigorous and iterative process used to create the IMPULSS toolkit: connection to CFIR constructs determined to be key difference-makers in implementation, evaluation against and adjustment to IPDAS standards for decision aid development, and user testing with both experienced and non-experienced implementers of UTS programs. The user testing evaluation was limited by convenience sampling of six participants. Nevertheless, users were able to provide a substantial amount and variety of feedback, and this sample size remains in line with guidelines suggesting that 3–5 individuals may be sufficient for usability feedback [33]. Although data from different stakeholder types (e.g., pathologists, oncologists, administrators, surgeons, genetic counselors) were used in developing the toolkit, the users who provided feedback as part of our iterative refinement process were all genetic counselors or genetic counseling students. Genetic counselors can be key players in LS UTS programs [20] and frequently gather and synthesize information for other organizational decision-makers [20, 49]. However, given this limitation, it would be valuable to elicit toolkit feedback from other stakeholder types.
Future directions under consideration include exploring the experiences of individuals using the toolkit to facilitate implementation for the first time in their organization or to optimize a non-optimized program. Currently, UTS is widely underutilized and most institutions that have UTS programs are not fully optimized [1, 16, 17, 19, 20, 23, 31]. Therefore, if future studies find the toolkit is effective at promoting the optimal implementation of LS UTS programs, it would mean the toolkit holds the potential to increase the detection of LS and subsequently reduce cancer-related morbidity and mortality.
In summary, the development of the IMPULSS toolkit was guided by the CFIR and evaluated for its ability to promote value-consistent decision-making within an organization by adapting IPDAS. The toolkit contains information to address and overcome reported challenges such as lack of knowledge of LS and screening guidelines, lack of a clear champion to head implementation, uncertainties regarding cost, and complexity of implementation [26, 28], as well as evidence-based information regarding UTS program optimization. User testing found the IMPULSS toolkit to be acceptable and appropriate to guide UTS implementation, optimization, and re-evaluation, and resulted in multiple improvements to content, usability, and visual appeal.
Acknowledgments
We thank Jessica Hunter for additional review of the paper and the larger IMPULSS study team and participants.
Statement of Ethics
This project was approved by the Geisinger IRB (2017-0238). Informed consent to participate in user interviews was obtained verbally at the time of the interview. This waiver of written informed consent was approved by the Geisinger IRB (2017-0238).
Conflict of Interest Statement
The authors have no conflicts of interest to declare.
Funding Sources
This project was supported by the National Cancer Institute (NCI) 21st Century Cures Act – Beau Biden Cancer Moonshot (R01CA211723, PI Rahm). The content of this article is the sole responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funder had no role in the design, data collection, data analysis, or reporting of this study.
Author Contributions
Alanna Kulchak Rahm: conceptualization, data curation, methodology, writing – original draft, and writing – review and editing. Tara Wolfinger and Deborah Cragun: conceptualization, data curation, formal analysis, investigation, methodology, validation, writing – original draft, and writing – review and editing. Zachary M. Salvati and Jennifer L. Schneider: data curation, investigation, validation, and writing – review and editing.
Data Availability Statement
The data that support the findings of this study are not publicly available due to their containing information that could compromise organizational or individual identities but are available from the corresponding author (A.K.R.) upon reasonable request.