
Educating, training, and exercising for infectious disease control with emphasis on cross-border settings: an integrative review

Abstract

Introduction

Points of entry and other border regions educate, train, and exercise (ETEs) their staff to improve preparedness and response to cross-border health threats. However, no conclusive knowledge of these ETEs’ effectiveness exists. This study aimed to review the literature on ETEs in infectious disease control concerning their methods and effect, with an emphasis on cross-border settings and methods that enlarge ETEs’ reach.

Methodology

We systematically searched for studies in the databases Embase, Medline, Web of Science, PsycInfo, ERIC, and Cinahl. After successively screening titles and abstracts, full texts, and citations, we included 62 studies based on in- and exclusion criteria. Data were extracted using a data-extraction form, and quality assessment was performed. We developed a theoretical framework based on which we analyzed the ETE context (target group, recruitment, autonomy, training needs), input (topic, trainers, development and quality of materials), process (design, duration, interval, goals), evaluation (pre-, post-, and follow-up tests), and outcome (reaction, learning, behavior, and system).

Results

We found a limited number of published evaluations of ETEs in general (n = 62) and in cross-border settings (n = 5) in particular. The quality assessment resulted in seven ETE methodologies and 23 evaluations with a ‘good’ score. Both general studies and those in a cross-border setting contain a low to moderate level of detail on context, input, and process. The evaluations were performed on reaction (n = 45), learning (n = 45), behavior (n = 9), and system (n = 4), mainly using pre- and post-tests (n = 22). Online learning methods have high potential to enlarge the reach of ETEs and are effective, particularly in combination with offline training. Training-of-trainer approaches are effective for learning; 20–44% of participants had developed new ETEs within six months of the initial training.

Conclusion

Our study reveals a limited number of publications on ETEs in infectious disease control. Studies provide few details on methodology and mainly use short-term evaluations and low-level outcomes. We call for more extensive, higher-level evaluation standards for ETEs, and an easy and sustainable way to exchange evaluations within the infectious disease control workforce in cross-border settings. The theoretical framework developed in this study could guide future development and evaluation of ETEs in infectious disease control.

Background

The risk of cross-border transmission of infectious disease pathogens increases with the rise in global travel of people and transfer of goods [1]. Travelers, goods, or vectors infected in one place could transmit diseases to other travelers during their journey or infect the population in the country of destination. Locally, at points of entry (POEs) – airports, ports, and ground crossings – management of high numbers of infected or exposed travelers can be challenging and can have a significant economic impact. During the SARS and current COVID-19 pandemics, for example, entry and exit screening was implemented at POEs worldwide [2, 3], and contact tracing was performed for hundreds of travelers [4]. Capacities and procedures for the management of public health events at designated POEs have been agreed upon by the WHO States Parties in the International Health Regulations (IHR) 2005 [5]. However, translating capacity into an appropriate, timely, and efficient response to cross-border spread requires collaboration and communication between many disciplines, levels, and countries [6], and subsequently, ongoing efforts to stay prepared. To support many POEs at the same time, many partners, including the World Health Organization and the European Union, have been organizing multi-national training programs and simulation exercises [7, 8].

Despite all these efforts, we currently have no insight into the different education, training, and exercise activities (ETEs) that are carried out at POEs and what their effect is. A literature review in 2017, studying training on infectious disease control, reported that the included studies contained insufficient detail on the training methodologies and did not report any results [9]. To employ future efforts (time, costs, intentions) as efficiently as possible, we integratively reviewed the available scientific literature [10, 11] to identify 1) the different ETE methodologies used to train professionals in infectious disease management, 2) how these ETEs are evaluated, and 3) what evidence is available for their effectiveness, with particular attention to cross-border settings, such as POEs.

The theoretical framework

To research the existing body of literature, we built a theoretical framework based on integrated theories and principles of effective teaching and learning. We combined the seminal Kirkpatrick [12, 13], Input Process Outcome [14], and Context, Input, Reaction and Outcome models [14, 15], the principles of adult learning [16, 17], the Self-Determination Theory on motivation [16], and techniques supporting sustainability [18]. In short, our framework states that ETEs rely on their context, input, and process, and result in outcomes that can be evaluated at several points in time and at four different levels (Fig. 1). The extensive theoretical background can be found in Additional file 1.

Fig. 1

The context, input, and process affect the outcome of education, training, and exercises. The outcome of education, training, or exercises can be evaluated at four levels (Kirkpatrick 1996). Lower levels are more easily assessed, while higher levels show better sustainability of outcomes

Context, input, and process

The context comprises the environment of the learner [13]. This context influences learning, and particularly the application and implementation of what is learned. An example is the participants’ ability to change existing practices in a larger system. A context that welcomes change stimulates learning and its application. Other contextual factors are the workload, the training needs, and the autonomy of learning and application for the specific target group. A context of specific interest is the cross-border setting, here defined as a setting with interaction between different nation-states, such as border regions, points of entry, or other multi-country settings.

The input covers the external conditions of the ETE, such as the thoroughness and quality of the material development, participants’ prior knowledge, the ETE topic, and the facilitators’ experience [13, 14]. Regarding this last factor, the training-of-trainer (TOT) approach is of interest. In a TOT design, participants are trained to act as trainers or facilitators and deliver ETEs themselves, through which the reach of an ETE can be enlarged. However, the trainers’ quality should remain at a sufficient level.

The process comprises the implementation and design [13]. Either more classical designs are used, such as education based on presentations, training with workshops, or table-top exercises; or more innovative designs are used that enlarge an ETE’s reach or enhance realism. Other process factors are clarity of learning goals, interactivity and problem-based learning, and the duration and frequency of learning moments.

Evaluation & Outcome

According to our theoretical framework, the context, input, and process affect the effectiveness or outcome of an ETE. Three evaluation moments are distinguished: the pre-test right before the ETE, the post-test right after the ETE, and the follow-up test one to several months after the ETE. The pre-test sets the baseline for learning, the post-test captures the direct and short-term effect of the ETE, and the follow-up test assesses the sustainability of the effect over time. Also, control groups are required to exclude external effects. The ETE outcome can be evaluated at four levels: reaction, learning, behavior, and system (Fig. 1) [12, 13]. The reaction level assesses participants’ satisfaction, either quantitatively or on content. The learning level assesses the improvement of knowledge, skills, or attitudes. Although knowledge and skills are best assessed using tests or demonstrations rather than self-assessments, for this study both these objective and subjective measures are interpreted as learning. The behavior level assesses the change in individual working practice. Because objectively measuring behavioral change is often complicated and time-consuming, we include both objective and self-assessed change at this level. On the system level, change is organizational; examples are standard operating procedures, contingency plans, or the information or communication flow through an organization. While reaction and learning are more easily assessable, behavioral or system change indicates higher sustainability of the outcome [12, 13]. Although lower levels are indispensable for motivating, monitoring, and purposefully investing in the professionals that make up the public health system, the system level addresses the public health roles from a macro perspective. Outcomes on this level are therefore most relevant from a public health perspective.

Education, training, and exercises

Based on our theory, education, training, and exercises are treated alike: all aim at improving performance. Nevertheless, their differences are defined as follows: education is a process of individual learning in a general sense, leaving several options for application open; training is a more practical and specified way of learning, also addressing practical aspects; exercises are a practical simulation of real practice.

Methodology

Literature search

To collect evaluations of ETEs in infectious disease control, we conducted a systematic, electronic search in the databases Cinahl, Embase, Eric, Medline, PsycInfo, and Web of Science. The search covered the period from the start of each database (Cinahl: 1982; Embase: 1974; Eric: 1965; Medline: 1946; PsycInfo: 1967; Web of Science: 1900) until 24 September 2018. We searched for a combination of “public health”, “infectious disease”, “cross-border”, “effectiveness”, “training”, and their synonyms. The search strategy can be found in Additional file 2.

Inclusion criteria

First, we screened titles and abstracts and included studies that described an evaluation of an ETE on a topic in infectious disease control from a public health perspective, or whose compliance with these criteria remained unclear. Subsequently, the studies’ full texts were screened. Studies were included if an evaluation of the ETE was described in the paper and public health professionals, either at the local, regional, or national level, were among the target population. Studies were excluded if no public health professionals were included as participants, or when the topic was restricted to research, a specific therapy (such as the use of antivirals in a therapeutic setting), or laboratory practice. An overview of the in- and exclusion criteria is shown in Fig. 2. The reference lists of included studies were screened for additional relevant studies, using the same criteria. For both the abstract and full-text screening, the first 25% of studies were screened independently by two authors (DdR, EB), and results were compared afterward. Any disagreements between the authors were discussed until consensus was reached, before one author continued with the remaining 75% (DdR). In total, 62 studies could be included.

Fig. 2

Flowchart of the systematic literature search

Quality assessment

We assessed the quality of the ETE’s methodology and the quality of the evaluation for all studies. The first assessment was based on six questions from the Quality Standards in Education and Training Activities of the Youth Department of the Council of Europe 2016 [19], the second on six questions of the NICE Quality appraisal checklist for qualitative studies [20]. The quality assessment form can be found in Additional file 3. For each part, a maximum of twelve points could be scored, leading to a bad, moderate, or good score based on tertiles. The first 25% of studies were scored independently by two authors (DdR, EB). After comparing and discussing the scores, one author continued (DdR).
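The tertile mapping described above can be sketched as follows. This is a hypothetical illustration, not the authors’ published scoring code; the cut-offs are inferred from the ‘good’ thresholds reported in the Results (score ≥ 9 out of 12 per part, and ≥ 17 out of 24 for the combined score):

```python
def quality_label(score: int, max_points: int = 12) -> str:
    """Map a quality score to a tertile label: 'bad', 'moderate', or 'good'.

    With max_points = 12, the tertiles are 0-4 (bad), 5-8 (moderate),
    and 9-12 (good). With max_points = 24 (the combined score), 'good'
    starts at 17, matching the thresholds reported in the Results.
    Note: the exact cut-offs are an assumption inferred from the text.
    """
    if not 0 <= score <= max_points:
        raise ValueError("score out of range")
    tertile = max_points // 3  # width of one tertile (4 for 12, 8 for 24)
    if score <= tertile:
        return "bad"
    if score <= 2 * tertile:
        return "moderate"
    return "good"
```

For example, a methodology score of 9 maps to “good”, and a combined score of 17 (`quality_label(17, max_points=24)`) also maps to “good”, consistent with the thresholds used in the Results section.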

Data extraction and analysis

We performed an integrative review, inspired by the steps of Whittemore and Knafl [10, 11]. First, we designed a data extraction form based on the theoretical framework (Fig. 1). Then, we extracted data on variables of context, input, process, and outcome, as shown in Tables 1 and 2, along with basic study characteristics, such as the journal, publication year, country, and funding. We analyzed the context, input, process, and the four outcome variables by describing their occurrence and variety. Sub-analyses were performed for studies in a cross-border setting or with a TOT approach. If many studies described one of the four outcome variables, results were subdivided into education, training, and exercises, or even between classical and innovative study designs. If the directions of outcomes differed widely, we compared studies on context, input, and process characteristics.

Table 1 Baseline characteristics of the included studies


Table 2 Results

The results are presented in line with the theoretical framework. First, factors of context, input, and process are presented; then, the outcomes per level are described, where possible referring to the context, input, and process characteristics. In this way, an overview is generated of the characteristics of ETEs in infectious disease control, the accuracy of their reporting, and possible links between the context, input, and process of ETEs and the outcome.

Results

Literature search

In total, 2201 unique studies were identified. After applying the in- and exclusion criteria to titles and abstracts, 186 full texts were screened, leading to 51 inclusions. Citation screening led to the inclusion of another 11 studies. Figure 2 shows the flowchart of the search and selection process. The quality assessment resulted in seven studies with a good score for the training (score ≥ 9) and 23 with a good score for the evaluation (score ≥ 9). Ten studies had a good quality score after combining the scores (score ≥ 17). All scores can be found in Additional file 4.

Context

Five studies covered ETEs in a cross-border setting: a border region (n = 2), a point of entry (n = 2), or a multi-country setting aimed at international cooperation (n = 1). All other ETEs were in non-cross-border settings.

Target group

The target group of the ETE varied among studies but was often poorly described. Examples are ‘public health leaders’ and ‘all staff of regional health departments’. Other studies specified a wide variety of professionals with different tasks in emergency preparedness, or mixed public health professionals with emergency responders, university staff, and civilians. Participants’ motivations to participate could hardly be derived.

Recruitment & Autonomy

The majority of studies did not report on recruitment techniques or clarify participants’ motivation. Three studies reported mandatory participation, six studies highlighted that people participated by free choice, and two reported on freely available online courses. In Hoeppner et al., participants had to apply for participation, suggesting motivation [49]. Fowkes et al. 2010 identified their highly motivated participants as a limitation in interpreting the effectiveness of the ETE [45].

Training needs

In total, eleven studies performed a training needs assessment among the target population before designing the ETE. Training needs were also obtained via literature studies, the ETE designers’ experience-based vision [23], or by reviewing disaster plans and local emergency management policies [55, 69]. Several studies specifically aimed to identify gaps and needs through the exercise itself [56, 68].

Input

Training topic

The studies discussed a wide variety of ETE topics. Twenty-three studies focused on preparedness and seventeen on response. The main topics were bioterrorism (n = 8), a pandemic (n = 8), or a specific disease outbreak (n = 9), of which five focused on influenza. Less common topics included risk communication [41], leadership [64], and One Health [33]. Five studies, all TOTs, incorporated didactics as a training topic.

Trainers

A minority of studies indicated that they had competent, experienced trainers or facilitators (n = 18). A majority described the trainers without demonstrating their experience or competence, referring to them generically as “instructor” or “university staff”, or left the trainers completely unreported (n = 30).

Development & quality of the material

The development of learning material was discussed in all but seventeen studies. Most theories were derived from constructivist learning principles, such as the Adult Learning Theory [37, 60] or problem-based learning [81]. Other theories used included the Dreyfus model [59], theory from Benner [49, 59], continuing education [28, 59], and blended learning [36]. ETEs were also based on existing competencies [37, 44, 50, 71], previously existing materials, and developers’ experience from previously performed training or exercises. The developers of the material were mostly public health professionals (n = 12), followed by people from universities or public health schools (n = 10). The help of higher departments, such as the ministry level, the national center for disease control, or the WHO, was mentioned several times [32, 63, 74]. In two studies, graphic designers were involved in the development of realistic images or virtual environments [76, 82].

Process

Classical designs

Eight studies described educational programs as part of university programs or courses, of which Yamada et al. describe an interdisciplinary and problem-based methodology during education [81], and Orfaly & Biddinger et al. and Rega et al. integrated table-top exercises into university courses [61, 67]. In the other six studies, methods were only weakly described, merely referring to university programs or courses.

Nineteen studies evaluated a training, of which several combined their training session with an exercise [25, 65] or a real-life project [64]. Two studies left their training methodology unspecified [35, 42]. Of the other studies, all except one supported interactivity among learners or between learners and trainers by referring to interactive lectures or discussion. Detailed descriptions of training designs were lacking and restricted to summarizing phrases such as “using participatory methods” [31] or “an online lecture” [36, 63]. Studies delivering any detail on methodology referred to adult learning principles, active learning, interactivity, multi-disciplinarity, or participatory methods, and explicitly distanced themselves from passive methods.

Exercises were described in 24 studies, of which sixteen were table-top exercises and six specifically simulation exercises. The most common elements of table-top exercises in these studies were a lecture beforehand; a presentation of the scenario; an initial individual response; and a pre-arranged, guided discussion in small, multi-disciplinary groups of local partners. Subsequently, a presentation in a larger group and a debriefing followed. Often, more than one scenario was included in the exercise. The most considerable differences between studies were the level of detail of the described methodology and whether individuals, small groups, or large groups had to respond. Again, we see more detailed study descriptions for studies that refer to adult learning principles.

Innovative design - wide reach

Seven studies had a TOT design, of which three integrated a second wave of training, delivered by the TOT participants [33, 54, 74], whereupon participants could immediately apply what was learned. All TOTs contained mixed methods. Often, passive methods, such as lectures or presentations, were combined with active methodologies, such as guided discussions, clinical training, or active presenting. For two TOTs, the ETE methodologies used were largely unknown.

Seven studies evaluated ETEs with online or novel methodologies, such as virtual reality training [76], an audience response system [77], the use of the intranet for training [52], e-modules [28, 31, 50], and combinations of e-learning and on-site learning [36]. Online ETEs had natural opportunities to spread the learning moments over a longer period. Also, participants were able to follow the ETE at their own pace. Some simulation exercises also used online methodologies in the form of blog websites where participants had to respond to signals from their offices [21, 22, 52].

Innovative design - enhanced realism

Elements described as enhancing the feeling of reality were, among others, the use of real work locations, such as an airport [56]; a computer simulation model generating feedback depending on participants’ decisions in a simulation exercise [26, 27]; interaction with scenario cards guiding each exercise to different possible outcomes [70]; initial ambiguity in an exercise case and drop-out of participants during the exercise [71]; moulaged or simulated patients [29]; and external consultations of experts during the exercise [75]. Rega & Fink 2013 report on a semester-long simulation exercise to maintain a realistic time frame [67].

Duration, interval & goals

The duration of ETEs varied from a 30-minute training to years-long curricula. TOTs mostly lasted several days to weeks. Educational courses lasted between 14 h and two years, and training between 14 h and one year. Fifteen studies did not elaborate on the duration of the ETE. The intervals between learning moments were hardly described. The goals of the ETE were addressed in most studies (n = 47), although they were often stated at the organizational level or implicitly integrated into the text rather than presented as trainable and measurable competencies. An overview of the outcomes on context, input, and process is shown in Table 1.

Evaluation & Outcome

System-performance

System performance was evaluated by four studies that used participants’ evaluations of organizational achievements after the ETE [32, 38], or external evaluations [45, 48]. None of the studies assessed the system effects of ETEs in a cross-border setting. Becker et al. 2012 evaluated a postgraduate education curriculum after two years in a developing setting [32]; this curriculum impressively strengthened the local public health system. The three other studies (n = 682; 1496; unknown) evaluated several table-top and simulation exercises. These exercises seem effective on the system level in improving workforce preparedness through emergency planning [45], relationships among colleagues [48], and communication systems [48]. Potter et al. 2005 did not aim to evaluate system performance but had a coincidental finding on this level: right after the training period, a real infectious disease outbreak occurred. According to the involved professionals, the response was well managed because the members of the response team had become acquainted with each other during the training [64].

Behavior

Nine studies, including two TOTs [60, 62], evaluated the outcomes on a behavioral level. Evaluation of behavior was primarily timed directly after the ETE, while six studies performed an additional follow-up test. Behavioral change was mainly self-assessed by participants, leading to subjective measurements. In one study, local supervisors were appointed to assess trainees’ behavioral change [36]; another used a report at ministry level next to participants’ self-assessments [40]. No control groups were used.

The educational curricula seem to change behavior, such as initiating updates of plans, expanding professional networks, and improving collaboration (n > 244). According to ministries’ reports, table-tops led to the increased development of further exercises and a more regular assessment of public health preparedness (n = unknown). Online modules had a low response rate (< 18%) but changed behavioral intentions among responding participants (n > 55) [28, 63]. According to local supervisors (n = 511), the combination of online learning and on-site training led to improved work performance. One study reported on behavioral change after table-tops in a multi-country setting but did not mention any result on the interaction between countries [40]. According to Orfaly et al. 2005 and Otto et al., TOTs seem moderately effective, since 20 and 44% of participants, respectively, had conducted exercises within six months (n = 118; n = 168) [60, 62].

Learning – knowledge

Thirty-three studies used knowledge to evaluate the effect of an ETE, including four TOTs and four ETEs in a cross-border setting. Knowledge was mostly evaluated with pre- and post-knowledge tests (n = 20) rather than self-assessments of knowledge using Likert scales. Compared to studies using knowledge tests, those using self-assessments reported more detail on how knowledge had improved. Knowledge particularly improved on organizational and functional content, such as understanding response protocols or describing functional roles or the chain of command within an organization. This is understandable, since self-assessments can explicitly ask about what they aim to measure, while knowledge tests can only provide a test score. No control groups were used; one study compared two groups that were exposed to two different methodologies [76].

Knowledge shows a clear increase directly after ETEs. The five studies that used knowledge tests, performed follow-up tests, and reported the results show a statistically significantly improved knowledge level directly after the ETE and up to 12 months afterward [42, 76, 78,79,80]. Response rates were unknown, and the duration of these ETE programs varied between fourteen hours and four weeks. Umble et al. showed an equal increase in knowledge between classical education and a broadcast [76]. Regarding ETEs in a cross-border setting, all of which used mixed methods and clearly stated their goals, an increase in knowledge was shown after table-tops and training. However, these studies used self-assessments or unknown scoring methodologies.

Learning – skills

Twenty-one studies evaluated an ETE on skills, including three TOTs but none in a cross-border setting. The practiced skills varied from a majority of organizational, communication, team, and leadership skills to a minority of more medical skills, such as surveillance or the use of personal protective equipment. Except for one study using skill demonstrations [74], most studies performed self-assessments of improvement comparing pre- and post-tests. Seven studies also performed a follow-up test.

According to participants’ self-assessments, all ETEs were effective skill-builders. A statistically significant increase in skills was shown for training, while this outcome remained non-significant for most table-top and simulation exercises. Follow-up evaluations indicated an even further increase in skills in the period after the ETE, although these results were self-assessed and mainly statistically non-significant. Two TOTs showed a significant increase in planning, implementation, and evaluation after a table-top exercise [33, 54]; follow-up results were unavailable here.

Learning – attitude

Fifteen studies reported on a change in attitude, including one TOT [43] and one covering several table-tops in a cross-border setting [40]. The evaluated attitudes comprised the awareness of and motivation to develop future preparedness plans and programs, or an increase in confidence. Attitude was evaluated mainly for training and exercises, and was assessed by rating statements.

We saw a sustained change in attitude directly and 1–3 months after both online and face-to-face training. These training programs lasted between 1.5 and 14 h but had unclear methods. Table-top exercises varied in their capability to change attitude, since both significant change [72, 75] and near indifference [34, 71] were shown, indicating that more detailed evaluation is required. The table-tops in a cross-border setting seemed to enhance participants’ motivation to develop and exercise programs [40]. Dickmann et al. 2016 reported a relation between knowledge and attitude: participants with higher knowledge also had congruent confidence levels to respond and advocate for change [41]. Data regarding TOTs were insufficient to aggregate results.

Reaction

Forty-five studies assessed ETEs on the reaction level, mostly by participants rating statements on satisfaction and methodology using Likert scales directly after the ETE. The ETEs in cross-border settings show high satisfaction among participants regarding table-tops and simulation exercises. One TOT showed satisfied participants in the second wave of training. Below, we present the results for the different designs.

Training programs scored satisfactorily directly after the training, despite substantial differences in design: after a 30-min pandemic preparedness training [46], 98% of participants thought the program valuable, as did 95% after several face-to-face modules on emergency preparedness [44], and 92–96% after a preparedness training of 14 days [78, 80]. Remarkably, the one study performing a follow-up test identified the lowest satisfaction of all training programs, with a mean score of 4/5 after a 2-day Zika response training [35].

Only one study evaluated reaction after an exercise with a follow-up test [40]; all others were restricted to post-tests. Table-top exercises overall scored high on satisfaction, mainly based on their potential to practice together (77% agreed [34]), to build relationships (80–90% agreed [58]), to improve emergency or contingency planning (73% agreed [34]), and to identify gaps (89% [62] and 77% [58] agreed). Biddinger et al. identified higher satisfaction among regional exercise respondents compared with single-institution respondents regarding their understanding of agencies’ roles and responsibilities (p < 0.001), engagement in the exercise (p = 0.006), and satisfaction with the combination of participants (p < 0.001) [34]. The right combination of participants was scored in several studies as one of the most valuable aspects. A disadvantage of table-top exercises was the lack of identification of key gaps in individuals’ performance [40]. Further recommendations for exercises were: to clearly formulate specific objectives; to be as realistic as possible; to ground practical response in theory; to design around issue areas rather than scenarios; to have a forced, targeted, and time-delineated discussion and decision making; to limit the number of participants but include all key perspectives, especially leadership perspectives; to collaboratively design and execute the exercise with representatives from participating agencies, external developers, and facilitators; to provide networking possibilities; and to use trained evaluators.

Simulation exercises were less often assessed on reaction, and outcomes show slightly lower satisfaction than for the table-top exercises. However, in three studies, “most participants” or over 80% of participants still agreed that their readiness was increased by the simulation. The full-scale simulation at an airport stresses the need for specific goals, thereby preventing the public health response from being deprioritized by trying to test everything at the same time [56]. Also, it is paramount to have clear roles and responsibilities for the various agencies involved, and to have all required capacity available [56]. One study showed a positive relationship between the duration of a joint exercise and the subsequent contact and communication between health departments [22].

Ten studies reported reaction directly after innovative methodologies. Several studies added online blogs, pages, or systems to a simulation exercise [22], a lecture [50], or a combination of classical designs. Other studies evaluated pure technologies such as an audio-response system [77] or a virtual reality (VR) environment [82]. For innovative methods, satisfaction was generally high, although technical issues were often reported. For example, the e-modules in Baldwin et al. were launched via the intranet of a public health organization [30], thereby benefitting from high accessibility but facing extensive, unforeseen updates, a rigidness to change, and delayed updates because ownership was not designated. The VR environment exercise met its objectives and was considered time well spent, but the participants and authors suggest further technological innovation before this method can be used at large scale [82]. An overview of all outcomes, including those not mentioned above [24, 39, 47, 51, 53, 57, 66, 73], is shown in Table 2.

Conclusions

This study aimed to review the different ETE methodologies used by professionals in infectious disease management, how these methodologies are evaluated, and what their effect is. We had a particular focus on cross-border settings, such as POEs, and on methodologies with a wide reach. We identified various types of ETEs – from nationwide online preparedness programs to hands-on local training during an outbreak – but generally with few details on the exact methodology. Both the lack of details and the predominance of short-term and subjective evaluations impede conclusions on which methods in which settings lead to positive and sustainable outcomes. Our results point out the need for standardized evaluations, preferably with a long-term scope, that are shared among trainers and organizers. We developed a theoretical framework that can be used to structure future evaluations. These evaluations, then, will hopefully not only inspire future developers to come up with successful ETE designs but also lead to recommendations for the best exercise-effect ratio.

Reports on system- and behavioral-level outcomes are scarce, leaving us with a majority of lessons learned on the lower outcome levels of learning and reaction. While the convincing and sustainable increases in knowledge and skills are hopeful indications for system improvements and support the use of ETEs for learning, several intervening factors are possible. Among others, evaluation tests are themselves one of the most sustainable learning techniques [18]: the knowledge tests and demonstrations activate knowledge and skills and might be responsible for the effect. Also, control groups are missing, while ETEs are often organized in response to immediate causes, such as a growing pandemic or a recent bioterrorism attack. These events require ETEs, but might also lead to greater attention for, and learning about, the subject irrespective of any ETE. That we saw a learning effect that increased during follow-up and was independent of ETE duration further supports this confounding effect.

While cross-border infectious disease control receives international attention, and in Europe alone almost all countries have designated POEs to be prepared to handle cross-border health threats, we identified only five studies describing an ETE evaluation in a cross-border setting. This number is too low to draw general conclusions about the effectiveness of ETEs in a cross-border setting. Findings from ETEs in general infectious disease management should be used for this setting until more specific evaluations are available. However, one crucial difference between the cross-border and non-cross-border setting is the larger and more diverse set of stakeholders involved in cross-border settings. Not only are several countries involved, but information exchange and cooperation are also needed among general public health services, health professionals, and specific port, airport, and ground-crossing officials. While the studies in cross-border settings did not elaborate on their specific settings, many other studies in our review identified a strengthened network, better knowledge of roles and responsibilities, and enhanced relations among the most valuable aspects of training and exercises. In other disciplines, sharing the same language [83], a focus on relationships, and collaborative management skills [84] were also found to be essential factors in collaborative learning. We consider these findings prudent support for training and exercising in cross-border settings.

Because cross-border health threat prevention requires collaboration between countries and a shared minimum level of functioning, we considered TOT approaches and online methodologies for their potential to reach several locations and a large audience at once. Both methodologies have potential, and both have remaining challenges. TOTs seem as effective for learning as other training methods, and their participants are satisfied. Unfortunately, TOTs are only moderately effective regarding their principal goal: the organization and delivery of future ETEs. In this way, the potentially exponential increase in delivered training sessions and trainees compared to single direct training remains limited. To reach their potential, the barriers that TOT participants perceive – such as a lack of confidence, time, or resources for ETE delivery, or other priorities during their duties – should be taken seriously in future TOT programs. Online methodologies overcome specific barriers identified for TOT approaches. Both our study and another recent review on undergraduate medical education indicate that online learning “enhances knowledge and skills”, while evidence is lacking “that offline learning works better” [85]. However, technical issues and a lack of ownership of the online environments remain barriers. Also, we had only a low number of studies to evaluate. We call for more, and enhanced, evaluation of ETEs using innovative and online methods, as recently stressed by other reviews and the WHO [86, 87].

This review has several strengths and limitations. First, we restricted our analysis to what was available in the peer-reviewed literature databases and did not study the body of grey literature. Although it is very probable that more evaluations have been performed, orienting searches in the grey literature yielded a limited number of evaluations, indicating that the majority of ETEs in a cross-border setting are not made public. However, the theoretical framework developed in this study can be used on a wide variety of ETEs, including those not publicly available outside public health organizations. Furthermore, this theoretical framework can be used to support the design and evaluation of ETEs, and more complete reporting in the peer-reviewed literature.

Second, the didactic scope of our review can be seen both as a limitation and a strength. The combined evaluation of education, training, and exercising leads to broad and generic conclusions, possibly limiting conclusions on individual ETEs, whose goals vary widely. However, restricting our results to education, training, or exercising specifically is also problematic. Although general distinctions are possible, at the level of individual ETEs they are often arbitrary. As our results show, exercises are often part of training or educational programs. For example, organization-wide exercises are used for training on the system level (is an organization prepared to respond effectively?), but training is also used for the handling of individual patients. We therefore chose to evaluate all three ways of learning, using the same four levels of evaluation. In this way, the results in this study not only display the effect of the ETEs but also specify it per evaluation level.

Third, we restricted our focus to infectious disease prevention and control in a public health setting. While public health responses to chemical, radiological, or nuclear threats demand another set of professionals, they share many aspects of contamination and could be included in future reviews. The theoretical framework developed in this study may be well suited to evaluations in these adjacent disciplines. We consider it a strength that, to the best of our knowledge, this is the first attempt to systematically assess ETEs in infectious disease control. In addition to previous efforts [9], we studied evaluations and outcomes in greater detail, and with the comprehensive framework we developed we have contributed to the body of knowledge on the systematic reporting and evaluation of ETEs.

Future studies should focus on the development of a standardized evaluation format that integrates details of context, input, and process and suggests planning and questionnaires for evaluations. Future training developers should first formulate clear ETE goals, then attach the required outcome level, and subsequently choose the appropriate evaluation methods. For example, if one intends to improve an airport’s capability to prevent secondary transmission during a case of tuberculosis on a plane, this goal is formulated on the system level. However, if the goals on the system level are not met, goals and evaluations of outcomes on individual behavior, knowledge, and skills are required to determine what, and who, should be further supported. Choosing the appropriate evaluation methods might involve requesting access to track records, including external observers, planning skill demonstrations, or using validated knowledge tests. Lastly, we highly recommend sharing evaluations and lessons learned from ETEs on a broad scale, to directly support co-organizers and to give policymakers the chance to deploy costs, time, and capacity towards the optimal effect. Standardizing the evaluation of ETEs would also make comparisons with methods in general adult education possible, providing an even broader base for recommendations on effect. We call for international efforts to facilitate this sharing of evaluations and experience, for example through a sustainably maintained electronic training platform where standard information about the set-up, implementation, and evaluation of ETEs can be registered and exchanged. A standard set of scenarios in cross-border settings, or shared training materials, could further encourage this development. Previous time-restricted projects have shown their potential [88], but a sustainable option has been missing.

We conclude that although extensive training and education programs exist in infectious disease control, recent literature can only partly and prudently prove their added value, especially in cross-border settings. We see promising results for online methodologies, which report results similar to offline training, although relationship building and networking are among the aspects most valued by participants of face-to-face training. Above all, future developers of ETEs should not forget the long-term perspective of their efforts; detailed and thorough reporting and evaluation, when shared, benefit a wide community of fellow organizers. This paper therefore presents a call for publishing ETE evaluations, in order to facilitate overall system learning and the preparation of a workforce that can cope with the perpetual challenges of global infectious disease control.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ETE:

Education, training, or exercise

TOT:

Training-of-trainers

POE:

Point of entry: a port, airport or ground-crossing

References

  1. Fauci AS, Morens DM. The perpetual challenge of infectious diseases. N Engl J Med. 2012;366:454–61.

  2. Goubar A, Bitar D, Cao WC, Feng D, Fang LQ, Desenclos JC. An approach to estimate the number of SARS cases imported by international air travel. Epidemiol Infect. 2009;137(7):1019–31. https://doi.org/10.1017/S0950268808001635. Epub 2008 Dec 15.

  3. Jernigan DB, CDC COVID-19 Response Team. Update: public health response to the coronavirus disease 2019 outbreak – United States, February 24, 2020. MMWR. 2020;69(8):216–9.

  4. Vogt TM, Guerra MA, Flagg EW, Ksiazek TG, Lowther SA, Arguin PM. Risk of severe acute respiratory syndrome-associated coronavirus transmission aboard commercial aircraft. J Travel Med. 2006;13(5):268–72.

  5. World Health Organization. International Health Regulations (2005). Geneva: World Health Organization; 2008.

  6. World Health Organization. International Health Regulations (2005): a guide for public health emergency contingency planning at designated points of entry. WHO; 2012. [Cited 8 November 2019]. Available from: https://www.who.int/publications/i/item/international-health-regulations-(-2005)-a-guide-for-public-health-emergency-contingency-planning-at-designated-points-of-entry.

  7. World Health Organization. Strengthening health security by implementing the International Health Regulations (2005): ports, airports and ground-crossings. WHO. [Cited 8 November 2019]. Available from: https://www.who.int/ihr/publications/ports_airports/en/.

  8. European Union Healthy Gateways Joint Action. EU Healthy Gateways Joint Action: preparedness and action at points of entry (ports, airports, ground crossings). EU Healthy Gateways; 2018. [Cited 8 November 2019]. Available from: https://www.healthygateways.eu/.

  9. Yeskey K, Hughes J, Galluzzo B, Jaitly N, Remington J, et al. Ebola virus training: a needs assessment and gap analysis. Emerg Pathog Health Secur. 2017;15(3):225–9.

  10. Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs. 2005;52(5):546–53.

  11. Whittemore R. Combining evidence in nursing research: methods and implications. Nurs Res. 2005;54(1):56–62.

  12. Kirkpatrick D. Great ideas revisited: revisiting Kirkpatrick’s four-level model. Train Dev. 1996;50(1):54–9.

  13. Reio TG, Rocco TS, Smith DH, Chang E. A critique of Kirkpatrick’s evaluation model. New Horiz Adult Educ Hum Resour Dev. 2017;29(2):35–53.

  14. Bushnell DS. Input, process, output: a model for evaluating training. Train Dev J. 1990;44(3):41–3.

  15. Barden V. Book review: Peter Warr, Michael Bird & Neil Rackham: Evaluation of Management Training. London: Gower Press; 1970. p. 112.

  16. Taylor DCM, Hamdy H. Adult learning theories: implications for learning and teaching in medical education: AMEE guide no. 83. Med Teach. 2013;35(11):1561–72.

  17. Driscoll MP. Psychology of learning for instruction. Harlow: Pearson Education Limited; 2014.

  18. Dunlosky J, Rawson KA, Marsh EJ, Nathan MJ, Willingham DT. Improving students’ learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychol Sci Public Interest. 2013;14(1):4–58.

  19. Youth Department, Council of Europe. Quality standards in education and training activities of the Youth Department of the Council of Europe. DDCP-YD/ETD 202. 2016.

  20. National Institute for Health and Care Excellence (NICE). Methods for the development of NICE public health guidance. 3rd ed. NICE; 2012. [Cited 11 October 2019]. Available from: https://www.nice.org.uk/process/pmg4/chapter/appendix-h-quality-appraisal-checklist-qualitative-studies#checklist-2.

  21. Ablah E, Nickels D, Hodle A, Wolfe DJ, Orr S, et al. Public health investigation: a pilot, multi-county, electronic infectious disease exercise. AJIC. 2007;35(6):382–6.

  22. Ablah E, Nickels D, Hodle A, Wolfe DJ. Public health investigation: focus group study of a regional infectious disease exercise. Public Health Nurs. 2008;25(6):546–53.

  23. Aiello A, Khayeri MY, Raja S, Peladeau N, Romano D, et al. Resilience training for hospital workers in anticipation of an influenza pandemic. J Contin Educ Health Prof. 2011;31(1):15–20.

  24. Alexander LK, Dail K, Davis MV, Hajat A, Rothney E, et al. A pilot hybrid internet/classroom-based communicable disease continuing education course for public health nurses in North Carolina: lessons learned. J Public Health Manage Pract. 2005;November(Suppl):119–22.

  25. Alexander LK, Dail K, Horney JA, Davis MV, Wallace JW, Maillard JM, MacDonald P. Partnering to meet training needs: a communicable-disease continuing education course for public health nurses in North Carolina. Public Health Rep. 2008;123(2):36–43.

  26. Araz OM, Jehn M, Lant T, Fowler JW. A new method of exercising pandemic preparedness through an interactive simulation and visualization. J Med Syst. 2012;36:1475–83.

  27. Araz OM, Jehn M. Improving public health emergency preparedness through enhanced decision-making environments: a simulation and survey based evaluation. Technol Forecasting Soc Change. 2013;80:1775–81.

  28. Atack L, Luke R. Impact of an online course on infection control and prevention competencies. J Adv Nurs. 2008;63(2):175–80.

  29. Atlas RM, Clover RD, Carrico R, Wesley G, Thompson M, McKinney WP. Recognizing biothreat diseases: realistic training using standardized patients and patient simulators. J Public Health Manage Pract. 2005;November(Suppl):143–6.

  30. Baldwin K, LaMantia J, Proziack L. Emergency preparedness and bioterrorism response: development of an educational program for public health personnel. Public Health Nurs. 2005;22(3):248–53.

  31. Bazeyo KM, Bagonza J, Halage A, Okure G, Mugagga M, et al. Ebola a reality of modern public health; need for surveillance, preparedness and response training for health workers and other multidisciplinary teams: a case for Uganda. Pan Afr J. 2015;20(404):1–12.

  32. Becker KM, Ohuabunwo C, Ndjakani Y, Nguku P, Nsubuga P, et al. Field epidemiology and laboratory training programs in West Africa as a model for sustainable partnerships in animal and human health. J Am Vet Med Assoc. 2012;241(5):572–9.

  33. Berrian AM, Smith MH, Van Rooyen J, Martinez-Lopez B, Plank MN, Smith WA, Conrad PA. A community-based One Health education program for disease risk mitigation at the human-animal interface. One Health. 2018;5:9–20.

  34. Biddinger PD, Savoia E, Massin-Short SB, Preston J, Stoto MA. Public health emergency preparedness exercises: lessons learned. Public Health Rep. 2010;125(5):100–6.

  35. Cathcart LA, Ramirez-Leon G, Orozco YA, Flanagan EA, Young SE, et al. An efficient model for designing medical countermeasure just-in-time training during public health emergencies. Am J Public Health. 2018;108:212–4.

  36. Chandler T, Qureshi K, Gebbie K, Morse S. Teaching emergency preparedness to public health workers: use of blended learning in web-based training. Public Health Rep. 2008;123:676–80.

  37. Chiu M, Polivka BJ, Stanley SAR. Evaluation of a disaster-surge training for public health nurses. Public Health Nurs. 2011;29(2):136–42.

  38. Craig AT, Armstrong PK. Exercise Paton: a simulation exercise to test New South Wales emergency departments’ response to pandemic influenza. CDI. 2007;31(3):310–3.

  39. Dausey DJ, Buehler JW, Lurie N. Designing and conducting tabletop exercises to assess public health preparedness for manmade and naturally occurring biological threats. BMC Public Health. 2007;7:92. https://doi.org/10.1186/1471-2458-7-92.

  40. Dausey DJ, Moore M. Using exercises to improve public health preparedness in Asia, the Middle East and Africa. BMC Res Notes. 2014;7(474):1–7.

  41. Dickmann P, Abraham T, Sarkar S, Wysocki P, Cecconi S, Apfel F, Nurm U. Risk communication as a core public health competence in infectious disease management: development of the ECDC training curriculum and programme. Eurosurveillance. 2016;21(4):1–5.

  42. El-Bahnasawy MM, Labib NA, Abdel-Fattah MAH, Ibrahim AHA, Morsy TA. Selected infectious disease disasters for nursing staff training at the Egyptian eastern border. J Egypt Soc Parasitol. 2014;44(1):41–54.

  43. Faass J, Greenberg M, Lowrie KW. Defending a moving target: H1N1 preparedness training for the transit industry. Health Promot Pract. 2013;14(1):24–9.

  44. Fowkes V, Blossom J, Anderson HK, Sandrock C. Emergency preparedness for health professionals in a statewide AHEC program: the first two years. Acad Med. 2007;82:781–7.

  45. Fowkes V, Blossom HJ, Sandrock C, Mitchell B, Brandstein K. Exercises in emergency preparedness for health professionals in community clinics. J Community Health. 2010;35:512–8.

  46. Gershon RRM, Vandelinde N, Magda LA, Pearson JM, Werner A, Prezant D. Evaluation of a pandemic preparedness training intervention for emergency medical services personnel. Prehosp Disaster Med. 2010;24(6):508–11.

  47. Grillo M, Woodland K, Talavera G, Shaffer R, Brodine S. Short-term transfer of knowledge assessment in the military international HIV training program (MIHTP). Curr HIV Res. 2017;15:188–201.

  48. Hegle J, Markiewicz M, Benson P, Horney J, Rosselli R, MacDonald P. Lessons learned from North Carolina public health regional surveillance teams’ regional exercises. Biosecur Bioterror. 2011;9(1):41–7.

  49. Hoeppner MM, Olson D, Larson SC. A longitudinal study of the impact of an emergency preparedness curriculum. Public Health Rep. 2010;5(125):24–32.

  50. Horney JA, MacDonald PDM, Rothney EE, Alexander LK. User patterns and satisfaction with on-line trainings completed on the North Carolina Center for Public Health Preparedness training web site. J Public Health Manage Pract. 2005;November(Suppl):90–4.

  51. Hueston WD. Joint degree programs in public health. JVME. 2008;35(2):153–9.

  52. Johnson YR, Herrmann JA, Wallace RL, Troutt HF, Myint MS. Development and implementation of a functional exercise to assess public health agency response to foodborne terrorism. J Homeland Secur Emerg Manage. 2009;6(1):49.

  53. Kohn S, Barnett DJ, Galastri C, Semon NL, Links JM. Public health-specific National Incident Management System trainings: building a system for preparedness. Public Health Rep. 2010;5(125):43–50.

  54. Livet M, Richter J, Ellison L, Dease B, McClure L, Feigley C, Richter DL. Emergency preparedness academy adds public health to readiness equation. J Public Health Manage Pract. 2005;November(Suppl):4–10.

  55. Macario E, Benton LD, Yen J, Torres M, Macias-Reynolds V, Holsclaw P, Nakahara N, Connell JM. Public Health Nurs. 2007;24(1):66–72.

  56. Martin G, Boland M. Planning and preparing for public health threats at airports. Glob Health. 2018;14(28):1–5.

  57. Mitka M. Bioterror exercises test agencies’ mettle (reprinted). JAMA. 2003;289(22):2927–8.

  58. Morris JG, Greenspan A, Howell K, Gargano LM, Mitchell J, Jones JL, Potter M, Isakov A, Woods C, Hughes JM. Southeastern Center for Emerging Biologic Threats tabletop exercise: foodborne toxoplasmosis outbreak on college campuses. Biosecur Bioterror. 2012;10(1):89–97.

  59. Olson D, Hoeppner M, Larson S, Ehrenberg A, Leitheiser AG. Lifelong learning for public health practice education: a model curriculum for bioterrorism and emergency readiness. Public Health Rep. 2008;123(2):53–64.

  60. Orfaly RA, Frances JC, Campbell P, Whittemore B, Joly B, Koh H. Train-the-trainer as an educational model in public health preparedness. J Public Health Manage Pract. 2005;November(Suppl):123–7.

  61. Orfaly RA, Biddinger PD, Burstein JL, Leaning J. Integration of academia and practice in preparedness training: the Harvard School of Public Health experience. Public Health Rep. 2005;120(1):48–51.

  62. Otto JL, Lipnick RJ, Sanchez JL, DeFraites RF, Barnett DJ. Preparing military installations for pandemic influenza through tabletop exercises. Mil Med. 2010;175:7–13.

  63. Peddecord KM, Holsclaw P, Gomez Jacobson I, Kwizera L, Rose K, Gersberg R, Macias-Reynolds V. Nationwide satellite training for public health professionals: web-based follow-up. J Contin Educ Health Prof. 2007;27(2):111–7.

  64. Potter MA, Burns HK, Barron G, Grofebert A, Bednarz GD. Cross-sector leadership development for preparedness. Public Health Rep. 2005;125(1):109–15.

  65. Quiram BJ, Carpender K, Pennel C. The Texas Training Initiative for Emergency Response (T-TIER): an effective learning strategy to prepare the broader audience of health professionals. J Public Health Manag Pract. 2005;November(Suppl):83–9.

  66. Qureshi KA, Gershon RM, Merrill JA, Calero-Breckheimer A, Murrman M, et al. Effectiveness of an emergency preparedness training program for public health nurses in New York City. Fam Commun Health. 2004;24(3):242–9.

  67. Rega PP, Fink BN. Immersive simulation education: a novel approach to pandemic preparedness and response. Public Health Nurs. 2013;31(2):167–74.

  68. Richter J, Livet M, Jill S, Feigley CE, Scott G, Richter DL. Coastal terrorism: using tabletop discussions to enhance coastal community infrastructure through relationship building. J Public Health Manag Pract. 2005;November(Suppl):45–9.

  69. Rottman SJ, Shoaf KI, Dorian A. Development of a training curriculum for public health preparedness. 2005;November(Suppl):128–31.

  70. Sandstrom BE, Eriksson H, Norlander L, Thorstensson M, Cassel G. Training of public health personnel in handling CBRN emergencies: a table-top exercise card concept. Environ Int. 2014;72:164–9.

  71. Sarpy SA, Warren CR, Seth K, Bradley J, Howe R. Simulating public health response to a severe acute respiratory syndrome (SARS) event: a comprehensive and systematic approach to designing, implementing, and evaluating a tabletop exercise. Public Health Manage Pract. 2005;November(Suppl):75–82.

  72. Savoia E, Biddinger PD, Fox P, Levin D, Stone L, Stoto MA. Impact of tabletop exercises on participants’ knowledge of and confidence in legal authorities for infectious disease emergencies. Disaster Med Public Health Prep. 2009;3(2):104–10.

  73. Savoia E, Preston J, Biddinger PD. A consensus process on the use of exercises and after action reports to assess and improve public health emergency preparedness and response. Prehosp Disaster Med. 2013;28(3):305–8.

  74. Soeters HM, Koivogui L, de Beer L, Johnson CY, Diaby D, et al. Infection prevention and control training and capacity building during the Ebola epidemic in Guinea. PLoS One. 2018;13(2):1–8.

  75. Taylor JL, Roup BJ, Blythe D, Reed G, Tate TA, Moore KA. Pandemic influenza preparedness in Maryland: improving readiness through tabletop exercise. Biosecur Bioterror. 2005;3(1):61–9.

  76. Umble KE, Cervero RM, Yang B, Atkinson WL. Effects of traditional classroom and distance continuing education: a theory-driven evaluation of a vaccine-preventable diseases course. Am J Public Health. 2000;90(8):1218–24.

  77. Waltz EC, Maniccia DM, Bryde RL, Murphy K, Harris BR, Waldenmaier MN. Training the public health workforce from Albany to Zambia: technology lessons learned along the way. Public Health Rep. 2010;125(5):61–9.

  78. Wang C, Wei S, Xiang H, Wu J, Xu Y, et al. Development and evaluation of a leadership training program for public health emergency response: results from a Chinese study. BMC Public Health. 2008;8:377.

  79. Wang C, Wei S, Xiang H, Xu Y, Han S, et al. Evaluating the effectiveness of an emergency preparedness training programme for public health staff in China. Public Health. 2008;122:471–7.

  80. Wang C, Xiang H, Xu Y, Hu D, Zhang W, et al. Improving emergency preparedness capability of rural public health personnel in China. Public Health. 2010;124:339–44.

  81. Yamada S, Durand AM, Chen TH, Maskarinec GG. Interdisciplinary problem-based learning as a method to prepare Micronesia for public health emergencies. Dev Hum Resour Health Pacific. 2007;14(1):98–102.

  82. Yellowlees P, Cook JN, Marks SL, Wolfe D, Mangin E. Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program. Biosecur Bioterror. 2007;6(1):36–44.

  83. Moynihan DP. The network governance of crisis response: case studies of incident command systems. J Public Adm Res Theory. 2009;19:895–915.

  84. Koliba CJ, Mills RM, Zia A. Accountability in governance networks: an assessment of public, private, and nonprofit emergency management practices following Hurricane Katrina. Public Adm Rev. 2011;71(2):210–20.

  85. Pei L, Wu H. Does online learning work better than offline learning in undergraduate medical education? A systematic review and meta-analysis. Med Educ Online. 2019;24(1):1666538.

  86. Chen F, Lui AM, Martinelli SM. A systematic review of effectiveness of flipped classrooms in medical education. Med Educ. 2017;51(6):585–97.

  87. World Health Organization. Simulation exercise & after-action review analysis shows need to increase awareness of benefits. WHO; 2019. [Cited 2 August 2019]. Available from: https://extranet.who.int/sph/news/simulation-exercise-after-action-review-analysis-shows-need-increase-awareness-benefits.

  88. European Union AIRSAN project. AIRSAN; 2013–2015. [Cited 8 November 2019]. Available from: https://www.airsan.eu/.


Acknowledgements

We wish to gratefully acknowledge Rikie Deurenberg for her help in the development of the search syntax for all databases.

Funding

This publication has been produced with the support of the European Commission’s Consumers, Health, Agriculture and Food Executive Agency (CHAFEA) for the Healthy Gateways Joint Action (grant agreement no. 801493) and support from the Dutch Ministry of Health, Welfare and Sport. The content represents the views of the author only and is his/her sole responsibility; it cannot be considered to reflect the views of the European Commission and/or the Consumers, Health, Agriculture and Food Executive Agency (CHAFEA) or any other body of the European Union. The European Commission and the Agency do not accept any responsibility for use that may be made of the information it contains.

Author information

Contributions

DdR, EB and AT designed the study and further contributed to the design of the work. DdR, EB and AT interpreted the data. DdR drafted the work, while VM, CH, EB, JR and AT substantively revised it. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Doret de Rooij.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Theoretical background. Theoretical background of the data analysis of the review; extended version as compared to the section presented in the article.

Additional file 2.

Search syntax. The search syntax used for this literature review.

Additional file 3.

Quality Assessment form. The form composed for quality assessment of the studies; the first part assessing the quality of the education, training or exercise method and the second part that of the performed scientific study.

Additional file 4.

Results Quality Assessment. The results of the quality assessment are shown here.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

de Rooij, D., Belfroid, E., Hadjichristodoulou, C. et al. Educating, training, and exercising for infectious disease control with emphasis on cross-border settings: an integrative review. Global Health 16, 78 (2020). https://doi.org/10.1186/s12992-020-00604-0

  • DOI: https://doi.org/10.1186/s12992-020-00604-0

Keywords