What Is Clinical Evaluation in Nursing Education?
After establishing a framework for evaluating students in clinical practice and exploring one’s own values, attitudes, and biases that may influence evaluation, the teacher identifies a variety of methods for collecting data on student performance. Clinical evaluation methods are strategies for assessing learning outcomes in clinical practice.
That practice may be with patients in hospitals and other health care facilities, with families and communities, in simulation and learning laboratories, or involving other activities using multimedia. Some evaluation methods are most appropriate for use by faculty or preceptors who are on-site with students and can observe their performance; other evaluation methods assess students’ knowledge, cognitive skills, and other competencies but do not involve direct observation of their performance.
There are many evaluation methods for use in nursing education. Some methods, such as keeping journals, are most appropriate for formative evaluation, whereas others are useful for either formative or summative evaluation.
Selecting Clinical Evaluation Methods
There are several factors to consider when selecting clinical evaluation methods to use in a course. First, the evaluation methods should provide information on student performance of the clinical competencies associated with the course. With the evaluation methods, the teacher collects data on performance to judge if students are developing the clinical competencies or have achieved them by the end of the course.
For many outcomes of a course, there are different strategies that can be used, thereby providing flexibility in choosing methods for evaluation. Most evaluation methods provide data on multiple clinical outcomes. For example, a short written assignment in which students compare two different data sets might relate to outcomes on assessment, analysis, and writing.
In planning the evaluation for a clinical course, the teacher reviews the outcomes or competencies to be developed and decides which evaluation methods will be used for assessing them, recognizing that most methods provide information on more than one outcome or competency. In clinical courses in nursing programs, students are typically evaluated on the outcomes of clinical practice, as identified earlier.
These relate to students’ knowledge; their use of evidence in practice; their higher-level thinking skills; their psychomotor, technological, and informatics competencies; their communication skills; their values and professional behaviors; their quality and safety competencies; their leadership skills; their responsibility; and their self-assessment and development.
Some of these competencies are easier to assess than others, but all should be included in the evaluation process. Because of the breadth of competencies students need to develop, multiple strategies should be used for assessment in clinical courses. Second, there are many different clinical evaluation strategies that might be used to assess performance. Varying the methods maintains student interest and takes into account the individual needs, abilities, and characteristics of learners.
Some students may be more proficient in methods that depend on writing, whereas others prefer strategies such as conferences and other discussion forms. Planning for multiple evaluation methods in clinical courses, as long as they are congruent with the outcomes to be evaluated, reflects these differences among students. It also avoids relying on one method, such as a rating scale, for determining the entire clinical grade.
Third, the teacher should always select evaluation methods that are realistic considering the number of students to be evaluated, available practice or simulation activities, and constraints such as the teacher’s or preceptor’s time. Planning for an evaluation method that depends on patients with specific health problems or particular clinical situations may not be realistic considering the types of experiences with actual or simulated patients available to students.
Some methods are not appropriate because of the number of students who would need to use them within the time frame of the course. Others may be too costly or require resources not available in the nursing education program or health care setting. Fourth, evaluation methods can be used for either formative or summative evaluation.
In the process of deciding how to evaluate students’ clinical performance, the teacher should identify whether the methods will be used to provide feedback to learners (formative) or for grading (summative). With formative clinical evaluation, the focus is on the progression of students in meeting the learning goals (Bonnel, Gomez, Lobodzinski, & West, 2005; Emerson, 2007; Hand, 2006; O’Connor, 2006).
At the end of the rotation, course, or semester, summative evaluation establishes whether the student met those goals and is competent (Gallant, MacDonald, & Smith Higuchi, 2006; Scanlan, Care, & Gessler, 2001; Skingley, Arnott, Greaves, & Nabb, 2006).
In clinical practice, students should know ahead of time whether the assessment by the teacher is for formative or summative purposes. Some of the methods designed for clinical evaluation provide feedback to students on areas for improvement and should not be graded. Other methods such as rating scales and written assignments can be used for summative purposes and therefore can be calculated as part of the course or clinical grade.
Fifth, before finalizing the protocol for evaluating clinical performance in a course, the teacher should review the purpose of each assignment completed by students in clinical practice and should decide on how many assignments will be in the course. What are the purposes of these assignments, and how many are needed to demonstrate competency?
In some clinical courses, students complete an excessive number of written assignments. How many assignments, regardless of whether they are for formative or summative purposes, are needed to meet the outcomes of the course? Students benefit from continuous feedback from the teacher, not from repetitive assignments that contribute little to their development of clinical knowledge and skills.
Rather than completing daily or weekly care plans or other repetitive assignments, which may not even be consistent with current practice, students who have developed the competencies can progress to other, more relevant learning activities. Sixth, in deciding how to evaluate clinical performance, the teacher should consider the time needed to complete the evaluation, provide feedback, and grade the assignment.
Instead of requiring a series of written assignments in a clinical course, the same outcomes might be met through discussions with students, case analysis by students in clinical conferences, group-writing activities, and other methods requiring less teacher time that accomplish the same purposes.
Considering the demands on nursing faculty members, it is important to consider one’s own time when planning how to evaluate students’ performance in clinical practice (Oermann, 2004).
Observation
The predominant strategy for evaluating clinical performance is observing students in clinical practice, simulation and learning laboratories, and other settings. In a survey of 1,573 faculty members representing all types of prelicensure nursing programs (diploma, 128; associate degree, 866; baccalaureate, 563; and other entry-level, 16), observation of student performance was the predominant strategy used across programs (93%) (Oermann, Yarbrough, Ard, Saewert, & Charasika, 2009).
Although observation is widely used, there are threats to its validity and reliability. First, observations of students may be influenced by the teacher’s values, attitudes, and biases, as discussed. There may also be over-reliance on first impressions, which might change as the teacher or preceptor observes the student over a period of time and in different situations.
In any performance assessment there needs to be a series of observations made before drawing conclusions about performance. Second, in observing performance, there are many aspects of that performance on which the teacher may focus attention.
For example, while observing a student administer an IV medication, the teacher may focus mainly on the technique used for its administration, ask limited questions about the purpose of the medication, and make no observations of how the student interacts with the patient.
Another teacher observing this same student may focus on those other aspects. The same practice situation, therefore, may yield different observations. Third, the teacher may arrive at incorrect judgments about the observation, such as inferring that a student is inattentive during conference when in fact the student is thinking about the comments made by others in the group.
It is important to discuss observations with students, obtain their perceptions of their behavior, and be willing to modify one’s own inferences when new data are presented. In discussing observations and impressions with students, the teacher can learn about their perceptions of performance; this, in turn, may provide additional information that influences the teacher’s judgment about competencies (Oermann, 2008).
Fourth, every observation in the clinical setting reflects only a sampling of the learner’s performance during a clinical activity. An observation of the same student at another time may reveal a different level of performance. The same holds true for observations of the teacher; on some clinical days and for some classes the teacher’s behaviors do not represent a typical level of performance.
An observation of the same teacher during another clinical activity and class may reveal a different quality of teaching. Finally, similar to other clinical evaluation methods, the outcomes or competencies guide the teacher on what to observe. They help focus the teacher’s observations of performance. However, all observed behaviors should be shared with the students.
Anecdotal Notes
It is difficult if not impossible to remember the observations made of each student for each clinical activity. For this reason, teachers need a device to help them remember their observations and the context in which the performance occurred. Observations of students in clinical settings, simulation and learning laboratories, and other settings can be recorded in several ways, for example, with anecdotal notes, checklists, and rating scales.
Anecdotal notes are narrative descriptions of the teacher’s observations of students. Some teachers include only a description of the observations and then, after a series of observations, review the pattern of the performance and draw conclusions about it. Other teachers record their observations and include a judgment about how well the student performed (Case & Oermann, in press).
Anecdotal notes should be recorded as close to the time of the observation as possible; otherwise it is difficult to remember what was observed and the context, for example, the patient and clinical situation, of that observation. In the clinical setting, notes can be handwritten on flow sheets, on other forms, or as narratives. They also can be recorded in Personal Digital Assistants (PDAs).
Software is available for teachers to keep a running anecdotal record for each student, or they can use the available software on their PDA. The anecdotal notes can then be exported to the computer for formatting and printing. White and colleagues (2005) described how they used PDAs for clinical evaluation. The evaluation tool is stored in the PDA, and faculty members add their anecdotal notes.
Not only is the PDA valuable for documenting performance related to the course competencies and storing anecdotal notes, but at the end of the clinical course there is also a completed document on the student’s clinical performance (White et al., 2005). Faculty members then synchronize this information with their computers and transfer their anecdotal notes into a word-processed document to complete the summative clinical evaluation tool.
The goal of the anecdotal note is to provide a description of the student’s performance as observed by the teacher or preceptor. Liberto, Roncher, and Shellenbarger (1999) identified five key areas to include in an anecdotal note:
■ Date of the observation
■ Student name
■ Faculty signature
■ Setting of the observation, and
■ Record of student actions, with an objective and detailed description of the observed performance (p. 16).
Anecdotal notes should be shared with students as frequently as possible; otherwise they are not effective for feedback. Considering the issues associated with observations of clinical performance, the teacher should discuss observations with the students and be willing to incorporate students’ own judgments about the performance.
Anecdotal notes also are useful in conferences with students, for example, at midterm and end-of-term, as a way of reviewing a pattern of performance over time. When there are sufficient observations about performance, the notes can serve as documentation for ratings on the clinical evaluation tool.
Checklists
A checklist is a list of specific behaviors or activities to be observed, with a place for marking whether or not they were present during the performance (Nitko & Brookhart, 2007). A checklist often lists the steps to be followed in performing a procedure or demonstrating a skill. Some checklists also include errors in performance that are commonly made.
Checklists not only facilitate the teacher’s observation of procedures and behaviors performed by students and nurses learning new technologies and procedures, but they also provide a way for learners to assess their own performance. With checklists, learners can review and evaluate their performance prior to assessment by the teacher. Checklists are used frequently in health care settings to assess skills of nurses and document their continuing competence in performing them.
Whelan (2006) described an annual orthopedic skills day that is used to assess the competency of nurses in one clinical setting. Prior to the skills day, the nurses receive a packet of information about the skills that will be validated. Stations are set up to provide an opportunity for nursing staff members to practice their skills; a checklist is then used to validate their competency.
For common procedures and skills, teachers often can find checklists already prepared that can be used for evaluation, and some nursing textbooks have accompanying skills checklists. When these resources are not available, teachers can develop their own checklists. Initially, it is important to review the procedure or competency to understand the steps in the procedure and the critical elements in its performance. The steps that follow indicate how to develop a checklist for rating performance:
1. List each step or behavior to be demonstrated in the correct order.
2. Add to the list specific errors students often make (to alert the assessor to observe for these).
3. Develop the list into a form to check off the steps or behaviors as they are performed in the proper sequence (Nitko & Brookhart, 2007).
In designing checklists, it is important not to include every possible step, which makes the checklist too cumbersome to use, but to focus instead on critical items and where they fit into the sequence. The goal is for students to learn how to perform a procedure safely and to understand the order of steps in the procedure.
When there are different ways of performing a skill, the students should be allowed that flexibility when evaluated. Exhibit 13.1 provides an example of a checklist developed from the sample competency and performance criteria used in Exhibit 12.2.
Rating Scales
Rating scales, also referred to as clinical evaluation tools or instruments, provide a means of recording judgments about the observed performance of students in clinical practice. A rating scale has two parts:
(a) a list of outcomes, competencies, or behaviors the student is to demonstrate in clinical practice
(b) a scale for rating the student’s performance of them
Rating scales are most useful for summative evaluation of performance: after observing students over a period of time, the teacher draws conclusions about performance, rating it according to the scale provided with the instrument. They also may be used to evaluate specific activities that the students complete in clinical practice, for example, rating a student’s presentation of a case in clinical conference or the quality of teaching provided to a patient.
Other uses of rating scales are to:
(a) help students focus their attention on critical behaviors to be performed in clinical practice
(b) give specific feedback to students about their performance
(c) demonstrate growth in clinical competencies over a designated time period if the same rating scale is used
The same rating scale can be used for multiple purposes. Exhibit 13.2 shows sample behaviors from a rating scale that is used midway through a course; in Exhibit 13.3 those same behaviors are used for the final evaluation, but the performance is rated as “satisfactory” or “unsatisfactory” as a summative rating.
Types of Rating Scales
Many types of rating scales are used for evaluating clinical performance. The scales may have multiple levels for rating performance, such as 1 to 5 or exceptional to below average, or have two levels, such as pass–fail. Types of scales with multiple levels for rating performance include:
■ Letters: A, B, C, D, E or A, B, C, D, F
■ Numbers: 1, 2, 3, 4, 5
■ Qualitative labels: Excellent, very good, good, fair, and poor; Exceptional, above average, average, and below average, and
■ Frequency labels: Always, usually, frequently, sometimes, and never.
Exhibits 13.4 and 13.5 provide examples of rating scales for clinical evaluation that have multiple levels for rating performance.
Some instruments have a matrix for rating clinical performance that combines different qualities of the performance. An example of a matrix is Bondy’s Criterion Matrix, which uses a 5-point scale to rate the quality of a student’s performance based on the appropriateness of the performance, qualitative aspects of the performance, and the degree of assistance needed by the student (Bondy, Jenkins, Seymour, Lancaster, & Ishee, 1997).
Holaday and Buckley (2008) adapted that matrix for their tool, which rates performance at five levels of competence, from dependent to self-directed. A score is generated from the ratings and can be converted to a grade. A short description included with the letters, numbers, and labels for each of the outcomes, competencies, or behaviors rated improves objectivity and consistency (Nitko & Brookhart, 2007).
For example, if teachers were using a scale of exceptional, above average, average, and below average, or based on the numbers 4, 3, 2, and 1, short descriptions of each level in the scale could be written to clarify the performance expected at each level. For the clinical outcome “Collects relevant data from patient,” the descriptors might be:
Exceptional (or 4): Differentiates relevant from irrelevant data, analyzes multiple sources of data, establishes a comprehensive database, identifies data needed for evaluating all possible nursing diagnoses and patient problems.
Above Average (or 3): Collects significant data from patients, uses multiple sources of data as part of assessment, and identifies possible nursing diagnoses and patient problems based on the data.
Average (or 2): Collects significant data from patients, uses data to develop main nursing diagnoses and patient problems.
Below Average (or 1): Does not collect significant data and misses important cues in data; unable to explain relevance of data for nursing diagnoses and patient problems.
Rating scales for clinical evaluation also may have two levels, such as pass–fail and satisfactory–unsatisfactory. A survey of nursing faculty from all types of programs indicated that most faculty members (n = 1,116; 83%) used pass–fail in their clinical courses (Oermann et al., 2009).
This finding is consistent with an earlier survey of 79 randomly selected nursing programs, which found that 75% (n = 59) of the programs had pass–fail rating scales for clinical evaluation (Alfaro-LeFevre, 2004). Exhibits 13.6 and 13.7 are examples of clinical evaluation tools that have two levels for rating performance: satisfactory–unsatisfactory and pass–fail.
Issues With Rating Scales
One problem in using rating scales is apparent through a review of the sample scale descriptors. What are the differences between above average and average? Between a “2” and “1”? Is there consensus among faculty members using the rating scale as to what constitutes different levels of performance for each outcome, competency, or behavior evaluated?
This problem exists even when descriptions are provided for each level of the rating scale. Teachers may differ in their judgments of whether the student collected relevant data, whether multiple sources of data were used, whether the database was comprehensive, whether all possible nursing diagnoses were considered, and so forth.
Scales based on frequency labels are often difficult to implement because of limited opportunities for students to practice and demonstrate a level of skill rated as “always, usually, frequently, sometimes, and never.” How should teachers rate students’ performance in situations in which they practiced the skill perhaps once or twice? Even with two-level scales such as pass–fail, there is room for variability among educators.
Nitko and Brookhart (2007) identified eight common errors that can occur with rating scales applicable to rating clinical performance. Three of these can occur with tools that have multiple points on the scale for rating performance, such as 1 to 5 or below average to exceptional:
1. Leniency error results when the teacher tends to rate all students toward the high end of the scale.
2. Severity error is the opposite of leniency: the teacher tends to rate all students toward the low end of the scale.
3. Central tendency error is the hesitancy to mark either end of the rating scale, using only the midpoint of the scale instead.
Rating students only at the extremes or only at the midpoint of the scale limits the validity of the ratings for all students and introduces the teacher’s own biases into the evaluation (Nitko & Brookhart, 2007).
Three other errors that can occur with any type of clinical performance rating scale are a halo effect, personal bias, and a logical error.
4. Halo effect is a judgment based on a general impression of the student. With this error the teacher lets an overall impression of the student influence the ratings of specific aspects of the student’s performance.
This impression is considered a “halo” around the student that affects the teacher’s ability to objectively evaluate and rate specific competencies or behaviors on the tool. The halo may be positive, giving the student a higher rating than is deserved, or negative, with a general negative impression of the student resulting in lower ratings of specific aspects of the performance.
5. Personal bias occurs when the teacher’s biases influence ratings such as favoring nursing students who do not work while attending school over those who are employed while attending school.
6. Logical error results when similar ratings are given for items on the scale that are logically related to one another.
This is a problem with rating scales in nursing that are too long and often too detailed. For example, there may be multiple behaviors related to communication skills to be rated. The teacher observes some of these behaviors but not all of them.
In completing the clinical evaluation form, the teacher gives the same rating to all behaviors related to communication on the tool. When this occurs, often some of the behaviors on the rating scale can be combined. Two other errors that can occur with performance ratings are rater drift and reliability decay (Nitko & Brookhart, 2007).
7. Rater drift can occur when teachers redefine the performance behaviors to be observed and assessed. Initially in developing a clinical evaluation form, teachers agree on the competencies or behaviors to be rated and the scale to be used.
However, over a period of time, educators may interpret them differently, drifting away from the original intent. For this reason faculty members in a course should discuss as a group each competency or behavior on their clinical evaluation form at the beginning of the course and at the mid-point.
This discussion should include the meaning of the competency or behavior and what a student’s performance would “look like” at each rating level in the tool. Simulated experiences in observing a performance, rating it with the tool, and discussing the rationale for the rating are valuable for preventing rater drift as the course progresses.
8. Reliability decay is a similar issue that can occur.
Nitko and Brookhart indicated that immediately following training on using a performance rating tool, educators tend to use the tool consistently across students and with each other. As the course continues, though, faculty members may become less consistent in their ratings.
Discussion of the clinical evaluation tool among course faculty, as indicated earlier, may improve consistency in use of the tool. Bourbonnais, Langford, and Giannantonio (2008) suggested that conferences with students about the meaning of the behaviors on the tool encourage students to assess whether they are meeting the clinical outcomes and to reflect on their performance.
Although there are issues with rating scales, they remain an important clinical evaluation method because they allow teachers, preceptors, and others to rate performance over time and to note patterns of performance. Exhibit 13.8 provides guidelines for using rating scales for clinical evaluation in nursing.
Most nursing faculty use some type of clinical evaluation tool to evaluate students’ performance in their courses (n = 1,534; 98%) (Oermann et al., 2009). Seventy percent of nursing faculty (n = 1,095) reported in a survey that they used one basic tool for their nursing courses that was adapted for the competencies of each particular course. Only 242 (16%) faculty members reported having a unique evaluation tool for each clinical course (Oermann et al., 2009).
Simulation
Simulation is an event or activity that allows learners to experience a clinical situation without the risks. With simulations, students can develop their psychomotor and technological skills and practice those skills to maintain their competence. Simulations, particularly those involving human patient simulators, enable students to gain thinking and problem-solving skills and to make independent decisions (Schoening, Sittner, & Todd, 2006).
With human patient simulators and complex case scenarios, students can assess a patient or clinical situation, analyze data, make decisions about priority problems and actions to take, implement those interventions, practice complex technologies, and evaluate outcomes. Lasater (2007) conducted a qualitative study that examined the experiences of beginning nursing students with high-fidelity simulations.
She concluded that although simulations appear to be of value in guiding students’ development of clinical judgment skills, more research is needed on this outcome. Research suggests that simulations speed learning of didactic content and development of skills (Kardong-Edgren, Starkweather, & Ward, 2008; Shepherd, Kelly, Skene, & White, 2007).
Another outcome of instruction with simulations is the practice and repetition they provide. Simulations allow students to repeat performance, both cognitive and psychomotor, until competent. Students can practice interacting with patients, staff, and others in a safe environment as well as making decisions as a health care team (Giddens et al., 2008; Oermann, 2006a).
Simulations are being used more frequently in clinical settings with new graduates and experienced nurses. Kuhrik and associates (Kuhrik, Kuhrik, Rimkus, Tecu, & Woodhouse, 2008) reported using high-fidelity simulations to prepare nurses to respond to oncology emergencies and to enhance the education of nurses who care for bone marrow transplant patients.
Simulations are increasingly important as a clinical teaching strategy, given the limited time for clinical practice in many programs and the complexity of skills to be developed by students. Brown (2008) suggested that simulated scenarios ease the shortage of clinical experiences for students because of clinical agency restrictions and fewer practice hours in a curriculum.
In a simulation laboratory, students can practice skills without the constraints of a real-life situation. Although this practice is important, L. Day (2007) stressed that simulations do not replace actual experiences with patients and learning that results from partnering with preceptors.
Using Simulations for Clinical Evaluation
Simulations are not only effective for instruction in nursing, but they also are useful for clinical evaluation. Students can demonstrate procedures and technologies, conduct assessments, analyze clinical scenarios and make decisions about problems and actions to take, carry out nursing interventions, and evaluate the effects of their decisions. Each of these outcomes can be evaluated for feedback to students or for summative grading. There are different types of simulations that can be used for clinical evaluation.
Case scenarios that students analyze can be presented in paper-and-pencil format or through multimedia. Many computer simulations are available for use in evaluation. Simulations can be developed with models and manikins for evaluating skills and procedures, and for evaluation with standardized patients.
With human patient simulators, teachers can identify outcomes and clinical competencies to be assessed, present various clinical events and situations on the simulator for students to analyze and then take action, and evaluate student decision making and performance in these scenarios.
These high-fidelity simulations (mimicking lifelike situations) are best used for formative evaluation. In the debriefing session that follows, the students as a group can discuss the case, review findings, and critique their actions and decisions, with faculty providing feedback (Jeffries, 2007; Schoening et al., 2006).
Many nursing education programs have set up simulation laboratories with human patient simulators, clinically equipped examination rooms, manikins and models for skill practice and assessment, areas for standardized patients, and a wide range of multimedia that facilitate performance evaluations. The rooms can be equipped with two-way mirrors, video cameras, microphones, and other media for observing and performance rating by faculty and others.
Videoconferencing technology can be used to conduct clinical evaluations of students in settings at a distance from the nursing education program, effectively replacing onsite performance evaluations by faculty. For simulations to be used effectively for clinical evaluation, though, teachers need to be adequately prepared for their role. Nursing programs are finding that it is easy to purchase human patient simulators but not as easy to integrate those experiences into the nursing curriculum (Kardong-Edgren et al., 2008).
Incorporating Simulations Into Clinical Evaluation Protocol
The same principles used for evaluating student performance in the clinical setting apply to using simulations. The first task is to identify which clinical outcomes will be assessed with a simulation. This decision should be made during the course planning phase as part of the protocol developed for clinical evaluation in the course.
When deciding on evaluation methods, it is important to remember that assessment can be done for feedback to students and thus remain ungraded, or be used for grading purposes. Once the outcomes or clinical competencies to be evaluated with simulations are identified, the teacher can plan the specifics of the evaluation. Some questions to guide teachers in using simulations for clinical evaluation are:
■ What are the specific clinical outcomes or competencies to be evaluated using simulations? These should be designated in the plan or protocol for clinical evaluation in a course.
■ What types of simulations are needed to assess the designated outcomes, for example, simulations to demonstrate psychomotor and technological skills; ability to identify problems, treatments, and interventions; and pharmacological management?
■ Do the simulations need to be developed by the faculty, or are they already available in the nursing education program?
■ If the simulations need to be developed, who will be responsible for their development? Who will manage their implementation?
■ Are the simulations for formative evaluation only? If so, how many practice sessions should be planned? What is the extent of faculty and expert guidance needed? Who will provide that supervision and guidance?
■ Are the simulations for summative evaluation (ie, for grading purposes)? If used for summative clinical evaluation, then faculty need to determine the process for rating performance and how those ratings will be incorporated into the clinical grade, whether pass–fail or another system for grading.
■ Who will develop or obtain checklists or other methods for rating performance in the simulations?
■ When will the simulations be implemented in the course?
■ How will the effectiveness of the simulations be evaluated, and who will be responsible?
These are only a few of the questions for faculty to consider when planning to use simulations for clinical evaluation in their courses.
Standardized Patients
One type of simulation for clinical evaluation uses standardized patients. Standardized patients are individuals who have been trained to accurately portray the role of a patient with a specific diagnosis or condition. With simulations using standardized patients, students can be evaluated on a history and physical examination, related skills and procedures, and communication techniques, among other outcomes.
Standardized patients are effective for evaluation because the actors are trained to recreate the same patient condition and clinical situation each time they are with a student, providing for consistency in the performance evaluation. When standardized patients are used for formative evaluation, they provide feedback to the students on their performance, an important aid to their learning.
Standardized patients are trained to provide both written and oral feedback; they can complete checklists for assessing skills, share those with students, and provide immediate one-to-one feedback after the experience (Jenkins & Schaivone, 2007).
In a study by Becker and colleagues (Becker, Rose, Berg, Park, & Shatzer, 2006), undergraduate students viewed their experience with standardized patients as positive. One of the important outcomes was getting written feedback from the standardized patient, which gave them a different perspective of their skills and enabled them to compare their self-assessment with the standardized patient’s evaluation.
Students also indicated that the immediacy of the feedback was invaluable. The opportunity to receive immediate feedback also was identified by graduate nursing practitioner students in a study by Theroux and Pearce (2006).
Objective Structured Clinical Examination
An Objective Structured Clinical Examination (OSCE) provides a means of evaluating performance in a simulation laboratory rather than in the clinical setting. In an OSCE students rotate through a series of stations; at each station they complete an activity or perform a task, which is then evaluated. Some stations assess the student’s ability to take a patient’s history, perform a physical examination, and implement other interventions while being observed by the teacher or an examiner.
The student’s performance can then be rated using a rating scale or checklist. At other stations, students might be tested on their knowledge and cognitive skills—they might be asked to analyze data, select interventions and treatments, and manage the patient’s condition. Most often OSCEs are used for summative clinical evaluation; however, they also can be used formatively to assess performance and provide feedback to students. Newble and Reed (2004) identified three types of stations that can be used in an OSCE.
At clinical stations the focus is on clinical competence, for example, taking a history and performing a physical examination, collecting appropriate data, and communicating effectively. Typically at clinical stations there is interaction between the student and a simulated patient (Newble & Reed). At these stations the teacher or examiner can evaluate students’ understanding of varied patient conditions and management of them and can rate the students’ performance.
At practical stations students demonstrate psychomotor skills, perform procedures, use technologies, and demonstrate other technical competencies. Performance at these stations is evaluated by the teacher or examiner, usually with checklists. Two challenges in using OSCE are student stress from being observed during performance and issues with validity and reliability (Rushforth, 2007).
At the third type of station, a static station, there is no interaction with a simulated or standardized patient (Newble & Reed, 2004). This station facilitates the evaluation of cognitive skills such as interpreting lab results and other data, developing management plans, and making other types of decisions about patient care. At these stations the teacher or examiner is not present to observe students.
Games
Games are teaching methods that involve competition, rules (structure), and collaboration among team members. There are individual games such as crossword puzzles or games played against other students either individually or in teams; many games require props or other equipment. Games actively involve learners, promote teamwork, use problem-solving skills, motivate, stimulate interest in a topic, and enable students to relax while learning (Henderson, 2005; Royse & Newton, 2007; Skiba, 2008).
Games, however, are not intended for grading; they should be used only for instructional purposes and formative evaluation. Henderson (2005) described the development of a game lab for nursing students, entitled “Is That Your Final Nursing Answer?” Students rotate in small groups to “play” Nursing Feud, Nursing Jeopardy, So You Want to Be a Millionaire Nurse?, and Wheel of Nursing Fortune, all of which review content from a clinical nursing course. A fifth area in the learning laboratory set up for “play” is titled “What’s Wrong with This Nursing Picture?” (Henderson). In this game, students find violations of nursing principles and common nursing errors made in clinical practice. Answers to game questions given by the student teams are accompanied by a rationale. This is one example of how games can be used for instruction, review, and feedback in nursing education.
Media Clips
Media clips, short segments of a videotape, a CD, a DVD, a video from YouTube, and other forms of multimedia may be viewed by students as a basis for discussions in post clinical conferences, on discussion boards, and for other online activities; used for small group activities; and critiqued by students as an assignment. Media clips often are more effective than written descriptions of a scenario because they allow the student to visualize the patient and clinical situation.
The segment viewed by the students should be short so they can focus on critical aspects related to the outcomes to be evaluated. Media clips are appropriate for assessing whether students can apply concepts and theories to the patient or clinical situation depicted in the media clip, observe and collect data, identify possible problems, identify priority actions and interventions, and evaluate outcomes.
Students can answer questions about the media clips as part of a graded learning activity. Otherwise, media clips are valuable for formative evaluation, particularly in a group format in which students discuss their ideas and receive feedback from the teacher and their peers.
Written Assignments
Written assignments accompanying the clinical experience are effective methods for assessing students’ problem solving, critical thinking, and higher level learning; their understanding of content relevant to clinical practice; and their ability to express ideas in writing. There are many types of written assignments appropriate for clinical evaluation.
The teacher should first specify the outcomes to be evaluated with written assignments and then decide which assignments would best assess whether those outcomes were met. The final decision is how many assignments will be required in a clinical course. Written assignments are valuable for evaluating outcomes in face-to-face, web-based, and other distance education courses in nursing.
However, they are misused when students complete the same assignments repetitively throughout a course once the outcomes have been met. At that point students should progress to other, more challenging learning activities.
Some of the written assignments might be done in post clinical conferences as small-group activities, or as part of the discussion board interaction—teachers still can assess student progress toward meeting the outcomes but with fewer demands on their time for reviewing the assignments and providing prompt feedback on them.
Journal Writing
Journals provide an opportunity for students to describe events and experiences in their clinical practice and to reflect on them. With journals students can “think aloud” and share their feelings with teachers. Journals are not intended to develop students’ writing skills; instead they provide a means of expressing feelings and reflections on clinical practice and engaging in a dialogue with the teacher about them.
When journals are used for reflection, they encourage students to make connections between theoretical knowledge and clinical observations and practice (Billings & Kowalski, 2006; Van Horn & Freed, 2008). Journals can be submitted in electronic formats, for example, by e-mail, web blogs, and discussion forums (Billings & Kowalski).
Electronic submission of journals makes it easier for teachers to provide prompt feedback and engage in dialogue with learners, and it simplifies storing the journals. Journals are not the same as diaries and logs. In a diary, students document their experiences in clinical practice with personal reflections; these reflections are meant to remain “personal” and thus are not shared with the teacher or others. A log is typically a structured record of activities completed in the clinical course, without reflections about the experience. Students may complete any or all of these activities in a nursing program.
When journals are used in a clinical course, students need to be clear about the objectives—what are the purposes of the journal? For example, a journal intended for reflection in practice would require different entries than one for documenting events and activities in the clinical setting as a means of communicating them to faculty. Students also need written guidelines for journal entries, including how many and what types of entries to make.
Depending on the outcomes, journals may be done throughout a clinical course or at periodic intervals. Regardless of the frequency, students need immediate and meaningful feedback about their reflections and entries. One of the issues in using journals is whether they should be graded or used solely for reflection and growth. For those educators who support grading journals, a number of strategies have been used, such as:
■ indicating a grade based on the number of journals submitted rather than on the comments and reflections in them;
■ grading papers written from the journals;
■ incorporating journals as part of portfolios, which are then graded;
■ having students evaluate their own journals based on preset criteria developed by the students themselves; and
■ requiring a journal as one component among others for passing a clinical course.
There are some teachers who grade the entries of a journal similar to other written assignments. However, when the purpose of the journal is to reflect on experiences in clinical practice and on the students’ own behaviors, beliefs, and values, journals should not be graded. By grading journals the teacher inhibits the student’s reflection and dialogue about feelings and perceptions of clinical experiences.
Nursing Care Plans
Nursing care plans enable the student to learn the components of the nursing process and how to use the literature and other resources for writing the plan. However, a linear kind of care planning does not help students learn how problems interrelate, nor does it encourage the complex thinking that nurses must do in clinical practice (Kern, Bush, & McCleish, 2006).
If care plans are used for clinical evaluation, teachers should be cautious about the number of plans required in a course and the outcomes of such an assignment. Short assignments in which students analyze data, examine competing diagnoses, evaluate different interventions and their evidence for practice, suggest alternative approaches, and evaluate outcomes of care are more effective than a care plan that students often paraphrase from their textbooks.
Concept Maps
Concept maps are tools used to visually display relationships among concepts. An example is provided in Figure 13.1. Other names for concept maps are clinical correlation maps, clinical maps, and mind-mapped care plans. Concept maps are an effective way of helping students organize data as they plan for their clinical experience; the map can be developed in a preclinical conference based on the patient’s admitting diagnosis, revised during the clinical day as the student collects data and cares for the patient, and then assessed and discussed in post clinical conference (Hill, 2006).
With a concept map students can “see” graphically how assessment data, diagnoses, interventions, and other aspects of care are related to one another. Mueller, Johnston, and Bligh (2001) combined concept maps and care plans into a strategy they called mind-mapped nursing care plans. Students first develop a generic concept map about something that requires planning, such as a trip they might take.
Then they learn how to develop concept maps for general nursing concepts such as immobility. In small groups, students develop the concept map; illustrate how the concept (eg, immobility) affects various body systems; and identify their assessment, actions, and outcomes for each branch on the map. Students also prepare a concept map from a case study and then proceed to using concept maps in clinical practice.
In most cases, concept maps are best used for formative evaluation. However, with criteria established for evaluation, they also can be graded. Couey (2004) suggested that one way to grade concept maps is to ask students to explain the relationships and cross-links among concepts. This could be done in short papers that accompany the concept map, which are then graded by the teacher similar to other written assignments.
Other areas to assess in a concept map for patient care, depending on the goal of the assignment, are: whether the assessment data are comprehensive, whether the data are linked with the correct diagnoses and problems, whether nursing interventions and treatments are specific and relevant, and whether the relationships among the concepts are indicated and accurate.
Case Method, Unfolding Cases, and Case Study
Case method, unfolding cases, and case study were described earlier; they are strategies for assessing problem solving, decision making, and higher level learning. Cases that require application of knowledge from readings and the classroom or an online component of the course can be developed for analysis by students.
The scenarios can focus on patients, families, communities, the health care system, and other clinical situations that students might encounter in their clinical practice. Although these assignments may be completed as individual activities, they are also appropriate for group work. Cases may be presented for group discussion and peer review in clinical conferences and discussion boards.
In online courses, the case scenario can be presented with open-ended questions and, based on student responses, other questions can be introduced for discussion. Using this approach, cases are effective for encouraging critical thinking. By discussing cases as a clinical group, students are exposed to other possible approaches and perspectives that they may not have identified themselves.
With this method, the teacher can provide feedback on the content and thought process used by students to arrive at their answers. One advantage of short cases, unfolding cases, and case studies is that they can be graded. By using the principles described for scoring essay tests, the teacher can establish criteria for grading and scoring responses to the questions with the case. Otherwise cases are useful for formative evaluation and student self-assessment.
Process Recording
Process recordings provide a way of evaluating students’ ability to analyze interactions they have had with patients or in simulated clinical activities. Process recordings are useful for providing feedback to students about their interactional skills, but the analysis of the communication also may be graded. With process recordings, students can reflect on their interactions and what they might have done differently.
For distance education courses, they provide one source of information about student learning in clinical practice and development of communication skills. When portfolios are used for clinical evaluation, process recordings might be included for outcomes related to communication and interpersonal relationships.
Papers
Short papers for assessing critical thinking and other cognitive skills were described earlier. In short papers about clinical practice, students can:
■ Given a data set, identify patient problems and what additional data need to be collected
■ Compare data and problems of patients for whom they have provided nursing care, identifying similarities and differences
■ Given a hypothetical patient or community problem, identify possible interventions with a rationale
■ Select a patient, family, or community diagnosis, and describe relevant interventions with evidence for their use
■ Identify one intervention used with a patient, family, or community; identify one alternative approach that could be used; and provide a rationale
■ Identify a decision made in clinical practice involving patients or staff, describe why they made that decision, and propose another approach that could be used
■ Identify a problem or an issue they had in clinical practice, critique the approaches they used for resolving it, and identify alternate approaches.
Short written assignments in clinical courses may be more beneficial than longer assignments because with long papers students often summarize from the textbook and other literature without engaging in any of their own thinking about the content (Oermann, 2006b). Short papers can be used for formative evaluation or graded. Term papers also may be written about clinical practice. With term papers, students can critique and synthesize relevant literature and write a paper about how that literature relates to patient care.
Or they might prepare a paper on the use of selected concepts and theories in clinical practice. If the term paper includes the submission of drafts combined with prompt feedback on writing from the teacher, it can be used as a strategy for improving writing skills. Although drafts of papers are assessed but not graded, the final product is graded by the teacher.
There are many other written assignments that can be used for clinical evaluation in a nursing course. Similar to any assignment in a course, requirements for papers should be carefully thought out: What outcomes will be met with the assignment, how will they contribute to clinical evaluation in the course, and how many of those assignments does a student need to complete for competency? In planning the clinical evaluation protocol, the teacher should exercise caution in the type and number of written assignments so that they promote learning without unnecessary repetition.
Portfolio
A portfolio is a collection of projects and materials developed by the student that document achievement of the objectives of the clinical course. With a portfolio, students can demonstrate what they have learned in clinical practice and the competencies they have developed. Portfolios are valuable for clinical evaluation because students provide evidence in their portfolios to confirm their clinical competencies and document new learning and skills acquired in a course.
The portfolio can include evidence of student learning for a series of clinical experiences or over the duration of a clinical course. Portfolios also can be developed for program evaluation purposes to document achievement of curriculum or program outcomes. Portfolios can be evaluated and graded by faculty members based on predetermined criteria.
They also can be used for students’ self-assessment of their progress in meeting personal and professional goals. Students can continue using their portfolios after graduation—for career development, for job applications, as part of their annual performance appraisals, for applications for educational programs, and as documentation of continuing competence (Oermann, 2002).
Nitko and Brookhart (2007) identified two types of portfolios: best work, and growth and learning progress. Best-work portfolios contain the student’s best final products (p. 250). These provide evidence that the student has demonstrated certain competencies and achievements in clinical practice, and thus are appropriate for summative clinical evaluation.
Growth-and-learning-progress portfolios are designed for monitoring students’ progress and for self-reflection on learning outcomes at several points in time. These contain products and work of the students in process and at the intermediate stages, for the teacher to review and provide feedback (Nitko & Brookhart). For clinical evaluation, these purposes can be combined.
The portfolio can be developed initially for growth and learning, with products and entries reviewed periodically by the teacher for formative evaluation, and then as a best-work portfolio with completed products providing evidence of clinical competencies. The best work portfolio then can be graded. Because portfolios are time-consuming to develop, they should be used to determine if students met the objectives and passed the clinical course, and should be graded rather than prepared only for self-reflection.
The contents of the portfolio depend on the clinical objectives and competencies to be achieved in the course. Many types of materials and documentation can be included in a portfolio. For example, students can include short papers they completed in the course, a term paper, reports of group work, reports and analyses of observations made in the clinical setting, self-reflections on clinical experiences, concept maps, and other products they developed in their clinical practice.
The key is for students to choose materials that demonstrate their learning and development of clinical competencies. By assessing the portfolio, the teacher should be able to determine whether the students met the outcomes of the course. There are several steps to follow in using portfolios for clinical evaluation in nursing. Nitko and Brookhart (2007) emphasize that the first step guides faculty members in deciding whether a portfolio is an appropriate evaluation method for the course.
Step 1: Identify the purpose of the portfolio.
■ Why is a portfolio useful in the course? What goals will it serve?
■ Will the portfolio serve as a means of assessing students’ development of clinical competencies, focusing predominantly on the growth of the students? Will the portfolio provide evidence of the students’ best work in clinical practice, including products that reflect their learning over a period of time? Or will the portfolio serve both purposes, enabling the teacher to give continuous feedback to students on the process of learning and the projects on which they are working, as well as providing evidence of their achievements in clinical practice?
■ Will the portfolio be used for formative or for summative evaluation? Or both?
■ Will the portfolio provide assessment data for use in a clinical course? Or will it be used for curriculum and program evaluation?
■ Will the portfolio serve as a means of assessing prior learning and therefore have an impact on the types of learning activities or courses that students complete, for instance, for assessing the prior learning of registered nurses entering a higher degree program or for licensed practical nurses entering an associate degree program?
■ What is the role of the students, if any, in defining the focus and content of the portfolio?
Step 2: Identify the types of entries and content to be included in the portfolio.
■ What types of entries are required in the portfolio, for example, products developed by students, descriptions of projects with which the students are involved, descriptions of clinical learning activities and reactions to them, observations made in clinical practice and analysis of them, and papers completed by the students, among others?
■ In addition to required entries, what other types of content and entries might be included in the portfolio?
■ Who determines the content of the portfolio and the types of entries? Teacher only? Student only? Or both?
■ Will the entries be the same for all students or individualized by the student?
■ What is the minimum number of entries to be considered satisfactory?
■ How should the entries in the portfolio be organized, or will the students choose how to organize them?
■ Are there required times for entries to be made in the portfolio, and when should the portfolio be submitted to the teacher for review and feedback?
■ Will teacher and student meet in a conference to discuss the portfolio?
Step 3: Decide on the evaluation of the portfolio entries, including criteria for evaluating individual entries and the portfolio overall.
■ How will the portfolio be integrated within the clinical evaluation grade and course grade, if at all?
■ What criteria will be used to evaluate, and perhaps score, each type of entry and the portfolio as a whole?
■ Will only the teacher evaluate the portfolio and its entries? Will only the students evaluate their own progress and work? Or will the evaluation be a collaborative effort?
■ Should a rubric be developed for scoring the portfolio and individual entries? Is there one available in the nursing education program that could be used?
These steps and questions provide guidelines for teachers in developing a portfolio system for clinical evaluation in a course or for other purposes in the nursing education program.
Electronic Portfolios
Portfolios can be developed and stored electronically, which facilitates updating and revising entries, as compared with portfolios that include hard copies of materials. In addition to easy updating, prior versions of the portfolio can be archived. Students can develop an electronic portfolio in a nursing course and then reformat it for program evaluation purposes, use it in a capstone nursing course, or for a job application. The electronic portfolio can be saved on a local computer, course website, or CD, and can be easily sent to others for feedback or scoring. Some other reasons for using electronic portfolios in a course:
■ They can be shared with others at limited or no cost (eg, on the Web, by e-mail, or as a CD) and updated easily.
■ They can document learning and development over a period of time.
■ They can be modified for class and program assessment, graduation requirements, or a job search.
■ They can include a variety of multimedia.
■ They are interactive, and through use of hypertext, students can connect ideas, projects, and links.
■ They can be designed for review by the student for self-assessment, by the teacher and student, by other students in the clinical course or nursing program, or by prospective employers, depending on the purpose of the portfolio (M. Day, 2004; Ring, Weaver, & Jones, 2008).
Conferences
The ability to present ideas orally is an important outcome of clinical practice. Sharing information about a patient, leading others in discussions about clinical practice, presenting ideas in a group format, and giving lectures and other types of presentations are skills that students need to develop in a nursing program. Working with nursing staff members and health care team members of other disciplines requires the ability to communicate effectively.
Conferences provide a method for developing oral communication skills and for evaluating competency in this area. Many types of conferences are appropriate for clinical evaluation, depending on the outcomes to be met. Preclinical conferences take place prior to beginning a clinical learning activity and allow students to clarify their understanding of patient problems, interventions, and other aspects of clinical practice.
In these conferences, the teacher can assess students’ knowledge and provide feedback to them. Post clinical conferences, held at the end of a clinical learning activity or at a predetermined time during the clinical practicum, provide an opportunity for the teacher to assess students’ ability to use concepts and theories in patient care, plan care, assess the effectiveness of interventions, solve problems and think critically, collaborate with peers, and achieve other outcomes, depending on the focus of the discussion.
In clinical conferences students can also examine ethical dilemmas; cultural aspects of care; and issues facing patients, families, communities, providers, and the health care system. In discussions such as these, students can examine different perspectives and approaches that could be taken. Another conference in which students might participate is an interdisciplinary conference, providing an opportunity to work with other health providers in planning and evaluating care of patients, families, and communities.
Although many clinical conferences will be face-to-face with the teacher or preceptor on-site with the students, conferences also can be conducted online. In a study by Cooper, Taft, and Thelen (2004), students identified “flexibility” and “an opportunity for equal participation” as two benefits of holding clinical conferences online versus face-to-face. Criteria for evaluating conferences include the ability of students to:
■ Present ideas clearly and in a logical sequence to the group.
■ Participate actively in the group discussion.
■ Offer ideas relevant to the topic.
■ Demonstrate knowledge of the content discussed in the conference.
■ Offer different perspectives on the topic, engaging the group in critical thinking.
■ Assume a leadership role, if relevant, in promoting group discussion and arriving at group decisions.
Most conferences are evaluated for formative purposes, with the teacher giving feedback to students as a group or to the individual who led the group discussion. When conferences are evaluated as a portion of the clinical or course grade, the teacher should have specific criteria to guide the evaluation and should use a scoring rubric. Exhibit 13.9 provides a sample form that can be used to evaluate how well a student leads a clinical conference or to assess student participation in a conference.
Group Projects
Some group work is short term—only for the time it takes to develop a product such as a teaching plan or group presentation. Other groups may be formed for the purpose of cooperative learning with students working in small groups or teams in clinical practice over a longer period of time. With any of these group formats, both the products developed by the group and the ability of the students to work cooperatively can be assessed. There are different approaches for grading group projects.
The same grade can be given to every student in the group, that is, a group grade, although this does not take into consideration individual student effort and contribution to the group product. Another approach is for the students to indicate in the finished product the parts they contributed, providing a way of assigning individual student grades, with or without a group grade. Students also can provide a self-assessment of how much they contributed to the group project, which can then be integrated into their grade.
Alternatively, students can prepare both a group and an individual product. Nitko and Brookhart (2007) emphasize that rubrics should be used for assessing group projects and should be geared specifically to the project. Such a rubric could be used for grading a paper prepared by either a group or an individual student. To assess students’ participation and collaboration in the group, however, the rubric also needs to reflect the goals of group work.
With small groups, the teacher can observe and rate individual student cooperation and contributions to the group. However, this is often difficult because the teacher is not a member of the group, and the group dynamics change when the teacher is present. As another approach, students can assess the participation and cooperation of their peers. These peer evaluations can be used for the students’ own development and shared among peers but not with the teacher, or they can be incorporated by the teacher into the grade for the group project.
Students also can be asked to assess their own participation in the group. In a study by Elliott and Higgins (2005), students reported that self- and peer assessment were effective strategies to ensure fairness and equity in grading of group projects in nursing. An easy-to-use form for peer evaluation of group participation is found in Exhibit 13.10.
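One common way to reconcile a shared group grade with individual effort, as discussed above, is a weighted average of the group grade and peer-rated contribution. The weights, the 0–100 rating scale, and the function name below are illustrative assumptions, not a scheme prescribed by the text:

```python
def individual_grade(group_grade, peer_ratings, group_weight=0.7):
    """Combine a shared group grade with the mean of peer contribution
    ratings (hypothetical 0-100 scale; 70/30 split is an assumption)."""
    contribution = sum(peer_ratings) / len(peer_ratings)
    return group_weight * group_grade + (1 - group_weight) * contribution

# A student whose group earned 90 and whose peers rated the student's
# contribution 80, 85, and 75 would receive 0.7*90 + 0.3*80 = 87.0.
print(round(individual_grade(90, [80, 85, 75]), 1))  # 87.0
```

Teachers who weight peer ratings more heavily increase the incentive for individual participation but also magnify any bias in the peer ratings, which is one reason self- and peer assessments are often combined with the teacher’s own observations.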
Self-Assessment
Self-assessment is the ability of students to evaluate their own clinical competencies and identify where further learning is needed. Self-assessment begins with the first clinical course and develops throughout the nursing education program, continuing into professional practice. Through self-assessment, students examine their clinical performance and identify both strengths and areas for improvement.
Using students’ self-assessments, teachers can develop plans to assist students in gaining the knowledge and skills they need to meet the outcomes of the course. It is important for teachers to establish a positive climate for learning in the course, or students will not be likely to share their self-assessments with them. In addition to developing a supportive learning environment, the teacher should hold planned conferences with each student to review performance. In these conferences, the teacher can
■ give specific feedback on performance,
■ obtain the student’s own perceptions of competencies,
■ identify strengths and areas for learning from the teacher’s and student’s perspectives,
■ plan with the student learning activities for improving performance, which is critical if the student is not passing the clinical course, and
■ enhance communication between teacher and student.
Some students have difficulty assessing their own performance. This is a developmental process, and in the beginning of a nursing education program, students may need more guidance in assessing their performance than at the end. For this reason, Ridley and Eversole (2004) suggested providing students with a list of terms that might prompt their self-evaluation.
They ask students to circle the words that best describe their strengths and check terms that suggest areas for improvement. The students include examples of their clinical performance to validate their self-assessment. Self-evaluation is appropriate only for formative evaluation and should never be graded.
Clinical Evaluation In Distance Education
Nursing education programs use different strategies for offering the clinical component of distance education courses. Often preceptors in the local area guide student learning in the clinical setting and evaluate performance. If cohorts of students are available in an area, adjunct or part-time faculty members might be hired to teach a small group of students in the clinical setting.
In other programs, students independently complete clinical learning activities to gain the clinical knowledge and competencies of a course. Regardless of how the clinical component is structured, the course syllabus, competencies to be developed, rating forms, guidelines for clinical practice, and other materials associated with the clinical course can be placed online.
This provides easy access for students, their preceptors, other individuals with whom they are working, and agency personnel. Course management systems facilitate communication among students, preceptors, course faculty, and others involved in the students’ clinical activities. The critical decision for the teacher is to identify which clinical competencies and skills, if any, need to be observed and rated, because that decision calls for different evaluation methods than an evaluation focused on the cognitive outcomes of the clinical course.
In programs in which preceptors or adjunct faculty are available on-site, any of the clinical evaluation methods presented in this topic can be used as long as they are congruent with the course outcomes and competencies. There should be consistency, though, in how the evaluation is done across preceptors and clinical settings.
Strategies should be implemented in the course for preceptors and other educators involved in the performance evaluation to discuss as a group the competencies to be rated, what each competency means, and the performance of those competencies at different levels on the rating scale. This is a critical activity to ensure reliability across preceptors and other evaluators.
Activities can be provided in which preceptors observe video clips of performances of students and rate their quality using the clinical evaluation tool. Preceptors and course faculty members then can discuss the performance and rating.
Alternatively, discussions about levels of performance and their characteristics and how those levels would be reflected in ratings of the performance can be held with preceptors and course faculty members. Preceptor development activities of this type should be done before the course begins and at least once during the course to ensure that evaluators are using the tool as intended and are consistent across student populations and clinical settings.
Even in clinical courses involving preceptors, faculty members may decide to evaluate clinical skills themselves by reviewing videotapes of performance or observing students through videoconferencing and other technology with faculty at the receiving end. Videotaping performance is valuable not only as a strategy for summative evaluation, to assess competencies at the end of a clinical course or at another designated point in time, but also for review by students for self-assessment and by faculty to give feedback.
Simulations and standardized patients are other strategies useful in assessing clinical performance in distance education. Performance with standardized patients can be videotaped, and students can submit their patient histories and other written documentation that would commonly be done in practice in that situation. Students also can complete case analyses related to the standardized patient encounter for assessing their knowledge base and rationale for their decisions.
Some nursing education programs incorporate intensive skill acquisition workshops in centralized settings for formative evaluation, followed by end-of-course ratings by preceptors and others guiding the clinical practicum. In other programs, students travel to regional settings for evaluation of clinical skills (Fullerton & Ingle, 2003).
Students can demonstrate clinical skills and perform procedures on manikins and models, with their performance videotaped and transmitted to faculty for evaluation. Some students may need to create videotapes themselves with personal or rented equipment as a means of demonstrating their development of clinical skills over time and documenting performance at the completion of the course.
In those circumstances a portfolio would be a useful evaluation method because it would allow the students to provide materials that indicate their achievement of the course outcomes and clinical competencies. Simulations, analyses of cases, case presentations, written assignments, and other strategies presented in this topic can be used to evaluate students’ decision making and other cognitive skills in distance education courses. Similar to clinical evaluation in general, a combination of approaches is more effective than one method alone. Exhibit 13.11 summarizes clinical evaluation methods useful for distance education courses.
Conclusion
Many clinical evaluation methods are available for assessing student competencies in clinical practice. The teacher should choose evaluation methods that provide information on how well students are performing the clinical competencies. The teacher also decides if the evaluation method is intended for formative or for summative evaluation. Some of the methods designed for clinical evaluation are strictly to provide feedback to students on areas for improvement and are not graded.
Other methods, such as rating forms and certain written assignments, may be used for summative purposes. The predominant method for clinical evaluation is in observing the performance of students in clinical practice. Although observation is widely used, there are threats to its validity and reliability. Observations of students may be influenced by the values, attitudes, and biases of the teacher or preceptor, as discussed. In observing clinical performance, there are many aspects of that performance on which the teacher may focus attention.
Every observation reflects only a sampling of the learner’s performance during a clinical learning activity. Issues such as these point to the need for a series of observations before drawing conclusions about performance. There are several ways of recording observations of students: anecdotal notes, checklists, and rating scales. A simulation creates a situation that represents reality. A major advantage of simulation is that it provides a clinical learning activity for students without the constraints of a real-life situation.
With high-fidelity simulations, students can respond to changing situations offered by the simulation and can practice skills, conduct assessments, analyze physiological and other types of data, give medications, and observe the outcomes of interventions and treatments they select. One type of simulation for clinical evaluation uses standardized patients, that is, individuals who have been trained to accurately portray the role of a patient with a specific diagnosis or condition.
Another form of simulation for clinical evaluation is an Objective Structured Clinical Examination, in which students rotate through a series of stations completing activities or performing skills that are then evaluated. There are many types of written assignments useful for clinical evaluation depending on the outcomes to be assessed: journal writing, nursing care plan, concept map, case analysis, process recording, and a paper on some aspect of clinical practice.
Written assignments can be developed as a learning activity and reviewed by the teacher and/or peers for formative evaluation, or they can be graded. A portfolio is a collection of materials that students develop in clinical practice over a period of time. With a portfolio, students provide evidence to confirm their clinical competencies and document the learning that occurred in the clinical setting. Other clinical evaluation methods are the conference, group project, and self-assessment.