Nurses Educator

The Resource Pivot for Updated Nursing Knowledge

Nursing Education and Clinical Evaluation by Observation, Anecdotal Notes, Checklists, Rating Scales and Videos


Observation as Evaluation Strategy in Nursing Education

Observation is the method most frequently used in clinical performance evaluation. Student performance is compared to clinical competency expectations as designated in course objectives. Faculty observe and analyze the performance, provide feedback on the observation, and determine whether further instruction is needed.

A large national survey specific to faculty clinical evaluation and grading practices confirmed the predominance of observation in clinical evaluation (Oermann, Yarbrough, Saewert, Ard, & Charasika, 2009). Authors noted that continuing issues in clinical evaluation include wide variability in clinical environments, increasingly complex patients, and more diverse students.

Real-time observation and delayed video observation are both considered in this discussion. Advantages of observation include the potential to directly visualize and confirm student performance, but observation can also be challenging.

Sample factors that can interfere with observations include lack of specificity of the particular behaviors to be observed; an inadequate sampling of behaviors from which to draw conclusions about a student’s performance; and the evaluator’s own influences and perceptions, which can affect judgment of the observed performance (Oermann & Gaberson, 2014).

Faculty should seek tools and strategies that support fair and reasonable evaluation. More structured observational tools are typically easy to complete and useful for focusing on specified behaviors.

Although structured observation tools can help increase objectivity, faculty judgment is still required in interpretation of the listed behaviors. Problems with reliability arise when item descriptors are given different meanings by different evaluators. Faculty training can help minimize this problem.

Tracking Clinical Observation Evaluation Data in Nursing Education

An abundance of information must be tracked in clinical observation. Faculty can benefit from systems to help document and organize this information. Faculty can carry copies of evaluation tools and anecdotal records or can consider the use of mobile devices to help facilitate retrieval and use of clinical evaluation records.

A variety of strategies exist for using mobile devices in the clinical setting (Lehman, 2003). Maintaining the privacy of mobile device records is also necessary. Common methods for documenting observed behaviors during clinical practice vary in the amount of structure they provide. Examples include anecdotal notes, checklists, rating scales and rubrics, and videos.

Anecdotal Notes in Nursing Education for Evaluation

Anecdotal or progress notes are objective written descriptions of observed student performance or behaviors. The format can vary from loosely structured “plus–minus” observation notes to structured lists of observations in relation to specified clinical objectives. As part of formative evaluation, a pattern is established as student performance is documented over time.

This record or pattern of information pertaining to the student and specific clinical behaviors helps document the student’s performance pattern for both summative evaluation and recall during student–faculty conference sessions. The importance of determining which clinical incidents to assess and the need to identify both positive and negative student behaviors are noted (Hall, Daly, & Madigan, 2010; Liberto, Roncher, & Shellenbarger, 1999).

Checklist for Evaluation in Nursing Education

Checklists are lists of items or performance indicators requiring dichotomous responses such as satisfactory–unsatisfactory or pass–fail. Gronlund (2005) describes a checklist as an inventory of measurable performance dimensions or products with a place to record a simple “yes” or “no” judgment.

These short, easy-to-complete tools are frequently used for evaluating clinical performance. Checklists, such as nursing skills check-off lists, are useful for evaluation of specific, well-defined behaviors and are commonly used in the simulated clinical laboratory setting. Rating scales and rubrics, described in the following paragraphs, provide more detail than checklists concerning the quality of a student’s performance.

Rating Scales and Rubrics for Evaluation in Nursing Education

Rating scales incorporate qualitative and quantitative judgments regarding the learner’s performance in the clinical setting. A list of clinical behaviors or competencies is rated on a numerical scale, such as a 5-point or 7-point scale with descriptors.

These descriptors take the form of abstract labels (such as A, B, C, D, and E or 5, 4, 3, 2, and 1), frequency labels (e.g., always, usually, frequently, sometimes, and never), or qualitative labels (e.g., superior, above average, average, and below average). A rating scale provides the instructor with a convenient form on which to record judgments indicating the degree of student performance.

This differs from a checklist in that it allows for more discrimination in judging behaviors as compared with dichotomous “yes” and “no” options. Rubrics, considered a type of rating scale, help convey clinically related assignment expectations to students (Suskie, 2009). They provide clear direction for graders and promote reliability among multiple graders. They can support accurate, consistent, and unbiased ratings.

The detail provided in a rubric grid allows faculty to provide rapid and informative feedback to students without extensive writing (Walvoord et al., 2010). Typical parts of a rubric include the task or assignment description and some type of scale, breakdown of assignment parts, and descriptor of each performance level (Stevens & Levi, 2005).

Serving as a scoring guide, rubrics help focus everyone on expectations for best practices in completing skills and improving communication. Rubric examples exist for providing detailed feedback on clinical-related assignments such as written clinical plans and conference participation. A web search by topic of interest, such as team communication, can provide samples for review.

Skill-based rubrics can provide students direction in their skill practice and learning. Students can use these tools for self-assessments and participate in peer assessments to promote learning. These tools can be distributed to students, as well as be completed by faculty, and tracked or monitored over time (Bonnel & Smith, 2010).

Videos as Source of Observational Data for Evaluation in Nursing Education

Another method of recording observations of a student’s clinical performance is through videos. Often completed in a simulated setting, videos can be used to record and evaluate specific performance behaviors relevant to various clinical settings. Advantages associated with videos include their valuable start, stop, and replay capabilities, which allow an observation to be reviewed numerous times.

Videos can promote self-evaluation, allowing students to see themselves and evaluate their performance more objectively. Videos also give teachers and students the opportunity to review the performance and provide feedback when determining whether further practice is indicated. Use of videos can contribute to the learning and growth of an entire clinical group when knowledge and feedback are shared.

Videos are particularly popular for simulation debriefing as well as evaluation in distance learning situations. Videos can also be used with rating scales, rubrics, checklists, or anecdotal records to organize and report behaviors observed on the videos.

Additionally, an approach to involving students as observers in evaluation is to engage them in observing and evaluating online clinical videos such as the National Institutes of Health (NIH) Stroke Scale training. In this example, video cases were developed based on needed competencies for appropriate use of the Stroke Scale.

Students complete testing specific to these competencies as they refer to the online video cases (NIH Stroke Scale, n.d.). This approach allows students to participate in a type of standardized testing. There may be additional opportunities for students to observe videos as components of clinical evaluation, for example, critique of online videos developed by faculty or others.