General Assessment Design

Abstract

This website promotes the idea that reflective writing makes experience meaningful. The same idea applies to assessment: writing is an excellent assessment tool. A final essay in which the student reflects on what they have learned is a good measure of the skills gained during a course. If growth in writing is itself a goal of a program, leaders can evaluate that growth using the methods described in this article. The literature review that follows compares the relative strengths of writing and several other assessment tools.

Theoretical Background: Assessment in Experiential Education

Experiential education is often a more complex kind of learning than what occurs in most classrooms: it is spontaneous, follows an organic order, and generally involves the whole being of the student. Assessment in experiential courses must capture the relative strength of cognitive, emotional, and sometimes spiritual learning, so measuring the growth of individuals or the improvement of programs can be difficult. One advantage is that the same instrument used to assess individual students can often be aggregated to evaluate the effectiveness and progress of a program or course.

Experiential educators who want to evaluate their programs or students must choose between assessment tools that measure progress objectively and those that measure social, psychological, and attitudinal outcomes (Qualters, 2010). Many assessments in the latter category use interviews, focus groups, and written surveys that depend on self-reported data, which is only as reliable as the participants are honest and perceptive (Bailey et al., 2017; Lariviere et al., 2012; Schary & Waldron, 2017). These surveys or tests may be subject to confounding variables such as the social desirability of positive outcomes and post-experience euphoria (Ewert & Sibthorp, 2009). Self-reported data can be especially problematic when gathered from vulnerable populations (Lariviere et al., 2012). Memory can also be a factor, because most surveys are administered post-program due to the difficulty of collecting data in the field (Bailey et al., 2017; Ewert & Sibthorp, 2009). Finally, few of these self-reported tests measure critical thinking or permanent cognitive change. Heinrich et al. (2015) conclude, “While experiential learning pedagogy addresses many of the knowledge sets, skills, and behaviors of engaged citizenry, it does not by itself develop critical thinking” (p. 375). In addition, many assessment tools are not grounded in current research on how the brain learns (Kirschner et al., 2006).

Another difficulty with measuring students in experiential programs is that concrete, specific measures of the social or psychological aspects of learning are hard to use outside laboratory conditions. For example, field conditions make behavioral assessment (such as structured observation) and the gathering of physiological data (such as heart-rate and brainwave measurements) difficult (Lariviere et al., 2012). In addition, many experiential educators want to evaluate knowledge and attitudes specific to a given discipline, such as biology, and these are not measured by psychological or physiological tests.

Objective tests, such as multiple-choice tests, measure disciplinary knowledge with validity but do not reveal much about psychological and/or social constructs such as self-efficacy, self-esteem, and cohesion (Schary & Waldron, 2017).

In summary, most assessment tools measure only part of what scaffolded experience can teach. However, some researchers have found that reflective essay writing captures both knowledge and attitudes, both objective and subjective growth. Not simply a reflective journal entry, an essay is a consciously structured synthesis of learning across a period of time. Essay writing is performative and measures the ability to articulate knowledge and cognitive growth in a way that a multiple-choice test or a self-reported survey cannot. Essay writing also aids students in constructing new or deeper understanding (Kellogg, 2008).

Basics of Practice

  • Many educators have used essays as effective assessment tools:

    • Wright and Tolan (2009) blended a ropes course with community engagement and successfully used a reflective essay at the end to evaluate student progress. This method allowed students to express knowledge of principles, cognitive growth, and emotional growth in attitudes, self-concept, and confidence.
    • Heinrich et al. (2015) assigned students to describe a video, write a blog post that summarized course readings, and write an essay synthesizing course principles and personal observations. They evaluated student statements involving critical thinking against five dimensions developed by the Association of American Colleges and Universities.
  • Use surveys to measure self-efficacy (confidence in one’s ability to do a task). These can be good indicators of whether students can actually perform those tasks.
    • Self-efficacy tests can also list specific acts that are part of broader general skills. For example, the ability to find a subject for writing is a specific act that is part of the general skill of essay writing.   
  • You may want to use a rubric to evaluate writing. You could develop this rubric from the learning objectives required by your university, or you could use a rubric developed for your specific group of learners.
  • Another tool for evaluating writing is the Self-Reflective Writing Scale developed by So, Bennett-Levy, Perry, Wood, and Wong (2018).
  • Whatever standard you choose, you can code student responses and compare the responses to the standard (learning objectives, rubric, etc.) in order to evaluate both individual and program progress.
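
As a rough illustration of the coding-and-comparison step in the last bullet, the Python sketch below tallies hypothetical coded essay responses against a made-up set of rubric criteria. The criterion names, student identifiers, and codes are all assumptions invented for the example, not part of any published instrument.

from collections import Counter

# Hypothetical rubric criteria (stand-ins for your own learning objectives).
RUBRIC = ["explains_principle", "connects_to_experience", "evaluates_evidence"]

# Hypothetical coded data: each student maps to the list of rubric codes
# that raters assigned to statements in that student's essay.
coded_essays = {
    "student_a": ["explains_principle", "connects_to_experience",
                  "connects_to_experience"],
    "student_b": ["explains_principle", "evaluates_evidence"],
    "student_c": ["connects_to_experience"],
}

# Individual progress: how often each student's essay met each criterion.
for student, codes in coded_essays.items():
    counts = Counter(codes)
    summary = ", ".join(f"{criterion}: {counts[criterion]}" for criterion in RUBRIC)
    print(f"{student}: {summary}")

# Program progress: the share of students who demonstrated each criterion
# at least once.
total_students = len(coded_essays)
for criterion in RUBRIC:
    hits = sum(1 for codes in coded_essays.values() if criterion in codes)
    print(f"{criterion}: {hits}/{total_students} students")

In practice the codes would come from human raters applying your rubric or learning objectives; the point is simply that once responses are coded, individual summaries and a program-level summary fall out of the same data.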

References

Bailey, A. W., Johann, J., & Kang, H. (2017). Cognitive and physiological impacts of adventure activities: Beyond self-report data. Journal of Experiential Education, 40, 153–169. doi:10.1177/1053825917701250

Ewert, A., & Sibthorp, J. (2009). Creating outcomes through experiential education: The challenge of confounding variables. Journal of Experiential Education, 31, 376–389. doi:10.1177/105382590803100305

Heinrich, W. F., Habron, G. B., & Johnson, H. L. (2015). Critical thinking assessment across four sustainability-related experiential learning settings. Journal of Experiential Education, 38, 373–393. doi:10.1177/1053825915592890

Kellogg, R. T. (2008). Training writing skills: A cognitive developmental perspective. Journal of Writing Research, 1(1). doi:10.17239/jowr-2008.01.01.1

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41, 75–86. doi:10.1207/s15326985ep4102_1

Lariviere, M., Couture, R., & Ritchie, S. D. (2012). Behavioural assessment of wilderness therapy participants: Exploring the consistency of observational data. Journal of Experiential Education, 35, 290–302. doi:10.1177/105382591203500106

Qualters, D. M. (2010). Bringing the outside in: Assessing experiential education. New Directions for Teaching & Learning, 124, 55–62. doi:10.1002/tl.421

Schary, D. P., & Waldron, A. L. (2017). The challenge course experience questionnaire: A facilitator’s assessment tool. Journal of Experiential Education, 40, 295–307. doi:10.1177/1053825917708400

So, S. H., Bennett-Levy, J., Perry, H., Wood, D. H., & Wong, C. (2018). The self-reflective writing scale (SRWS): A new measure to assess self-reflection following self-experiential cognitive behaviour therapy training. Reflective Practice, 19, 505–521. doi:10.1080/14623943.2018.1536652

Wright, A. N., & Tolan, J. (2009). Prejudice reduction through shared adventure: A qualitative outcome assessment of a multicultural education class. Journal of Experiential Education, 32, 137–154. doi:10.1177/105382590903200204

Further Reading

Bennion, J., Cannon, B., Hill, B., Nelson, R., & Ricks, M. (2019). Asking the right questions: Using open-ended essay responses to evaluate student apprehension of threshold concepts. Journal of Experiential Education, 1(18). doi:10.1177/1053825919880202

Galloway, S. P. (2000). Assessment in wilderness orientation programs: Efforts to improve college student retention. Journal of Experiential Education, 23, 75–84.

Pajares, F. (2003). Self-efficacy beliefs, motivation, and achievement in writing: A review of the literature. Reading & Writing Quarterly, 19, 139–158. doi:10.1080/10573560308222

Priest, S. (2001). A program evaluation primer. Journal of Experiential Education, 24, 34–40. doi:10.1177/105382590102400108