When creating and implementing preparedness programming, tracking and measuring outcomes should be a priority. Although outcomes are often difficult to measure, especially over longer periods of time, they determine whether the resources, time, and effort put into programming have paid off. In practice, measurement often focuses on outputs rather than outcomes, a practice that raises the question of what is driving success: quantity or quality. Advancing technology should make outcomes measurement easier, although an effective system for measuring outcomes does not require expensive technology.
What do I need to know?
Some examples of outcome measures are knowledge gain, knowledge retention, motivation to action, and behavior change.
Knowledge gain assesses the growth of personal knowledge, or awareness, as a result of a specific activity (presentation, program, event, etc.). To accurately analyze knowledge gain from a recurring activity, the activity should be standardized with clear learning objectives. Knowledge gain is a relatively simple type of outcome measurement, driven by what participants learn over the course of the activity. A survey administered before and after the activity provides an easy, immediate measurement of knowledge gain. The survey itself should remain the same, both before and after, as this creates a control. The questions should assess knowledge directly, asking about information presented in the standardized activity. For instance, “How often should you test the batteries in your smoke alarm?” could be a direct knowledge assessment for a presentation on fire preparedness. Ideally, these questions should address the most important pieces of the activity, most likely those captured in the learning objectives.
When developing the implementation structure, consider comparing the pre-survey with the post-survey for the same individual. This provides the most accurate analysis of knowledge held prior to an activity and knowledge gained as a result of it. One effective way to match surveys is to print the pre-survey and the post-survey on the same sheet of paper and have the individual hold onto it for the duration of the activity. Alternatively, the survey could ask for a unique identifier, which allows you to link the pre-survey with the post-survey and any future surveys that may be created. A unique identifier may be the participant’s name, email address, or anything else with a low chance of being duplicated by another survey taker.
Studies have indicated that longer-term knowledge retention can differ considerably from immediate knowledge gain. Since disasters rarely occur immediately after an individual has attended a disaster preparedness activity, it is important to measure knowledge retention among participants. An effective way to evaluate retention is to ask the same questions as the knowledge gain survey in a follow-up with participants weeks and months after the activity. It may be wise to craft the unique identifier in your knowledge gain survey (above) in a way that allows for communication with participants after time has passed, e.g., an email address or phone number. When participants answer the same questions after time has passed, results can be tracked and compared across data sets to compare knowledge before the program, immediately after it (knowledge gain), and weeks or months after the presentation (knowledge retention).
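The matching and comparison described above can be sketched in a small script. This is a minimal illustration, not a recommended tool: the field names, the answer key, and the sample responses are all hypothetical, and a real program would load survey responses from its own data source.

```python
# Hypothetical sketch: link pre-, post-, and follow-up surveys by a unique
# identifier (here, an email address) and compute knowledge gain/retention.

def score(answers, answer_key):
    """Fraction of survey questions answered correctly."""
    correct = sum(1 for q, a in answers.items() if answer_key.get(q) == a)
    return correct / len(answer_key)

# Assumed answer key for one learning objective from the fire example.
answer_key = {"smoke_alarm_battery_test": "monthly"}

# Responses keyed by each participant's unique identifier (sample data).
pre = {"pat@example.com": {"smoke_alarm_battery_test": "never"}}
post = {"pat@example.com": {"smoke_alarm_battery_test": "monthly"}}
followup = {"pat@example.com": {"smoke_alarm_battery_test": "monthly"}}

for pid in pre:
    gain = score(post[pid], answer_key) - score(pre[pid], answer_key)
    retention = score(followup[pid], answer_key) - score(pre[pid], answer_key)
    print(pid, "gain:", gain, "retention:", retention)
```

Because all three surveys share the same questions and the same identifier, the comparison is a simple per-participant subtraction; the same structure extends to any number of follow-up waves.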
Awareness to Action:
Being aware and knowledgeable about risks and hazards is important; however, it is just as important to understand if and how that knowledge contributes to steps taken toward preparedness, that is, action. Measuring awareness to action is essential for effective programming because it validates whether activities are producing effective results. Creating outcome measurements for awareness to action can be more complex than measuring knowledge gain or retention.
Assuming the knowledge gain and retention questions were structured around the main objectives of the activity, the awareness to action questions can be formulated to ask whether participants took the recommended action. Ideally, these actions should be recommended in the activity itself and in any accompanying materials. For instance, building on the knowledge gain example above, if the direct knowledge assessment is “How often should you test the batteries in your smoke alarm?”, the awareness to action indicator could be “I tested the batteries in my smoke alarm this month.”
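Pairing each knowledge question with its action indicator can be expressed the same way as the retention follow-up. The sketch below is a hypothetical illustration, assuming follow-up responses that contain both the knowledge question and the matching action question; all names and data are invented.

```python
# Hypothetical sketch: for each learning objective, compute the share of
# participants who retained the knowledge AND took the recommended action.

# Map each knowledge question to its action indicator (assumed names).
indicators = {
    "smoke_alarm_battery_test": "tested_smoke_alarm_batteries_this_month",
}

# Follow-up responses keyed by participant identifier (sample data).
followup = {
    "pat@example.com": {
        "smoke_alarm_battery_test": "monthly",            # knowledge retained
        "tested_smoke_alarm_batteries_this_month": True,  # action taken
    },
}

for knowledge_q, action_q in indicators.items():
    knew = [r for r in followup.values() if r[knowledge_q] == "monthly"]
    acted = [r for r in knew if r[action_q]]
    rate = len(acted) / len(knew) if knew else 0.0
    print(knowledge_q, "action rate among knowledgeable:", rate)
```

Restricting the action rate to participants who answered the knowledge question correctly is one design choice; a program could also track the action rate across all participants to see whether the activity moves people to act regardless of recall.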