Validating Instructional Design

The last step in the Development phase is to validate the material with representative samples of the target population and then revise it as needed. The heart of the systems approach to training is revising and validating the instructional material until the learners meet the planned learning objectives. Validation should not be treated as a single-shot affair; success or failure is not measured at a single point.

Spiral ISD, or ADDIE, is iterative, NOT linear. In traditional waterfall-type projects, training is developed in lengthy sequential phases, so flaws in the learning methods and delivery are normally discovered only during the delivery or evaluation phases. Fixing these defects wastes resources and delays the learning platform or process because of the rework required. This is often referred to as the "1-100-1,000 rule": if a flaw costs one unit to fix in the initial stages of the project, it will cost 100 times more to fix at the end of the project and up to 1,000 times more to fix once the training has been delivered.
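
As a rough illustration of the rule, the relative rework cost can be sketched as below. The phase labels and the one-hour baseline are assumptions made for the example, not figures from any study; the multipliers are simply the rule's heuristic.

    # Illustrative sketch of the 1-100-1,000 rule (assumed baseline of one hour
    # of rework for a flaw caught early; multipliers come from the heuristic).
    BASELINE_HOURS = 1
    COST_MULTIPLIER = {
        "early in the project": 1,     # caught while the material is being built
        "end of the project": 100,     # caught during delivery or evaluation
        "after release": 1_000,        # caught once learners are already using it
    }

    for phase, multiplier in COST_MULTIPLIER.items():
        print(f"Flaw found {phase}: ~{BASELINE_HOURS * multiplier} hour(s) of rework")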

Types of Validation

There are normally two ways of validating learning or training platforms: prototyping (iterations) and trialing.

Prototyping: Iterations

Bill Moggridge (2007) wrote that iterative prototyping, understanding people, and synthesis are the core skills of design:

Prototyping allows designers to look at their concept in real-world use before final design decisions are committed to, which makes it quite useful for solving highly complex problems. Understanding people has always been a large part of designing for performance; it now extends out to the real world and to the concepts and products we create. Synthesis brings all of this together into a unified whole.

Iterations are normally performed using two methods (Saffer, 2007):

A design iteration is a micro-technique: it uses a small set of learners to test part of the learning platform so that you can judge its effectiveness. This method is normally used for innovative design. A design iteration will generally use two types of prototypes:

A release iteration is a macro-technique: it uses a large set of learners in order to satisfy two requirements:

Trialing

Large-scale testing of the learning platform before its final release is often referred to as trialing. The extent of the validation will depend upon the complexity of the training material and the resources available to you. Listed below is a five-step procedure that provides an effective validation of a large, complex training program. Adjust it as needed to fit the size and complexity of your program, but keep in mind that the closer your validation follows this one, the fewer problems you will encounter when the program is released for delivery.

1. Select the participants that will be in the trials:

The participants should be randomly selected, but they must represent all strata of the target population. They should be clearly told what their roles in the validation process are. Let them know that they are helping to develop and improve the lessons and that they should feel free to tell you what they think of them. The participants should be pretested to ensure that they learn from the instructional material and not from past experience.
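
A minimal sketch of random-but-stratified selection with pretest screening is shown below. The strata names, the pretest cutoff, the sample size per stratum, and the roster itself are assumptions made for illustration, not part of any prescribed procedure.

    import random
    from collections import defaultdict

    # Hypothetical roster: (name, stratum, pretest score out of 100).
    roster = [
        ("Avery", "novice", 20), ("Blake", "novice", 35), ("Casey", "novice", 15),
        ("Drew", "journeyman", 40), ("Ellis", "journeyman", 55), ("Finley", "journeyman", 30),
        ("Gray", "expert", 85), ("Harper", "expert", 60), ("Indigo", "expert", 45),
    ]

    PRETEST_CUTOFF = 70   # assumed: scores above this suggest prior mastery
    PER_STRATUM = 2       # assumed sample size per stratum

    # Group candidates by stratum, dropping anyone who already knows the material.
    by_stratum = defaultdict(list)
    for name, stratum, score in roster:
        if score <= PRETEST_CUTOFF:
            by_stratum[stratum].append(name)

    # Randomly select from every stratum so all parts of the population are represented.
    participants = {
        stratum: random.sample(candidates, min(PER_STRATUM, len(candidates)))
        for stratum, candidates in by_stratum.items()
    }
    print(participants)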

2. Conduct individual trials:

This trial is performed on one learner at a time. The separate pieces of instruction, tests, practice periods, etc., should be timed to ensure they match the estimated training times. Do not tutor unless the learner cannot understand the directions. Whenever you help the learner or observe the learner having difficulty with the material, document it.
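
One simple way to record the timing check is to compare observed times against the estimated training times and flag the segments that drift too far. The segment names, estimates, observations, and the 20% tolerance below are assumptions for the example.

    # Compare observed trial times against estimated training times and flag
    # segments that drift beyond an assumed tolerance of 20%.
    estimated_minutes = {"lesson 1": 30, "practice": 15, "test": 20}   # assumed plan
    observed_minutes = {"lesson 1": 44, "practice": 14, "test": 21}    # assumed data from one learner

    TOLERANCE = 0.20

    for segment, planned in estimated_minutes.items():
        actual = observed_minutes[segment]
        drift = (actual - planned) / planned
        flag = "REVIEW" if abs(drift) > TOLERANCE else "ok"
        print(f"{segment}: planned {planned} min, observed {actual} min ({drift:+.0%}) {flag}")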

3. Revise instruction as needed:

Using the documentation from the individual trials, revise the material as needed. Go closely over any evaluations that were administered. A large number of wrong answers for an item indicates a trouble area. Conversely, a large number of correct answers for an item could indicate that the learners already knew the material, that the test item was too easy, or that the lesson overtaught the material. For more information, see Test Item Analysis.
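
For a rough picture of how trouble areas show up in the test data, an item difficulty (p-value) check can be sketched as below. The answer matrix and the thresholds are assumed values for illustration only; see Test Item Analysis for the full treatment.

    # Each row is one learner's results; each column is one test item (1 = correct).
    # The matrix and thresholds are assumed values for illustration.
    responses = [
        [1, 0, 1, 1],
        [1, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 1, 1, 0],
        [1, 0, 1, 1],
    ]

    TOO_HARD = 0.30   # mostly wrong: likely a trouble area in the instruction or item
    TOO_EASY = 0.95   # mostly right: prior knowledge, an easy item, or overtaught material

    learners = len(responses)
    for item in range(len(responses[0])):
        difficulty = sum(row[item] for row in responses) / learners
        if difficulty < TOO_HARD:
            note = "trouble area: revise instruction or item"
        elif difficulty > TOO_EASY:
            note = "check: prior knowledge, easy item, or overtaught"
        else:
            note = "acceptable"
        print(f"Item {item + 1}: difficulty {difficulty:.2f} -> {note}")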

4. Repeat individual trials until the lesson does what it is supposed to do:

There is no magic number of individual trials; three to five seems to be typical. If you are trialing a large course, you might need to trial the whole course only once and then retrial just the troublesome areas rather than the entire course.

5. Conduct group trial:

After you are satisfied with the results of the individual trials, move on to the group trials. These can be of any size: several small groups, one large group, or a combination of both. The procedure is the same as for the individual trials, with one difference: at some point in the trials you must determine whether the program can be accepted or whether it needs major revision. Usually a minimum of two successful trials is conducted to ensure the program does what it is supposed to do. Minor problems should not hold up implementing the program. As stated earlier in this section, revisions do not stop with the first implementation of the program; they continue throughout the life of the program.
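
A minimal sketch of the accept-or-revise decision is shown below. It assumes a mastery criterion of 80% of learners meeting the planned objectives, treats the "two successful trials" minimum as two successes in a row, and uses made-up group results; all of these are assumptions for the example rather than fixed rules.

    # Fraction of learners in each group trial who met the planned learning
    # objectives (assumed data for illustration).
    group_trials = [0.72, 0.81, 0.88]

    MASTERY = 0.80            # assumed criterion for a "successful" trial
    REQUIRED_SUCCESSES = 2    # the minimum of two successful trials from the text

    consecutive = 0
    for i, passed in enumerate(group_trials, start=1):
        success = passed >= MASTERY
        consecutive = consecutive + 1 if success else 0
        print(f"Trial {i}: {passed:.0%} met objectives -> {'success' if success else 'revise and rerun'}")

    decision = "accept program" if consecutive >= REQUIRED_SUCCESSES else "major revision needed"
    print(decision)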

Next Steps

Go to the next chapter: Implementation (Delivery) Phase

Return to the Table of Contents

References

Moggridge, B. (2007). Designing Interactions. Cambridge, MA: The MIT Press.

Saffer, D. (2007). Designing for Interaction: Creating Smart Applications and Clever Devices. Berkeley, CA: New Riders.