January 2007 - Neil Lasher
It has been 50 years since we were introduced to Bloom’s taxonomy. In 1956, Benjamin Bloom headed a group of educational psychologists who developed a classification of levels of intellectual behavior important in learning. Bloom found more than 95 percent of the test questions that learners encounter require them to think at only the lowest possible level: the recall of information.
The result was an instructional design theory based on asking “powerful” questions. These techniques are still widely used today in the classroom and by coaches to ensure learners gain a better education. The theory is that learners should think through a question and reply in their own words rather than giving closed answers such as “Yes,” “No” or one of four multiple-choice options. Attempts to bring this instructional design theory to e-learning have been less than successful because many of the questions elicit closed answers.
It has been 40 years since we were introduced to Robert Gagne. In 1965 he published “The Conditions of Learning,” which outlined the relationship of learning objectives to appropriate instructional designs, identifying five categories of learning. In his later book, “The Principles of Instructional Design,” Gagne set out his model with “The Nine Events of Instruction.” These have become the industry standard for an instructional design model:
Instructional Event
1. Gain attention
2. Inform learners of objectives
3. Stimulate recall of prior learning
4. Present the content
5. Provide “learning guidance”
6. Elicit performance (practice)
7. Provide feedback
8. Assess performance
9. Enhance retention and transfer to the job
When used in the classroom, there is no better model. Many e-learning authors, however, have interpreted at least three of these nine events almost exclusively through questions and answers, often confusing or closed ones.
In doing this, you can easily create a culture of learning by inquisition. And with “forced” results of 100 percent, how do we measure the true results?
New approaches have been sought to enable e-learners to reach a far higher level of understanding rather than simple knowledge transfer.
To reach this goal, a shift is required in the approach we take to asking our staff to “learn.” We need to predict the needs of the workforce and move very quickly to select the most useful content and delivery techniques — we cannot expect working adults to spend all day, every day learning. In the last year, we have seen a major shift toward workflow learning, task-based informal learning and far smaller modules of learning embedded in systems.
I suggest taking a step outside the environment, looking at the whole picture as if you were an outsider. Have you noticed how easy it is to think of great learning interventions for other people’s issues? I am sure we all have imagined a great program to teach the cashier at the grocery store how to bag items more quickly.
Often, interventions attempt to “fix” or change an existing situation or process. Instead of the fix, start each learning design with a clean piece of paper — look at the purpose of the learning intervention as an outsider and try to identify the real reasons for the learning and what purpose it could really fulfill. This is known as “clean sheet development.”
Using the “clean sheet” process, consider how powerful enterprise learning could be if we could identify the potential customer we lost, the one who did not get customer support but voted with his or her feet and left. If we could understand what went wrong to make that person go elsewhere, we could write a learning intervention for the business that would bring measurable change directly to the bottom line.
With any luck, using techniques such as these will allow our “new” style of learning to be around for the same length of time as those of Bloom and Gagne.