The law of attraction is at it again. Testing and assessment are running through my mind, and they seem to be running through my life as well. There is a certification project currently being implemented in my company that involves training and testing. I am taking a course on Inquiry and Measurement that also involves testing. And, during this week’s #lrnchat, I was drawn in by a discussion thread in which the pros and cons (well, mostly cons) of assessment were being discussed. Why am I so preoccupied with Level 2 of Kirkpatrick’s Four Levels of Evaluation? It is mostly because of the certification project at work.
Certification is a term that gets thrown around a little too loosely in training departments these days. Put someone through a course and a post-test and BANG, you are certified – or worse, you are not. It is a tricky thing to put together a certification process that is valid and reliable. Very few companies can stand to wait out the validation process, so they jump right in and begin training and “certifying” people.
Certifications do not guarantee that the person being certified has learned more than he or she would in a regular training program, but business leaders often feel it is operationally necessary to validate a level of knowledge or skill required to meet goals and targets. If certification is needed or required, here are some things to consider for the assessment process:
Test items should be directly related to learning objectives, which should be directly derived from performance requirements. This may seem obvious, but I have seen many tests that include filler material alongside valid questions.
Test only on important items, not obscure ones. It is not necessary to test someone on small details unless they are critical. Very few corporate employees are doing life-saving work that needs to be tested at a granular level.
Test items should be straightforward. Don’t try to be tricky. What is the point? It only serves to confuse the learner and adds no value to the assessment.
Choose question formats that make sense for the level of learning you need to assess. Multiple choice questions are commonly used on knowledge tests because they are easy to score and easy to tie to outcomes – but they are not always easy to write. Good multiple choice questions have a clear premise in the stem of the question, a correct answer, and reasonable alternative choices. There should not be any throwaway responses or convoluted choices such as “a and b, but not c” or “a and c only.” If your test item has more than one correct answer, rethink the question format. Consider short answer questions or a matching column.
If possible, use randomized test questions. Most learning management systems have this capability. They allow you to create a bank of questions that can be drawn upon at random so that test-takers will be deterred from sharing answers. But don’t make the bank of test questions so large that everyone feels like they are taking a completely different test.
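As a rough illustration (not the feature of any particular learning management system), here is a minimal sketch of how drawing a randomized test from a question bank might work; the question content and bank size are hypothetical:

```python
import random

# Hypothetical question bank: each entry has a stem, answer choices, and the correct answer.
question_bank = [
    {"stem": "Which form must accompany a new account request?",
     "choices": ["Form A", "Form B", "Form C", "Form D"],
     "answer": "Form B"},
    # ... more questions would go here ...
]

def build_test(bank, num_items, seed=None):
    """Draw a random subset of questions so test-takers are unlikely to see identical tests."""
    rng = random.Random(seed)
    return rng.sample(bank, k=min(num_items, len(bank)))

# Example: assemble a 10-item test for one test-taker.
test_for_one_person = build_test(question_bank, num_items=10)
```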
Pilot your test. This is the hard part, because it takes time and patience. You need to let a few people complete the learning experience and take the test to give you the opportunity to analyze the questions. You will want to take a second look at questions that everyone got right, or everyone got wrong, or questions for which many people chose the same incorrect answer.
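To make that review concrete, here is a minimal sketch (my own illustration, not part of any specific tool) of the kind of item analysis you might run on pilot responses: it flags items everyone answered correctly, items everyone missed, and wrong answers that attracted a large share of the responses.

```python
from collections import Counter

def analyze_item(responses, correct_answer):
    """responses: list of the answers chosen by pilot test-takers for one question."""
    counts = Counter(responses)
    total = len(responses)
    pct_correct = counts[correct_answer] / total
    flags = []
    if pct_correct == 1.0:
        flags.append("everyone got it right -- may be too easy")
    if pct_correct == 0.0:
        flags.append("everyone got it wrong -- check the key or the wording")
    for choice, n in counts.items():
        if choice != correct_answer and n / total >= 0.5:
            flags.append(f"many chose the same wrong answer ({choice}) -- likely a misleading item")
    return pct_correct, flags

# Example: six pilot takers answered one question; most picked the same wrong answer.
print(analyze_item(["B", "B", "C", "C", "C", "C"], correct_answer="B"))
```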
Create rubrics for skills assessments. Skills assessment usually requires direct observation. It is important that all of your assessors use the same criteria and weights when judging performance. Validate the process by having multiple assessors review the same performance. If they are more than a few points off from each other, either redesign the rubric or re-train your assessors.
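As a simple illustration of that validation step (assuming numeric rubric totals; the “few points” threshold is whatever you decide is acceptable), a sketch like this could flag performances where assessors disagree too much:

```python
def flag_disagreements(scores_by_performance, max_gap=3):
    """scores_by_performance: dict mapping a performance ID to the rubric totals
    given by each assessor, e.g. {"obs-01": [22, 24, 21]}.
    Returns the performances where the spread between assessors exceeds max_gap points."""
    flagged = {}
    for performance, scores in scores_by_performance.items():
        spread = max(scores) - min(scores)
        if spread > max_gap:
            flagged[performance] = spread
    return flagged

# Example: the two assessors differ by 6 points on obs-02, so it gets flagged for review.
print(flag_disagreements({"obs-01": [22, 24, 21], "obs-02": [18, 24]}))
```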
Doing It Right: Instructional Design without Cutting Corners
What a pleasure it is when you are able to do a learning project the "right way." This week, my team and I finished training for a group of managers in one of our business segments. This was a project that we initiated in the fall. It was carried out according to plan and within the next few weeks, we will have our final set of measures on its overall impact.
When I say we did this the "right way," what I mean is that we were able to follow our instructional design process without having to cut corners along the way. You might be thinking, "Well, don't you always do that?" But in truth, we are often forced to make compromises on our projects to meet business deadlines, work within budget constraints, or cater to the expectations of a particularly influential business leader. But on this project, we were not constrained by any of those things.
The project was to provide training to approximately 35 managers who were mostly long-tenured and experienced, but who had recently had to deal with significant changes to their jobs. Here is how it went:
Analysis - We first approached the head of this business unit to get an understanding of the outcomes that were expected from the changes that were put in place, and to get his perspective on the impact he thought these changes would have on his managers. Next, we had two rounds of discussions with four managers who were part of the target audience. After the first meeting with them, we drafted an analysis report to feed back to them our understanding of the audience characteristics, the job, and the key tasks that were changing. In our second meeting with the managers, we validated and fine-tuned the information gathered in the first meeting. After that, we presented our analysis findings and a training design proposal, including a draft agenda and objectives, to the business unit leader and the Vice Presidents to whom the targeted training audience reported. They provided some additional insights that we incorporated into our agenda, and we were ready to begin designing the program.
Design & Development - We chose a blended approach including two online assessments and an e-learning module as pre-work, a three-day classroom learning event, and follow-up learning opportunities made available through a SharePoint site set up specifically for this class. The design process for the classroom event was relatively quick and easy. Most of the training needs could be addressed with existing material that had been used for other programs. There were a few key segments that would be new, but they were all on topics that were easy to research. Finding appropriate content was not an issue. Designing learning activities that would be effective at making the learning points was a little more challenging. But that is certainly a part of the job that my team enjoys doing.
Pilot & Revisions - Since our total audience was relatively small (at 35 managers) we did not really have the opportunity to conduct a full blown pilot. We broke the audience into three delivery groups and viewed our first delivery in December as a quasi-pilot. Overall it went well, but as with any new program for a new audience, there was room for improvement. We huddled up afterwards, examined our level one feedback, talked to a few of the participants and observers, updated our design document, and made some adjustments for our second and third deliveries.
Implementation - By the time our second delivery rolled around, we were confident that we had the right program to meet their needs. We were clear on which segments needed the most support and which would meet with resistance, and we prepared ourselves accordingly. For all three classroom events, we had one of the Vice Presidents with us during delivery. We carved out a small but important segment for them to specifically deliver, and for the rest of the time they were with us, they were able to provide clarification or join in the discussion as we covered the other items on the agenda. Their presence and involvement was a key factor in the program's success.
Evaluation & Follow-up - For this program, we used level one (participant reaction) and level three (behavioral change) measurements. The level one measurements were taken directly at the end of the classroom sessions. For the level three measurements, we use the Friday5s goal management system over a ten-week period after training. Each participant was asked to set two specific goals at the end of their classroom session. These goals get input into the Friday5s online tool where the class participants can go to receive online coaching and track their progress. Also, we continue the momentum created in the classroom by allowing participants to connect with each other after the event through a SharePoint site that was set up specifically for this program.
On the whole this was a very satisfying project. We got to help our managers and help our business by doing what we do best: creating a learning opportunity that met specific needs for a specific audience. And, we got the chance to do it right!