When developing a training program, include a plan for evaluating it after the initial pilot run to confirm it is meeting participants' needs and accomplishing its objectives.
As a best practice, develop the evaluation plan before the pilot runs so you are ready to act immediately afterward:
- Define the objectives of the pilot program evaluation (e.g., confirm the program meets its objectives and includes sufficient, relevant activities)
- Determine how you will gather data to evaluate the program (e.g., surveys, interviews with participants, classroom observation)
- Develop a timeline for gathering data to evaluate the program
- Gather the raw data, analyze it, and develop a preliminary report
- Share the preliminary report with the course designers/developers
- Develop your final recommendations report, including recommended workshop changes or tweaks to make before rollout
Ensure that the following questions are answered as part of the evaluation of the pilot training program:
- Does the training program meet the objectives it was designed to meet?
- Are there sufficient activities, case studies, role plays, and exercises to enable practice of the skills being learned?
Allow time for participants to apply what they have learned before judging whether the program is truly effective, and answer these questions as well:
- Are new skills and knowledge learned applied back on the job?
- Do participants have the support they need to be successful back on the job?
As a best practice, allow at least 2–3 months after the pilot run before evaluating in full. Participants need time to apply what they learned back on the job before you can be sure the program is truly effective. This delay may take some "selling" to your stakeholders, as there is often an expectation that a program will be rolled out immediately after it is piloted.