by Mathieu Khoury
Learning and Development Specialist, IATA Training
Training needs are a moving target. Regardless of the industry, any given skill set can become irrelevant if it is not maintained and kept up to speed with the latest developments. This is even more painfully true in fields where compliance with standards and regulations is involved, or where technical skills drive business results. And just as skill sets become obsolete, so too do the training programs that nurture them.
For professional training organizations, it is critical to remain up to date and relevant to industry needs. To achieve this, it is crucial to establish an instructional quality management system that helps the organization continuously monitor developments in the subject area; anticipate impacts on industry activities; and regularly update training programs and products based on that intelligence. That, we can say, is the foundation.
When that training hits the market, the next stage involves collecting input from all stakeholders in the learning ecosystem to measure training effectiveness and make informed decisions about what needs to be done to improve it. In this article, we will explore the sources and channels IATA Training uses to gather data on training effectiveness, and more interestingly, how we use it.
Data is gold and we dig it
Let's imagine a development effort for a classroom or self-study course. It has followed the instructional design process by the book and has yielded a perfectly structured course with each component of the course package, from lesson plans to assessment instruments, all finely aligned to meet the course objectives. How can we tell that this course will deliver on the expectations?
To answer this question, we need to wear many hats and consider things from every stakeholder's perspective. Instructors, learners, and sponsors are our ears to the ground. Their evaluation of training effectiveness is the echo of how well we, as suppliers of training, are doing.
At IATA Training, as at many other training organizations, instructors and faculty are the first-line validators of training effectiveness. This is why, after each training session they deliver, we ask instructors to complete a comprehensive report that captures their assessment of variables in key areas that can impact the quality and effectiveness of training. These areas cover the end-to-end process from the instructor's perspective, from before the training begins all the way to learner assessment and grading.
For instructors, the work starts well before class begins, so it is important that we capture their feedback on the administrative and logistical aspects of their experience. An instructor who receives the course material at the last minute, whose flight arrangements don't allow for enough rest before the training, or who has to teach in below-par facilities, is exposed to situations that negatively impact training effectiveness; a smooth experience has the opposite effect.
Then of course, there is the course material. Whether the instructor developed the course or is delivering someone else's creation, every training session is an opportunity to gauge, for example, whether the course outline and lesson plans offer adequate support to manage the training, or whether the course content, activities, and assessment instruments are relevant, up to date, and well aligned with the learning objectives. Is the content clear? Are there topics to remove or add? Is the theory/practice balance adequate? Are the activities engaging? Is the course too long or too short for the scheduled time? Are the exams too easy or too difficult? Those are all questions only the instructors can answer and, here again, the information we collect highlights what works well, what doesn't, and what can be improved.
Now onto the next stakeholder in the training ecosystem: the learner. As the ultimate beneficiary of the training products we create, their voice speaks the clearest when it comes to giving feedback on training effectiveness. Indeed, for self-study training programs, the learner is the only source of information on training effectiveness we can tap into. This said, even though students very rarely have the training professional's critical eye on elements like course design or content validity, one thing they know better than anyone else is whether the training met their needs, gave them something they can use in the real world, and was good value for their time and money.
To capture learner feedback, we use two different surveys: one for classroom courses and one for self-study courses. Of course, training quality and effectiveness in terms of course objectives, content and course material, course structure, and assessments are at the core of both surveys. In addition, format-specific items are surveyed. For classroom courses, this includes the instructor's level of subject-matter expertise; level of preparedness for the class; and presentation and class management skills. On the self-study side, students are asked to provide input on the registration process for the course and exam; the ease of use of the course package; and the relevance, difficulty, and length of the exam. Exam results are also systematically scrutinized for validity and integrity, with thorough item analysis yielding yet another set of precious insights on training effectiveness.
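To give a flavor of what such item analysis involves, here is a minimal classical sketch that computes each exam item's difficulty (the proportion of correct answers) and its discrimination (how much better high scorers do on it than low scorers). The function name, the data shape, and the 0/1 scoring scheme are illustrative assumptions, not IATA Training's actual tooling.

```python
def item_analysis(responses):
    """Classical item analysis sketch.

    responses: one list per learner of 0/1 item scores (illustrative format).
    Returns per-item difficulty (p-value) and an upper-lower
    discrimination index.
    """
    n_learners = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]
    # Rank learners by total score, then split into upper and lower halves.
    ranked = sorted(range(n_learners), key=lambda i: totals[i], reverse=True)
    half = n_learners // 2
    upper, lower = ranked[:half], ranked[-half:]
    report = []
    for j in range(n_items):
        # Difficulty: share of learners who answered item j correctly.
        difficulty = sum(r[j] for r in responses) / n_learners
        # Discrimination: upper-half correct minus lower-half correct,
        # normalized by group size; near zero (or negative) flags a
        # deficient item.
        disc = (sum(responses[i][j] for i in upper)
                - sum(responses[i][j] for i in lower)) / half
        report.append({"item": j,
                       "difficulty": round(difficulty, 2),
                       "discrimination": round(disc, 2)})
    return report
```

An item that almost everyone gets right (difficulty near 1.0) or that strong and weak learners answer equally well (discrimination near 0) is a candidate for revision.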
The next organism in our ecosystem is the sponsor. Most times, they're the ones picking up the bill, so we have to make sure they're satisfied with the results of the training! But what exactly affects their satisfaction with the training they've sent their employees to? You got it! Return. Return on investment (ROI), assuming they can actually measure it, and the more qualitative and encompassing return on expectations (ROE). Sponsors want to see measurable on-the-job performance improvement, compliance with standards, and behavioral, attitudinal, or cultural changes that drive tangible results for the business, and to link these to a training event or program.
Because they have that business-results mindset, and because the cost of training comes out of their budget, sponsors can feed back valuable data to the training organization. At IATA Training, besides surveys, we reach out to sponsors in person. This allows us to better capture some of the intangible data a pre-formatted questionnaire cannot.
Knowledge is power
For data to become knowledge – and then power – it has to be analyzed, interpreted, communicated, and actioned. With steady, massive streams of qualitative and quantitative data coming from different channels, several resources must dedicate their time to collecting that information and making sure the appropriate people are informed. Indeed, a significant benefit of the data crunching is the ability for multiple stakeholders to quickly gain visibility on key performance indicators and react in a timely manner.
For the Product Managers who own the courses, the intelligence gathered helps them make informed decisions about variables impacting the intrinsic value of their products, how well these meet the needs of the learners, how their instructors are performing, etc. It also helps them identify strengths to capitalize on and areas for improvement.
For the Quality Team, training intelligence means being notified of any rating below 3.5 on a scale of five that comes from any channel in our network, and it allows them to identify deficient questions in exams based on the item analysis reports. Whatever the quality issue, standard processes are in place to make sure it is investigated, documented, communicated to stakeholders, actioned, and resolved.
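The alert rule described above can be sketched in a few lines. This is an illustration only, under the assumption that ratings arrive as simple (course, channel, score) records; the names and data format are hypothetical, not the Quality Team's actual system.

```python
# Any rating below 3.5 on the five-point scale triggers a quality alert,
# whatever channel it came from (the threshold is the one cited in the text).
ALERT_THRESHOLD = 3.5

def flag_low_ratings(ratings):
    """ratings: iterable of (course, channel, score) tuples.

    Returns one alert record per rating that falls below the threshold,
    so the Quality Team can investigate and document it.
    """
    return [
        {"course": course, "channel": channel, "score": score}
        for course, channel, score in ratings
        if score < ALERT_THRESHOLD
    ]
```

In practice a rule like this would feed the standard investigation and resolution process rather than stand alone.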
Depending on the root cause, the resolution process may include any actor in the training development and delivery value chain, and it may require a wide array of interventions. Issues related to the curriculum or to the quality of course content or material will trigger the course revision process and mobilize subject-matter experts, instructors, and the Learning & Development Specialist. In the same way, the resolution of issues related to logistical, administrative, or technical considerations will be agreed, planned, and actioned by the appropriate resources.
Intelligence and honoring it
A commitment to quality training is essentially a commitment to training effectiveness as measured by each customer in the training ecosystem. For the training organization, the customer isn't just the student. From the instructors who deliver the training to the sponsors who fund it, the client experience happens on many fronts. From the registration process to the assessment of learning outcomes, these internal and external customers interact with a variety of systems – instructional, administrative, and technological – all of which impact the results of their training experience. Creating channels for them to communicate their evaluation of the end-to-end experience we've created, constantly monitoring those channels, and making sure the intelligence we gather is used to concretely improve our offering is the best way to stay relevant and effective. Because let's face it, in this age of communications, anything else would be an insult to our customers' intelligence.