Workshop D (1-day, in English)

Title: Impact Evaluation: Global Perspectives and Best Practices

Micheline Chalhoub-Deville, University of North Carolina, Greensboro
Hanan Khalifa, MetaMetrics Inc.
Eunice Jang, University of Toronto



Workshop Description 

     Impact evaluation is a vital process that helps organizations and policymakers measure the effectiveness of programs and interventions. This full-day workshop is dedicated to enhancing participants' understanding of impact evaluation, emphasizing global and local perspectives and the best practices that inform effective evaluation strategies.

     The workshop commences with an introduction to impact evaluation, outlining various types of evaluations, models, and frameworks, and discussing the intersection between testing and evaluation. Participants will explore the goals of impact evaluation, which include assessing program effectiveness, informing policy decisions, and enhancing accountability. An initial hands-on activity will encourage participants to share their experiences with evaluations, thus fostering a collaborative learning environment from the outset (Rossi et al., 2004).

     Following this introduction, the focus will shift to defining evaluation outcomes and developing indicators. Participants will learn how to articulate clear objectives for their evaluations and differentiate between short- and long-term outcomes. The workshop will emphasize the importance of developing indicators that are specific, measurable, appropriate, relevant, and time-bound. In a paired exercise, attendees will define outcomes and indicators for a hypothetical program, applying the concepts learned (Kusek & Rist, 2004).

     The next session will cover evaluation design methods, and participants will learn how to select the most suitable design based on factors such as context, resources, and timelines. In small groups, they will then choose an appropriate evaluation design for a given scenario and justify their selection, thus reinforcing their understanding of design principles through practical application (Chen, 2015).

     Participants will explore different data collection methods, including surveys, interviews, focus groups, and observations, and discuss ways to enhance the reliability and validity of the instruments. They will create a short survey or interview guide based on the outcomes and indicators developed earlier and practice conducting interviews in pairs or small groups, allowing them to experience data collection firsthand (Fowler, 2014).

     The workshop will conclude with a discussion on best practices and ethical considerations in impact evaluation, emphasizing informed consent, confidentiality, and stakeholder engagement. A final wrap-up session will allow participants to share insights gained throughout the day and discuss how they can apply these practices in their respective contexts.

     The workshop is designed to empower participants with the knowledge and skills needed to conduct impactful evaluations that inform decision-making and enhance program effectiveness. By the end of the workshop, participants will possess a solid understanding of impact evaluation in language testing and be equipped with the tools and strategies necessary to implement effective evaluations in their own contexts. They will develop a nuanced understanding of how context shapes evaluation processes and leave with actionable next steps to ensure their programs are not only effective but also responsive to the needs of all stakeholders, ultimately contributing to improved educational outcomes.


Detailed Outline 

Session 1
Introduction to Impact Evaluation
   - Purpose & goals (e.g., assessing effectiveness of initiatives)
   - Different types of evaluation (outcome, output, etc.)
   - Different evaluation frameworks (e.g., the logical framework (logframe), theory of change, Kirkpatrick's four-level model)
Hands-On Activity: Participants briefly share experiences with
evaluations they have conducted or been involved in.
Session 2
Defining Evaluation Outcomes & Indicators
   - Identifying goals & objectives
   - Developing short- and long-term outcomes
   - Developing SMART indicators
Hands-On Activity: Participants work in pairs to define outcomes
and indicators for a hypothetical program.
Session 3
Evaluation Design Methods
   - Overview
   - Selecting the right design
Hands-On Activity: In small groups, participants select an appropriate
evaluation design for a given scenario and justify their choice.
Session 4
Data Collection Methods
   - Overview of data collection methods
   - Instrument reliability, validity and feasibility
   - Sampling (discussion on probability and non-probability)
Hands-On Activity: Participants create a short survey or interview guide based on
the outcomes and indicators they developed earlier. They then practice conducting
interviews in pairs or small groups.
Session 5
Group Case Study Exercise
   - Review a successful case study
Hands-On Activity: In groups, participants develop an evaluation plan based on the case study,
outlining objectives, indicators, design, and data collection methods. Each group presents their
plan to the larger group for feedback.
Session 6
Best Practices and Ethics
   - Ethical considerations: informed consent, confidentiality, etc.
   - Engaging stakeholders and ensuring cultural sensitivity
Session 7
Wrap up
   - Participants share key insights from the day and discuss how they can apply these practices in their work.
   - Recap of key takeaways
   - Open floor for questions
   - Feedback and evaluation of the workshop

Presenter Biodata: 

Dr. Hanan Khalifa is an international language testing and evaluation expert who has developed national and international examinations, aligned curricula and tests to standards, and evaluated donor-funded programs. For two decades, Hanan led Education Reform & Impact Evaluation work at Cambridge University Press & Assessment English and advised ministries of education globally. She is currently leading a Pan-Arab initiative on developing a conjoint measurement scale for the Arabic language for use in multilingual and multicultural communities. As an academic and a Council of Europe expert, she has contributed to and led several impactful projects, e.g., the socio-cognitive model for Reading (Khalifa & Weir, 2009), the CEFR Companion Volume (2018, 2020), and the Qatar Foundation Arabic Framework (2022). Dr. Khalifa has won several international awards and has presented and published on various language education topics.



Eunice Eunhee Jang is a Professor at the Ontario Institute for Studies in Education, University of Toronto. Specializing in diagnostic language assessment, AI applications, and program evaluation, Dr. Jang has led large-scale assessment and validation initiatives in collaboration with various stakeholders, such as the Steps to English Proficiency (STEP) language assessment framework for Ontario public schools. She has been actively engaged in strengthening language proficiency requirements through standard-setting studies that help professional regulators determine the language proficiency of internationally educated healthcare professionals for immigration and licensing purposes. Dr. Jang is the author of "Focus on Assessment," which offers educators insights into assessing K-12 English language learners. She is a recipient of the University of Toronto's David Hunt Graduate Teaching Award, the Tatsuoka Measurement Award, the Jacqueline Ross TOEFL Dissertation Award, and the IELTS MA Dissertation Award. Currently, Dr. Jang is leading the BalanceAI and APLUS projects, which critically examine the impact of advanced technological innovations on language and literacy assessments in K-12 and postsecondary education contexts.



Micheline Chalhoub-Deville holds a Bachelor's degree from the Lebanese American University and Master's and Ph.D. degrees from The Ohio State University. She currently serves as a Professor of Educational Research Methodology at the University of North Carolina at Greensboro (UNCG), where she teaches courses on language testing, validity, and research methodology. Prior to UNCG, she worked at the University of Minnesota and the University of Iowa. Her professional roles have also included positions such as Distinguished Visiting Professor at the American University in Cairo, Visiting Professor at the Lebanese American University, and UNCG Interim Associate Provost for Undergraduate Education. Her contributions to the field include publications, presentations, and consultations on topics such as computer adaptive tests, K-12 academic English language assessment, admissions language exams, and validation. She has over 70 publications, including books, articles, and reports, and has delivered more than 150 talks and workshops. Additionally, she has played key roles in securing and leading research and development programs, with total funding exceeding $4 million. Her scholarship has been recognized through awards such as the ILTA Best Article Award, the Educational Testing Service TOEFL Outstanding Young Scholar Award, the UNCG School of Education Outstanding Senior Scholar Award, and the Center for Applied Linguistics Charles A. Ferguson Award for Outstanding Scholarship. Professor Chalhoub-Deville has served as President of the International Language Testing Association (ILTA). She is Founder and first President of the Mid-West Association of Language Testers (MwALT) and is a founding member of the British Council Assessment Advisory Board (APTIS), the Duolingo English Test (DET) Technical Advisory Board, and the English3 Assessment Board. She is a former Chair of the TOEFL Committee of Examiners as well as a member of the TOEFL Policy Board. She has participated in editorial and governing boards, such as Language Assessment Quarterly, Language Testing, and the Center for Applied Linguistics. She has co-founded and directed the Coalition for Diversity in Language and Culture, the SOE Access & Equity Committee, and a research group focused on testing and evaluation in educational accountability systems. She has been invited to serve on university accreditation teams in various countries and to participate in a United Nations Educational, Scientific, and Cultural Organization (UNESCO) First Experts' meeting.



