18th Annual Conference
The 18th annual AEA Europe conference took place in Prague, Czech Republic
Theme: Assessment Cultures in a Globalised World
Date: 9th-11th November 2017
Workshops: Pre-conference workshops were held on the 8th of November.
Conference Venue: The venue was the Hotel Corinthia.
Registration: Registrations were handled by our sister site.
Find more information at http://easyconferences.eu/aea2017/index.html.
Assessment Cultures in a Globalised World
Assessment is a complex and multifaceted practice. Yet many assume that when we talk about ‘assessment’ we have a common understanding of what it is and what it involves; that there is a universal understanding of assessment. However, this is not really the case. For example, in some countries teacher assessment carries considerable authority and inherent trust and is treated in much the same way as national examinations, while in other countries it is considered less reliable than external assessments. Indeed, even examinations tend to have different status and purposes across educational systems.
Within Europe, different assessment cultures can be observed. In the Scandinavian countries, for example, teacher assessment dominates, while in England schools can choose their examinations from a variety of awarding organisations. In other jurisdictions, such as in Eastern European countries, national exams are highly trusted and the ministry of education is responsible for a single external examination which all students complete. Assessment cultures can also be very diverse: in some countries (e.g. the Netherlands) standardized testing has a long history and is deeply rooted, whereas in others (e.g. France) it is less common and not as widely embedded in the education culture. Is this a case where east meets west or south meets north, or are assessment cultures becoming more similar because of a more globalised education arena and employment marketplace? Most countries participate in one or more of the international comparative assessments such as PIRLS, TIMSS, PISA and ICCS. Are the results of such international assessment systems overly influencing policymaking, or is it inevitable that our assessment cultures are becoming globalised along with other social and political cultures more generally?
Even locally, within countries, assessments might be understood differently by different stakeholders. Parents and students sometimes conceive of examinations as absolute and objective, while assessment developers are concerned about levels of uncertainty or measurement error. As more and more national tests are introduced into national education systems, communicating about test outcomes and how they might inform teaching, learning and/or policymaking has become a major challenge for test developers, researchers and academics. National standards and school accountability concerns, while of intrinsic value to particular audiences, might well counteract the positive backwash on teaching and learning of good assessment practice and principles. How do those responsible for assessment development and regulation meaningfully interact with key stakeholders (e.g. students, parents, teachers, principals and school governors) to discuss and debate such dilemmas?
Furthermore, how do assessment practices and systems take account of the movement and integration of people across and within countries, and allow for the variety of experiences and understandings of education and its purpose that different groups bring to the assessment arena? Can we develop assessments and assessment systems that are culturally sensitive and allow every student to show us what they can do?
A sample of slides from presentations at the 18th Annual Conference in Prague:
- E. Harrison, W. Pointer, Y. Bimpeh & B. Smith – AQA, UK: Beyond classical statistics: Different approaches to evaluating marking reliability
- Aigul Yessengaliyeva – National Bank, Kazakhstan; Nico Dieteren – Cito, Netherlands: East meets West: how high-stakes national assessments are valued in different social and cultural contexts
- Avi Allalouf, Nitzan Friedman – NITE, Israel: Improving Assessment Literacy among Teachers and the General Public Using MOOCs
- Ben Smith – AQA, UK: Putting a G-theory approach to marking reliability through its paces
- Caroline Morin, Stephen Holmes, Beth Black – Ofqual, UK: Understanding the nature of marker disagreement, uncertainty and error in marking
- Cassy Taylor, Gitte Sparding – Qualifications Wales, UK: A review of qualifications in Wales, including comparisons with those in Germany, Canada, Australia and New Zealand
- Egil Weider Hartberg, Vegard Meland – Inland Norway University, Norway: Assessment capacity building MOOCs
- Stephen Holmes, Nadir Zanini – Ofqual, UK: Approaches to predicting predictability of examination papers
- Stephen Holmes, Caroline Morin, Beth Black – Ofqual, UK: Rank-order approaches to assessing the quality of extended responses
- Jane Nicholas – NFER, UK: The influence of the National Reading Tests on teaching and learning of reading strategies
- Karen Melrose, Rebecca Mead – Ofqual, UK: The impact of Government educational reforms on the maintenance of AS standards
- Mark Hogan – WJEC, UK: Do translated items perform the same way?
- Nico Dieteren, Sjoerd Crans – Cito, Netherlands; Laila Issayeva – NIS, Kazakhstan: Assessment tool validation research at Nazarbayev Intellectual schools: student performance monitoring system in Mathematics
- Sarah Hughes – Cambridge Assessment, UK: Developing a culture of research-informed practice by encouraging research use in an assessment organisation
- Sarah Maughan – AlphaPlus Consultancy, UK: Accessibility for All Learners in a Computer Adaptive Test
- Tamar Kennet-Cohen, Yonatan Sa’ar – NITE, Israel: Adding a Writing Task to a University Admissions Test: An Evaluation of Short-Term Consequences
- Tina Isaacs – UCL, UK; Kristine Gorgen – Oxford University, UK: Questions of power and structure: who sets exam standards?
- Tom Benton – Cambridge Assessment, UK: Using all our data to maintain examination standards
- Tzur Karelitz – NITE, Israel: Perception-Based Evidence of Validity