Newsletter – Winter 2019

Editor’s Note

Dear Readers,

Happy New Year 2019! AEA-Europe wishes you and your family all the very best of health, joy and success! Let us start by being grateful for everything the association has achieved to date, as we look forward to working together to reach greater heights.

In this issue, we warmly welcome our new President of AEA-Europe, Professor Jannette Elwood, who encourages us all to use the association’s communication channels to share information, experiences and research in the field of assessment, and so help consolidate the assessment culture across Europe. She also reminds you to save the dates for our 20th annual AEA-Europe conference in Lisbon from 13th-16th November 2019, to be hosted by IAVE Portugal. As you know, our annual conference is the highlight event of the association: it brings together professionals working in the field of assessment, connecting people around high-quality scientific presentations and networking activities. Details for conference submission and registration will follow soon, and we count on your participation to make the event a success.

The last conference, held in Arnhem-Nijmegen in November 2018 and hosted by the Cito team, included a packed academic programme, and it is our great pleasure to share in this newsletter some contributions kindly sent in by members. Our pre-conference workshops included one from AEA-Europe’s own e-Assessment Special Interest Group on innovative onscreen assessment, in which participants were offered a hands-on exploration of digital assessment items from a range of international providers. Another pre-conference workshop, held by Cambridge International, focused on bilingual assessments and multilingual learners, grounding the session in practicalities and guidance for researchers and practitioners in this field. Read on to find out about the paper presented by the National Foundation for Educational Research (NFER) on its small-scale research project into the challenges and opportunities of the transition from paper-based to computer-based assessments. In addition, you can learn about digital assessments using technology-enhanced items in France, a case which illustrates the complexities of adequately assessing student performance and the data science this requires. AQA, an examination board in England, shares with us its work on reviewing assessments and data in a move towards effective evaluation of assessments.

You can also discover two examples of our new Ignite Sessions here. In the first, Cito presents its collaboration with two universities to develop a two-year part-time professional master’s programme on educational assessment. In the second, Open Assessment Technologies (OAT) in Luxembourg illustrates how open standards can serve as bridges connecting systems from multiple vendors to create a true best-of-breed Next Generation Digital Learning Environment. Also in this issue, we bring you three contributions from the poster sessions of the last conference. Two were sent in by OAT, covering, respectively, the challenges involved in a large-scale national diagnostic assessment in Italy and the thinking and research on the effect of a strong user experience on digital assessment. The third comes from none other than the AEA-Europe 2018 Poster Award winner, Gemma Cherry, from Queen’s University Belfast, who presented her research on disparities in educational attainment across urban and rural locations of Northern Ireland. Our last piece is a call to readers keen to share knowledge and experiences of e-assessment: save the dates for the 2nd FLIP event in Rome in June 2019.

By now you get the drift: a great deal is happening in the world of assessment, particularly e-assessment! A big shout out to all our authors for contributing to this issue! By sharing their work in this space, they give readers a flavour of the conversations that took place during the last conference and a starting point for reflecting on what could be presented at the upcoming one. I therefore reiterate our new president’s call to share your work on research, educational assessment and policy programmes, and I urge all of you to show up in this space and let us know how your work is helping to enrich the world of educational assessment.

To include your story in the spring issue of our newsletter, please feel free to reach out to me at aminath.afif@men.lu.

Happy New Year and happy reading!

Amina Afif
AEA-Europe Newsletter Editor
Publications Committee

A word from the President

I am delighted to write these few words in this edition of our newsletter, my first as President of AEA-Europe. The newsletter has become an important component of the Association’s communication strategy: it provides the opportunity for members to share information, experiences and research in the field of assessment, and it helps us fulfil one of the aims of the Association by disseminating knowledge and consolidating an assessment culture across European nations. In this spirit, I call upon all our readers to contribute to the newsletter in any way that you can: research articles, news pieces or blogs, or advance information about courses, seminars and conferences that you are involved in. This will help raise awareness and increase the visibility of your own national activities related to assessment in education.

We have just had another great annual conference, hosted by our colleagues from Cito in Arnhem-Nijmegen. It was a wonderful coming together of our community across the 3.5 days, with high-quality pre-conference workshops, papers, discussion groups, posters, symposia and, of course, the new Ignite Sessions! We very much welcome your feedback, so please complete the online survey, which should have appeared in your in-box, by 17th January at the latest.

We have also encouraged you to send in your presentations so that they can be uploaded to the AEA-Europe website. You can do this by contacting admin@aea-europe.net.

Also let us know your most recent news by posting it on our AEA-Europe Facebook page and sending us tweets via Twitter. By following us there you can read the latest news and view photos and videos of member activities. Word is spreading, and our Facebook page continues to reach a wider readership. We are adding information to these sites all the time, so please do check us out, join us if you have not yet done so, and be alerted to news of the Association. Also check out the AEA-Europe website, upgraded with a fresher, more user-friendly look to create a better space for connecting with the educational world.

Finally, I would like to remind you of our 20th annual AEA-Europe conference, to be held in Lisbon, Portugal from 13th-16th November 2019. These dates have changed since we advertised them at the Arnhem-Nijmegen conference, to avoid clashing with the big Global Web Summit being held in Lisbon at the same time as the dates we had initially chosen. We will alert you all through email/Facebook/Twitter/website when the submission of proposals opens.

Our conference theme, ‘Assessment for transformation: teaching, learning and improving educational outcomes’, is timely and relevant for many European countries. It does not necessarily focus on new perspectives on student assessment, but aims to emphasize assessment as a tool that can play a relevant role in transforming the ways students are taught and the ways they develop as learners, and that can contribute to high-quality educational outcomes around the world. There is no doubt that this theme will give rise to passionate contributions during the conference. I do hope many of you will join us there.

Noting all the work that goes on in and around the Association, I wish to warmly thank those of you who voluntarily dedicate your valuable time and effort to the various committees, as well as the general work of our Association, a commitment which truly contributes to where AEA-Europe stands today.

Happy reading!

AEA-Europe SIG eAssessment pre-conference workshop: Innovative onscreen assessment

Caroline Jongkamp, from Cito, and Rebecca Hamer, from the International Baccalaureate (IB), co-hosted the AEA-Europe SIG eAssessment pre-conference workshop on innovative onscreen assessment. The workshop attracted 19 participants from 6 countries and 15 organisations. About a third had no direct experience of implementing digital assessment, while other participants were ‘old hands’ with many years of hands-on involvement.

This mix was a good starting point for the discussion on barriers to the successful implementation of digital assessment and possible solutions. Workshop participants identified seven barriers and then collaboratively proposed subthemes and possible solutions. One major barrier was School/Institutional Preparedness, clustering logistical issues (e.g. power supply and connectivity; policy, charging and software issues with BYOD) as well as stakeholder concerns (e.g. staff confidence and training; conservatism at school level as well as among teachers, students and parents). A second was Cost and Effort, covering issues such as the need for trained item developers, the purchase of software and hardware, necessary governmental investment in infrastructure, and the updating of regulations and accreditations. To initiate useful sharing of experience and ideas, we paired ‘concerned participants’ with ‘solution-proposing participants’. Lively discussions ensued and contact details were exchanged.

The main part of the workshop was a hands-on exploration of existing digital assessment items from a range of providers: the Dutch national exams (Cito), digital items used in PISA and in the US Nation’s Report Card, as well as exclusive access to all items from five full IB Middle Years Programme onscreen exams from the May 2018 exam session. Participants examined and categorised items in relation to assessment aim and item characteristics using two recent frameworks: the taxonomy created by Kathleen Scalise in 2009 and the analytical framework developed by Parshall, Harmes, Davey and Pashley (2010). This small-group activity led to in-depth discussions about the alignment of assessment aim and chosen item type, linking back to the earlier discussion of barriers concerning construct validity and item-format alignment. The exploration culminated in a plenary discussion on the classification of a sample of provided items, on wished-for or known item types that were missing or difficult to classify, on issues encountered in using the classification taxonomies, and so on.

Participants were highly positive in their evaluations; they praised the focused activities and quality discussions, as well as the high level of interactivity throughout the day. They were enthusiastic about the taxonomies we had shared, while also concluding that neither taxonomy supported a clear alignment of assessment aim to item type; all participants thought that this was an outcome to work towards, perhaps within the framework of a follow-up workshop next year.

Rebecca and Caroline will be producing a longer summary of the day’s findings for AEA and the workshop participants. They have also communicated the comments and suggestions for improvement to Kathleen Scalise, who is currently working on an update of the 2009 taxonomy.

For further information about this workshop, please contact rebecca.hamer@ibo.org or Caroline.Jongkamp@cito.nl.

Achieving in Content Through Language: Assessing Bilingual and Multilingual Learners

Recent estimates indicate that at least half the world’s population is bilingual. With this in mind, it is not surprising that over the past 40 years bilingual education has increased dramatically; it is now believed that almost half of school children access their education through their second language. Bilingual education, the use of two or more languages as mediums of instruction for ‘content’ subjects such as science or history, is a fast-developing practice that is becoming an increasingly widespread form of language learning in schools (Mehisto & Genesee, 2015). This has given rise to the provision of bilingual education programmes as part of mainstream school education in the great majority of countries at primary and secondary levels.

English is a major medium of instruction and assessment for international awarding bodies providing programmes of learning and assessments worldwide in a wide range of subjects. Programmes are delivered by schools in a variety of multilingual and educational contexts, and increasingly in bilingual education contexts. One key function of these programmes is to prepare bilingual students whose first language is not necessarily English as candidates for international high-stakes assessments. However, bilingual assessment is in need of attention and development and remains a key challenge for the 21st century (García, 2009).

This workshop focused on outcomes from a number of Cambridge Assessment International Education (Cambridge International) bilingual assessment research studies. Of particular relevance to the research described are bilingual education approaches relating to content and language integrated learning (CLIL), which have been used highly successfully across Europe. The workshop comprised three taught sessions punctuated by group discussion work.

Following an introductory discussion (in which workshop participants were given an opportunity to talk about their respective cultural contexts and challenges), the first session outlined theoretical views on bilingualism (and the difficulties in defining such a concept), bilingual educational approaches (in particular CLIL approaches) and bilingual assessment (widely regarded as a ‘thorny’ issue).

In the second session, insights were shared from Cambridge International language awareness research, which has led to guidance for schools with bilingual learners (senior staff, content teachers and students) and for assessment agencies (test constructors, question-paper setters and examiners). In particular, the workshop demonstrated how research findings have helped raise L2 awareness at all stages of the development of question papers, mark schemes and examiners’ reports. The link between Cambridge International assessments and the Common European Framework of Reference for Languages (CEFR) was also explored, and estimates of the minimum CEFR language levels needed for ‘content’ assessments were shared, to aid teachers in preparing their students. There was also an opportunity to discuss how the research outcomes have informed the construction of the beginnings of an academic language proficiency scale based on the CEFR.

In the final session, overall research outcomes, which not only align with previously reported findings in the professional literature but also offer new insights, were discussed.

The overwhelming majority of workshops in this area tend to be academic in their focus. This workshop, by contrast, was grounded in practicalities and offered substantial guidance to assessment researchers and practitioners who are currently implementing (or thinking of implementing) bilingual assessments.

For further information about this workshop, please contact Stuart Shaw stuart.shaw@CambridgeInternational.org at Cambridge Assessment International Education, UK.

Transitioning from paper-based to computer-based assessments at a national level: Challenges and opportunities

Computer-Based Assessment (CBA) is increasingly used in large-scale assessments at the national or system level. These include international assessment studies such as PISA (onscreen since 2015), PIRLS (ePIRLS from 2016) and TIMSS (eTIMSS from 2019). Additionally, a range of national and statutory tests are switching to onscreen presentation, both in Europe and beyond.

Although there are benefits to moving to CBA, such as adaptive and therefore personalised testing, automated marking and the efficient generation of a range of reports on student performance, there are also challenges, particularly in relation to large-scale assessments.

At NFER, a small-scale research project was undertaken, involving a literature review and a small number of case studies of countries which have made the switch to onscreen testing for national-level primary and secondary assessments. Key government personnel with responsibility for implementing CBA were interviewed about the motivations and drivers for moving onscreen; the challenges encountered and the methods used to overcome them were also probed.

Notable findings included similar drivers, benefits and challenges in the transition to CBA across participants. However, diverse solutions had been used to overcome the challenges, mainly owing to the differing social and political contexts of the individual countries. Common challenges mentioned by participants included ICT infrastructure shortages, varying levels of support for CBA, and validity and reliability concerns. Even where similar solutions were found, they appeared to be used to differing extents: for example, varying approaches to the resourcing of hardware and software were observed in France and Northern Ireland, even though in both cases these had been purchased through government grants.

All the large-scale national assessment programmes discussed in this study were classified as low-stakes and, interestingly, rather than covering the entire primary-to-secondary age range, focussed instead on 7-15 year-olds. An interesting question remains as to whether high-stakes national tests and qualifications will also move to onscreen administration, and what the timeframe might be.

To learn more about this research project, please contact Louise Bailey, l.bailey@nfer.ac.uk at the National Foundation for Educational Research, UK.

Assessment in the age of Data Science: the case of interactive items tested in France

In this digital era, France, like many other countries, is transitioning from paper-based to electronic assessments to measure student performance in education. New opportunities are emerging, which in turn give rise to new challenges.

There is rising interest in technology-enhanced items, which offer innovative ways to assess traditional competencies, address 21st-century skills and link assessment feedback more closely to learning. These technology-enhanced items can be extremely valuable when measuring problem-solving skills: compared to traditional assessments, they provide not only scoring information but also enable the collection of rich data to determine how students have reached their answers.

The Directorate for Assessment, Forecasting and Performance (DEPP) of the French Ministry of Education has been deploying digital assessments that include technology-enhanced items since 2016. These rich, seemingly complex items can be used to measure how students interact in a given situation to analyse and solve a problem. The exercises and questions included in the interactive items engage students on multiple levels and capture not just their responses but their thought process as well. The rich log data captured by these items gives deep insight into how students approach a problem and helps identify areas that might require additional focus.

Nevertheless, the rich log data resulting from interactive items also pose new challenges. The traditional tools and techniques used for handling assessment data are no longer adequate and must be revised. New competencies in data science and data engineering are needed to face the challenges of storing, processing and making sense of the extensive and less structured data these items offer.

In 2017, an exploratory study was conducted by DEPP in which 25 interactive mathematics items were administered to 15,000 grade 9 students. DEPP worked with Capgemini to analyse the resulting log data, which was collected in a semi-structured JSON format. Work was organized in four phases: data preparation and feature engineering; data modelling; analysis of results; and presentation of results and data visualization. An additional development phase was undertaken by Capgemini to facilitate the processing of log data from future assessments.

The application of various machine learning algorithms to this new type of data allowed information to emerge on the strategies students develop during problem-solving. This information enables the bridging of summative and diagnostic evaluation: deployed in the classroom, these items would allow teachers not only to detect potential difficulties faced by students, but also to characterize their nature, in order to adapt teaching accordingly.
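To give readers a concrete feel for this kind of pipeline, the sketch below shows one plausible way to turn semi-structured JSON event logs into fixed-length features and then cluster students by solving strategy. It is a minimal sketch only: the field names, file name and choice of algorithm are hypothetical illustrations, not DEPP’s or Capgemini’s actual implementation.

```python
# Illustrative sketch: feature engineering on semi-structured JSON event
# logs, followed by unsupervised clustering of solving strategies.
# Field names (student_id, events, type, timestamp) are hypothetical.
import json
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def extract_features(log_line: str) -> dict:
    """Turn one student's raw event log into fixed-length features."""
    record = json.loads(log_line)
    events = record["events"]
    timestamps = [e["timestamp"] for e in events]
    return {
        "student_id": record["student_id"],
        "n_actions": len(events),                           # activity level
        "n_tool_uses": sum(e["type"] == "tool" for e in events),
        "time_on_item": max(timestamps) - min(timestamps),  # seconds
        "changed_answer": sum(e["type"] == "response" for e in events) > 1,
    }

with open("item_logs.jsonl") as f:  # assumes one JSON record per line
    features = pd.DataFrame(extract_features(line) for line in f)

# Standardise the features, then group students into candidate strategies.
X = StandardScaler().fit_transform(
    features[["n_actions", "n_tool_uses", "time_on_item", "changed_answer"]]
)
features["strategy"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Average feature profile per cluster: a rough portrait of each strategy.
print(features.groupby("strategy").mean(numeric_only=True))
```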

The technology-enhanced items and their log data can hence provide excellent feedback to the educational community – including teachers, researchers, and policymakers – as to student engagement, their understanding of the subject areas, and how they approach problem-solving.

To learn more about this work, please contact Reinaldo Dos Santos, reinaldo.dos-santos@education.gouv.fr at the DEPP, Ministry of National Education, Higher Education and Research, France.

Reviewing assessments and dense data: towards effective evaluation of assessments

Examination boards have an increasing amount of rich data available to them about the performance of assessments. It is important that this data can be communicated effectively to different groups of people to support assessment review and improvement. A key conversation is between the assessment writers, who are the subject experts, and the assessment experts, who ensure that exam papers meet regulatory requirements and design principles.

This work focuses on the review stage of the exam paper cycle. AQA has, historically, used a static Question Paper Functioning Report (QPFR) to evaluate how an exam paper has performed. The impact of these reports, and engagement with them, has been varied. It is suggested that allowing space for the reflective evaluation of selected data and using this evidence as a starting point for discussion provides a better opportunity for rich conversations and a greater understanding of assessment issues than starting with the QPFR.

The presentation at AEA-Europe outlined the theoretical basis of, and practical experience in, updating the assessment review process. This started by recognising that differing perspectives on assessment performance exist. Within education, prevalent misconceptions and outmoded models are hard to challenge, so a psychological model of explanation was presented, supported by examples from the literature. It was suggested that accepting information is significantly easier than rejecting it; thus people are more likely to believe an incorrect but complete model over an incomplete but correct one, and to accept explanations that they find intuitively satisfying. Four key questions were put forward as a framework for evaluating evidence:

  1. Does the evidence fit with what I believe?
  2. Is the evidence coherent?
  3. Do I believe the source?
  4. Do other people believe this?

In applying this framework to the process of reviewing assessment performance, the key change for assessment experts was to move from being passive consumers of data provided to them to becoming prosumers of available evidence. This allowed the data to stimulate and inform conversations about the key assessment issues. This discursive approach was seen as crucial to evaluating complex assessments, retaining an understanding of the context, incorporating data and leading to effective improvements.

Initial evaluations of this approach show increased engagement in assessment improvement conversations. The next steps in this work are: first, a more in-depth evaluation of how these conversations are developing, by interviewing participants and reviewing the materials used within the meetings; second, consideration of how assessment writers evaluate the quality of items during the assessment production process; and, third, continued research into the use of data visualisations within this framework. It is hoped that a set of principles may be developed for more effective communication of assessment issues to a non-technical audience.

Jonathan Powell is a research assistant at AQA, one of the main awarding bodies (examination boards) in England. If you would like more information about this or other research, please contact jmpowell@aqa.org.uk.

The assessment expert of the future: a master’s programme on educational assessment

High-quality assessment in education is of the utmost importance: tests and exams inform important decisions about the development opportunities of young people in particular, and about the careers of adults. The quality of assessment is determined by four main indicators: (1) the product, i.e. the actual exam questions and assessment tasks; (2) the process, i.e. the procedures by which assessments are developed and the actual administration regulations; (3) the public, since not only the examinees but also the working context, public opinion and the press are important stakeholders; and (4), last but certainly not least, the professionals involved in assessment policy making, in the construction of assessments and in the interpretation of assessment results. Their assessment competences influence the quality of assessment in education.

Until recently, the educational community in the Netherlands lacked a master’s-level programme for training professional educational testing experts. The educational field and policymakers responded positively to the development of a professional master’s programme on educational assessment, and Fontys University of Applied Sciences, in collaboration with the University of Twente and Cito, started its development.

The programme is characterized by:

  • A solid theoretical foundation that can be applied in different practical situations.
  • Six main themes are distinguished: the assessment landscape, design of educational assessments, balancing formative and summative assessments, applied testing theory, accountability and technology-enhanced assessment.
  • Education is explicitly connected to students’ professional practice.
  • Feed-up, feedback and feed-forward are central.
  • Face-to-face education, online education and activities in the workplace are combined.
  • Students show their knowledge and skills in tests and professional products.

The students are professionals who work in a variety of professional contexts and at different educational levels, such as secondary, vocational or higher education. They are looking for solutions, for example in advisory trajectories on assessment policy, in evaluation discussions with teachers about specific assessment questions, in discussions about the use of mobile assessments for formative and summative purposes, or in the use of assessments for group work. Highly qualified professionals are needed to evaluate the reliability and validity of tests, use test theory in an adequate way and train their colleagues in different standard-setting methods.

By training core players in educational organisations, the programme enables them to spread the word and share what they have learned, increasing knowledge and skills in testing and assessment, and thus assessment quality, in all education sectors.

The master’s programme is a two-year part-time programme of 60 European Credits. Each year contains four ten-week periods in which theory, skills and community building are combined, with face-to-face meetings organized once every three weeks. The programme is now in its third year: more than 60 students have started, and we are proud of our first nine graduates. In their research, they have investigated, for example, guidelines for assessment construction, the effect of feedback, and the use of augmented reality glasses.

We would be happy to discuss the content of the programme with you. If you want to provide us with feedback or if you have questions, please contact us.

For further information, please contact Desirée Joosten-ten Brinke (Fontys), d.tenbrinke@fontys.nl, or Theo Eggen (Cito, Netherlands), Theo.eggen@cito.nl.

Building Bridges by Open Standards

The future of education is all about personalized learning: offering effective learning pathways using a wide range of content and learning apps curated from various sources to make it easier for teachers to teach and students to learn faster and more effectively.

This puts huge demands on today’s assessment and learning environments: the functional scope is broadening, technologies are advancing rapidly and end-users, shaped by their daily consumer experience, are more demanding. As a result, former product features (e.g. remote proctoring) are turning into separate product categories, posing challenges for system integration.

During this Ignite Session, we illustrated how open standards like IMS QTI, LTI and Caliper can serve as bridges to connect systems from multiple vendors to create a true best-of-breed Next Generation Digital Learning Environment (NGDLE), ready for the challenges and students of tomorrow!
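To give a flavour of what such interoperability means in practice, the sketch below parses a minimal, hand-written IMS QTI 2.1 choice item, the kind of portable content that lets an item authored in one vendor’s tool be delivered by another vendor’s engine. The item itself is an invented example for illustration, not an OAT or conference artifact.

```python
# Minimal sketch: parsing an IMS QTI 2.1 choice item so that content
# authored in one vendor's tool can be delivered by another's engine.
# The XML below is a hand-written illustrative item, not a real exam.
import xml.etree.ElementTree as ET

QTI_NS = {"qti": "http://www.imsglobal.org/xsd/imsqti_v2p1"}

ITEM_XML = """
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="demo-item-1" title="Conference city"
                adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
                       baseType="identifier">
    <correctResponse><value>B</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true"
                       maxChoices="1">
      <prompt>Which city hosts the 2019 AEA-Europe conference?</prompt>
      <simpleChoice identifier="A">Rome</simpleChoice>
      <simpleChoice identifier="B">Lisbon</simpleChoice>
      <simpleChoice identifier="C">Paris</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
"""

root = ET.fromstring(ITEM_XML)
# Read the keyed response from the standard responseDeclaration element.
correct = root.find(
    "qti:responseDeclaration/qti:correctResponse/qti:value", QTI_NS
).text
choices = {
    c.get("identifier"): c.text
    for c in root.iter("{http://www.imsglobal.org/xsd/imsqti_v2p1}simpleChoice")
}
print(f"Correct answer: {correct} -> {choices[correct]}")
```

Because the item structure is fixed by the standard rather than by any one product, a delivery engine, a reporting tool and an authoring system can all read the same file without bespoke converters.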

For more information about this Ignite Session, please contact: Mark Molenaar, markm@taotesting.com at Open Assessment Technologies.

Large Scale National Diagnostic Assessment in Italy: A Case Study

From April to May 2018, Invalsi, on behalf of the Italian Ministry of Education and leveraging the open-source TAO solution and Open Assessment Technologies services, delivered digital assessments to the Ministry’s complete population of 8th-10th grade students. Covering Italian, Math, and English Reading and Listening, this first-of-its-kind operation marked one of the most significant leaps in digital assessments for a European country. During the 4-week period, critical data was gathered from 1.8 million students across 4.4 million tests to help the national education system better understand educational disparities across the country.

Such a high-stakes operation calls for multiple key ingredients for smooth administration. Given the media exposure and high political stakes, and the implications for teachers, principals and parents, total confidence in the accuracy of the test results is essential.

Of all the factors, we found the following to be the most challenging and important:

High Availability and Scalability: the infrastructure needs built-in redundancy at all levels to ensure a smooth user experience, working on the assumption that every possible incident and disaster will happen but must not cause any interruption to test administration.

Our operation in Italy was scaled to support 200,000 concurrent users taking digital assessments, including the media-intensive components of the English Listening tests; in practice, a standard concurrency of 50k, with peaks of 80k concurrent test takers, was observed. As in previous, smaller operations, and owing to this one’s visibility, several attempted attacks, such as DDoS, brute force and general scanning, had to be handled.

Fair Testing for Students with Cognitive Disabilities – To ensure a level playing field and deliver consistent results, a Read Aloud functionality, which highlights the text being read aloud, was implemented to help students with cognitive disabilities. Other accommodation tools needed to ensure a fair assessment, such as zoom, a magnifier and color contrast, were also made available.

Operational Equipment and Workstations – As 200k workstations were used to administer the assessment, the availability of operational consoles and suitable internet connectivity had to be ensured. Considering the need for a smooth, consistent user experience, a diagnostic tool was put in place to run across testing networks, allowing for tweaks as needed to variables such as the browsers used or the number of users in concurrent test sessions. The online service was built to minimize requirements at this level, but some resources, such as audio listening, imposed obvious minimum requirements.
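A workstation readiness check of the kind described above might look like the following minimal sketch; the diagnostic URL and the latency and throughput thresholds are invented for illustration and do not describe Invalsi’s actual tooling.

```python
# Illustrative pre-test network diagnostic for a single workstation.
# The endpoint URL and thresholds below are hypothetical values.
import time
import urllib.request

DIAGNOSTIC_URL = "https://assessment.example.org/diagnostic/sample.bin"
MAX_LATENCY_S = 0.5        # budget for connecting and receiving headers
MIN_THROUGHPUT_KBPS = 512  # rough floor for audio listening components

def check_workstation() -> bool:
    """Measure latency and download throughput against the thresholds."""
    start = time.perf_counter()
    with urllib.request.urlopen(DIAGNOSTIC_URL, timeout=10) as resp:
        first_byte = time.perf_counter() - start  # headers received
        payload = resp.read()
    elapsed = time.perf_counter() - start
    throughput_kbps = len(payload) * 8 / 1024 / max(elapsed, 1e-9)
    ok = first_byte <= MAX_LATENCY_S and throughput_kbps >= MIN_THROUGHPUT_KBPS
    print(f"latency={first_byte:.3f}s throughput={throughput_kbps:.0f} kbps "
          f"{'OK' if ok else 'NOT READY'}")
    return ok

if __name__ == "__main__":
    check_workstation()
```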

This cross-border collaboration is a case study in how national education ministries can use open-source technology to lean on each other, and learn from each other, in adopting the latest developments in digital assessment. By utilizing open-source solutions like TAO, these institutions can cut costs and, more importantly, collaborate better in pursuit of their mission by sharing best practices on connectivity, scalability and accommodations, and feedback from previous global operations.


For more information about this case study, please contact: Patrick Plichart, patrick@taotesting.com at Open Assessment Technologies.

User Experience (UX) in the World of Assessment  

A topic of utmost interest and importance to us at Open Assessment Technologies (OAT) is the effect that a strong User Experience (UX) has on digital assessment. It is our belief that creating a pleasurable and convenient experience raises engagement levels and triggers learners’ intrinsic motivation, allowing them to demonstrate their true competencies and enabling the success of the assessment goal.

Our thinking and research on this topic was the basis for our presentation at the AEA-Europe conference. We wanted not only to discuss the core elements of good UX in educational assessment and how we are tackling them at our company, but also to build awareness around the human factor, which is especially needed as new technologies are introduced.

We were excited to learn that we are not the only ones thinking about UX in this way. From the conversations we had, we found that many are already sensitive to the importance of UX. A subset of our community has already started to include UX efforts in their projects and daily work, and some were even applying standard UX evaluation techniques to new forms of digital assessment without realising it!

What was most exciting to us was being able to share how we think about UX as a foundational component of our platform, and how that led us to create our User Experience department at OAT. Our poster shows that universal design principles are critical to follow in all forms of assessment; otherwise, measurement biases are likely to compromise the targeted assessment goal and harm the validity of the test. We argued that strategic UX ensures that students are able to demonstrate their knowledge of the measured competencies rather than their tool proficiency.

Our belief is that effective user research is key, and every decision should be backed by data and the insights it provides. At OAT, all user research efforts are channelled into a knowledge base. This is a concept we will continue to develop, and we hope to share our experience with the community next year.

To conclude, we are excited and encouraged to learn that while the world of assessment is still holding on to its conservative roots, more and more people in our community are starting to recognize that good UX plays a critical part in the future of assessment. This gives us even more motivation as we strive to work even harder in that direction!


For more information about the user experience in digital assessment, please contact the two UX designers: Sam Sipasseuth, sam@taotesting.com and Luc Schomer, luc@taotesting.com, at Open Assessment Technologies.

AEA-Europe 2018 Poster Award – Gemma Cherry

The poster I presented in Arnhem outlined my PhD research, in which I aimed to uncover disparities in educational attainment across urban and rural locations of Northern Ireland. Previous research investigating the influence of location on young people’s educational outcomes often focuses on measuring the impact of poverty in urban communities, largely overlooking the influence of rural locations and the differing contexts of the young people living in these areas. While urban and rural disparities are not a completely new topic in the field of education, they have been largely ignored in the context of Northern Ireland: after an extensive systematic review of the literature, I found that only nine studies had been conducted on this topic across the United Kingdom, none of which focused on Northern Ireland. This highlights the minimal research effort dedicated to understanding such educational inequalities. My research aimed to fill this gap.

A quantitative methodology incorporating secondary data analysis was used to obtain the results. The sample (n = 63,373) was sufficiently large and representative of the Northern Ireland post-primary school population.

Using multilevel models, I found that at post-primary level, pupils who live in urban locations of Northern Ireland are less likely to achieve a maths or English GCSE at grade A*-C than pupils who live in rural locations. Similarly, pupils living in rural locations are expected to achieve higher numbers of GCSEs overall, in any subject, compared to urban pupils. Location was found to have a statistically significant influence on pupils’ educational attainment, even after controlling for other pupil and school characteristics. These results provide the first evidence that disparities in young people’s educational attainment exist across urban and rural locations of Northern Ireland, to the detriment of pupils living in urban areas.
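For readers curious about how such a two-level model (pupils nested within schools) can be specified, the sketch below uses Python’s statsmodels package. It is purely illustrative: the data file, column names and covariates are hypothetical and do not reproduce the author’s actual analysis.

```python
# Illustrative two-level model: pupils (level 1) nested in schools (level 2).
# The data file and column names are hypothetical, not the study's own.
import pandas as pd
import statsmodels.formula.api as smf

pupils = pd.read_csv("ni_post_primary.csv")  # one row per pupil

# Random intercept for school; 'urban' is a 0/1 pupil-location indicator,
# with hypothetical pupil-level controls (free school meals, gender).
model = smf.mixedlm(
    "gcse_count ~ urban + fsm + female",  # total GCSEs as the outcome
    data=pupils,
    groups=pupils["school_id"],           # clustering variable
).fit()

# A negative, significant 'urban' coefficient would mirror the poster's
# finding of lower attainment in urban locations, net of other controls.
print(model.summary())
```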

I thoroughly enjoyed my time in Arnhem attending the 19th AEA-Europe Annual Conference. As an early career researcher, I found the poster presentation session a fantastic way to share my research with a large audience of experts in the field. The feedback I received has been incredibly beneficial to my research. I was delighted when I received the news that I had won the best poster award.

For more information about the poster award of the 2018 AEA-Europe Conference, please contact Gemma Cherry, gcherry01@qub.ac.uk, Queen’s University Belfast, School of Social Sciences, Education and Social Work.

The 2nd FLIP event: save the dates, 6th-7th June 2019 in Rome

Following the first FLIP event held in Paris in June 2018, the FLIP coordinating team met up last December in Lisbon to discuss the next steps to be taken within the framework of this new initiative.

It should be recalled that FLIP is a joint initiative set up in 2017 to share knowledge and experiences, IT development costs and digital content within the context of e-assessment. Current members include the founding countries France, Luxembourg, Italy and Portugal, as well as Brazil, which joined in 2018. Beyond these members, a growing number of countries are now following FLIP closely, especially those with the ambition of developing technology-enhanced items in order to better assess traditional competences on a test delivery platform that supports “very-large-scale assessments”.

FLIP has, since the beginning, been driven by a spirit of collaboration between community participants in order to constantly improve the digital tools of e-assessment. Members may be countries, institutions or corporate bodies involved in educational assessment. A FLIP participant would benefit from shared knowledge and experiences in e-assessment, would be willing to pool resources to fund IT development costs, and would collectively build or share open-source and interoperable technology-enhanced item content according to specific copyright, sharing and confidentiality regulations.

Up to now, a number of digital tools have been identified and planned for development in the short and medium term, according to the priorities of the different countries. In order to formally consolidate the collective work undertaken within FLIP so far and to pursue the next steps, a 2nd international event will be organized by Invalsi, the Italian National Institute for the Educational Evaluation of Instruction and Training, in Rome on 6th-7th June 2019. A formal invitation will be sent out soon, and interested countries, institutions or corporate bodies will be welcome to join. As at the first event, the meeting will be a space to engage in conversations on country experiences in e-assessment, open standards, product demos, platform functionalities, and broad issues related to security, confidentiality, usability, data analytics, reporting, etc., as well as a hands-on session on the development of technology-enhanced items.

To learn more about FLIP, please contact Dr. Thierry Rocher, thierry.rocher@education.gouv.fr at the DEPP, Ministry of National Education, Higher Education and Research, France.