
Newsletter – Autumn 2014

Editorial

Jo-Anne Baird – Acting AEA-Europe President

Council of AEA-Europe

I am delighted to welcome Thierry Rocher (DEPP, France) to his post as Vice President of the Association and Iasonas Lamprianou (University of Cyprus) as a Council Member.  Both will take up their positions when they are formally approved at the General Assembly meeting in Tallinn in November.  The Council would like to thank the Nominations Committee for providing such good candidates for the Council vacancies.  The Nominations Committee were Christina Wikstrom (University of Umea, Sweden), Frans Kleintjes (Cito, Netherlands) and Gordon Stobart (Institute of Education, University of London, UK).  According to the Association’s Constitution, Thierry will act as Vice President until November 2016, when he will take on the post of President for two years.  Iasonas has a four-year term as a Council Member, which may be renewed for a further four years.

AEA-Europe Coordinator 

Jim Brant has been appointed as our part-time Coordinator.  Jim has a wealth of experience in working with assessment associations in this way and has already become involved in many aspects of the Association’s work.  This appointment is a major step forward in professionalising the work of the Association.

AEA-Europe Secretariat

Sarah Maughan has decided to step down from the post of Executive Secretary of AEA-Europe, as she is moving on from the National Foundation for Educational Research (NFER, England).  The Association owes a great deal to Sarah and to the NFER for their support since the initiation of the Association.  A call for a new Secretariat will be made over the coming weeks.  This is an excellent opportunity for a Corporate Member to become more involved at the hub of the Association’s work.

15th Annual Conference, 6 – 8 November, Tallinn

As Chair of the Scientific Programme Committee for this year’s conference, I can promise you an excellent set of presentations.  We had a very strong field and the papers have been grouped thematically to produce a very interesting set of critical issues in assessment for discussion.  There will be some new countries represented by presenters and participants this year, all of which points to a thriving and growing Association.  In keeping with the conference theme and with Estonian culture, we are embracing technology to a greater extent at this year’s conference, including plans to podcast keynote addresses and to use social media such as Twitter.  See the article ‘Let’s go… social in Tallinn!’ for further details.

Kathleen Tattersall New Assessment Researcher Award

Yasmine El Masri has won the award this year and will be giving a keynote at the conference on her research relating to the effects of language upon assessment of science.  Congratulations to Yasmine.  The award is sponsored by AQA.

AEA-Europe Newsletter

The Council would like to express their thanks for the professional job that Julie Sewell has done in producing such a vibrant Newsletter for so many editions.  Without Julie, the Newsletter would not have taken off.  She has been a great Editor in many ways, including her recruitment of interesting articles, her engagement with the text and her personable way of communicating with authors.  She has made the Newsletter a success.

Introducing the Tallinn conference

AEA-Europe holds its 15th annual conference in Tallinn at the Meriton Grand Conference & Spa Hotel from 6 to 8 November 2014. The theme of the conference is “Assessment of Students in a 21st Century World”. The local host is the Foundation Innove, a competency centre in the area of general and vocational education and education support services, as well as a mediator of EU structural aid. The main objective of the Foundation is to coordinate lifelong learning development activities and to implement the relevant programmes, projects and EU structural aid in a targeted and efficient manner.

The Estonian organisers are more than happy to welcome guests to the conference. We remind you that there is no bad weather in Tallinn, only inappropriate clothes! At the beginning of November there is usually no snow yet, but temperatures average between 0°C and +4°C. It may rain. The sun rises at about 8am and sets at about 4pm. The time difference is plus one hour compared with Berlin and plus two hours compared with London. The currency is the euro; debit and credit cards are widely accepted, and free wifi is common in hotels, cafes and restaurants.

The conference hotel is so close to the historic Old Town that you can walk there any time you like. You can get good views from the numerous viewing platforms on Toompea and later visit cafes and restaurants to eat and rest. We have also organised an official tour of Tallinn. If you are lucky, you could get tickets to see The Nutcracker at the National Ballet. If you have time, you could visit the National Museum of Art in Kadriorg, Kadriorg park and the Open Air Museum at Rocca al Mare.

See you in Tallinn!

Further information

Further information about culture in Estonia can be found here, and a video introducing Estonia can be accessed here.

Let’s go… social in Tallinn!

As you probably know, AEA-Europe has a growing and lively LinkedIn group (and if you didn’t know, have a look and join us today!); the AEA-E 2014 Annual Conference is also on Facebook.

This year, at the Tallinn conference, we would like to try something new and spread the word about the conference on Twitter. Do you have a Twitter account? If so, please share your opinions on sessions, speakers and events during the conference. You can also share pictures of the event, document links, notes and quotes.  And when it’s time to go home, you can always use Twitter to stay connected with other professionals and colleagues you interacted with during the conference.

And, of course, do not forget to use our hashtags freely: #aeaTallinn or #AEAe_2000

Issues in Assessment

What is meant by “rigour” in examinations?

by Isabel Nisbet

Many of the advocates of reform to school exams and tests – perhaps particularly in the UK and the USA – claim that they are adding (or restoring) “rigour”. “We will make GCSEs more rigorous by stripping out modules”, said the UK Secretary of State in 2010, and in 2014 the Chief Inspector of Schools in England criticised the use of “useless vocational qualifications… which lack rigour”. Ofqual, which regulates examinations in England, states: “We regulate because… the public needs to be assured that standards and rigour are being maintained.” All these statements assume that rigour is good. But those who make them – and their critics – are seldom sure what they mean by “rigour”.

The concept of “rigour” has undergone something of a journey. Its root meaning, linked to the Latin “rigor”, meaning “stiffness”, conveys notions of strictness and severity. It was largely a negative word, contrasted with patience and kindness. However, some ideas associated with this root meaning can have a more positive connotation – thoroughness, perfectionism, not overlooking mistakes. If I were having a blood transfusion, I would want the blood I was receiving to be checked rigorously.

A later development was to apply “rigour” to intellectual features – “rigorous arguments”, “rigorous analysis” and “academic rigour”. These terms are often used with approval, to signify care and thoroughness, in contrast to sloppy thinking or unevidenced assertions.

A more radical development, described by some writers as a “paradigm shift”, is seen in contemporary American writing on “rigor” by such authors as Barbara Blackburn. They apply “rigor” mainly to the curriculum and state that it is not about strictness, but about having the highest ambitions for all students and supporting them in achieving those ambitions. It is contrasted with “low ambition”, “dumbing down” and “second best”.

In a paper for the IAEA in Singapore, I considered how these various notions of “rigour” might be applied to assessment. Some senses are linked to the root meaning (strictness), and are found, for example, in discussions about the extent to which mark schemes should penalise mistakes in spelling or grammar. Views on that vary, even among those who want students to aspire to higher standards (“rigor” in the third sense), as some claim that students will play it safe and avoid trying out new ideas if they are afraid that every mistake will be picked up. Against that, a mark scheme that applied no penalty for grammatical mistakes might be criticised on validity grounds.

Other arguments link “rigour” with the level and type of knowledge or skill tested, referring to one or both of Bloom’s taxonomy and Webb’s “depth of knowledge”. On that account, the higher order the thinking skills, the more rigorous the assessment. Yet other approaches link rigour with difficulty, or level of demand.

Before one structure or type of exam or assessment can be said to be more “rigorous” than another, a considerable amount of analysis is required, and much will depend on the subject matter being assessed. There are prima facie arguments for and against the view that terminal examinations are more rigorous than modular examinations. While in some subjects terminal exams are thought to allow for more testing of synoptic thinking, in others, the limited “window” of the terminal exam is thought to exclude assessing more developed thinking. Also, there is no sense of “rigour” in which one can generalise that vocational subjects are less rigorous than academic subjects. No one form of assessment is prima facie more rigorous than its opposite in all contexts.

There is an urgent need for more analytic work to be applied to the language of educational – and assessment – reform.  It is a necessity, not an academic pastime, to help policy-makers to be clear what they mean.

Work in progress

The Establishment of the Centre for Educational Measurement at University of Oslo (CEMO)

Sigrid Blömeke

The context

CEMO is a newly established Norwegian institution for educational measurement. It is part of the Faculty of Education at the University of Oslo. Jan-Eric Gustafsson, University of Gothenburg (Sweden), was responsible for recruiting the first generation of CEMO researchers. Sigrid Blömeke, Leibniz-Humboldt Professor of Instructional Research at the Leibniz Institute for Science and Mathematics Education, Kiel, and the Humboldt University of Berlin, Germany, was named the first director of CEMO. Johan Braeken, Tilburg University, the Netherlands, and Ronny Scherer, Humboldt University of Berlin, were then recruited as Associate Professor and Postdoctoral Fellow respectively. Several additional professorships, postdoctoral positions and PhD fellowships are currently being advertised.

What is happening

CEMO conducts basic research in educational measurement and applied research in early childhood, primary and secondary education, and higher education. The centre develops national competence in educational measurement, disseminates knowledge to relevant stakeholders, and teaches at Master’s and PhD levels. International collaboration and networking are regarded as important for strengthening CEMO’s work. The overall rationale for all research work at CEMO is two-fold:

• testing the psychometric quality of instruments important for dealing with educational challenges, so that they balance quality, equity and effectiveness in education

• carrying out substantive research that takes seriously national concerns about unintended consequences and side-effects of assessments in education and is of practical benefit for policy makers and participants

CEMO follows a modern understanding of psychometrics by taking a comprehensive view. The centre’s research therefore covers all four fields of the measurement cycle. As pointed out by Wilson (Psychometrika, 2013), this means carrying out studies on how to conceptualize constructs, taking part in instrument development, studying coding and scoring as well as developing measurement models as part of a unified view of psychometric practice. Such an approach turns around the traditional measurement procedures by fitting “the data to the construct (and to learn about both the construct and the measurement from when it does and does not fit well), rather than seeking a model to fit the data” (Wilson, 2013). In addition, CEMO has expanded the traditional focus of psychometrics on cognitive student achievement by including studies on the assessment of non-cognitive outcomes of education (e.g., social competencies or well-being), context characteristics on different levels of the education system (e.g., instructional quality), actual behaviour (e.g., performance assessments) and formative development (rather than relying on summative assessments only). These are research needs not only discussed within the field of educational measurement but also identified recently as policy priorities by OECD (2013).

Furthermore, CEMO takes account of the need to examine the role of assessments in public policy and for system-level changes. Kane (JEM, 2013) points to the necessity of validating test scores by balancing them against negative effects. In fact, evaluation reports from Norway suggest unintended consequences in this country, too: “Det rapporteres om tilfeller av at annen undervisning settes til side til fordel for forberedelser til prøvene.” [There are reports of cases where other teaching is set aside in favour of preparation for the tests.] (Seland, Vibe & Hovdhaugen, 2013, p. 22) If the tests are constructed well, these consequences need not necessarily be a problem, as the authors conclude: “Samtaler med skoleledere bekrefter at forberedelser til nasjonale prøver inngår i skolens øvrige planer for å styrke grunnleggende ferdigheter.” [Conversations with school leaders confirm that preparation for the national tests forms part of schools’ wider plans to strengthen basic skills.] To be able to support such positive side-effects, the tests have to be validated.

Capacity building in educational measurement is another important mission of CEMO, which regards itself as a national resource centre. With the increasing use of assessments in Norway comes an increasing need for methodological knowledge outside the field of educational measurement. Professional development activities, outreach and teaching are the means for accomplishing this objective. Different types of methodological support aim to address the needs of PhD students and postdocs in different parts of Norway. A long-term mission of CEMO is to establish a Master’s programme and a Graduate School of Quantitative Research and Educational Measurement in cooperation with related fields such as psychology, public health and econometrics, and with related initiatives such as NATED, the Norwegian National Graduate School in Education. In the long run, training for school owners, policy makers and journalists is to be offered as well, because a well-informed public debate about assessments is important.

Why inform members

Most of the objectives outlined here can only be accomplished if they are connected to similar efforts in other countries, because they are methodologically demanding and require extensive resources. Educational measurement has made great progress in several countries during the past decades, so CEMO intends to build up systematic collaboration with countries such as England, Germany, the Netherlands, Sweden and the US. Exchange programmes at student and researcher level, as well as internships at international institutions, are means of tightening these links to CEMO. In this way, CEMO would contribute to UiO’s first major mission: to “fremme grensesprengende forskning, utdanning og formidling og være en etterspurt internasjonal samarbeidspartner” [promote groundbreaking research, education and dissemination, and be a sought-after international partner] (Det utdanningsvitenskapelige fakultet, 2013).

References

Det utdanningsvitenskapelige fakultet (2013). Årsplan 2013-2015. Oslo: Universitetet i Oslo.

Kane, M. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50, 1-73.

OECD (2013). Synergies for better learning: An international perspective on evaluation and assessment. Paris: OECD.

Seland, I., Vibe, N. & Hovdhaugen, E. (2013). Evaluering av nasjonale prøver som system (= Rapport 4/2013). Oslo: NIFU.

Wilson, M. (2013). Seeking a balance between the statistical and scientific elements in psychometrics. Psychometrika, DOI 10.1007/s11336-013-9327-3.


Evaluating the effectiveness of education systems with PISA data


Daniel Caro

The context

International rankings of education systems based on student performance in the PISA test receive great attention from the media and policymakers every time a new PISA cycle is released. There is particular interest in the structures, processes, and implemented reforms of the highest performing education systems, and in how these might be adopted elsewhere. PISA rankings, however, reflect the cumulative effect of social, economic, and cultural processes as well as policy interventions acting together over long periods of time. It is therefore difficult and often misleading to compare the performance of education systems using the rankings alone. Likewise, successful educational models of the highest performing education systems cannot simply be transferred to other education systems.

What is happening?

Recent work proposes an alternative classification of education systems based on effectiveness indicators rather than absolute performance (Lenkeit & Caro, 2014). It is argued that more useful comparisons for policy purposes can be made by taking an educational effectiveness approach (Creemers & Kyriakides, 2008). Drawing on this work, this project will:

  1. Produce education system effectiveness indicators for PISA 2000-2012.
  2. Examine effectiveness-enhancing factors.

Effectiveness indicators compare educational systems whilst taking into account previous performance and the socioeconomic context in which they operate. Educational systems that perform above what is expected for their socioeconomic context and previous performance are regarded as more effective. This project will examine which factors amenable to policy intervention contribute to education system effectiveness.
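
The logic of these indicators can be illustrated with a small, self-contained sketch. The following Python snippet is illustrative only – the data are invented and the variable names are hypothetical, not taken from Lenkeit and Caro’s actual model – but it shows the core idea: regress current performance on previous performance and socioeconomic context, and read the residual as an effectiveness indicator.

```python
import numpy as np

# Hypothetical data: one row per education system.
prev_score = np.array([480.0, 510.0, 495.0, 530.0, 470.0])  # earlier PISA cycle
ses_index  = np.array([-0.3, 0.5, 0.1, 0.8, -0.6])          # socioeconomic context
curr_score = np.array([495.0, 505.0, 510.0, 528.0, 482.0])  # latest PISA cycle

# Design matrix: intercept, previous performance, socioeconomic context.
X = np.column_stack([np.ones_like(prev_score), prev_score, ses_index])

# Ordinary least squares gives the score expected for each system
# given its history and context.
beta, *_ = np.linalg.lstsq(X, curr_score, rcond=None)
expected = X @ beta

# The residual is the effectiveness indicator: positive means the
# system performs above what its context and history would predict.
effectiveness = curr_score - expected
for i, e in enumerate(effectiveness):
    print(f"system {i}: effectiveness indicator = {e:+.1f}")
```

Systems would then be compared on this residual scale rather than on raw rankings.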

The project is sponsored by the OECD’s Thomas J. Alexander Fellowship.

Why inform members?

The project contributes to methodological developments in effectiveness research in international large-scale assessments and provides relevant information for policymakers looking into the policies, structures, and reform measures that have favoured effectiveness. It also contributes a contextualised effectiveness perspective on performance that can shape the way policymakers and the media reflect on the results of international assessments.

Further information

For further information please contact Daniel Caro at: daniel.caro@education.ox.ac.uk

References

Creemers, B. P. M., & Kyriakides, L. (2008). The dynamics of educational effectiveness. A contribution to policy, practice and theory in contemporary schools. London: Routledge.

Lenkeit, J., & Caro, D. H. (2014). Performance status and change – Measuring education system effectiveness with data from PISA 2000-2009. Educational Research and Evaluation, 20(2), 146-174.


Teacher Involvement in High Stakes Language Testing: Call for Book Chapter Proposals


Daniel Xerri and Patricia Vella Briffa

Context

Chapter proposals are being sought for an edited book on teacher involvement in high stakes language testing at the secondary level of education. This is currently an under-researched area in the field of assessment, despite the burgeoning popularity of such tests internationally. This book seeks to address this gap in the literature by examining the benefits and challenges of teacher involvement, as well as the strategies that might need to be adopted in order to implement it effectively. The book grapples with the issue of teacher involvement by focusing on a diverse range of contexts, thus making it a seminal investigation of this research area.

Instruction is influenced by test content, procedures and results, especially in the case of high stakes language testing. The washback effect of a language test on classroom practice seems to be undeniable; however, it need not always be negative and stultifying. Positive washback is more likely to ensue if tests are produced with an awareness of the learning context, student cohort, and subject content. Providing teachers with a sense of ownership by encouraging them to play an active role in high stakes language testing is likely to increase its formative potential and lead to more effective learning. Learning in this case applies not only to students but, more significantly, to teachers, especially in terms of the latter’s assessment literacy.

This book also considers the challenges and disadvantages associated with teacher involvement. For example, one argument is that teachers should not be actively involved in high stakes language testing because their judgement is insufficiently reliable. Nonetheless, the main danger of such exclusion concerns the validity of the testing system. A system that aims to safeguard its validity while improving outcomes will seek to harness teachers’ involvement. Teacher involvement empowers educators to play a role in reforming high stakes testing so that it is more equitable and more likely to enhance classroom practices.

What is happening?

Daniel Xerri and Patricia Vella Briffa (University of Malta) will be editing this book for Springer. Those interested are invited to send the editors a 500-word proposal at daniel.xerri@um.edu.mt and patricia.vella-briffa@um.edu.mt.

Proposals about the following topics are particularly welcome:

  • Concerns with high stakes language testing
  • Different roles adopted by teachers when involved in high stakes language tests
  • Perceptions and attitudes in relation to teacher involvement
  • Reliability of teacher judgement
  • Value of teacher involvement in high stakes language testing
  • Effects of teacher involvement on beliefs about high stakes language testing
  • Effects of teacher involvement on assessment literacy
  • Washback effect of teacher involvement on classroom practice
  • Concerns with teacher involvement
  • Challenges and difficulties of teacher involvement
  • Effective strategies for teacher involvement
  • Case studies of teacher involvement in high stakes language testing

The deadline for submission of chapter proposals is 20 December 2014.

Proposed chapters will be 8,000 words long and may take the form of theoretical, empirical or case study analyses. Chapters will be subjected to double-blind peer review.

Why inform members?

The focus of this book will appeal to AEA-Europe members, given that the membership includes teachers, assessment specialists, and policy-makers in the field of language testing. Amongst them there may well be individuals interested in submitting a chapter proposal on a topic related to teacher involvement in high stakes language testing.


Exploring the Meaning of a Grade

Stuart Cadwallader from the AQA Centre for Education Research and Practice (CERP)

The Context

The qualification system in England is undergoing significant change. The mainstream national qualifications that are used by most schools, the General Certificate of Secondary Education (GCSE) and the A-level, are in the process of being substantially revamped. In the case of GCSEs this process will include a change to the way examinations are graded. In the current system students can achieve a range of grades between ‘G’ and ‘A*’ but the reformed qualifications will have a grade range of ‘one’ to ‘nine’.

The change presents an interesting challenge. GCSEs were first introduced over 25 years ago, so students, teachers, employers, universities and other stakeholders are familiar with the grading system and, arguably, have at least some understanding of the level of performance required to attain each grade. This long established understanding will now require recalibration. For example, how will someone with a grade 8 in a particular subject differ from someone with a grade 7 in terms of what they know and can do? How will the new grade scale compare to the old one?

Qualitative grade descriptors are currently used to provide an indication of the skills and knowledge required to achieve a given grade in a given subject. As such, they provide a potential mechanism for supporting stakeholders in their understanding of the new qualifications. However, the current incarnation of grade descriptors is very broad in scope and does not provide clear criteria against which performance can be evaluated. The methodology used to produce them does not appear to be robust or consistent, and it is unclear how useful they are to students, teachers and awarding organisations.

What is happening

At CERP we have reviewed the literature and argue that grade descriptors, in their current form, are not fit for either of their two primary purposes:

• Supporting teachers as they prepare students for the qualification

• Supporting examiners as they set grade boundaries during awarding

The overarching issue is that describing grades accurately and in appropriate detail is extremely challenging due to technical features of the current system for assessment and awarding.

For example, the system dictates that a student’s grade is based on the total number of marks that they attain across all items in an assessment. Students can therefore compensate for weak performance on some items with strong performance on others. This means that there are a multitude of routes to a given mark, each with a different profile of performance across the items and domains of the assessment. The task of identifying and describing a prototypical performance, particularly for those grades in the middle of the grade range, is therefore an enormous challenge.
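
A toy illustration of this compensation effect (invented marks, not real exam data) shows how two candidates with entirely different profiles can arrive at the same total, and therefore the same grade:

```python
# Hypothetical five-item paper, each item worth 10 marks.
candidate_a = [10, 9, 2, 3, 10]  # strong on items 1, 2 and 5, weak on 3 and 4
candidate_b = [7, 6, 7, 7, 7]    # uniformly middling performance

total_a, total_b = sum(candidate_a), sum(candidate_b)
assert total_a == total_b == 34  # identical totals...

# ...so both candidates fall on the same side of any grade boundary,
# despite knowing and being able to do quite different things.
grade_boundary = 30
print(total_a >= grade_boundary, total_b >= grade_boundary)  # True True
```

Any description of “typical” performance at that grade has to cover both profiles, which is what makes prototypical descriptions so hard to write.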

Technical issues such as this need close consideration when developing grade descriptors for the new qualifications. However, there are arguably two larger questions that need to be tackled first:

1. What should be the explicit purpose of the new grade descriptors?

2. What methodology should be used to develop grade descriptors such that they fulfil their specified purpose?

Based on the literature, we are currently considering further research to explore whether valid and practical post-hoc grade descriptors could be generated using an empirical approach. For example, data gathered following the award of the new qualifications could be used to identify items which discriminate between candidates at different grades. These items could then be used to exemplify and describe each grade, rooting the description in the context of the assessment. Such post-hoc exemplifications of performance at each grade may not have the same aspirational emphasis as a grade descriptor but may better meet the needs of teachers, while also more accurately reflecting the nature of the assessment system.
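
As a concrete (and purely hypothetical) sketch of what such an empirical approach might look like, the snippet below compares per-item facility between candidates awarded adjacent grades and flags the items with the largest gap as candidates for exemplifying the boundary. The data and the threshold are invented; the project’s actual methodology is still under consideration.

```python
import numpy as np

# Hypothetical dichotomous item scores (rows: candidates, columns: items)
# and the grade each candidate was awarded.
scores = np.array([
    [1, 1, 0, 1],  # awarded grade 8
    [1, 1, 1, 1],  # awarded grade 8
    [1, 0, 0, 1],  # awarded grade 7
    [0, 1, 1, 0],  # awarded grade 7
])
grades = np.array([8, 8, 7, 7])

# Facility (proportion correct) per item within each grade group.
facility_8 = scores[grades == 8].mean(axis=0)
facility_7 = scores[grades == 7].mean(axis=0)

# Items with a large facility gap discriminate between the two grades
# and could be used to exemplify what grade-8 candidates can do.
gap = facility_8 - facility_7
discriminating = np.flatnonzero(gap >= 0.5) + 1  # 1-based item numbers
print("items that separate grade 8 from grade 7:", discriminating)
```

The selected items would then anchor a description of each grade in actual assessment content, rather than in abstract prose.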

Why inform members

Our research has thus far focussed on England and is therefore rooted in a fairly narrow context. It would be very helpful to explore the ways in which researchers and practitioners from across Europe approach grade descriptors. Such understanding would provide the platform for a better informed and perhaps more creative approach to developing grade descriptions for the new qualifications.

We will be hosting a discussion group at the upcoming AEA conference in Tallinn with the aim of capturing a range of European perspectives on these issues. We hope to see you there!

Further information

For further information please contact Stuart Cadwallader at SCadwallader@aqa.org.uk.

What’s New

Introducing Jim Brant

Jim has a background as an independent educational assessment consultant and has worked with various UK awarding organisations, the World Bank and the Chartered Institute of Educational Assessors.  Prior to this, Jim worked at the Qualifications and Curriculum Authority undertaking test development quality assurance for UK national curriculum onscreen tests, covering content definition, test specification, item development, test design, scoring of test responses, standard setting, stakeholder management and accessibility.  He has an MA in Educational Assessment and PRINCE2 project management qualifications that will be invaluable in his role as Coordinator.

Jim will be attending the Tallinn conference, so if you have any questions, queries or suggestions about AEA-Europe he will be pleased to hear them.


Members’ News

Congratulations to Gabriella Agrusti! She has been appointed Associate Professor in Educational Research Methodology, Multimedia and e-learning, and Educational Assessment at LUMSA – Libera Università Maria Santissima Assunta of Rome (www.lumsa.it).

She takes up her new role officially on October 1st, 2014.

Yasmine El Masri is this year’s winner of the Kathleen Tattersall New Assessment Researcher Award. Antonella Poce, chair of the Professional Development committee, announced this on LinkedIn – and her announcement brings the award full circle, as Antonella was the first recipient when it was the New Researcher award.

Michelle Meadows, Ian Stockford and Paul Newton have moved to research posts in Ofqual.

Lena Gray has moved to AQA from SQA. Alex Scharashkin is now at AQA (formerly at the National Audit Office).  Anton Beguin now works part-time at Cito and AQA.

Isabel Nisbet is CEO at ALCAB.


Editorial position

The idea of an AEA-Europe newsletter arose during the very first meeting of the Communications (now Publications) committee. As a result of the discussions, I took on the role of editor and with the help of the whole committee (comprising our chair, Chris Whetton, Emma Nardi, Guri A. Nortvedt, Jo-Anne Baird and me) we produced the first issue in October 2008, in time for the Hisar conference.

This particular newsletter is the thirteenth – and that’s quite long enough for one person to do the job. So, here’s an opportunity for more involvement in the association in the role of newsletter editor. If you’re interested, watch out for a call for a new editor. This will be sent out sometime in the Autumn.

What follows is not a formal job description, but it includes what I do and will, I hope, give a flavour of the role.

As there are two newsletters a year, work is concentrated in two bursts: in early Spring and early Autumn. We always include a request for articles in the previous newsletter, but a prompt via an email from the Secretary works wonders – as do emails from members of the Council to likely contributors!

Once articles come in, it’s a question of checking them for sense and grammar; if there are major issues, I always send them back to the contributors for approval, but this is usually a straightforward job (though it does require fluent English). I also need to pester people for photos, as we have included these since the second issue. The list of conferences needs to be updated and members’ news items collated.

That all sounds a little dry, but it’s not really. It’s great to have the chance to communicate with members across Europe and even more widely (as we’ve had contributions from Hong Kong and the USA) – and those ‘virtual’ meetings can be made into reality at the conference. How else would I know that one contributor owns a stunning pair of red shoes and that he is a fantastic dancer?

I shall miss doing this job, but it’s a great opportunity for another member, particularly at a time when the newsletter is due for at least a little overhaul.


Conferences

IACAT (International Association of Computerized Adaptive Testing) conference

8-10 October 2014

ETS, Princeton, New Jersey, USA

AEA-Europe 15th Annual Conference

Nov 6-8, 2014, Tallinn, Estonia

ATP Innovations in Testing 2015 conference

March 1 – 4, 2015, Palm Springs, California

http://www.innovationsintesting.org/

North Central Association Higher Learning Commission’s 2015 Annual Conference

March 28 – 31, 2015, Chicago

http://annualconference.ncahlc.org/

AERA 2015 – Towards Justice: Culture, Language, and Heritage in Education Research and Praxis

April 16 – 20, 2015, Chicago, Illinois

http://www.aera.net/tabid/10208/Default.aspx

6th IEA International Research Conference (IRC)

June 22 – 26, 2015

Cape Town, South Africa

www.iea.nl

CICE-2015 – Canada International Conference on Education

June 22 – 25, 2015, Toronto, Canada

EEL 5th annual international conference on education and e-learning

September 7 – 8, 2015, Singapore

http://www.e-learningedu.org/

41st Annual IAEA conference – The Three Most Important Considerations in Testing: Validity, Validity, Validity

October 11 – 15, 2015, Lawrence, Kansas

http://www.iaea2015.org/


Courses

Rasch Conference

On 20th March 2015, Pearson will be hosting the 9th annual UK Rasch Users conference. This conference aims to stimulate research into the application of Rasch modelling and is an opportunity for all practitioners (and those with a general interest) to meet, discuss and present their work. Formal registration, both for attendance and for those wanting to present at the conference, will open early in 2015. Potential attendees who join the mailing list should receive an email notification when this happens. Further information can be found here.

Contributions and Deadlines

Publications committee members: Gabriella Agrusti, Sandra Johnson, Newman Burdett, Anastassia Voronina, Julie Sewell (newsletter editor)

The AEA-Europe Publications and Communications Committee would like to make a call for contributions to the Association’s newsletter. The Newsletter is published twice a year, in the Spring and Autumn; the deadline for the next Spring newsletter is 1st March 2015. We would like help from members to make the information as up to date and relevant as possible. In particular we would like the following:

  • Articles for the Work in progress section. These should be a maximum of 500 words long and be formatted under the headings “Context”, “What is happening”, “Why inform members” and “More information” (brief details of a website or references). Please see previous newsletters for more information.
  • Articles for the Doctoral students’ Work in progress section. These should also be 500-600 words long and follow a similar format, covering the research and the context, with some details of methodology, potential impacts or directions.
  • What’s new section: Information on conferences and courses at master and PhD-level. For conferences we need to know: Title, date, place, website address. For courses at master and PhD-level we would like information on courses relating to assessment open to international students, bearing in mind any language constraints. We will need the name of the institution, the title for the course in your own language as well as in English, plus dates, town, country, website or a contact address.
  • Members’ news: any information on new appointments, promotions, etc.

We would also like some additional AEA-Europe national representatives who would be willing to send information about recent developments in assessment in their countries and approach relevant people to contribute to the Work in progress section.

Julie Sewell (on behalf of the Publications and Communications Committee)
AEA-Europe