Newsletter – Spring 2013
International surveys, policy borrowing and national assessment
The theme for our next conference will relate to the ways in which our national assessment systems interact with international measures of educational performance. Many of our current members are involved in national and international assessments, which produces a very interesting dynamic at the conference. As a growing Association, we are also encouraging wider membership. Specifically, we are interested this year in making connections between the current membership and those working on the topic of the conference theme who may not necessarily be regular attenders at the conference.

Together with our hosts, DEPP, Ministère de l’Education, we will be creating a forum for the exchange of ideas on the conference theme, as well as broader educational assessment topics. Marit Kjærnsli, Rolf Olsen (both University of Oslo) and Newman Burdett (National Foundation for Educational Research) will be assisting the Scientific Programme Committee with the thematic element of the conference programme. If you have suggestions for specific pre-conference workshops, speakers or discussion group topics, please email me.

Look out for the call for papers for the 2013 Annual Conference, 7 – 9 November 2013, in Paris. As always, we welcome your submissions!
Doctoral students – get involved in the Paris conference
Our doctoral network has held two very successful events for students at our Belfast and Berlin conferences. The current Network Coordinator is Yasmine El Masri (firstname.lastname@example.org). Students help to organise doctoral events, give poster presentations or present papers and discussion groups. This year, we are also looking for student volunteers to assist in the running of the conference itself. We would like to invite students to contact Yasmine informally if they would like to get involved in the Paris conference in any of these ways.
We conducted a survey of members’ views on whether the Association should have a journal. The outcome was that members would generally like to have a journal, and there were suggestions for specific titles.
The Council is now in negotiation with publishers to see whether a good agreement can be reached for the benefit of Association members. A number of members expressed the view that they would like to see the Association set up its own journal in the long term. We will update you as discussions progress.
Following the sad death of our President, I have agreed to take on the role of Acting President of the Association. In accordance with the Articles of the Association, my position as President will be considered formally at the Business Meeting in November. We are in the middle of an election for Vice-President and I am pleased that there have been a number of excellent nominations. This shows the commitment of members to the Association’s goals. Please do cast a vote – this is important to the democratic nature of our Association.
I look forward to working with whoever is elected in their new role.
Pearson Professor of Educational Assessment & Director of the Oxford University Centre for Educational Assessment (email@example.com)
Kathleen Tattersall – Obituary
1942 – 2013
Kathleen Tattersall was Vice President of AEA-Europe for two years and was our President from November 2012. She was a leading figure in examinations in England, with a driving passion that assessment should be fair and lead to progression for learners. Sadly, Kathleen died of cancer on Wednesday 23 January 2013. This is a great loss to our Association and to the field of assessment.
Kathleen was important personally and professionally to many educationalists internationally. She was highly respected for her integrity and depth of understanding of assessment. Kathleen was a trailblazer for women in senior positions and came from northern, working class roots in England. She was a leader with a natural gravitas and an uncanny instinct for the right action in difficult situations. In 2003, Kathleen was awarded the Order of the British Empire by the Queen of England for her services to education.
Kathleen was a graduate of Manchester University (BA, 1963; PGCE, 1964; MEd, 1975). Her commitment to the university was long-lasting and she was a member of its Council and many other committees. Manchester University’s medal of honour will be awarded to Kathleen in May 2013 for her services to the university.
In 2004, Kathleen attended an event on behalf of the University of Manchester at which the Queen arrived for lunch by helicopter. Kathleen took her retired former secretary from the examination board as a guest, giving her the opportunity to meet the Queen. This was typical of Kathleen – giving others opportunities and valuing people, no matter their position in life in the eyes of the world.
Before Kathleen gave a keynote to our annual conference in Budapest in 2004, she confessed to me that she felt a little nervous. Surprised, I asked why. She responded that the audience was so eminent that it was a little daunting. I pointed out that none of the audience had had lunch with the Queen the previous week, which made her laugh. Although she had a strong sense of self, she was strongly committed to doing a good job. The presentation, of course, went on to be a success.
Kathleen taught in three schools in her early career: the Paddock House Grammar School (Oswaldtwistle), St. Augustine’s Junior School (Burnley) and St. Hilda’s Comprehensive School (Burnley). In 1972, Kathleen made the move to a post as Assistant Secretary to the Associated Lancashire Schools Examining Board. This was a small examination board, which meant that she learned the business from the bottom up. In 1982 Kathleen had a research secondment to the Schools Council, in which she produced a report on methods of differentiating learner outcomes. Kathleen subsequently gained her first post as Examination Board Secretary. There followed moves to take charge of larger examination boards: the North West Regional Examinations Board in 1985, the Joint Matriculation Board in 1990 and the Northern Examinations and Assessment Board in 1992.
In 2000, Kathleen led the merger of two examination boards (NEAB and AEB) to form the largest English examination board, the Assessment and Qualifications Alliance. She kept the organisations thriving and held the merger together through difficult times. In open meetings, Kathleen talked to staff, persuading them that their working conditions could be better in the merged organisation – which they were. Kathleen made sure she knew people as individuals and talked with many of the senior examiners about their work and personal interests. She knew by name a large proportion of her staff.
In 2002, there was the so-called A-level crisis. Kathleen fought for the independence of expert decisions about standards and trusted examiner expertise as an important part of the process. When problems emerged in the standard setting, Kathleen worked alongside her team to find the reasons for them. At the subsequent parliamentary Select Committee Inquiry, Kathleen was clearly armed with a sound understanding of what the standards meant technically and socially. She retired from AQA in 2003. It has to be said that she was not very good at retirement and went on to take on even more senior positions!
Kathleen became the first Chair of the Chartered Institute of Educational Assessors in 2005. Following an open competition, the then Secretary of State, Ed Balls, appointed Kathleen Tattersall as Ofqual’s first Chair and Chief Regulator in 2008. She held this role until 2010, establishing Ofqual as an independent statutory body.
She proved to be an inspirational first leader of the new organisation, with a robust insistence on transparency, integrity and fairness. Her visits to schools and colleges were legendary, as she always returned with a clutch of stories of named individual students who were using their qualifications to progress, or who had encountered problems that Ofqual needed to address. She also provided leadership and support to the new Board and to the staff and their senior management team, at a time when the organisation had been required to relocate from London to Coventry, involving recruiting large numbers of new staff.
Within months of Ofqual’s creation in 2008, there was a major failure in the delivery of the results of National Curriculum tests, and Ofqual made the decision to set up an independent external inquiry, led by Lord Sutherland, and reporting to Ministers. Subsequent years saw debates about the quality of regulated qualifications, repeated accusations that standards were being “dumbed down” and technical challenges in implementing structural changes to A-levels. A balance had to be sought in retaining independence while taking full account of policy steers from Government, and this proved difficult at times, particularly when there were technical problems with new qualifications or tests that Ministers were keen to introduce quickly. The role of a regulator can be a lonely one, and Kathleen Tattersall set the highest standards for herself and for Ofqual in delivering it.
With a vigour that only left her in the last few days of her life, she contributed to too many important roles to mention here. A small selection illustrates her commitment to education: she was a Council member at Manchester University, a member of the Higher Education Funding Council for England’s Audit Committee and Chair of the Northern School of Contemporary Dance in Leeds.
She was also a keen runner, singer and supporter of Burnley Football Club.
Kathleen described herself in the following words, which make a fitting epitaph,
“As one who has worked in education all of my life to ensure the best of opportunities and provide challenges of the highest standard for learners of all ages.”
Kathleen is survived by her partner, Geraldine Boocock.
A condolences page has been set up at http://www.facebook.com/KathleenTattersallCondolencePage
Jo-Anne Baird & Isabel Nisbet
Berlin 2012 – A personal view
By Margaret Parfitt
Hello, it’s me: firstname.lastname@example.org or email@example.com. This is how most of you know me and indeed, until the Berlin conference and for the past 10 years or so, I have only known most of you as an email address. So I was thrilled to be attending and felt as though I was meeting long-lost friends when I finally met some of you.
Having been involved in the organisation of so many conferences, it was particularly interesting for me to attend one and see how it all actually works, and to see how enthusiastic all the delegates are about AEA-Europe.
I did have to work for my place at the conference for much of the time but I also managed to attend many of the interesting sessions.
A few things, of the many, that I learned:
- According to Richard Desjardins’ presentation, “age related decline occurs on average but evidence suggests that it may be possible to mitigate, delay or prevent cognitive decline through education and training or other physical activities”. Good job I have recently joined a gym!
- Having attended Bryan Maddox’s session An anthropologist among the psychometricians: ethnography and DIF in the Mongolian Gobi, I realised I would not succeed in any of their assessments.
- All the ex-presidents have dogs! I wonder if Jo-Anne has one?
- Don’t go out to view the sights in the late afternoon in the pouring rain – you won’t see much and you will get wet!
- How friendly the German people I came across were, and how enthusiastic the taxi driver was about his city.
Overall, I had an amazing time and would be delighted to be lucky enough to attend the next conference in Paris! But for now, it’s back to work – and preparing for that one.
My visit to CITO
By Jurik Stiller
After being awarded the poster prize at AEA-Europe’s 2012 Annual Conference in Berlin, I was invited to visit CITO in Arnhem on 7 March.
I stayed two nights in Arnhem in a comfortable small hotel near the station and CITO.
On 7 March I was given the opportunity to meet four experts in the field of assessment. First, I met Anneke Thurlings, who discussed different aspects of generating context-based assessment items in science, especially with regard to the nationwide examinations. Second, I was introduced to Johanna Kordes, the PISA 2015 National Project Manager for the Netherlands, who told me what needs to be taken into consideration when conducting an international comparison study. The next insight into current assessment research at CITO was provided by Erik Roelofs, who explained different ways of operationalizing teachers’ competence, particularly with regard to a model of competent performance. Finally, Angela Verschoor showed me various psychometric aspects of testing and test optimization.
The programme was completed by a tour of the impressive building, which revealed quite comfortable working conditions and advanced technical equipment. It also houses one of the world’s largest psychometric libraries.
The people at CITO welcomed me warmly. The assessment experts were willing to answer all my questions and offered me further support and advice. Diederik Schonau, who was responsible for planning my visit, and Angela Verschoor accompanied me in the evening, and we had a delicious Indonesian meal.
The day in Arnhem was an exciting experience. I can highly recommend awarding the poster prize in this way, as it gave me an insight into the daily work of one of the most important institutions for assessment.
Thank you for that opportunity.
Comparable Quality of Assessment Across Europe
But how would we know?
If any two assessment experts in Europe discuss developing a test, how many definitions will they have in common? Do we know if educational assessments and tests across Europe are of comparable quality, and is it important to know? Are assessment and test developers in different European countries using the same quality principles? Can stakeholders find out in a transparent way how developers secure the quality of their assessments, and is that quality comparable across Europe?
The Association observed that ‘European’ instruments developed to date have focussed either on the outcomes of education and the achieved knowledge, skills and attitudes of learners, e.g. the Common European Framework of Languages and the European Qualification Framework, or on transparency, quality and mobility across Europe through the Bologna and Lisbon Agreements. While gathering data about the outcomes of learning and the credentialing of competencies have been recognized as major drivers for implementing the various strategies, relatively little attention at a European level has been given to how (the quality of) tests, examinations, assessments and assessment programmes used across Europe compare to one another. It is against this background that AEA-Europe initiated the development of the European Framework of Standards for Educational Assessment, to explore whether a tool could be developed that would facilitate communication, promote quality and link best practices in assessment to European values of diversity, inclusion and autonomy.
A key position that the Association adopted was to produce a European Framework of Standards rather than a European Set of Standards: first and foremost a communicative document, not a prescriptive one. The Framework aspires to be an instrument that test providers, users of assessment results and (educational) authorities can use to compare, contrast and evaluate practices in the development, administration, scoring and reporting of a wide variety of tests, assessments and assessment programmes. It offers guidelines for reviewing and presenting evidence that the assessment and reporting strategies used meet certain quality criteria. It is a tool that facilitates transparency for providers, users and (educational) authorities. It can also be used for benchmarking an existing national system of standards for the development of assessment, for peer review activities and as a reference framework for the development of an audit procedure or an evaluation framework to protect the rights of users.
The Framework is the result of a process that started a number of years ago, when Association members began discussing this topic – first during the annual conference in Stockholm in 2007 and then in a more organized fashion through a Standards Committee that was formed to come up with suggestions about whether and how AEA-Europe could play a role. During the 2009 conference the committee introduced the idea of producing a framework of European standards. A position paper followed that took into account input from the Association’s membership, including examples from specific countries and documentation about standards and other quality assurance systems used within and beyond Europe. The position paper was discussed during the 2010 annual conference in Oslo. On the basis of all the feedback received, version 1.0 of the European Framework of Standards for Educational Assessment was drafted. In 2011, at the Belfast conference, a workshop was organized to present the core elements of the Framework and to check whether they could work in practice. Encouraged by the positive result, a final document was presented to the Council in January 2012. The Council adopted the draft as an official document, and at the 2012 conference in Berlin the first edition of the Framework of Standards for Educational Assessment was officially presented. Have a look at https://www.aea-europe.net/images/downloads/SW_Framework_of_European_Standards.pdf
By Gerben Van Lent
On the issue of validity
By Onik Mikaelyan & Sona Mikayelyan
Validity is one of the characteristics of a human activity or of an event, just as reliability relates to consistency and to whether something is right or wrong. The quality of education is strongly connected to assessment (Popham, 2003), which is a continuous process in the sphere of education. The quality of assessment, in its turn, is highly influenced by validity.
Mistakes in assessment, especially systematic errors (Payne, 2003a), have a negative impact on students’ performance individually and on the assessment system in general. Validity is a valuable means of avoiding systematic errors in assessment, and its use will allow assessment tools to be applied more correctly.
The importance of any theoretical concept in education is measured by the extent to which its application helps to improve the quality of education (Mott, 2008). The effect of validity will become greater if it is possible to give it a quantitative expression. This is the aim of this work, in which an attempt is made to demonstrate the possibility of giving validity a quantitative expression.
If the validity of a phenomenon does not have a quantitative expression, it will not be possible to compare two similar phenomena or objects of the educational system in terms of validity, or to make judgments about an event. The concept will therefore not work purposefully and will be less usable. However, it is obvious that the idea of validity is natural and that it is applied both in everyday life and in the educational sphere (Cunningham, 1998).
What is happening
In general, qualitative characteristics such as validity present particular difficulty if they are to be measured as quantitative characteristics in terms of degree of trueness.
We call validity practical validity when it is made measurable by means of a method. A method for finding the practical validity of a test and expressing it in numbers is described in this work. Our experiment shows that the concept of validity, used in practice in this way, will contribute to high-quality education. Valuable inferences are made on the basis of the results we obtained from the experiment.
The quantitative expression of the validity of a test can be defined both before and after using the test (Payne, 2003b; Crocker & Algina, 2008). In both cases the main issue is scaling the goal of the test, and of each item, precisely and correctly. In this work an attempt is made to present a method, and its application, for finding a test’s practical validity.
In order to check pupils’ mastery of any theme, the tester, the specialist defining the content validity and the teacher are guided by the subject standards, the curriculum and the textbook. In the process of defining practical validity it is appropriate to take teachers’ opinions into account, as the teacher is the professional who knows what the pupils’ achievements are. It is also useful to use pupils’ feedback about a test’s feasibility when defining a test’s face validity.
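To illustrate how expert judgments about items can be turned into numbers, here is a minimal sketch using Lawshe’s content validity ratio (CVR) – a well-known published index, not the method described in this work. The item counts in the example are hypothetical.

```python
# Sketch of Lawshe's content validity ratio (CVR) and a simple
# content validity index (mean CVR). This is an illustrative standard
# approach, not the authors' own method; the data are made up.

def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """CVR for one item: +1 if all experts rate it 'essential', -1 if none do."""
    half = n_experts / 2
    return (n_essential - half) / half

def content_validity_index(ratings_per_item) -> float:
    """Mean CVR over all items: one number summarising the whole test."""
    cvrs = [content_validity_ratio(n_ess, n_exp) for n_ess, n_exp in ratings_per_item]
    return sum(cvrs) / len(cvrs)

# Hypothetical example: 3 items, each judged by 10 experts;
# the first number is how many experts rated the item 'essential'.
items = [(9, 10), (7, 10), (5, 10)]
print([round(content_validity_ratio(e, n), 2) for e, n in items])  # [0.8, 0.4, 0.0]
print(round(content_validity_index(items), 2))  # 0.4
```

An index like this makes two tests comparable "in the sense of validity", which is exactly the kind of quantitative expression the authors argue for.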
Why inform members
Assessment is a powerful tool which can broadly affect the educational process, so its validity is one of the most important factors in that process.
A mechanism for applying validity, such as the one described in this work, will encourage test writers to choose each item of a test more scrupulously, so that it corresponds to the aim of the testing and to the educational standards.
However, it may be not the only method for that purpose.
- Cunningham, G. K. (1998), Assessment in the Classroom: Constructing and Interpreting Texts, London-Washington, D.C., The Falmer Press.
- Popham, W. J. (2003), Test Better, Teach Better: The Instructional Role of Assessment, USA, ASCD.
- Payne, D. A. (2003a), Applied Educational Assessment, Canada, Wadsworth Group.
- Payne, D. A. (2003b), Instructor’s Manual for Applied Educational Assessment, Canada, Wadsworth Group.
- Mott, D. (2008), www.vatd.org/TestValidityRevisitedAgain.pps
- Crocker, L. & Algina, J. (2008), Introduction to Classical and Modern Test Theory, Cengage Learning.
A brief description of the work was presented at the AEA-Europe conference in 2008. It is also proposed to present the complete work as a journal article.
An investigation into Chinese Teachers of English’s Conceptions of, Perceptions about and Attitudes towards Formative Assessment at a Chinese University
by Chris Cookson
Why does it matter?
As is the case with so many concepts, formative assessment (FA) and its younger sister, assessment for learning (AfL), are open to interpretation. Particularly since the seminal work of Black & Wiliam (1998), there has been heightened interest in these terms, but questions are now being raised at an international level about the extent to which there is a shared understanding of what they mean and what they involve. In the position paper which came out of the Third International Conference on Assessment for Learning, held in Dunedin, New Zealand, in March 2009, for example, it is stated that several definitions of AfL exist in a number of primarily English-speaking countries worldwide and, more importantly, that the words of the previous two Conferences’ definitions of AfL are often misinterpreted, leading to the “misunderstanding of the principles, and distortion of the practices, that the original ideals sought to promote” (Position Paper on Assessment for Learning, 2009). The paper, however, does not mention cultural or contextual factors as a possible explanation for this phenomenon. Furthermore, no representatives from Asia were in attendance at the Conference.
The goal of this doctoral study is to gain a better understanding of how Chinese university English teachers in China understand the term “formative assessment.” A further objective is to investigate whether there are differences in the ways in which Chinese and Western academics understand this concept. An additional purpose is to examine the extent to which culture and other contextual factors play a role, not only in the way teachers understand FA, but also in how useful they judge it to be in their teaching environment.
The major hallmark of the study – its attempt to understand participants’ views as well as possible – makes it strongly qualitative in nature. The study has not yet committed itself to any particular philosophical perspective. It is, however, likely to be informed most heavily by the socio-cultural paradigm, in particular by Engeström’s activity theory (1987), which recognises the implications of social or collective elements – borne out of unique histories and cultural identities – for the actors and their behaviour within a given community or “activity system.” Analyses and discussions from Crossouard (e.g. 2009) and Pryor (e.g. 2008) will be used as more immediate anchor points, since they reflect specifically on FA from a socio-cultural standpoint.
The study is deliberately sensitive to the wider context in which it is being conducted. The underlying argument is that teachers may construct their own meaning of FA through socio-historic mechanisms which are unique to their national or cultural setting, and that this meaning may be equally meaningful within their context.
- What conceptions of formative assessment do Chinese teachers of English hold? How do they understand this term; how did they become familiar with it; and to what extent is the Chinese variant (xíngchéng xìng píngū) perceived to differ from or be similar to Western conceptions?
- How amenable to the use of formative assessment do they believe the environment in which they teach to be? What are their reasons?
- What do they regard as the principal opportunities and challenges presented by attempts to implement formative assessment in English education at universities in China?
- Which, if any, personal and professional characteristics do the respondents share who claim to employ formative assessment in their teaching?
The study is intended to contribute to the evolution of the theory of FA, in particular its ability to embrace views from national or cultural contexts outside of the Western or English-speaking world.
Crossouard, B. (2009). A Sociocultural Reflection on Formative Assessment and Collaborative Challenges in the States of Jersey. Research Papers in Education. 24 (1), pp. 77-93.
Engeström, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Helsinki: Orienta-Konsultit.
Black, P. & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice. 5 (1), pp. 7-74.
Position Paper on Assessment for Learning (2009). Third International Conference on Assessment for Learning. Dunedin, New Zealand, March 15th to March 20th.
Pryor, J. & Crossouard, B. (2008). A socio-cultural theorisation of formative assessment. Oxford Review of Education. 34 (1), pp. 1-20.
Contact details: firstname.lastname@example.org
Supervisor: Dr. Val Brooks, Institute of Education, University of Warwick
AEA European Framework – A field trial in Italy
By Paolo Campetella, Valeria Damiani, Nader A.M. Harb, Carmela Spicola, University of Roma Tre, Department of Education, Italy.
From left to right: Nader M. Harb (student), Valeria Damiani (student), Antonella Poce (researcher and coordinator for the laboratory activity on AEA Europe Framework), Carmela Spicola (student), Gabriella Agrusti (researcher), Cinzia Angelini (researcher), Paolo Campetella (student)
What we are doing
As a group of PhD students in “Innovation and Evaluation for Educational Systems” at the University of Roma Tre, we have piloted, as a laboratory activity, a survey in Italy concerning the AEA Framework and its applicability in the Italian school system.
Aims and methodology
The aim of the survey was to investigate the applicability of the assessment standards to the early educational stages and to highlight the differences between some important measures that the AEA European Framework stresses and introduces and the actual measures adopted by the Italian schools investigated. This led to the transformation of the AEA Framework Core Elements into questions within a questionnaire that was handed to the schools’ teachers in order to explore, as far as possible, each element of the evaluation process in school practice. A single questionnaire was thus designed for all school levels involved in the research, to collect general aspects of, and approaches to, evaluation practice.
The research considered schools from various schooling levels (two primary, one lower secondary and one upper secondary) in Rome, Palermo and Civitavecchia. In these institutions, 58 teachers of different disciplines were willing to participate in the survey. We opted for purposive non-probabilistic sampling, as we wanted to test the applicability of the Framework through specific questions, to better understand evaluation practices in different teaching contexts.
In addition to the questionnaire, a semi-structured interview was held with four teachers in charge of assessment practices for each school. The purpose of these interviews was to obtain information on evaluation practices by shifting the focus of the investigation from individual teachers to the overall school system.
The analysis of the questionnaire administered in the four investigated schools showed a heterogeneous picture of assessment in Italy. On the one hand, in defining assessment goals, teachers generally focus on the characteristics of the class, the ministerial programmes and the guidelines established by the school. On the other hand, in choosing the type of assessment tools and creating them, the ministerial guidelines are considered mostly irrelevant by the teachers, and the assessment criteria therefore differ from school to school and, in some cases, from one teacher to another.
The results of the pilot survey showed some critical issues and highlighted some aspects that should be investigated through further research.
In particular, the difficulties faced in comparing the assessment procedures in the selected schools were due to three main factors: the lack of clear and common rules established at national level; the autonomy in educational planning and assessment criteria design granted to institutions by the Ministry of Education in 2000; and the different geographical, economic and social conditions of the schools. Taking this into account, it might be difficult to compare the results of the present research with others from different countries and educational systems in Europe.
With regard to the research tool, the questionnaire was considered unsuitable for investigating evaluation practice in primary schools, and it remains uncertain whether this was due to a limitation of the Framework regarding its applicability to all levels of the school system or to the tool itself.
The interviews showed a strong interest among the teachers in a common framework for evaluation methods and procedures at a European level, and the belief that such a standard could help in comparing and sharing best practices across different school systems.
Paul Newton is now Professor of Educational Assessment at the Institute of Education, University of London.
Newman Burdett is now Head of the Centre for International Comparisons at the National Foundation for Educational Research.
AEA-Europe, like all successful organisations, is the product of many people over a long period of time; it has many parents. But, if it can be said to have a father, it is Steven Bakker. Steven was one of the founders of AEA and in one role or another was instrumental in its success for 12 years, standing down from the Presidency and the Council at the Annual meeting in 2012. He will be missed a great deal.
Formally, Steven has served in four positions within the Council of AEA. For its first six years, he was the Secretary; he was then Vice President for four years before being elected President for the two years 2011-12. In all this time, he was an active contributor to the many developments in the young organisation. In the early years, this involved personally overseeing the conferences and active recruiting campaigns for members in all areas of Europe. He was and remains very visible to members, with a wide range of friends and colleagues everywhere.
However, Steven’s involvement with the Association in fact goes back years before its formal founding in 2000. The idea of an assessment association for Europe arose in 1998, when UNESCO was promoting regional rather than global organisations, and Steven took the initiative by seeking volunteers to join an organising committee with the intention of establishing an assessment association for Europe. The vision was one that has largely materialised – a professional association that brings together policy makers, academics, developers and practitioners to have discussions that recognise both the technical and political dimensions of assessment. The very earliest discussions took place at IEA meetings, on the sweltering beaches of Barbados and in the cool mountains of Slovenia, but alongside these was the hard work of finding start-up funding, writing a constitution, enrolling members and forming a first Council. Steven was indefatigable in all of this, and the birth of the organisation was in large part due to his efforts.
Over the period of his involvement with AEA, Steven had several professional roles. He was initially head of the international department at CITO, then the Director of ETS Europe, before founding his own company, Dutchtest. In all of these he has provided wise advice and consultancy in many different countries, enjoying the wide range of cultures and cuisines he has encountered.
Despite enjoying his travels, he always missed his wife, Brigitte, at home in Nijmegen. The invention of Skype was therefore particularly useful for keeping in touch and the couple agreed to have their evening meals together, just as if Steven was at home, chatting over the day’s events. He also missed other comforts of home, particularly his dog, Max, and one of the benefits of leaving the Council is he is now able to spend more time with both Brigitte and Max. Happily Brigitte attended all of the AEA conferences, but equally happily for members, the large and boisterous Max did not.
Like all good fathers, Steven will continue to watch over his child, proud of what he has helped it to grow into and happy that it can move forward without him. AEA owes a great debt to its parent.