Report on RAE2001
-----------------

1. Organisation

I have always believed that the basic concept of the RAE has both good and bad aspects. It is good that everyone in a subject area is judged by the same panel, and that the panels are nominated by the academic community and have its confidence (though this was noticeably less so than in 1996). It is bad that there are a few discrete grades with big funding gaps between them (how bad this time, we did not know until afterwards). It is also bad that, apart from the introduction of category A* for staff, nothing was done to prevent universities playing the system: submitting staff to inappropriate units or not submitting them at all, signing up academic visitors on short-term contracts, and submitting ineligible research outputs.

2. Preparation

The Pure Mathematics panel requested early on that we be given the submissions electronically, in a form allowing, for example, the research outputs to be sorted by journal and year; or, if this could not be done, that we be given such sorted data. We were told that this was impossible. I conclude either that the expensive RAE software was unable to do what the humblest PC database can do, or (more likely) that HEFCE values its secretarial staff more highly than its panel members. (I spent more than three working days sorting the data manually.)

I was proud of the fact that, in 1996, the Pure Maths panel considered almost exclusively the excellence of the research outputs. Now I am ashamed that this high standard was not maintained in 2001. Probably the changed rules made it inevitable, but I wish that we had taken a firmer stand on this issue. (I have heard "applied research" defined as "research funded by someone else"; I am not happy that funding levels played as important a part in the assessment of pure mathematics as they did.) Of course, I must accept corporate responsibility for this.

3. Conduct of the exercise

In my report on the 1996 RAE, I said that the panel had an excellent working relationship, hampered by the carping and criticism we received from the RAE team. This time, the working relationship of panel members, secretaries and observer was again excellent; I am grateful to my colleagues for this. I think we judged the quality of the research to the best of our abilities, and the "calibration exercise" we undertook helped us towards consistent standards.

John Rogers addressed us and assured us that we would have all possible support from his team. In the event, of course, the system of sending research outputs to panel members collapsed and was withdrawn; the promised "assessment aids" (far less than we had requested) arrived far too late (at the end of July) and were useless (half the pages were printed twice and half were missing); and emails to the RAE team went unacknowledged for weeks.

4. Aftermath

Just before our last meeting, news of the funding disaster was widely leaked. I felt betrayed, and tried unsuccessfully to persuade my colleagues on the panel to make at least a token protest. Now I am in the invidious position of being responsible for an assessment which has led to better research receiving less funding.

Several people approached me unofficially for some words about how the panel had acted with regard to their institutions. I replied with the party line that there would be official feedback. I did not realise how positively unhelpful this feedback would be, out of context, until I saw the feedback reports on other units at Queen Mary.
In 1996, departments could ask for a debriefing from the panel chair, surely a more satisfactory method.

After the 1996 RAE, my comments went unacknowledged by HEFCE, but were read by several people in the academic community. I have no reason to think that things will be different this time.

Peter J. Cameron
May Day 2002