The MBA: Ranking the Rankers

Article by Tim Dickson

The most nerve-wracking moment

Ask any business school dean to describe the most nerve-wracking moment of the academic year and a candid answer might be: ‘Just before I know our place in the B-School rankings’. League tables are big business for newspapers and magazines such as Business Week, The Wall Street Journal, Forbes, US News and World Report and The Financial Times – and they’re about to get bigger. Publications are currently turning the spotlight on universities which run executive education courses and ‘executive’ and other part-time MBAs, as well as continuing their scrutiny of full-time MBA programmes, with the result that a rash of new headline-grabbing surveys can be expected in the coming months. The trepidation of deans can only get worse.

Those who prepare and sanction these tables defend them as a legitimate exercise in accountability – a market response to the bewildering choice of courses and the need for unbiased information on quality and performance. Most business schools, on the other hand, take a different view. While not denying the commercial logic of attempts to create a ‘buyer’s guide’, many privately see rankings as a curse – a time-consuming and wasteful lottery administered by self-appointed regulators using questionable methods that are inherently disadvantageous to particular regions of the world (most notably Europe).

The question for students, recruiters and anyone else who has the interests of the business education sector at heart is: who’s right? This article (based on discussions with several experts and my own analysis) turns the tables on the league tables and – with special reference to three of the most influential MBA exercises, namely those carried out by Business Week, The FT, and The Wall Street Journal – seeks to challenge the foundations of the rankings game.

In the beginning…

The first B-School rankings published by Business Week in 1988 were greeted with surprise and even derision at the time – Kellogg (Northwestern) somewhat implausibly came out top – yet arguably they delivered a much-needed jolt to the US educational establishment. The ‘best’ schools a decade or so ago were automatically assumed to be a coterie of traditional establishments – the Harvards, the Stanfords and the Whartons – a cosy consensus which the US magazine sought to put to the test by formally analysing customer satisfaction (students and recruiters) at a much wider range of institutions.

The justification for such surveys became more seductive as demand for the MBA qualification and for executive education courses mushroomed in the 1990s; the pool of providers increased and globalisation meant that the market for courses became much more international. According to the Association of MBAs there are currently around 1,500 ‘credible’ business schools running MBA programmes worldwide and perhaps as many – such as the South American ‘University of England and Wales’ – which fall short of the standards students are entitled to expect. The need for guidance is particularly acute in the emerging European market – a point underlined at a recent seminar on business education organised by EBF for the Haarlem Business School in the Netherlands, at which executive students consistently stressed the importance of more widespread accreditation and better quality control.

The case against…

But are B-School rankings as now administered the right solution to the problem? Even those that consistently come near the top of the leagues question whether rankings are truly reliable, whether the methodology they apply can ever be scientific and fair, and therefore whether they genuinely help the market they purport to serve. The general case against the rankings is as follows:

Devising an objective methodology is an impossible task. All the main benchmarks – student satisfaction, salary performance, recruiter attitudes – are superficially appealing but potentially misleading (see the analysis of the FT, Wall Street Journal and Business Week rankings below). The league tables, moreover, wrongly assume that the needs of all employers and students are the same – that there is a single yardstick or set of yardsticks against which they can be judged. The ‘best’ school is more likely to result from matching the institution’s strengths and culture on the one hand with the individual’s aspirations and values on the other.

Notwithstanding the genuine efforts of some publications to attach health warnings – i.e. that the rankings should be read with care – the headlines are often taken out of context by students and potential students. “Don’t be fooled into thinking they don’t matter,” one prominent European dean told me recently. “The person who edits one prominent ranking has me very much by the short and curlies – and it hurts,” he said, pointing to the top of his trousers. Students, faculty and alumni all care deeply – whatever they say publicly – and implicitly or explicitly the administrators of schools know that ultimately a bad ranking can jeopardise their careers. “I know schools in the US where staff below the dean have been told that if their school doesn’t move up five points in the rankings they will lose their job,” says one European professor.

The rankings are now so widespread and so powerful that schools’ strategies and operations are being unduly distorted. In a competitive market B-Schools are rightly required to be as brand-conscious as the most red-toothed corporation – but the pressure on those who fill in questionnaires, answer questions, and retrieve huge quantities of data for the rankers cannot be justified if methodologies are ultimately unsound. The sheer volume of rankings, moreover, has come to dominate reputation management in an unhealthy way, according to two researchers from Penn State’s Smeal College of Business Education who interviewed leaders from the top 50 US schools earlier this year. “The rankings have taken on a life of their own, so schools must not only track them, but actively try to conform to their requirements, as well as try to change them, all the while attempting to maintain the integrity of business education,” observes Kevin Corley, a doctoral candidate in Smeal’s Department of Management and Organisation and one of the authors of the report.

Europe’s diverse approach to management education makes comparisons very difficult, and a built-in North American bias arguably puts European schools at an automatic disadvantage. Questions requiring evaluation of internships, for example, are not relevant for students studying on one-year MBA programmes, which are the norm in, say, the UK, France and Switzerland.

The lack of debate

Why is there so little open debate about business school rankings? AMBA – which runs an accreditation system for European business school providers – has spoken out robustly on the issue. And the European Quality Link for Business Schools and Management Education (EQUAL) – comprising seven national and three regional associations in Europe, which between them represent over 750 business schools – has produced two pamphlets entitled “A Responsible Approach to League Tables”, one for students and employers, the other for the media. But efforts by representative organisations or groups of business schools to agree ‘sensible’ measures have come to nothing.

Another issue is that deans and faculty are often reluctant to speak out. In Europe, individuals such as Professor David Norburn, dean of Imperial College, London, and Professor Leo Murray, director at Cranfield School of Management, have voiced deep reservations – but the vast majority of business school leaders prefer to keep their heads down, worried no doubt that criticism could impair relationships with the journalists who write about them (and on whom their reputation to some extent depends). Many schools, of course, are guilty of hypocrisy – privately deploring the methodologies and even the principles of rankings but proudly boasting about their strong ‘rating’ when well placed in a league table.

Business Week

Business Week’s biennial league tables have had enormous influence in the US – and are taken increasingly seriously in Europe. The methodology involves a customer satisfaction survey of students and recruiters. For the most recent MBA survey (October 2000), 16,843 Class of 2000 graduates at 82 participating B-Schools were sent a 37-point questionnaire covering everything from the content of e-business courses to the quality of placement offices. The response rate was 60 per cent. Half the ‘student satisfaction’ score – which accounts for 45 per cent of the total – comes from this student survey, the rest from graduates in the two previous polls ‘to ensure that short-term changes don’t unduly influence results’. On the employer side (another 45 per cent of the total), surveys were sent to 419 companies, which were asked to rate their top 20 schools according to the quality of students and their company’s experience with B-School grads past and present. For the first time in 2000, a new ‘intellectual capital’ component representing 10 per cent of the overall weighting was added. The Business Week rankings have come under fire for a variety of reasons, among them the points below (a rough sketch of the weighting arithmetic follows them):

The undue influence of recruiters. They may be key customers but faculty have complained that schools which care about their ranking are forced to indulge student job seekers and visiting companies, possibly at the expense of real learning. “It is not unusual for students to miss 20 per cent of their classes (and perhaps half of their attention span) through activities not geared to learning but to getting a job”, laments one professor.

The questionable value of student opinions

On the face of it, asking those who have been to business schools to give their views seems the ideal test of customer opinion. Necessarily, however, the Business Week survey cannot be a true sampling of student feedback, since respondents cannot compare their own business school with ones they did not attend. Moreover, students are widely advised (if they did not already know) that the value of their MBA after graduation will depend on the school’s rating in the market place – something that is more than likely to shape their judgements. Asking business education customers for feedback is not the same as asking a Sony customer what he thought of its latest CD player – the latter has little or no vested interest. “I would bet that all schools have excellent median rankings… and differ only according to a small, vocal (and stupid) minority of disgruntled students,” says the same professor. Business Week’s vulnerability on this score surfaced last year when the magazine warned schools that they would be penalised if evidence was found that they were trying to influence student opinion.
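
As a rough illustration of the weighting arithmetic described above, the sketch below combines the three components in Business Week’s stated proportions: 45 per cent student satisfaction (split evenly between the current class and the two previous polls), 45 per cent recruiters and 10 per cent intellectual capital. Business Week does not publish its calculation in this form; the function name and the assumption that each component is already normalised to a 0–100 scale are purely illustrative.

```python
# Illustrative sketch of a Business Week-style composite score.
# The weights follow the 2000 survey description; the normalisation of each
# component to a 0-100 scale and the function name are assumptions.

def business_week_style_score(current_students, prior_polls,
                              recruiters, intellectual_capital):
    """All inputs are assumed to be already normalised to a 0-100 scale."""
    # Student satisfaction (45%): half from the current graduating class,
    # half from the two previous polls, to damp short-term swings.
    student_component = 0.5 * current_students + 0.5 * prior_polls
    return (0.45 * student_component
            + 0.45 * recruiters
            + 0.10 * intellectual_capital)


# Example: a school rated highly by recruiters but less so by its students.
print(business_week_style_score(current_students=70, prior_polls=75,
                                recruiters=90, intellectual_capital=60))
```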

The Financial Times

The FT business school rankings are widely regarded as among the most exhaustive on the market – but not necessarily the least flawed. Indeed the very breadth and complexity of the assessment – a range of indicators covering individual career progression, diversity and research – prompts criticism that the newspaper has presumptuously assumed the right to define a standard for what constitutes a ‘good’ international business school. “The criteria we have chosen reflect in our opinion the most important elements of an MBA programme, while allowing comparison between schools globally,” the FT wrote in its Business Education Survey in January this year. The very process of listening to complainants and seeking to be fair – a compliment paid by a number of deans – merely reinforces an impression of the Pink Paper playing God. Among specific concerns are the FT methodology’s perceived bias against European schools, the emphasis on salary and salary increases achieved by alumni, and the apparently arbitrary weightings given to issues like women students and faculty and the international proportion of students and teaching staff.

The FT goes out of its way to try to avoid charges of bias and attacks on its methodology. The 20 criteria are scrupulously set out, with the salary data, for example, standardised (because they are reported in different currencies) by conversion to US dollars using purchasing power parity exchange rates estimated by the World Bank. The benchmark salary comprises 50 per cent of the average salary for graduates from the relevant graduating class and 25 per cent each of the salaries of the classes from the previous two years, to avoid short-term fluctuations. There are further adjustments for the variation in salaries between different sectors. These niceties notwithstanding, The FT’s detailed criteria can be challenged on a number of counts, such as the points below (a rough sketch of the salary arithmetic follows them):

The high weighting in the rankings – 40 per cent of the total – given to student salary performance (both in absolute terms ‘today’ and the percentage increase since the start of the MBA course). Schools that typically admit mature students with a higher starting salary (say those in their late 20s) complain that the potential salary uplift is far smaller than it is for those who join the class at a younger age with only two to three years of work experience. Students turned entrepreneurs – those brave individuals who often take a substantial salary cut for a number of years to establish their own businesses – presumably do not do their alma maters a favour in the process. Moreover, the implication in The FT weighting that all MBAs are driven by greed, if not fear, is surely contentious. “Asking one of our students how much their salary went up after an MBA course – as if this was the main purpose of the exercise – is so far from the point that it’s quite ridiculous,” says a European involved in providing her school’s data to the rankers.

The dubious definition of what is meant by ‘international’ faculty – those whose nationality differs from their country of employment. ‘I can point to Americans on our faculty who spend 50 per cent of their time travelling in Asia who are far more international and open-minded than others who have been born with a different passport’, observes a US programme officer.

The composition of Research and Doctoral ratings

The Research category in 2001 was determined by faculty publications in 35 international academic and practitioner journals – the bulk of them North American. A high number of doctoral students no doubt underscores a school’s academic credibility – but it might be just as relevant to a school’s effectiveness to assess the ability of its teaching staff (e.g. through visits from successful business people) to impart good practical knowledge and advice.
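
To make the salary arithmetic concrete, the sketch below converts reported salaries to US dollars at purchasing-power-parity rates and blends three graduating classes in the 50/25/25 proportions described above. The sample PPP rates, function names and example figures are invented for illustration, and the FT’s further adjustment for sector variation is omitted.

```python
# Illustrative sketch of an FT-style salary benchmark.
# The PPP conversion and the 50/25/25 blend follow the FT's description;
# the rates below and the example salaries are hypothetical.

PPP_RATES_TO_USD = {"USD": 1.00, "GBP": 1.45, "EUR": 1.10}  # assumed rates

def to_usd_ppp(salary, currency):
    """Convert a reported salary to US dollars at an assumed PPP rate."""
    return salary * PPP_RATES_TO_USD[currency]

def ft_style_salary_benchmark(current_class, previous_class, class_before_that):
    """Each argument is a list of (salary, currency) pairs for one class."""
    def class_average(salaries):
        return sum(to_usd_ppp(s, c) for s, c in salaries) / len(salaries)
    # 50% current class, 25% each of the two previous classes,
    # to smooth out short-term fluctuations.
    return (0.50 * class_average(current_class)
            + 0.25 * class_average(previous_class)
            + 0.25 * class_average(class_before_that))

print(ft_style_salary_benchmark(
    current_class=[(95000, "USD"), (70000, "GBP")],
    previous_class=[(80000, "USD")],
    class_before_that=[(60000, "EUR")],
))
```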

The Wall Street Journal

The WSJ – which produced its first rankings in association with Harris Interactive at the end of April 2001 – has come in for more academic sniping than most. This is partly because of the distinctly counter-intuitive headline results – Wharton 18th, Virginia 32nd, Columbia 34th, MIT 38th, Fuqua 44th and Stanford 45th out of the top-ranked 50 – but also because of its methodology, which relied exclusively on the views of corporate recruiters. Interviews were conducted online with 1,600 MBA recruiters, and the ratings were based on perceptions of the school and the school’s students (80 per cent of the total, involving the assessment of 12 school and 13 student attributes, plus two ‘overall’ attributes) and on the school’s ‘mass appeal’ (20 per cent), as defined by the total number of recruiters rating that school.
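
A rough sketch of this scoring might look as follows: 80 per cent from recruiters’ perception ratings and 20 per cent from ‘mass appeal’, here taken as the share of all surveyed recruiters who rated the school. The WSJ does not spell out how the two parts are scaled against each other, so the rating scale and the normalisation used below are assumptions.

```python
# Illustrative sketch of a WSJ/Harris-style score: 80% recruiter perceptions,
# 20% 'mass appeal'. The 0-100 rating scale and the rescaling of mass appeal
# are assumptions made for the sake of the example.

def wsj_style_score(perception_ratings, raters_for_school, total_recruiters):
    """perception_ratings: one rating per responding recruiter (assumed 0-100)."""
    perception = sum(perception_ratings) / len(perception_ratings)
    # 'Mass appeal': the share of all surveyed recruiters who rated this school,
    # rescaled to 0-100 so it can be blended with the perception average.
    mass_appeal = 100.0 * raters_for_school / total_recruiters
    return 0.80 * perception + 0.20 * mass_appeal

# Example: a well-regarded school rated by 120 of the 1,600 recruiters surveyed.
print(wsj_style_score([72, 85, 64], raters_for_school=120, total_recruiters=1600))
```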

Notwithstanding The WSJ’s efforts to demonstrate the rigour of its approach, the exercise has been widely attacked. Perhaps the most detailed rebuttal comes in a paper written by John Lynch of the Fuqua School – a faculty member who teaches Market Intelligence and an expert in market research methodology. Lynch asked his students to analyse the WSJ survey and produced his own ‘answer key’ which makes the following points:

The WSJ used a universe of recruiters that was ‘biased and open to manipulation’. Rather than randomly sampling from all recruiters who visited a school, it allowed schools to choose which recruiters from a company would be invited to participate in the survey. It encouraged schools to make follow-up calls to recruiters to encourage them to participate when the initial response rate was low. Of the 1,600 total respondents, 700 came via a special hotline to which schools could direct recruiters.

The percentage of recruiters co-operating was so low as to invalidate the study. In the list of recruiters initially invited to participate, a large majority declined. Fuqua, for example, initially provided 80 names; much later it was asked to give 400 names so that The WSJ could reach its minimal targets. But ‘we cannot assume that those agreeing to participate have attitudes like those who declined’.

The researchers let respondents decide which one to three schools they would rate. Because respondents self-selected into the study, the findings for a given school are limited: ‘we cannot project the results to the universe of recruiters at that school’. Moreover, because most recruiters rated only one school rather than all the schools they visited in a given year, the procedure over-represented the opinions of small recruiters visiting only one or two schools in comparison with large recruiters who visit many schools.

Even if the samples of respondents had been chosen randomly – which was decidedly not the case – the sample sizes were so small as to preclude meaningful comparisons among schools.

While acknowledging the technical shortcomings, another seasoned observer of rankings (associated with a Top Ten school in The WSJ rankings) has this to say: ‘Remember that Business Week’s first efforts were regarded as “untraditional” and “wrong”. Perhaps the Journal’s pecking order is a reasonably close approximation of what recruiters want – cheapness and an ability to turn knowledge into action, for example, would be pretty important. At least the survey is not purporting to be a definitive measure of quality – it’s clearly measuring one thing (what the customers think) and if people realise that it may be useful’.

Conclusion

So what of the future? What’s certain is that the league tables will not go away – the survey results provide valuable advertising revenues and additional circulation for one thing, and publications can legitimately ask what could possibly be wrong with trying to measure customer satisfaction. Complaints can easily be dismissed as emanating from schools which get a poor rating.

What’s also unlikely is that anyone will come up with a set of criteria which will appease the market (perhaps students of market research can have a go). That leaves the onus on readers to interpret the findings with extreme care and not to read too much into year on year fluctuations – as well as an obligation on business schools to use their ‘ratings’ responsibly. The publications, meanwhile, should recognise less grudgingly the limitations of their methods.
