The composition of Research and Doctoral ratings
The Research category in 2001 was determined by faculty publications in 35 international academic and practitioner journals, the bulk of them North American. A high number of doctoral students no doubt underscores a school's academic credibility, but it might be just as relevant to a school's effectiveness to assess the ability of its teaching staff (e.g. through visits from successful business people) to impart good practical knowledge and advice.
The WSJ - which produced its first rankings in association with Harris Interactive at the end of April 2001 - has come in for more academic sniping than most. This is partly because of the distinctly counter-intuitive headline results - Wharton 18th, Virginia 32nd, Columbia 34th, MIT 38th, Fuqua 44th and Stanford 45th out of the top-ranked 50 - but also because of its methodology which relied exclusively on the views of corporate recruiters. Interviews were conducted online with 1600 MBA recruiters and the ratings based on perceptions of the school and the school's students (80 per cent of the total and involving the assessment of 12 school and 13 student attributes, plus two 'overall' attributes) and on the school's 'mass appeal' (20 per cent) as defined by the total number of recruiters rating that school.
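The weighting described above can be made concrete with a small sketch. This is an illustration only: the article gives the weights (80 per cent perception across the 27 attributes, 20 per cent "mass appeal" defined by the number of recruiters rating the school) but not the rating scale or normalisation, so the 1-10 scale and the normalisation of the recruiter count are assumptions.

```python
def composite_score(attribute_ratings, raters, max_raters, scale=10.0):
    """Hypothetical WSJ-style composite: 80% mean perception, 20% mass appeal.

    attribute_ratings: mean recruiter ratings for the 12 school, 13 student
        and 2 overall attributes (scale assumed to be 1-10).
    raters: number of recruiters who rated this school.
    max_raters: the largest rater count across all schools, used here
        (as an assumption) to normalise "mass appeal" onto the same scale.
    """
    perception = sum(attribute_ratings) / len(attribute_ratings)
    mass_appeal = (raters / max_raters) * scale
    return 0.8 * perception + 0.2 * mass_appeal

# A school rated well by relatively few recruiters:
score = composite_score([8.0, 7.0, 9.0], raters=120, max_raters=200)
# 0.8 * 8.0 + 0.2 * 6.0 = 7.6
```

The point of the sketch is Lynch's later objection in miniature: the 20 per cent "mass appeal" term depends directly on how many recruiters chose (or were steered) to rate a school, so it rewards the very self-selection the methodology is criticised for.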
Notwithstanding The WSJ's efforts to demonstrate the rigour of its approach, the exercise has been widely attacked. Perhaps the most detailed rebuttal comes in a paper written by John Lynch of the Fuqua School - a faculty member who teaches Market Intelligence and an expert in market research methodology. Lynch asked his students to analyse the WSJ survey and produced his own 'answer key' which makes the following points:
The WSJ used a universe of recruiters that was 'biased and open to manipulation'. Rather than randomly sampling from all recruiters who visited a school, it allowed schools to choose which recruiters from a company would be invited to participate in the survey. When the initial response rate was low, it encouraged schools to make follow-up calls urging recruiters to take part. Of the 1600 total respondents, 700 came via a special hotline to which schools could direct recruiters.
The percentage of recruiters co-operating was so low as to invalidate the study. A large majority of the recruiters initially invited to participate declined. Fuqua, for example, initially provided 80 names; much later it was asked to give 400 names so that The WSJ could reach its minimal targets. But 'we cannot assume that those agreeing to participate have attitudes like those who declined'. The researchers let respondents decide which one to three schools they would rate. Because respondents self-selected into the findings for a given school, 'we cannot project the results to the universe of recruiters at that school'. Moreover, because most recruiters rated only one school rather than all the schools they visited in a given year, the procedures over-represented the opinions of small recruiters visiting only one or two schools in comparison with large recruiters who visit many schools.
Even if the samples of respondents had been chosen randomly - which was decidedly not the case - the sample sizes were so small as to preclude meaningful comparisons among schools.

While acknowledging the technical shortcomings, another seasoned observer of rankings (associated with a Top Ten school in The WSJ rankings) has this to say: 'Remember that Business Week's first efforts were regarded as "untraditional" and "wrong". Perhaps the Journal's pecking order is a reasonably close approximation of what recruiters want - cheapness and an ability to turn knowledge into action, for example, would be pretty important. At least the survey is not purporting to be a definitive measure of quality - it's clearly measuring one thing (what the customers think) and if people realise that it may be useful'.
So what of the future? What's certain is that the league tables will not go away - the survey results provide valuable advertising revenues and additional circulation for one thing, and publications can legitimately ask what could possibly be wrong with trying to measure customer satisfaction. Complaints can easily be dismissed as emanating from schools which get a poor rating.
What's also unlikely is that anyone will come up with a set of criteria which will appease the market (perhaps students of market research can have a go). That leaves the onus on readers to interpret the findings with extreme care and not to read too much into year on year fluctuations - as well as an obligation on business schools to use their 'ratings' responsibly. The publications, meanwhile, should recognise less grudgingly the limitations of their methods.