
How to assess a mathematician

August 16, 2009


Professor Philip Hall (1904-1982), FRS, an eminent mathematician, was Sadleirian Professor of Pure Mathematics at Cambridge from 1953 to 1967. The Daily Telegraph stated that he was the world's leading group theorist of the 20th century. Recipient of the prestigious Sylvester Medal of the Royal Society, and of the De Morgan Medal and the Larmor Prize of the London Mathematical Society, he was elected honorary secretary and later president of the London Mathematical Society. Hall exercised a profound influence on English mathematics, an influence felt throughout the mathematical world. He published 40 papers in his career of 47 years.

But Hall's cumulative impact factor, based on the HEC's criterion, is only 19.844. According to the HEC criterion for civil awards of Pakistan (impact factor for Tamgha-i-Imtiaz 34-49, Pride of Performance 50-99, Sitara-i-Imtiaz 100-198, Hilal-i-Imtiaz 200), Hall would not qualify even for Tamgha-i-Imtiaz.
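The arithmetic behind this claim can be made explicit. A minimal sketch follows, assuming (as the HEC criterion implies) that a researcher's "cumulative impact factor" is simply the sum of the impact factors of the journals in which his or her papers appeared; the threshold figures are those quoted above, and the helper names are illustrative, not HEC terminology.

```python
# Sketch of the HEC-style award bands, using the thresholds quoted in the text.
# A cumulative impact factor is assumed to be the sum of journal impact factors
# over a researcher's papers (names and structure here are illustrative).

AWARD_THRESHOLDS = [
    ("Hilal-i-Imtiaz", 200),
    ("Sitara-i-Imtiaz", 100),
    ("Pride of Performance", 50),
    ("Tamgha-i-Imtiaz", 34),
]

def cumulative_impact_factor(journal_ifs):
    """Sum the impact factors of the journals a researcher published in."""
    return sum(journal_ifs)

def award_for(cif):
    """Return the highest award band reached, or None if below all bands."""
    for name, minimum in AWARD_THRESHOLDS:
        if cif >= minimum:
            return name
    return None

print(award_for(19.844))  # None: Hall's figure falls below every band
print(award_for(60))      # Pride of Performance
```

On these figures, Hall's 40 papers would have needed an average journal impact factor of 0.85 just to reach the lowest band, against the roughly 0.5 his cumulative figure implies.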

There are many more such outstanding mathematicians who would not meet the HEC criterion for awards, research projects, distinguished professorships, etc. Ironically, there are some Pakistani mathematicians who are no match for outstanding mathematicians such as Professor Philip Hall, yet they have received awards and research projects worth millions of rupees.

The impact factor is a measure of the average number of citations to a journal's recent articles. It is calculated by dividing the number of citations a journal receives in the current year to articles published in the two previous years by the number of articles published in those same years. So, for example, the 1999 impact factor is the number of citations in 1999 to articles published in 1997 and 1998, divided by the number of articles published in 1997 and 1998. The result can be thought of as the average number of citations an article receives per annum in the two years after its publication year.
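The calculation just described is a single division, and can be sketched directly; the journal figures below are hypothetical, used only to illustrate the formula.

```python
# Minimal sketch of the two-year impact factor described above:
# citations in year Y to articles from years Y-1 and Y-2, divided by
# the number of articles published in those two years.

def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    if articles_prev_two_years == 0:
        raise ValueError("journal published no articles in the window")
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 120 articles in 1997-98, cited 90 times during 1999.
print(round(impact_factor(90, 120), 3))  # 0.75
```

Note that nothing in the formula refers to the quality of any individual paper: a journal's figure is an average over all its articles, which is the root of the objections that follow.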

In June 2008, the International Mathematical Union (IMU) issued a report, "Citation Statistics", which addresses the use of quantitative measures in the assessment of research. It states that while numbers, that is, impact factors, appear to be "objective", their objectivity can be illusory: the meaning of a citation can be even more subjective than peer review, and because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations. It states that sole reliance on citation data provides at best an incomplete and often shallow understanding of research.

Numbers are not inherently superior to sound judgments. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused.

Research is too important to measure its value with only a single coarse tool. It is important that those involved in assessment understand that if we set high standards for the conduct of science, we should set equally high standards for assessing its quality (for details, see the September 2008 issue of the Notices of the American Mathematical Society).

The value of the impact factor is affected by sociological and statistical factors. Sociological factors include the subject area of the journal, the type of journal (letters, full papers, reviews), and the average number of authors per paper (which is related to the subject area). Statistical factors include the size of the journal and the size of the citation measurement window.

In general, fundamental and pure subject areas have lower impact factors than the specialised or applied ones. The variation is so significant that the top journal in one field may have an impact factor lower than the bottom journal in another area.

Connected closely to subject area variation is the phenomenon of multiple authorship. The average number of collaborators on a paper varies according to the subject area, from pure mathematics (with about two authors per paper) to applied mathematics (where there are four or more). Not surprisingly, given the tendency of authors to refer to their own work, there is a strong and significant correlation between the average number of authors per paper and the average impact factor for a subject area. So comparisons of impact factors should only be made between journals in the same subject area.

The value of the impact factor is affected by the subject area, type and size of a journal, and the period of measurement used. Journals of physics and engineering for instance, have much greater impact factors than the mathematical journals not because they are qualitatively better but because they have a wider readership and the time spent from acceptance of a paper to its publication is much shorter.

The Institute for Scientific Information (ISI) lists 321 journals under the subject of mathematics, and only 15.58 per cent of them have impact factors greater than one. Only four journals have impact factors greater than two, the highest being 2.75.

The use of impact factors outside the context of other journals within the same subject area is virtually meaningless; journals ranked top in one field may be bottom in another. Extending the use of the journal impact factor from the journal to the authors of papers in the journal is highly suspect; the error margins can become so high as to make any value meaningless. The use of journal impact factors for evaluating individual scientists is even more dubious, given the statistical and sociological variability in journal impact factors. Impact factors, as one citation measure, are useful in establishing the influence journals have within the literature of a discipline. Nevertheless, they are not a direct measure of quality and must not be used as such.

In Pakistan, the rationale for the use of the impact factor is to “help the administrators of science to evaluate the quality and output of scientists who seek key positions”. The National Commission on Science has made it a part of its policy to rate scientists and their work on the basis of impact factors of their research papers in accordance with the list of impact factors published by the ISI. However, as Professor Milman in his article has argued, ISI's impact factor puts mathematicians in a disadvantageous position because the index is not suitable for research in mathematics.

Determining one's status by the arbitrary assignment of numbers to the journals in which one happened to publish is rather bizarre. Most intelligent scientists and administrators are well aware of two facts: firstly, that we do not yet have reliable bibliographic measures for comparing or making absolute ratings of the value of the work done by research workers; secondly, that in any event, bibliographic measures appropriate in one field are inappropriate in others.

An impact value based on the simple measurement of how many times a journal is cited makes no sense as a measure of the quality of the papers published in it, let alone the quality of the mathematicians publishing there. Such use of management-type figures, which claim to enable comparisons to be made, can be utterly misleading and damaging.

Figures are only as good as the premises on which the figures are based and often the premises of many widely-touted management figures are seriously flawed. The only real criterion of an individual's scholarship is the quality of work and that does not admit of simple numerical assessment.

The misuse of impact factors and the citation index has already had adverse effects in Pakistan. Applied mathematicians, for instance, are in a more advantageous position: they have the option of publishing papers in journals of physics, computational mathematics and engineering, which by and large have much higher impact factors than mathematics journals. Consequently, the cumulative impact factors of mathematicians who can publish in such journals increase phenomenally.

Young Pakistani mathematicians are now reluctant to carry out research in pure mathematics as they feel that publishing papers in top mathematical journals is not only difficult but receives no recognition or appreciation due to low impact factors and citations. There has been an increase in fake authorship. Production of short papers is on the rise. Worthwhile and exhaustive research is compromised. Meaningless mathematical modelling is gaining popularity. A rat race of producing shallow mathematical results with no scientific value has damaged the very purpose of research in basic sciences.

Allocation of funds, grants, awards, appointments and promotions on the basis of impact factor will be disastrous for science in Pakistan. The particular nature and special characteristics of each branch of science have been completely ignored in devising the Rs15.7 billion National Scientific and Technological Research and Development Fund for encouraging scientific research in the country.

It does not require much effort to see from the information and analysis provided above that the use of the impact factor needs modification to suit the very nature of mathematics. We should adopt a pragmatic rather than a protectionist approach to this problem.

The writer is president of the Pakistan Mathematical Society and professor of mathematics at the Quaid-i-Azam University, Islamabad.