OSA Fellow Pablo Artal has kindly allowed OPN’s Bright Futures career blog to adapt and republish content from his popular blog Optics Confidential. In his blog, Artal fields questions from students, colleagues and other researchers on science, society and managing a career in optics.
Dear Pablo, I am confused about what works to cite in my scientific papers. Should I cite only the papers that helped me with my research? Or should I expand the list to include those that I found clearly wrong or even misleading? –Bruno, Italy.
I believe the proper approach is to cite everything that you actually used during your research. This includes seminal papers that may have inspired your project, articles on the methods you used, papers presenting similar previous work, and even research that you may consider incorrect or biased—although you should mention why you think it is invalid. This is an important part of the scientific process, and it will help your colleagues in the future.
Your question brings up an issue that I have long found troubling. As you know, the number of citations a scientist receives on his or her papers can be a deciding factor in receiving grants, academic jobs and prestige. The so-called h-index, the largest number h such that a scientist has published h papers each cited at least h times, is a particularly important metric. For instance, if I have an h-index of 41, that means that 41 of my articles have received 41 or more citations each. Some time ago, I covered this issue in more detail in my other blog in Spanish.
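The definition above is simple enough to state in a few lines of code. Here is a minimal sketch (the function name and the example citation counts are purely illustrative):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 4:
# four papers have at least 4 citations, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Sorting first means the check can stop at the first rank whose paper falls short, since every later paper has even fewer citations.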
Although the number of citations is a better measure of scientific performance than simply counting the number of published papers, it is far from perfect. There are many possible problems with this system. For example, the differences in publication and citation rates among scientific fields generally make comparisons across subject areas difficult.
You can get an automatic citation count for an article from Google Scholar or Web of Science, but such counts treat all citations as equal—and they are not. You may know a scientist whose work has a large number of citations, some of which are actually negative. To avoid problems like this, I propose that we classify citations into four categories. I’ve listed them below with some examples obtained from actual papers.
Seminal citations: “We followed the approach proposed and first implemented by (ref) to perform the current experiment…”

Positive citations: “The results of figure 5 are in good agreement with those presented in (ref).”

Neutral citations: “Figure 3 compares our results with those of previous works (ref).” “Although we followed the same procedure, we were not able to reproduce their results. This may be due to some individual variability. However, several other authors’ findings were similar to ours.”

Negative citations: “The suggestion by (ref) is clearly incorrect…” “An additional problem in this study is the surprising lack of details provided on some of the most relevant methods and procedures used.”
I understand the technical difficulty of classifying different types of citations, but this system would provide a more accurate depiction of scientific value. Appropriate software could assign every citation to one of these categories, and each category would carry a point value. For instance, seminal citations would be worth two points, positive ones would be worth one, neutral citations would have no points and negative citations would be worth minus one point.
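Once citations were classified, the scoring itself would be trivial. The sketch below assumes the weights proposed above (seminal +2, positive +1, neutral 0, negative −1); the function and label names are hypothetical, since no such software exists yet:

```python
# Hypothetical point values from the proposal:
# seminal +2, positive +1, neutral 0, negative -1.
WEIGHTS = {"seminal": 2, "positive": 1, "neutral": 0, "negative": -1}

def weighted_citation_score(citation_labels):
    """Sum the point values of a paper's citations, given their
    pre-assigned category labels."""
    return sum(WEIGHTS[label] for label in citation_labels)

# A paper cited five times: once as seminal, twice positively,
# once neutrally and once negatively.
labels = ["seminal", "positive", "positive", "neutral", "negative"]
print(weighted_citation_score(labels))  # 2 + 1 + 1 + 0 - 1 = 3
```

Under plain counting this paper would score 5; the weighted score of 3 reflects that one of those citations actually argued against the work. The hard part, of course, is the classification step, not the arithmetic.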
A few decades ago, many of us were unhappy with the mere counting of papers as a measure of success, and the current system has helped address that. But other issues have cropped up. We could not begin to imagine at that time the large emphasis that would be placed on citation counts today. Perhaps the time has come to reevaluate.
Pablo Artal (Pablo@um.es) is an OSA Fellow and professor of optics at the University of Murcia, Spain. He is an optical and vision scientist with an interest in visual optics, optical instrumentation, adaptive optics, and biomedical optics and photonics.