What is perfection, anyway?


I just saw a tweet from JJ Acouturier (@jjtokyo), retweeted from someone else, linking to the following editorial article.

The original tweet was:

ACM CCR had *no* peer-reviewed papers in July -- all were rejected! The editor's critique of reviewing is a must-read: ccr.sigcomm.org/online/files/p…

I thought I would add some comments on what the author (S. Keshav) wrote.

The author states that the rejection of articles from ACM CCR (Computer Communication Review) is mainly due to human factors:

  • a subconscious desire to get one’s own back: this argument is, quite unfortunately, probably true. Researchers are human, after all… However, I believe this could also be an urban legend: researchers are normally educated to have (even subconscious) ethics.
  • a desire to prove one’s expertise: this is possible, and as a reviewer myself, I have thought about it too. However, this only affects how the associate editor sees you as a reviewer: I tend to believe it is more in my interest to show that, yes, I am an expert, but also that I can acknowledge innovation and distinguish good work from bad.
  • a biased view of what papers in a particular area should look like: I am not sure exactly what the author meant there, but as a reviewer, I always critique articles according to what I believe they should look like, or should contain. I need some point of reference in order to review articles properly, don’t I? If the author is referring to the article’s form, then in my opinion the reviewer’s role is to indicate how, in his or her view, the article could be improved, including its form. This usually leads to suggestions, but in no way should it directly lead to a rejection by itself…
  • unrealistic expectations: I may not write perfect papers but I expect to read only perfect papers: I actually believe that we should aim at perfect papers. I do not think this is as “unrealistic” as the author states. Rather, we should not expect to review perfect papers; we should review papers and give the authors advice so as to make them perfect (or at least closer to one’s idea of perfection).
  • My belief is that, as reviewers, our role is also to make our research environment a “better” place. By this, I mean that a reviewer should act in accordance with his or her sense of what a paper should look like, which standard it should be held to, and so forth. As a reviewer, I want the papers published in journals to be self-explanatory, consistent in the assumptions made about the problem, and theoretically well stated – and of course also innovative in some way, which is probably the most difficult part to assess, especially at the beginning of one’s reviewing career!

Of course, not all research work can be published in that form right away, and for me, conferences are the natural venues where “perfectible” work can and should be published. Some people might expect conference articles to be as complete as journal articles, but I would argue that this would have a negative impact on research.

While I can understand the principle of venues like arXiv, as cited by the author, I still believe the reviewing system is a good system as it is. I must admit that I do not know exactly how things work on arXiv, but I doubt that, once an article is published there, the authors come back after a few comments from readers and modify the article to fit people’s expectations or advice.

As an article writer, this time, I have always felt attacked by reviewers (a common experience), but I have also felt that this feedback was necessary in order to question my own way of understanding and, most importantly for articles, my own way of explaining my work. This feedback is valuable and, in my opinion, truly helps improve the quality of a paper.

Unfortunately, people are increasingly pushed to publish incomplete, or even shaky, work just for the sake of publishing (“publish or perish”), or to claim the first iteration of a potentially interesting idea (a bit like patents, except that in research we talk less about money than about the grail of all researchers: fame and renown). Allowing science and research the time to develop ideas, encouraging collaboration between people rather than competition, exchanging ideas at early stages without fear of their being “stolen”, and finally publishing when the idea is ripe: this should be the workflow of our research world.

One last thing: I find it amusing to turn Voltaire’s quotation around. Le mieux est l’ennemi du bien (“the best is the enemy of the good”), eh? Well, that can indeed be read as “better does not mean good”. A first interpretation is: once you achieve something reasonably good, it is unnecessary, even harmful, to seek to improve it. In some cases, that’s true: my computer was working fine, until one day I thought I should upgrade to the new, better version of some Python software. Well, I broke everything, and it took days before I could work normally again.

However, another interpretation could be: something can be better than a previous iteration of it, yet still not be good. I should develop my research work, and only publish when it is ready, not when I am just starting to understand what I am really doing.

Edit: also check the original moral tale (“conte moral”) by Voltaire in which the quotation appears. In the first part, it says “Non qu’on ne puisse augmenter […] en science”, which freely translates as “Not that one cannot progress […] in science”. Obviously, the saying was not originally meant to apply to science! QED. 😀
