Category Archives: peer review

Preprints Exist!

(This post originally appeared on the Coast and Ocean Collective Blog)

Preprints are scholarly documents that have not yet been peer reviewed and that precede publication in a peer-reviewed journal. Several disciplines, such as physics, astronomy, and computer science, have been using preprints through arXiv for decades. Other disciplines are catching on, notably the biological sciences (see bioRxiv), along with a variety of other discipline-specific preprint services (e.g., here). Many great articles and blog posts have discussed preprints recently, covering common questions, critiques, misconceptions, and concerns. Here are three especially useful introductions:

1) Bourne PE, Polka JK, Vale RD, Kiley R (2017) Ten simple rules to consider regarding preprint submission. PLoS Comput Biol 13(5): e1005473. https://doi.org/10.1371/journal.pcbi.1005473

2) Sarabipour S, Debat HJ, Emmott E, Burgess SJ, Schwessinger B, Hensel Z (2019) On the value of preprints: An early career researcher perspective. PLoS Biol 17(2): e3000151. https://doi.org/10.1371/journal.pbio.3000151

3) A recent comprehensive overview by Sheila Saia for the Young Hydrologic Society website is particularly useful for Earth and environmental scientists.

Full disclosure: I am a big advocate for preprints, interested in preprint adoption as a topic of study, and a current member of the EarthArXiv community. EarthArXiv is a community-run preprint server for the Earth sciences (Narock et al. 2019). We have a very active community (especially on Twitter!), so please bring us your questions, comments, concerns, and clarifications.

To me, there are too many interesting facets of preprints to discuss in a single post. Here, I focus on some ways in which preprints complement existing, more traditional ways of publishing. To do that, we need to start by looking at scholarly communication and scholarly publishing, specifically journals.

A recent comment by Wouters et al. (2019) outlined five roles for journals:

[Graphic from Wouters et al. (2019)]

I want to focus on discussing preprints in relation to Task 1 (registering) and Task 5 (archiving). These are tasks that a scientific journal currently performs by recording submission dates and assigning a persistent identifier (i.e., a digital object identifier, or DOI) to the article. In my opinion, these are tasks that we do not necessarily need a scientific journal to do. Instead, a preprint can accomplish them: establishing precedence for an idea, and providing a means of citing the idea via a DOI.

If we rely on journals to do these tasks, the process can be attenuated. Peer review can take months (or even years) before an article is published and visible to a community of peers. This is not a complaint against peer review or the peer-review process; I am arguing here that several steps can occur before peer review. My opinion is that bundling the registration of an idea (Task 1) and the archiving of an idea (Task 5) with the peer-review process is suboptimal for one key reason:

No one can read, cite, or respond to an idea when the paper is hidden in review — only the editor, AE, reviewers, and coauthors can read, engage with, explore, and think about the work. These ideas may be presented at conferences, but in the written record, they do not exist (e.g., many journals have policies discouraging citations to conference abstracts). Ideas that are preprinted have persistent identifiers (DOIs) and can (and should!) be cited and discussed by others — preprints exist.

As an early career scientist, this is especially important to me. Scholarly work in review with no preprint remains invisible to the broader community. Early career scientists often list ‘in prep’ or ‘in review’ articles on CVs; I’d argue that this is far less meaningful than linking to a preprint version (where people can actually read and cite your work). Again, preprints exist.

Being unable to read and cite articles that are in review in a transparent way hampers our ability to do science. Hiding articles through the review process is a form of information asymmetry — and a bizarre, imperfect hiding. I know about lots of work that remains hidden — I read about it as a grant or paper reviewer, I hear about it in passing from colleagues, and conference presentations give a glimpse of what will be published in the next few years — but I cannot cite these ideas or these works unless there are preprints. Put another way — there is a subset of ideas that I know about, but can’t share with colleagues. This is strange.

This is where preprints come into the picture. Preprint services like EarthArXiv can 1) store papers (i.e., registering intellectual claims associated with author names and submission timestamps), and 2) assign DOIs and archive scholarly artifacts. Therefore preprint services accomplish Task 1 (registering) and Task 5 (archiving) in the Wouters et al (2019) taxonomy. Preprints leave the other tasks (curating, evaluating, and disseminating) for other services such as scientific journals.

My argument here is that we should unbundle the services that journals provide to increase the flow of information. Preprints can accomplish some of these tasks faster, cheaper, and better than traditional journals.

The AGU EOS ‘Editorial Practices’ discussion of 1984

On May 15, 1984, Russell and Reiff published a (jokey) flow chart of the AGU editorial and peer-review process, complete with several time-delay terms and a ‘counting’ step for multiple rounds of revision. This set off six responses in EOS, similar to the episode of 2003-2004.

[Figure: Russell and Reiff (1984) flow chart of the AGU editorial and peer-review process]

  1. On Oct 23, 1984, Baum wrote in to discuss how peer review tended to filter out controversial new ideas. Baum recommended that authors be allowed to publish controversial new ideas even if reviewers protested, but that reviewers should also be allowed to publish their criticisms. In addition, Baum offered some mathematical changes to the Russell and Reiff flow chart.
  2. Dessler also wrote in on Oct 23, 1984, remarking that referees are often named and thanked by the editor or author; as a result, authors may be more wary of supporting controversial ideas. Dessler also suggests that Comment and Reply pairs should be published more often (I have written about these in JGR-ES).
  3. On Dec. 25, 1984, Sonnerup (the editor of JGR-Space Physics) wrote to EOS in support of the idea that peer review should permit new and unorthodox ideas. Sonnerup also provides additional details about the review process at JGR-Space Physics.
  4. On Feb 19, 1985, Walker and van der Voo wrote in to EOS to discuss the editorial process at GRL. Choice quote (emphasis mine): “Because of the importance attached to prompt publication in GRL we will generally use only one reviewer for each paper, communicating with this reviewer, when necessary, by telephone or telemail. More reviewers are used only when a paper seems likely to be particularly controversial or is otherwise difficult to deal with.”
  5. Baker wrote in on April 25, 1985 to suggest that JGR collect the rejected papers and publish them. Baker stated, in jest, that there is likely a “large body of unpublished papers out there which have been rejected by Neanderthal referees. I say let’s do something about it! I suggest that all of these brilliant, creative, earthshaking papers be collected into a special JGR issue each year.”
  6. Murdoch wrote in on March 10, 1987 to suggest that abstracts of rejected papers be published. If a scientist wanted to see the rejected paper, then the author could provide the paper AND the critical reviews.

 

These papers just highlight the role of editors, something still missing from my peer review agent model (pointed out by a commenter/Jazz legend).

Retaliation in the Peer Review Model

(The full motivation, rule set, and previous results for this model are collected here)

Today I am adding a new rule to my toy peer review model. Suppose some fraction of the population is set to ‘retaliate’ when called upon to be a reviewer. Specifically, these agents assign review scores based on their feelings (positive or negative) toward the author. This is an example of the biases that might influence a reviewer’s decision (e.g., Lee et al., 2012).

So the new rule is:

  • If a ‘retaliator’ is assigned a review, and they feel positively or negatively toward the author, the review is positive or negative, respectively (overriding a random review).

(N.B.: A gentler version of this rule could instead focus solely on ‘cliques’: if a reviewer feels positively toward the author, the review is positive; if the reviewer feels negatively, the review is random.)
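
To make the rule concrete, here is a minimal Julia sketch of how a single review score could be drawn under this rule. The function and variable names are illustrative assumptions, not the actual model code (which lives in the linked repository).

```julia
# Illustrative sketch of the retaliator rule (not the actual model code).
# A retaliator's review follows their existing feeling toward the author;
# everyone else (and retaliators with neutral feelings) reviews at random.
function review_score(A, reviewer, author, is_retaliator)
    feeling = A[reviewer, author]        # reviewer's current feeling toward the author
    if is_retaliator && feeling != 0
        return feeling > 0 ? +1 : -1     # positive feeling -> positive review, and vice versa
    end
    return rand([-1, 1])                 # otherwise, a random review
end
```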

There are now four types of people in the model:

  • Those who sign reviews, and do not retaliate
  • Those who sign reviews, and retaliate
  • Those who do not sign reviews, and do not retaliate
  • Those who do not sign reviews, and retaliate

Again I will use the range of incoming and outgoing node weights to visualize model results. As a reminder:

R_{i}^{in} is the maximum incoming weight minus the minimum incoming weight. This represents the range of feelings all other scientists have about scientist i.

R_{i}^{out} is the maximum outgoing weight minus the minimum outgoing weight. This represents the range of feelings scientist i has about all other scientists in the discipline.

So here are the results with 30% of all scientists being ‘retaliators’.

[Figure: R_{i}^{in} vs. R_{i}^{out} for signed and unsigned reviewers, with 30% retaliators]

  • Compared to the previous results, the same trends hold: R_{i}^{in} is larger for signed reviewers (blue), and R_{i}^{out} is roughly the same for signed vs. unsigned reviewers. (The ranges differ from the previous results because of a change in the number of model timesteps.)
  • Unsigned retaliators (empty orange markers) look similar to unsigned non-retaliators. If you never sign reviews, no author will end up knowing that you are a retaliator (the editor is a different story).
  • Signed retaliators (empty blue markers) have a large Rin — they are polarizing figures. Authors are either on the good side of these people (they are friends) or on the bad side (they are enemies).

Signed and Unsigned Reviews in Earth Surface Dynamics

All of the reviews for Earth Surface Dynamics are open, published, and citable. Today I do a bit of webscraping to determine the mix of signed and anonymous reviews for the 198 papers reviewed in EsurfD. Also, since reviews occur in sequence (i.e., R1 submits their review before R2), we can examine how R1’s decision to sign a review influences the decision of R2.

The code to do the webscraping is here. Note that R is not my best language, but I am using it because of the nice packages for interfacing with Crossref (rcrossref, for obtaining publication DOIs) and for easy webscraping (rvest).

The code works by:

  1. Pulling (from Crossref) details for all ESurf Discussion publications using the ISSN number.
  2. Going to every EsurfD page (following the DOI link).
  3. Scraping the webpage for author, editor, and reviewer comments (see this helpful tutorial on using rvest).
  4. Checking for descriptive words, for instance “Anonymous Comment #1”, to determine whether Reviewer 1 and/or Reviewer 2 were anonymous (a toy sketch of this check is below).
  5. Checking whether a Reviewer 3 exists (to exclude those papers… I only want to deal with papers with 2 reviewers for this initial study).
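
The actual scraping is done in the linked R script (rcrossref + rvest). Purely as an illustration, and in Julia rather than R to match the model code elsewhere on this blog, the anonymity check in step 4 boils down to string matching on the scraped comment titles (the example titles below are hypothetical):

```julia
# Toy version of the step-4 check: a review counts as anonymous if its
# comment title contains the word "Anonymous" (e.g., "Anonymous Comment #1").
is_anonymous(comment_title) = occursin("Anonymous", comment_title)

# Hypothetical scraped comment titles for one paper:
titles = ["Anonymous Comment #1", "J. Smith (Referee Comment #2)"]
signed = [!is_anonymous(t) for t in titles]   # true = signed review, false = anonymous
```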

I imagine some pathological cases in the review comments may have slipped through this code, but a cursory check shows it captures the relevant information. After the code runs, I am left with 135 papers with 2 reviewers, for a total of 270 reviews. In total, 41% of reviews are signed, which matches previous reports such as the 40% reported by Okal (2003) and the 40% reported by PeerJ. Broken down by reviewer order (a quick check of the conditional percentages is sketched after the list):

  • Reviewer 1 totals are 74 unsigned, 61 signed (55% unsigned, 45% signed)
  • For the 74 papers where Reviewer 1 is unsigned:
    • Reviewer 2 data is 59 unsigned, 15 signed (80% unsigned, 20% signed)
  • For the 61 papers where Reviewer 1 is signed:
    • Reviewer 2 data is 27 unsigned, 34 signed (44% unsigned, 56% signed)
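
As a quick sanity check on those conditional percentages (a Julia snippet, just for illustration):

```julia
# Counts from the scraped EsurfD reviews above
r1 = (unsigned = 74, signed = 61)                   # Reviewer 1 totals
r2_after_unsigned_r1 = (unsigned = 59, signed = 15)
r2_after_signed_r1   = (unsigned = 27, signed = 34)

pct(x, n) = round(100x / n, digits = 1)

pct(r2_after_unsigned_r1.signed, r1.unsigned)       # ≈ 20.3% of R2s sign when R1 was anonymous
pct(r2_after_signed_r1.signed,   r1.signed)         # ≈ 55.7% of R2s sign when R1 signed
```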

There is one clear confounding factor here: how positive or negative reviews affect the likelihood of signing a review (for both R1 and R2). I imagine referee recommendations to the editor (e.g., minor revisions, major revisions, reject) and/or text mining could provide some details (and I can think of a few other confounds beyond this one). Furthermore, I would assume that since many (all?) journals from Copernicus/EGU have open review, this analysis could be scaled up.

Peering into the Nature Geoscience author dataset

A recent Nature Geoscience editorial looked at the reviewer suggestions of submitting authors.  The editorial examined many different issues, including:

  • The geographic breakdown of submitting authors.
  • The geographic breakdown of author-suggested reviewers.
  • The geographic and gender breakdown for submitting authors whose paper was sent for review.
  • The gender breakdown of suggested reviewers by submitting author gender.

Fortunately, the data behind the editorial was also provided as a supplement. So let’s take a peek and investigate some other aspects of the data. First, let’s check out the gender breakdown of submitting authors by geographic region:

[Figure: gender breakdown (‘f’, ‘m’, ‘u’) of submitting authors by geographic region]

For reference, ‘f’ is female, ‘m’ is male, and ‘u’ is unknown. The disproportion is clear across all regions (note that Australia and New Zealand seem to be the least disproportionate).
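
For what it’s worth, the figure above is just a grouped count of the supplementary table. My actual analysis is in the R code linked at the end of this post; here is a hypothetical Julia sketch of the same idea (the DataFrame and its column names :region and :gender are assumptions, not the supplement’s actual headers):

```julia
using DataFrames

# Stand-in rows; the real data come from the Nature Geoscience supplement.
df = DataFrame(region = ["North America", "North America", "Europe"],
               gender = ["f", "m", "m"])

# Number of submitting authors per (region, gender) combination
counts = combine(groupby(df, [:region, :gender]), nrow => :n_authors)
```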

Next, let’s check out the geography of suggested reviewers by submitting author geography. Here is the number of authors who suggested reviewers, by geography:

[Figure: number of authors who suggested reviewers, by geographic region]

Now from this set of authors, the proportion of suggested reviewers broken down by geography:

[Figure: proportion of suggested reviewers by geographic region, for each submitting-author region]

One major trend I see, aside from the lack of balance across all recommendations, is that North American authors recommend North American reviewers most of the time (~65%). No other geographic region recommends itself as much (even European and Middle Eastern authors recommend European and Middle Eastern reviewers only about as often as they recommend North Americans).

I can think of data that is missing from this dataset, in particular the breakdown of assigned reviewers by geography. However, the editorial alludes to some answers:

“Nevertheless, the geographical distribution of editor-assigned reviewers resembles the biases of author-suggested reviewers”

The R code for my analysis is here. This post was a good excuse to continue learning R (keep in mind that I am learning R as you look at the messy, verbose code).

 

Peer review model — results part 2

Backstory of the model:

This model is based on networks, so I’ll use some of the language and techniques from the study of networks to analyze the data. The peer review model creates a directed and weighted network. In other words, the ‘scientists’ (nodes) are connected (via edges) to other scientists (other nodes). The connections (edges) have a direction (how scientist A feels toward scientist B) and a weight (e.g., -3, a negative feeling). The bookkeeping for this model is an adjacency matrix:

A=\begin{pmatrix}0&5&-1&0\\ 1&0&2&0\\ 0&-3&0&0\\ 0&0&1&0\end{pmatrix}

where A_{ij} denotes an edge from i to j with a given weight. In this model, it is the mood that scientist i has toward scientist j. (Some other texts use the reverse convention.)

One measurement for this sort of matrix is the incoming and outgoing node strength. The outgoing strength of scientist i (how scientist i feels about all other scientists) can be denoted as:

s_{i}^{out}=\sum_{j \neq i} A_{ij}

and can be calculated by summing rows. The incoming strength of scientist i (how all other scientists feel about scientist i) can be denoted as:

s_{i}^{in}=\sum_{j \neq i} A_{ji}

and can be calculated by summing columns. (For reference, my previous post showed time series plots of the mean of incoming weights, which is similar to the incoming strength metric s_{i}^{in} discussed here.)

Signed reviewers can be polarizing: weights can quickly become very negative and/or very positive. So the strengths (s_{i}^{in} and s_{i}^{out}) are sums of extreme positives and negatives, which is not very descriptive because the extremes can cancel and lead to a strength of 0. Instead, I want to look at the range of incoming and outgoing weights:

R_{i}^{out}= \max\limits_{j \neq i} A_{ij} - \min\limits_{j \neq i} A_{ij} which denotes the maximum outgoing weight minus the minimum outgoing weight.

R_{i}^{in}=\max\limits_{j \neq i} A_{ji} - \min\limits_{j \neq i} A_{ji} which denotes the maximum incoming weight minus the minimum incoming weight.
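
To make these definitions concrete, here is a small Julia check using the example adjacency matrix from above (purely illustrative; the real model code is on GitHub):

```julia
# A[i, j] is scientist i's feeling toward scientist j (example matrix from above)
A = [ 0  5 -1  0;
      1  0  2  0;
      0 -3  0  0;
      0  0  1  0]

N = size(A, 1)
others(i) = [j for j in 1:N if j != i]            # all scientists except i

s_out = [sum(A[i, others(i)]) for i in 1:N]       # row sums: outgoing strength
s_in  = [sum(A[others(i), i]) for i in 1:N]       # column sums: incoming strength

R_out = [maximum(A[i, others(i)]) - minimum(A[i, others(i)]) for i in 1:N]
R_in  = [maximum(A[others(i), i]) - minimum(A[others(i), i]) for i in 1:N]
```

For scientist 1 in this example, s_{1}^{out} = 5 + (-1) + 0 = 4, while R_{1}^{out} = 5 - (-1) = 6.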

Now let’s look at some model results: R_{i}^{out} and R_{i}^{in} for each scientist.

[Figure: R_{i}^{in} vs. R_{i}^{out} for signed and unsigned reviewers (no retaliators)]

  • Both types of reviewers have similar R_{i}^{out} — they tend to have a similar range in their opinions about the scientists in the discipline.
  • Signed reviewers tend to have a larger R_{i}^{in} (the range of feelings that other scientists have toward them) compared to those who do not sign reviews. Scientists tend to either like or dislike signed reviewers more strongly than unsigned reviewers.

An added feedback is coming….

Some inspiration for this work comes from:

Initial results from the agent-based Peer Review ‘mood’ model

Today, the first results from my peer review agent model (description of the model and code). I’m going to build this model slowly, week by week, adding complexity (a good way to practice writing code in Julia).

On to the model. For these results, before the model runs, each scientist is set to either sign their reviews (50% of scientists) or leave them unsigned (50%). Each submitted paper gets 2 reviews, and a review can be either bad (-1) or good (+1). During each timestep, all reviews are randomly assigned to be good or bad (split 50-50 across the entire pool of reviewers). Remember that these good and bad reviews are returned to each writer, and the writer’s feelings toward the reviewers then change according to these rules:

  • signed good (+1) reviews result in +1 feelings toward the reviewer
  • signed bad (–1) reviews result in –1 feelings toward the reviewer
  • unsigned good (+1) reviews result in +1 feelings toward a random scientist
  • unsigned bad (–1) reviews result in –1 feelings toward a random scientist

Note that the model at this point is essentially a modified version of a random walk.
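
To see the random-walk connection: roughly speaking (ignoring how often any particular author-reviewer pair actually interacts), the feeling an author accumulates toward a signed reviewer is just a running sum of random +1/-1 credits. A tiny illustrative Julia snippet (not the model output):

```julia
# A running sum of random ±1 review credits is a simple random walk.
T = 10_000                          # number of timesteps
walk = cumsum(rand([-1, 1], T))
```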

Now, some results. For 10,000 timesteps I track the collective feelings of scientists toward one another; in other words, these are plots of the average feeling of all scientists toward a specific signed or unsigned reviewer. Signed reviewers (50) are in the top panel, and unsigned reviewers (49) are in the bottom panel. Each line represents the average mood toward a single scientist (a signed or unsigned reviewer).

[Figure: time series of the average feeling toward each signed reviewer (top panel) and unsigned reviewer (bottom panel)]

Signed reviewers are more polarizing: there is a wider dispersion of feelings toward signed reviewers. This seems obvious, since they get credit for both their positive and negative reviews.

When review scores are instead random values between -1 and +1 (and the change in feeling equals the review score), the results are the same as above, but with less difference (less spread in the feelings toward signed reviewers).

There is no feedback mechanism in the model right now — reviewers don’t take into account their feelings for authors. This will change in the next version….

Rules for an agent-based peer review model

Last week I wrote about a set of AGU EOS articles from 2003 that focus on anonymity in peer review. A quote from one of the articles really stuck with me regarding the personal decision to sign reviews:

Okal (2003) states that, as an editor of GRL, ~40% of the reviews he sees are signed; as a reviewer, he signs 2/3 of his reviews; and as an author, 1/2 of the reviews he receives are signed. His experience suggests that:
“The above numbers — 40%; two-thirds; one-half — suggest that the community is divided, with no overwhelming majority in its attitude toward anonymous versus signed reviews. This diversity may indeed be precious and should be respected. Why not keep the system as it is now, leaving it to the individual reviewer to exercise a free decision regarding waiving anonymity?”

Over the course of the next few weeks I hope to build a fun little toy model of ‘peer reviewing’ agents to see if I can tease something out: is diversity in peer review behavior (re: signed vs. blind) in some way ‘precious’?

The rules of the model are as follows (a Julia sketch of one timestep appears after the list):

Each agent (scientist) is set to either sign or blind their reviews.

For each time step:

  • Randomly pick the number of scientists (‘P’), out of ‘N’ total scientists, who will each publish a single paper
  • Randomly assign ‘R’ reviewers for each paper
    • Nobody can review their own paper
    • Scientists who are writing a paper can still review
    • A scientist can do multiple reviews
  • Each reviewer gives a random review score (good or bad)
  • Reviews are returned to each writer and writers ‘mood’ changes
    • signed + reviews result in + feelings toward the reviewer
    • signed – reviews result in – feelings toward the reviewer
    • unsigned + reviews result in + feelings toward a random scientist
    • unsigned – reviews result in – feelings toward a random scientist

And we see how the feelings of the community (toward one another) develop through time.
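
Here is a rough Julia sketch of what one timestep under these rules might look like. The names (A for the adjacency matrix of feelings, signs for who signs reviews, P, R) are illustrative; the actual, evolving code is in the GitHub repository linked below and may differ in detail.

```julia
using Random

# One model timestep: P scientists submit a paper, each paper gets R random
# review scores, and writers update their feelings according to the rules above.
function timestep!(A, signs; P = 10, R = 2)
    N = size(A, 1)
    writers = randperm(N)[1:P]                       # P scientists publish this step
    for w in writers
        # R reviewers per paper: never the writer themselves, but other writers
        # may review, and a scientist may review multiple papers
        reviewers = shuffle([i for i in 1:N if i != w])[1:R]
        for r in reviewers
            score = rand([-1, 1])                    # random good (+1) or bad (-1) review
            target = signs[r] ? r : rand([i for i in 1:N if i != w])
            A[w, target] += score                    # signed: reviewer gets the credit;
                                                     # unsigned: a random scientist does
        end
    end
    return A
end

# Example usage: 100 scientists, half of whom sign their reviews
N = 100
A = zeros(Int, N, N)
signs = shuffle([trues(N ÷ 2); falses(N - N ÷ 2)])
for t in 1:10_000
    timestep!(A, signs)
end
```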

The beginning of the code is already up on Github. Feel free to contribute or give an opinion.

The AGU EOS ‘Anonymous Peer Review’ debate of 2003-2004

This summer I stumbled upon a cache of EOS newsletters from 2003. Among the pages was a series of comments and letters about anonymous review, specifically problems and possible solutions. It’s nice to know that we struggle with the same issues 14 years later.

The original article, written on July 1, 2003 by Beck (2003), focused on the rejection of a paper by two anonymous reviewers and an anonymous AE. After listing and discussing potential reasons that a reviewer and/or AE would prefer to remain anonymous, Beck ends by writing:

“The only reviews I remember that left me permanently angry were anonymous. There is far too much unpleasantness in the world already to needlessly introduce even a little bit more. Anonymous reviews are unnecessary, unacceptable, and should not be permitted.”

Strong statement! I have my own opinions about anonymity in peer review (I’m sure everyone does), but what is most interesting to me is the fact that this article produced such a large reaction: I can find 15 letters and comments published in EOS in response to Beck (2003), compared to the rare Comment and Reply pairs in JGR-ES.

On July 29th, 2003

  • Roninove (2003) writes in to support Beck (2003), having written a letter about problems with anonymous reviews (back in 1990).
  • Criss and Hofmeister (2003) suggest discounting anonymous reviews, and discuss the issues surrounding signed vs unsigned reviews for underrepresented groups.

On Sept 23rd, 2003

  • Geller (2003) writes in to suggest that AEs should always sign reviews because they often make the decision for the editor.
  • Goff (2003) writes in to suggest that AEs should sign reviews, that AGU should encourage signed reviews, and that newer journals should require signed reviews.
  • Walder (2003) writes in to suggest that AGU AEs should sign reviews and that we should collect data: reviewers should be asked why they choose to remain anonymous.

Sept 30th, 2003

  • Forel (2003) is an ‘advocate’ for anonymous reviewing, but believes editors should not be anonymous.
  • Fisher (2003) writes in to suggest double-blind reviewing.
  • Savov (2003) writes that science should be “…discussed in the open air” and suggests that the paper, reviews, and reviewer names should all be published together.
  • Okal (2003) writes that the current system should be preserved and personal preference (re: signed vs unsigned reviews) should be respected. Okal writes that this debate has been going on for decades with no clear solution:

“The debate on peer review has been going on for decades. It may be the worst possible system, but by and large it works. And to paraphrase Sir Winston Churchill, wait until you consider all the other ones….”

Dec 23rd, 2003

  • The editors of JGR-Atmospheres respond in O’Dowd et al. (2003). They discuss the editorial process at the journal and highlight the role of anonymity for the AEs and reviewers.

Dec 30th, 2003

  • Kirwan Jr. (2003) writes that peer reviews should not be signed because signing could be self-serving. Furthermore, authors should not speculate about the identity of their anonymous reviewers because of possible negative and counterproductive consequences.
  • Wesolowski (2003) writes that finding reviewers is difficult enough without requiring the identification of reviewers, and forced signing of reviews may lead to overly positive reviews.

April 20th, 2004

  • Armstrong (2004) discusses the possibility of multiple review stages, some with or without anonymity.  
  • Sturrock (2004) presents a ‘Code of Ethics’ for peer review.

April 27th, 2004

  • Genereaux and Sen (2004) discuss the NSF proposal review process, specifically how proposers do not have an opportunity to respond to “Incorrect and Overly Negative Statements (IONS)”.

 

N.B. — There was an article on anonymous peer review in GSA Today by McBirney (2003) — here is a link to the issue — something must have been in the air.