# Signed and Unsigned Reviews in Earth Surface Dynamics

All of the reviews for Earth Surface Dynamics are open, published, and citable. Today I do a bit of webscraping to determine the mix of signed and anonymous reviews for the 198 papers reviewed in EsurfD. Also, since reviews occur in sequence (i.e., R1 submits their review before R2), we can examine how R1’s decision to sign a review influences the decision of R2.

The code to do the webscraping is here. Note that R is not my best language, but I am using it because of the cool packages written for R to interface with Crossref (rcrossref, for obtaining publication DOIs) and for easy webscraping (rvest).

The code works by:

1. Pulling (from Crossref) details for all ESurf Discussion publications using the journal’s ISSN.
2. Going to every EsurfD page (following the DOI link).
3. Scraping the webpage for author, editor, and reviewer comments (see this helpful tutorial on using rvest).
4. Checking for descriptive words, for instance “Anonymous Comment #1”, to determine whether Reviewer 1 and/or Reviewer 2 were anonymous.
5. Checking whether a Reviewer 3 exists (to exclude those papers… I only want to deal with papers with 2 reviewers for this initial study).
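The anonymity check in step 4 boils down to pattern-matching comment titles. The original analysis is in R (linked above); this is a hypothetical Python sketch of the idea — the exact title strings on Copernicus discussion pages vary, so the patterns here are assumptions, not the actual code.

```python
import re

def classify_reviewer(comment_titles, reviewer_number):
    """Return 'anonymous', 'signed', or None for the given referee,
    based on comment titles scraped from a discussion page.
    Assumes Copernicus-style titles such as 'Anonymous Referee #1'."""
    anon = re.compile(rf"Anonymous (Referee|Comment) #{reviewer_number}")
    named = re.compile(rf"Referee #{reviewer_number}")
    for title in comment_titles:
        if anon.search(title):
            return "anonymous"
        if named.search(title):
            return "signed"
    return None  # no comment from this referee found

titles = ["Anonymous Referee #1", "J. Smith (Referee #2)"]
print(classify_reviewer(titles, 1))  # anonymous
print(classify_reviewer(titles, 2))  # signed
```

A real run would feed this function the comment titles extracted by the scraper for each paper, one paper at a time.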

I imagine some pathological cases in review comments may have slipped through this code, but a cursory check shows it captures the relevant information. After the code runs, I am left with 135 papers with 2 reviewers, for a total of 270 reviews. In total, 41% of reviews are signed — this matches previous reports, such as the 40% reported by Okal (2003) and the 40% reported by PeerJ.

• Reviewer 1 totals are 74 unsigned, 61 signed — 55% unsigned, 45% signed.
• For the 74 papers where Review 1 is unsigned, Reviewer 2 totals are 59 unsigned, 15 signed — 80% unsigned, 20% signed.
• For the 61 papers where Review 1 is signed, Reviewer 2 totals are 27 unsigned, 34 signed — 44% unsigned, 56% signed.
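As a sanity check on the arithmetic, the conditional signing rates can be recomputed directly from the raw counts above (a minimal Python sketch):

```python
# Reviewer 2 outcomes conditioned on Reviewer 1's choice,
# using the counts from the 135 two-reviewer papers tallied above.
r2_given_r1_unsigned = {"unsigned": 59, "signed": 15}
r2_given_r1_signed = {"unsigned": 27, "signed": 34}

def signed_pct(counts):
    """Percentage of Reviewer 2 reports that were signed."""
    return round(100 * counts["signed"] / (counts["signed"] + counts["unsigned"]))

print(signed_pct(r2_given_r1_unsigned))  # 20
print(signed_pct(r2_given_r1_signed))    # 56
```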

There is one clear confounding factor here, which is how positive/negative reviews impact the likelihood of signing a review (both for R1 and R2). I imagine referee suggestions to the editor (e.g., minor revisions, major revisions, reject) and/or text mining could provide some details. (I can think of a few other confounds beyond this one.) Furthermore, I would assume that since many (all?) journals from Copernicus/EGU have open review, this analysis could be scaled up…

# Peer review model — results part 2

Backstory of the model:

This model is based on networks, so I’ll use some of the language and techniques from the study of networks to analyze the data. The peer review model creates a directed and weighted network. In other words, the ‘scientists’ (nodes) are connected (via edges) to other scientists (other nodes). The connections (edges) have a direction (how ‘scientist A’ feels toward ‘B’) and a weight (e.g., −3, a negative feeling). The book-keeping for this model is an adjacency matrix:

$A=\begin{pmatrix}0&5&-1&0\\ 1&0&2&0\\ 0&-3&0&0\\ 0&0&1&0\end{pmatrix}$

where $A_{ij}$ denotes an edge from i to j with a given weight. In this model, it is the mood that scientist i has toward scientist j. (Some other texts use the reverse convention.)

A useful measurement for this sort of matrix is incoming and outgoing node strength. The outgoing strength of scientist i — how scientist i feels about all other scientists — can be denoted as:

$s_{i}^{out}=\sum_{j \neq i} A_{ij}$

and can be calculated by summing rows. The incoming strength of scientist i — how all other scientists feel about scientist i — can be denoted as:

$s_{i}^{in}=\sum_{j \neq i} A_{ji}$

and can be calculated by summing columns. (For reference, my previous post showed time series plots of the mean of incoming weights, similar to the strength metric $(s_{i}^{in})$ we are discussing here.)
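For the example matrix above, both strengths are just row and column sums. A Python/NumPy sketch (since the diagonal is zero by construction, the $j \neq i$ restriction reduces to a plain sum):

```python
import numpy as np

# Example adjacency matrix from above: A[i, j] is the mood
# scientist i has toward scientist j (diagonal entries are zero).
A = np.array([[0,  5, -1, 0],
              [1,  0,  2, 0],
              [0, -3,  0, 0],
              [0,  0,  1, 0]])

s_out = A.sum(axis=1)  # row sums: how i feels about all others
s_in = A.sum(axis=0)   # column sums: how all others feel about i

print(s_out)  # [ 4  3 -3  1]
print(s_in)   # [1 2 2 0]
```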

Signed reviewers can be polarizing — weights can quickly become very negative and/or very positive. So the strengths ($s_{i}^{in}$ and $s_{i}^{out}$) will be sums of extreme positives and negatives — this is not very descriptive, because the extremes can cancel to a strength near 0. Instead I want to look at the range of incoming and outgoing weights, or:

$R_{i}^{out}= \max\limits_{j \neq i} A_{ij} - \min\limits_{j \neq i} A_{ij}$ which denotes the maximum outgoing weight minus the minimum outgoing weight.

$R_{i}^{in}=\max\limits_{j \neq i} A_{ji} - \min\limits_{j \neq i} A_{ji}$ which denotes the maximum incoming weight minus the minimum incoming weight.
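Continuing the NumPy sketch, the ranges must exclude the diagonal (a scientist’s feeling toward themselves), so here I mask it with NaN before taking the max and min:

```python
import numpy as np

# Same example adjacency matrix as above.
A = np.array([[0,  5, -1, 0],
              [1,  0,  2, 0],
              [0, -3,  0, 0],
              [0,  0,  1, 0]], dtype=float)

# Replace diagonal entries with NaN so max/min run over j != i only.
off = np.where(np.eye(len(A), dtype=bool), np.nan, A)

R_out = np.nanmax(off, axis=1) - np.nanmin(off, axis=1)  # per-row range
R_in = np.nanmax(off, axis=0) - np.nanmin(off, axis=0)   # per-column range

print(R_out)  # [6. 2. 3. 1.]
print(R_in)   # [1. 8. 3. 0.]
```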

Now let’s look at some model results, $R_{i}^{out}$ and $R_{i}^{in}$, for each scientist.

• Both types of reviewers have similar $R_{i}^{out}$ — they tend to have a similar range in their opinions about the other scientists in the discipline.
• Signed reviewers tend to have a larger $R_{i}^{in}$ — the range of feelings that other scientists have toward them — compared to those who do not sign reviews. Scientists tend to either like or dislike signed reviewers more strongly than unsigned reviewers.

Some inspiration for this work comes from:

# a short break

Please excuse the short break for the next few weeks — I am out of the country on a research trip.

# The growing reference section of geomorphology articles

What is the mean number of references in a geomorphology journal article? Are reference sections growing (as the number of papers published increases)? I have previously looked at changes in reference section size in JGR-ES and in length-restricted articles (i.e., GRL and Geology), but here I am extending the analysis to the four geomorphology journals I commonly read:

You can see below that the mean reference section size is growing at a rate of ~2 additional references per year: