Last week I wrote about a set of AGU EOS articles from 2003 that focus on anonymity in peer review. A quote from one of the articles really stuck with me regarding the personal decision to sign reviews:
Okal (2003) states that, as an editor of GRL, ~40% of the reviews he sees are signed. As a reviewer, he signs two-thirds of his reviews. And as an author, half of the reviews he receives are signed. His experience suggests that:
“The above numbers — 40%; two-thirds; one-half — suggest that the community is divided, with no overwhelming majority in its attitude toward anonymous versus signed reviews. This diversity may indeed be precious and should be respected. Why not keep the system as it is now, leaving it to the individual reviewer to exercise a free decision regarding waiving anonymity?”
Over the course of the next few weeks I hope to build a fun little toy model of ‘peer reviewing’ agents to see if I can tease out something — is diversity in peer review behavior (re: signed vs blind) in some way ‘precious’?
The rules of the model are:
Each agent (scientist) is set to either sign or blind their reviews.
For each time step:
- Randomly pick the number of scientists (‘P’) out of ‘N’ total scientists who will publish a single paper
- Randomly assign ‘R’ reviewers for each paper
- Nobody can review their own paper
- Writing scientists can also review
- Scientists can do multiple reviews
- Each reviewer gives a random review score (good or bad)
- Reviews are returned to each writer and the writer’s ‘mood’ changes:
  - signed + reviews result in + feelings toward the reviewer
  - signed – reviews result in – feelings toward the reviewer
  - unsigned + reviews result in + feelings toward a random scientist
  - unsigned – reviews result in – feelings toward a random scientist
And we see how the feelings of the community (toward one another) develop through time.
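A minimal sketch of these rules in Python (the variable names, parameter values, and 50/50 good/bad odds are my own illustrative choices, not the actual project code):

```python
import random
from collections import defaultdict

def simulate(n_scientists=50, n_steps=100, n_reviewers=3,
             frac_signed=0.5, seed=0):
    """Run the toy peer-review model.

    Returns (feelings, signs), where feelings[(a, b)] is scientist a's
    cumulative feeling toward scientist b, and signs[i] is whether
    scientist i signs their reviews.
    """
    rng = random.Random(seed)
    # Each agent is permanently set to sign or blind their reviews.
    signs = [rng.random() < frac_signed for _ in range(n_scientists)]
    feelings = defaultdict(int)

    for _ in range(n_steps):
        # Randomly pick P scientists who each publish one paper this step.
        p = rng.randint(1, n_scientists)
        authors = rng.sample(range(n_scientists), p)
        for author in authors:
            # Assign R reviewers; nobody reviews their own paper, but
            # writing scientists can review, and one scientist may
            # review several papers in the same step.
            pool = [s for s in range(n_scientists) if s != author]
            reviewers = rng.sample(pool, n_reviewers)
            for reviewer in reviewers:
                good = rng.random() < 0.5          # random good/bad score
                delta = 1 if good else -1
                if signs[reviewer]:
                    target = reviewer              # feeling lands on the reviewer
                else:
                    target = rng.choice(pool)      # feeling lands on a random scientist
                feelings[(author, target)] += delta
    return feelings, signs
```

From the returned `feelings` dictionary you can then track how the community’s feelings toward one another develop through time, e.g. by rerunning with different `frac_signed` values.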
The beginning of the code is already up on Github. Feel free to contribute or give an opinion.
This summer I stumbled upon a cache of EOS newsletters from 2003. Among the pages was a series of comments and letters about anonymous review, specifically problems and possible solutions. It’s nice to know that we struggle with the same issues 14 years later.
The original article, written on July 1, 2003 by Beck (2003), focused on the rejection of a paper by two anonymous reviewers and an anonymous AE. After listing and discussing potential reasons that a reviewer and/or AE would prefer to remain anonymous, Beck ends by writing:
“The only reviews I remember that left me permanently angry were anonymous. There is far too much unpleasantness in the world already to needlessly introduce even a little bit more. Anonymous reviews are unnecessary, unacceptable, and should not be permitted.”
Strong statement! I have my own opinions about anonymity in peer review (I’m sure everyone does), but what is most interesting to me is the fact that this article produced such a large reaction — I can find 15 letters and comments published in EOS in response to Beck (2003) — compared to the rare comment-reply pairs in JGR-ES.
July 29th, 2003
- Roninove (2003) writes in to support Beck (2003), having written a letter about problems with anonymous reviews (back in 1990).
- Criss and Hofmeister (2003) suggest discounting anonymous reviews, and discuss the issues surrounding signed vs unsigned reviews for underrepresented groups.
Sept 23rd, 2003
- Geller (2003) writes in to suggest that AEs should always sign reviews because they often make the decision for the editor.
- Goff (2003) writes in to suggest that AEs should sign reviews, that AGU should encourage signed reviews, and that newer journals should require signed reviews.
- Walder (2003) writes in to suggest that AGU AEs should sign reviews and that we should collect data: reviewers should be asked why they choose to remain anonymous.
Sept 30th, 2003
- Forel (2003) is an ‘advocate’ for anonymous reviewing, but believes editors should not be anonymous.
- Fisher (2003) writes in to suggest double-blind reviewing.
- Savov (2003) writes that science should be “…discussed in the open air” and suggests that the paper, reviews, and reviewer names should all be published together.
- Okal (2003) writes that the current system should be preserved and personal preference (re: signed vs unsigned reviews) should be respected. Okal writes that this debate has been going on for decades with no clear solution:
“The debate on peer review has been going on for decades. It may be the worst possible system, but by and large it works. And to paraphrase Sir Winston Churchill, wait until you consider all the other ones….”
Dec 23rd, 2003
- The editors of JGR-Atmospheres respond in O’Dowd et al. (2003). They discuss the editorial process at the journal and highlight the role of anonymity for the AEs and reviewers.
Dec 30th, 2003
- Kirwan Jr. (2003) writes that peer reviews should not be signed because signing could be self-serving. Furthermore, authors should not speculate about the identity of their anonymous reviewers because of possible negative and counterproductive consequences.
- Wesolowski (2003) writes that finding reviewers is difficult enough without requiring the identification of reviewers, and forced signing of reviews may lead to overly positive reviews.
April 20th, 2004
- Armstrong (2004) discusses the possibility of multiple review stages, some with or without anonymity.
- Sturrock (2004) presents a ‘Code of Ethics’ for peer review.
April 27th, 2004
- Genereaux and Sen (2004) discuss the NSF proposal review process, specifically how proposers do not have an opportunity to respond to “Incorrect and Overly Negative Statements (IONS)”.
N.B. — There was an article on anonymous peer review in GSA Today by McBirney (2003) — here is a link to the issue — something must have been in the air.
This spring I taught an undergraduate Geomorphology class at Duke. For the last few weeks of class, I broke out my debris flow flume. I have written about this flume previously; it is described here on the Sediment Experimentalist Network site. Also posted is a slo-mo video of a typical debris flow.
Students planned and executed an experiment of their choosing — an example of a ‘Course-based Undergraduate Research Experience’ (CURE). Though there has been some work done with ‘scaled down’ debris flows (e.g., de Haas et al. 2015) there seemed to be lots of room for the students to do something new.
Both groups ended up investigating various mitigation measures for slowing or stopping debris flows. This involved 3D printing several pieces to act as mitigation structures, from solid walls of various sizes and angles:
…to plates with various densities of upright rods/sticks to function as tree/vegetation mimics:
Each group ended up writing up their work as a paper (data and plots included), and I’m happy to share them here:
- Paper 1 focused on solid walls
- Paper 2 focused on the ‘green infrastructure’ mimics.
Nature reported last week on the uptick in usage of Arduino and Raspberry Pi for research. The idea of building research tools with open source hardware has been covered before (see Pearce 2012 for an example), but this recent article had a nice plot of the number of papers per year that mention these boards (using PubMed and Scopus).
After the article last week, I wondered how many Geoscience articles actually use an Arduino or Raspberry Pi….
Using the Web of Science, there are fewer than 10 articles under the ‘Geosciences Multidisciplinary’, ‘Geology’, and ‘Geography Physical’ topics that use ‘Raspberry Pi’ or ‘Arduino’ in the title, keywords, or abstract. Not much uptake in the Earth sciences, I guess.
The articles that do use the Arduino are very neat, though: a system for geophone data acquisition, a microscope focus stacker, an earth flow monitoring tool, and temperature-sensing waders.
I have seen other Earth science research using these boards — by attending poster sessions at AGU that highlight low cost tech, and I have read about the Raspberry Shake, which could generate a host of papers in the future…
My interest here comes from dabbling with these two tools in the past. With the Arduino I have actually built a few things, including a primitive Optical Backscatter Sensor (OBS), a datalogger, and an ultrasonic distance sensor (see below; pic from 2014). I hope to get back to that dabbling some day…
Here I compiled the Open Access charges for journals that publish geomorphology research (i.e., Gold Open Access; Author Pays). I’m sure some are missing — let me know which publications I should add to this list.
Keep in mind that some journals have page charges even if the articles are not Open Access, some journals provide open access after a given time period, and other journals ONLY publish Open Access. Your institution may also have an agreement with a publisher about paying the fee (i.e., they will pay for you)…
I hope to periodically update this list. The data below were collected on March 26th, 2017.
| Journal | Cost (various currencies) |
| --- | --- |
| Earth and Space Science (AGU + Wiley) | |
| Earth’s Future (AGU + Wiley) | |
| GRL (AGU + Wiley) | |
| Water Resources Research (AGU + Wiley) | |
| JGR – Earth Surface (AGU + Wiley) | |
| Reviews of Geophysics (AGU + Wiley) | |
| ESurf (EGU + Copernicus) | €50–120 per journal page |
| Earth Surface Processes and Landforms (BSG + Wiley) | |
| Progress in Physical Geography (Sage) | |
| Marine Geology (Elsevier) | |
| Scientific Reports (Nature) | |
| Nature Communications (Nature) | |
| Zeitschrift für Geomorphologie (Schweizerbart) | €140 per article + €119 per published page |
Last semester I taught an undergraduate-level geomorphology class at UNC-Chapel Hill. It was a blast. In addition to reading lots of primary literature and editing Wikipedia, we conducted a class experiment. I built a small debris flow flume based on de Haas et al. (2015) and we did a few experiments. A description of the flume can be seen here on the Sediment Experimentalist Network site, and a slo-mo video of our debris flow can be seen here.
But what did we do with the flume? After watching the USGS debris flow videos and thinking about articles by John McPhee (one and two), the students decided to focus on how ‘baffles’ (obstructions in the outwash plain) can work as a debris flow mitigation strategy and modify debris flow runout (see an example of this type of research by Choi et al., 2014). The UNC students wrote up some preliminary results, and if you want more details (or the data), let me know… Eventually I will get it all up on figshare.
For now, here is a picture of our baby debris flow:
Wikipedia page views are immense. Editing Wikipedia to include more references to journals is one way to get more science into the public eye. Additionally, Wikipedia is a portal to peer-reviewed science. But how many Earth and Space science papers are actually cited in Wikipedia?
For this post, I’m focusing on articles published by AGU. From an earlier investigation, I found 1599 citations to AGU publications in Wikipedia. But how are these 1599 citations spread across the journals? Let’s look at works published in JGR-Planets, JGR-Biogeosciences, and JGR-Earth Surface because they have a similar number of publications per year — with 123, 196, and 126 articles published in 2016 (see the AGU publication stats). (Compare these numbers to the other 4 sections of JGR: ~400 articles in 2016 for Solid Earth and Oceans, and ~800 articles in 2016 for Space Physics and Atmospheres.)
A quick note on the data: I first downloaded all of the article records for a given journal from the Web of Science. Using the article DOIs, I used the rAltmetric package created by rOpenSci to find Wikipedia mentions listed in the Altmetric database. Note that this was done in Dec. 2016, and Wikipedia changes constantly, so treat these data as a snapshot.
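My workflow used the rAltmetric R package, but the equivalent per-DOI lookup in Python might look like the sketch below. The public details endpoint and the `cited_by_wikipedia_count` field name are my assumptions about the Altmetric API shape; check the current API docs before relying on them.

```python
import json
from urllib.request import urlopen
from urllib.error import HTTPError

API = "https://api.altmetric.com/v1/doi/"  # Altmetric public details endpoint

def wikipedia_mentions(record):
    """Pull the Wikipedia citation count out of one Altmetric record.

    The field name is an assumption based on Altmetric's details-page
    JSON; records without Wikipedia mentions may omit it entirely.
    """
    return record.get("cited_by_wikipedia_count", 0)

def fetch_mentions(doi):
    """Look up one DOI; Altmetric returns HTTP 404 for unknown DOIs."""
    try:
        with urlopen(API + doi) as resp:
            return wikipedia_mentions(json.load(resp))
    except HTTPError:
        return 0
```

Looping `fetch_mentions` over the DOIs exported from the Web of Science would reproduce the counts, with the same caveat that the result is only a snapshot of a constantly changing Wikipedia.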
The top panel is the percent of articles (published in a given year) that are referenced in Wikipedia. The bottom panel is the number of articles (published in a given year) referenced in Wikipedia. Also plotted is the data for GRL.
JGR-Planets steals the show here…
For # of articles cited, GRL does well too.
I’ll post results for the other 4 JGR sections in a future post. In the meantime:
- Here is an open dataset of scholarly citations in Wikipedia, from Wikipedia Research.
- Here is an early analysis of the issue of scholarly citations in Wikipedia.
- This type of analysis has also been done for the PLoS Journals.
- I wrote an article that compared monthly page views of relevant Wikipedia pages, my website, and one of my articles (the only one with publicly available article-level metrics) — Wikipedia page views are orders of magnitude higher.