
Rules for an agent-based peer review model

Last week I wrote about a set of AGU EOS articles from 2003 that focus on anonymity in peer review. A quote from one of the articles really stuck with me regarding the personal decision to sign reviews:

Okal (2003) states that, as an editor of GRL, ~40% of the reviews he sees are signed; as a reviewer, he signs 2/3 of his reviews; and as an author, 1/2 of the reviews he receives are signed. His experience suggests that:
“The above numbers — 40%; two-thirds; one-half — suggest that the community is divided, with no overwhelming majority in its attitude toward anonymous versus signed reviews. This diversity may indeed be precious and should be respected. Why not keep the system as it is now, leaving it to the individual reviewer to exercise a free decision regarding waiving anonymity?”

Over the course of the next few weeks I hope to build a fun little toy model of ‘peer reviewing’ agents to see if I can tease out something — is diversity in peer review behavior (re: signed vs blind) in some way ‘precious’?

The rules of the model are as follows (a minimal code sketch appears after the list):

Each agent (scientist) is set to either sign or blind their reviews.

For each time step:

  • Randomly pick the number of scientists (‘P’), out of ‘N’ total scientists, who will each publish a single paper
  • Randomly assign ‘R’ reviewers to each paper
    • Nobody can review their own paper
    • Publishing scientists can also review
    • A scientist can do multiple reviews
  • Each reviewer gives a random review score (good or bad)
  • Reviews are returned to each writer and writers ‘mood’ changes
    • signed + reviews result in + feelings toward the reviewer
    • signed – reviews result in – feelings toward the reviewer
    • unsigned + reviews result in + feelings toward a random scientist
    • unsigned – reviews result in – feelings toward a random scientist

And we see how the feelings of the community (toward one another) develop through time.
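To make the rules concrete, here is a minimal sketch of one time step in Python. The parameter values, variable names, and the 50/50 split between signers and blinders are placeholders of mine, not necessarily what the real code uses:

```python
import random

N = 100                 # total number of scientists
SIGN_FRACTION = 0.5     # fraction of agents set to sign their reviews
R = 3                   # reviewers assigned to each paper

# each agent is set to either sign (True) or blind (False) their reviews
signs = [random.random() < SIGN_FRACTION for _ in range(N)]

# mood[a][b] = how agent a currently feels about agent b (starts neutral)
mood = [[0] * N for _ in range(N)]

def time_step():
    P = random.randint(1, N)               # how many scientists publish
    writers = random.sample(range(N), P)
    for writer in writers:
        # anyone but the writer can review; publishing scientists can
        # review too, and one scientist may review several papers
        candidates = [s for s in range(N) if s != writer]
        for reviewer in random.sample(candidates, R):
            good = random.random() < 0.5   # random good/bad review score
            # a signed review points the writer's feelings at the actual
            # reviewer; an unsigned review points them at a random scientist
            target = reviewer if signs[reviewer] else random.choice(candidates)
            mood[writer][target] += 1 if good else -1

for _ in range(1000):
    time_step()
```

Tracking how the values in mood evolve is then the ‘feelings of the community’ developing through time.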

The beginning of the code is already up on GitHub. Feel free to contribute or give an opinion.

The AGU EOS ‘Anonymous Peer Review’ debate of 2003-2004

This summer I stumbled upon a cache of EOS newsletters from 2003. Among the pages was a series of comments and letters about anonymous review, specifically problems and possible solutions. It’s nice to know that we struggle with the same issues 14 years later.

The original article, written on July 1, 2003 by Beck (2003), focused on the rejection of a paper on the basis of two anonymous reviews and an anonymous AE. After listing and discussing potential reasons that a reviewer and/or AE would prefer to remain anonymous, Beck ends by writing:

“The only reviews I remember that left me permanently angry were anonymous. There is far too much unpleasantness in the world already to needlessly introduce even a little bit more. Anonymous reviews are unnecessary, unacceptable, and should not be permitted.”

Strong statement! I have my own opinions about anonymity in peer review (I’m sure everyone does), but what is most interesting to me is the fact that this article produced such a large reaction — I can find 15 letters and comments published in EOS in response to Beck (2003) — compared to the rare comment-reply pairs in JGR-ES.

July 29th, 2003

  • Roninove (2003) writes in to support Beck (2003), having written a letter about problems with anonymous reviews back in 1990.
  • Criss and Hofmeister (2003) suggest discounting anonymous reviews, and discuss the issues surrounding signed vs unsigned reviews for underrepresented groups.

Sept 23rd, 2003

  • Geller (2003) writes in to suggest that AEs should always sign reviews because they often make the decision for the editor.
  • Goff (2003) writes in to suggest that AEs should sign reviews, that AGU should encourage signed reviews, and that newer journals should require signed reviews.
  • Walder (2003) writes in to suggest that AGU AEs should sign reviews and that we should collect data: reviewers should be asked why they choose to remain anonymous.

Sept 30th, 2003

  • Forel (2003) is an ‘advocate’ for anonymous reviewing, but believes editors should not be anonymous.
  • Fisher (2003) writes in to suggest double-blind reviewing.
  • Savov (2003) writes that science should be “…discussed in the open air.” and suggests that the paper, reviews, and reviewer names should all be published together.
  • Okal (2003) writes that the current system should be preserved and personal preference (re: signed vs unsigned reviews) should be respected. Okal writes that this debate has been going on for decades with no clear solution:

“The debate on peer review has been going on for decades. It may be the worst possible system, but by and large it works. And to paraphrase Sir Winston Churchill, wait until you consider all the other ones….”

Dec 23rd, 2003

  • The editors of JGR-Atmospheres respond in O’Dowd et al. (2003). They discuss the editorial process in the journal and highlight the role of anonymity for the AEs and reviewers.

Dec 30th, 2003

  • Kirwan Jr. (2003) writes that peer reviews should not be signed because signing could be self-serving. Furthermore, authors should not speculate about who wrote their anonymous reviews because of possible negative and counterproductive consequences.
  • Wesolowski (2003) writes that finding reviewers is difficult enough without requiring the identification of reviewers, and forced signing of reviews may lead to overly positive reviews.

April 20th, 2004

  • Armstrong (2004) discusses the possibility of multiple review stages, some with and some without anonymity.
  • Sturrock (2004) presents a ‘Code of Ethics’ for peer review.

April 27th, 2004

  • Genereaux and Sen (2004) discuss the NSF proposal review process, specifically how proposers do not have an opportunity to respond to “Incorrect and Overly Negative Statements (IONS)”.


N.B. — There was an article on anonymous peer review in GSA Today by McBirney (2003) — here is a link to the issue — something must have been in the air. 

Debris Flow Experiments (Spring 2017)

This spring I taught an undergraduate Geomorphology class at Duke. For the last few weeks of class, I broke out my debris flow flume. I have written about this flume previously, and it is described here on the Sediment Experimentalist Network site. Also posted is a slo-mo video of a typical debris flow.

Students planned and executed an experiment of their choosing — an example of a ‘Course-based Undergraduate Research Experience’ (CURE). Though there has been some work done with ‘scaled down’ debris flows (e.g., de Haas et al. 2015), there seemed to be lots of room for the students to do something new.

Both groups ended up investigating various mitigation measures for slowing or stopping debris flows. This involved 3D printing several pieces to act as mitigation structures, from solid walls of various sizes and angles:


…to plates with various densities of upright rods/sticks to function as tree/vegetation mimics:


Each group ended up writing up their work as a paper (data and plots included), and I’m happy to share them here:

  • Paper 1 focused on the solid walls.
  • Paper 2 focused on the ‘green infrastructure’ mimics.

‘Sleeping Beauties’ of Geomorphology: a case from the American Journal of Science

Most papers in disciplinary geomorphology journals are cited at some point, but citations to papers do not always accrue immediately upon publication — ideas and papers might take time to be used by researchers and therefore cited. Extreme examples of delayed recognition (‘Sleeping Beauties’) — where papers receive no citations for long stretches of time only to receive a large, late burst in citations — have been identified and investigated previously.

Do geomorphology ‘Sleeping Beauties’ exist? Using the methods of Ke et al. (2015) to find and score ‘Sleeping Beauties’, it turns out that 9 out of the 20 most delayed papers in GSA Bulletin are focused on quantitative geomorphology.

What other papers show this interesting signature of delayed recognition?

I have looked in other journals and found a few neat examples, which I hope to chronicle in a series of posts. Today, I will look at an example from the American Journal of Science (AJS):

The AJS has been published since 1818 and has long been a geology venue. In January 2017 I downloaded the 500 most cited AJS articles from the Web of Science. I used the algorithm presented in Ke et al. (2015) to find the papers with the highest ‘delayed recognition’ score — a ranking of each paper’s citation time series based on the largest, latest peak (I urge you all to read Ke et al. (2015), which describes the method; a rough sketch of the score is below).
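For the curious, the score is straightforward to compute. Here is a minimal sketch of the ‘beauty coefficient’ B from Ke et al. (2015) as I read the paper; the function and variable names are mine, and the example time series is made up:

```python
# Sketch of the 'beauty coefficient' B of Ke et al. (2015), as I read it:
# compare each year's citations c_t against the straight line drawn from
# the citations at publication (c_0) to the peak citation year (c_tm).

def beauty_coefficient(citations):
    """citations[t] = citations received t years after publication."""
    t_m = max(range(len(citations)), key=lambda t: citations[t])  # peak year
    if t_m == 0:
        return 0.0  # cited most in year zero: no delayed recognition
    c_0, c_tm = citations[0], citations[t_m]
    slope = (c_tm - c_0) / t_m
    return sum((slope * t + c_0 - citations[t]) / max(1, citations[t])
               for t in range(t_m + 1))

# a made-up time series: ignored for decades, then rediscovered
print(beauty_coefficient([1, 0, 0, 0, 0, 0, 0, 0, 2, 10, 40]))
```

Papers whose citations hug that straight line score near zero; papers that sleep for years and then spike score high.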

The most delayed paper is about brachiopods, but I want to focus on research related to geomorphology, so let’s look at the 2nd most delayed paper:

W. W. Rubey (1933): Settling velocities of gravel, sand, and silt particles. Am J Sci, April 1, 1933, Series 5, Vol. 25, 325-338; doi:10.2475/ajs.s5-25.148.325

(n.b., settling velocity has a special place in my heart)

Rubey’s paper has a score that is similar to the papers from GSA Bulletin. Here is the citation time series for the Rubey paper:

[Figure: citation time series for Rubey (1933)]

So the natural question is — what happened to cause this 2014 burst of citations? As far as I can tell (from looking at the papers that cited Rubey), nothing in particular… Most papers that cite Rubey are focused on typical sediment transport questions. A close read of all the citing papers would be needed to figure out what is going on here, if there is some ‘signal’. Not a satisfying answer, and I apologize — leave a comment if you have an idea and I’ll update the post if I find anything out.

Arduino and Raspberry Pi in Geoscience research

Nature reported last week on the uptick in usage of Arduino and Raspberry Pi for research. The idea of building research tools with open source hardware has been covered before (see Pearce 2012 for an example), but this recent article had a nice plot of the number of papers per year that mention these boards (using PubMed and Scopus).

After the article last week, I wondered how many Geoscience articles actually use an Arduino or Raspberry Pi….

Using the Web of Science, there are fewer than 10 articles under the ‘Geosciences Multidisciplinary’, ‘Geology’, and ‘Geography Physical’ topics that use the term ‘Raspberry Pi’ or ‘Arduino’ in the title, keywords, or abstract. Not much uptake in the Earth sciences, I guess.

The articles that do use an Arduino are very neat, though, such as a system for geophone data acquisition, a microscope focus stacker, an earth flow monitoring tool, and temperature-sensing waders.

I have seen other Earth science research using these boards by attending poster sessions at AGU that highlight low-cost tech, and I have read about the Raspberry Shake, which could generate a host of papers in the future…

My interest here comes from dabbling with these two tools in the past. With the Arduino I have actually built a few things, including a primitive Optical Backscatter Sensor (OBS), a datalogger, and an ultrasonic distance sensor (see below; pic from 2014). I hope to get back to that dabbling some day…

[Photo from 2014: the ultrasonic distance sensor build]
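As an illustration of how low the barrier to entry is, here is a hypothetical sketch of timing an HC-SR04 ultrasonic ping with the RPi.GPIO library on a Raspberry Pi. The pin numbers and wiring are assumptions, and this is not the code from my build (which ran on an Arduino):

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # BCM pin numbers; assumed wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # a 10 microsecond pulse on TRIG starts a measurement
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)
    # ECHO stays high for as long as the ping is in flight
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # speed of sound ~343 m/s; halve for the round trip
    return (end - start) * 34300 / 2

print(f"{distance_cm():.1f} cm")
GPIO.cleanup()
```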

US East Coast foredune grasses — quantifying the abundance of literature

Two grasses tend to cover much of the coastal foredunes of the US Atlantic coast. North of the North Carolina/Virginia area, foredunes are often covered in Ammophila breviligulata (American Beachgrass). South of the NC/VA area, foredunes are often covered in Uniola paniculata (Sea Oats). After my look at how much is written about ‘Coastal Dunes’, I wanted to look at how much is written about these two species. I searched for both of these species — separately — using the Web of Science in early March 2017. Each search was done as a ‘topic’ search, so responses come from paper titles, abstracts, and keywords.

Various other plants are present on the shifting sands of East coast foredunes, such as Panicum amarum (Bitter Panic Grass), Spartina patens (Saltmeadow Cordgrass), and Iva imbricata (Dune-marsh elder), to name a few. I included P. amarum in this analysis just for fun.

Shown below is the number of papers written about each species in 5-year bins.

[Figure: number of papers per 5-year bin for A. breviligulata, U. paniculata, and P. amarum]
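The binning behind a plot like this is simple; below is a sketch assuming a list of publication years pulled from a Web of Science export (the years shown are made-up example data):

```python
from collections import Counter

# publication years pulled from a Web of Science export (made-up data)
years = [1987, 1991, 1993, 1994, 1998, 2003, 2004, 2011, 2012, 2016]

# map each year to the start of its 5-year bin, e.g. 1993 -> 1990
bins = Counter(year - year % 5 for year in years)

for start in sorted(bins):
    print(f"{start}-{start + 4}: {bins[start]} papers")
```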

A. breviligulata also grows along the shores of the US Great Lakes and the US West coast — I would guess this is the cause of the dominance of A. breviligulata studies.

  • The ratio of papers per 5-year period for A. breviligulata : U. paniculata : P. amarum is roughly 5:3:1.
  • The ratio of article sizes (measured in bytes) on Wikipedia for each of the species is currently 3:2:1.
  • I keep wondering if the ratio of papers about the species reflects the ratio of total shoreline covered by the species… or perhaps the ratio of some other abundance metric…

I have a paper in review about some of the geomorphic consequences of these different foredune species.

Open Access charges for journals that publish geomorphology research

Here I compiled the Open Access charges for journals that publish geomorphology research (i.e., Gold Open Access; Author Pays). I’m sure some are missing — let me know which publications I should add to this list.

Keep in mind that some journals have page charges even if the articles are not Open Access, some journals provide open access after a given time period, and other journals ONLY publish Open Access. Your institution may also have an agreement with a publisher about paying the fee (i.e., they will pay for you)…

I hope to update this list periodically. The data below was collected on March 26th, 2017.

Journal | Cost (various currencies)
Earth and Space Science (AGU + Wiley) | $1800
Earth’s Future (AGU + Wiley) | $1800
GRL (AGU + Wiley) | $2500
Water Resources Research (AGU + Wiley) | $3500
JGR – Earth Surface (AGU + Wiley) | $3500
Reviews of Geophysics (AGU + Wiley) | $3500
ESurf (EGU + Copernicus) | €50-120 per journal page
GSA Journals | $2500
Geomorphology (Elsevier) | $3300
Earth Surface Processes and Landforms (BSG + Wiley) | $4200
Progress in Physical Geography (Sage) | $3000
Marine Geology (Elsevier) | $3300
PLoS ONE | $1495
Scientific Reports (Nature) | $1675
PNAS | $1100-1450
Nature Communications (Nature) | $5200
Zeitschrift für Geomorphologie (Schweizerbart) | €140 per article + €119 per published page