Category Archives: random thoughts

My Time at CSDMS 2019


(This post originally appeared on the Coast and Ocean Collective Blog)

In May I went to my first annual meeting of CSDMS — the Community Surface Dynamics Modeling System. It was great to see old friends and meet new ones.

CSDMS is involved in a range of different projects and provides a suite of different services to the earth surface processes modeling community. You might know about CSDMS from its model repository (with metadata and links to source code) and the handy tools developed by CSDMS to link models together. For more background on CSDMS, check out their webpage.

One nice aspect of CSDMS is that the keynotes and panels are recorded and put on YouTube, and many poster presenters upload PDFs of their posters. I have spent a few hours skimming through these videos and PDFs from past meetings — lots of interesting ideas.

The annual meeting theme this year was ‘Bridging Boundaries’, and there was a range of interesting talks, posters, clinics, breakout sessions, and panels. I want to mention just a few highlights from those 3 packed days.

  • I really enjoyed the wide range of keynotes and the two panel discussions (all recorded and posted online).
  • A real highlight for me was Dan Buscombe’s deep learning clinic. Dan walked us through a comprehensive Jupyter notebook based on his work on pixel-scale image classification. It was great to hear Dan explain his workflow, and it was great to meet him in person. I urge you to check out his work!
  • There were too many amazing posters to cover in one post. I recommend scrolling through the abstracts and poster PDFs online.
  • I live-tweeted the 3rd day through the CSDMS and AGU EPSP twitter accounts. This was really fun and I’m grateful for the opportunity from the AGU EPSP social media team.
  • I am very grateful to CSDMS for inviting me to give a keynote this year — it was exciting to share my ideas with such a talented group of people. My talk — video, slides — focused on ML work that I have done with the Coast and Ocean Collective (and others), specifically work on swash, runup, ‘hybrid’ models, and the ML review paper that was just published.
  • Lastly, I ate a lot of (good) pizza.

Twitter record of #AGU17

I missed the 2017 Fall AGU meeting, but I did follow along on twitter. However, the coverage was spotty — some sessions were mentioned, some not at all. From this experience I kept wondering about the digital traces of the meeting on twitter. Lo and behold, at the beginning of this year I saw a tweet from Dr. Christina K. Pikas (@cpikas) announcing a dataset of #AGU17 tweets.

So let’s look at this awesome dataset that Dr. Pikas collected and published on figshare. First, this data was collected using TAGS, and contains tweets from Nov. 4th, 2017 to Jan. 4th, 2018 that used the hashtag #AGU17. There are a total of 31,909 tweets in this dataset. In this post I am subsetting the data to look only at the meeting (with a 1 day buffer, so Sunday Dec. 10, 2017 to Saturday Dec. 16, 2017) — a total of 25,531 tweets during the 7 days:

[Figure: hourly #AGU17 tweet counts during the meeting week]
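As an aside, the subsetting and hourly binning only take a few lines. Here is a minimal sketch in Julia — the file name, column names, and timestamp format are my assumptions here, not the actual TAGS schema (my real code is linked at the end of this post):

```julia
# A rough sketch, not the actual analysis code. Assumes the TAGS archive was
# saved as "agu17_tweets.csv" with a `time` column formatted as
# "yyyy-mm-dd HH:MM:SS" (the raw TAGS export may use a different format).
using CSV, DataFrames, Dates

df = CSV.read("agu17_tweets.csv", DataFrame)
df.dt = DateTime.(df.time, dateformat"yyyy-mm-dd HH:MM:SS")

# Keep only the meeting week (Sunday Dec. 10 through Saturday Dec. 16, 2017).
week = filter(:dt => t -> Date(2017, 12, 10) <= Date(t) <= Date(2017, 12, 16), df)

# Bin tweets by hour and count.
week.hour = floor.(week.dt, Hour(1))
hourly = combine(groupby(week, :hour), nrow => :tweets)
```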

I noticed:

  • Twitter activity decays through the week (fatigue? do most people just tweet their arrival? Daily attendance variations?)
  • There is a noticeable lunch break on M, W, Th, and F
  • Each day twitter activity starts suddenly, but has a slow(er) decay at the end of the day (late night activities?)

Retweets account for 44% of the 25,531 tweets during the meeting. Removing RTs yields an almost identical plot, but a small peak appears at the end of each day (pre-bedtime tweets?):

[Figure: hourly #AGU17 tweet counts, retweets removed]
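Filtering out the retweets is a one-liner, assuming the archive has a `text` column and that retweet text starts with “RT @” (both assumptions about the export):

```julia
# Drop retweets before re-counting by hour (same caveats as the sketch above).
no_rt = filter(:text => t -> !startswith(t, "RT @"), week)
```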

Lastly, the biggest #AGU17 twitter user is @theAGU (by far), which sent 1063 tweets during the week. Here is the time series with only @theAGU tweets:

[Figure: hourly tweet counts from @theAGU only]

I see the lunch break, but not as many late nights for the organization.

Thanks @cpikas for collecting and publishing the data! It is available on figshare.

My code is on GitHub here.

Nonlinear Dynamics and Geomorphology

This is a list of geomorphology papers that map onto the chapter headings of Strogatz — ‘Nonlinear Dynamics and Chaos’. This is a work in progress — some headings are left blank because I can’t find concrete examples (e.g., strange attractors), while others remain blank because there are too many examples (e.g., fractals). I envision this list could be used when teaching or discussing nonlinear dynamics in a geomorphology setting.

  • Part I: One-Dimensional Flows
    • Ch. 2: Flows on a Line
    • Ch. 3: Bifurcations
      • Fagherazzi, S., Carniello, L., D’Alpaos, L., & Defina, A. (2006). Critical bifurcation of shallow microtidal landforms in tidal flats and salt marshes. Proceedings of the National Academy of Sciences, 103(22), 8337-8341. 10.1073/pnas.0508379103
      • Anderson, R. S. (2002). Modeling the tor-dotted crests, bedrock edges, and parabolic profiles of high alpine surfaces of the Wind River Range, Wyoming. Geomorphology, 46(1), 35-58. 10.1016/S0169-555X(02)00053-3
      • Pelak, N. F., Parolari, A. J., & Porporato, A. (2016). Bistable plant–soil dynamics and biogenic controls on the soil production function. Earth Surface Processes and Landforms, 41(8), 1011-1017. 10.1002/esp.3878
      • Yizhaq, H., Ashkenazy, Y., & Tsoar, H. (2007). Why do active and stabilized dunes coexist under the same climatic conditions?. Physical Review Letters, 98(18), 188001. 10.1103/PhysRevLett.98.188001
      • Yizhaq, H., Ashkenazy, Y., & Tsoar, H. (2009). Sand dune dynamics and climate change: A modeling approach. Journal of Geophysical Research: Earth Surface, 114(F1). 10.1029/2008JF001138
      • Bel, G., & Ashkenazy, Y. (2014). The effects of psammophilous plants on sand dune dynamics. Journal of Geophysical Research: Earth Surface, 119(7), 1636-1650. 10.1002/2014JF003170
      • Goldstein, E.B., and L.J. Moore, (2016) Stability and bistability in a one-dimensional model of coastal foredune height, J. Geophys. Res. Earth Surf., 121, 964-977, doi: 10.1002/2015JF003783
    • Ch. 4: Flows on a Circle
  • Part II: Two-Dimensional Flows
    • Ch. 5: Linear Systems
      • Plant, N. G., Todd Holland, K., & Holman, R. A. (2006). A dynamical attractor governs beach response to storms. Geophysical Research Letters, 33(17). 10.1029/2006GL027105
    • Ch. 6: Phase Plane
      • Marani, M., D’Alpaos, A., Lanzoni, S., Carniello, L., & Rinaldo, A. (2007). Biologically‐controlled multiple equilibria of tidal landforms and the fate of the Venice lagoon. Geophysical Research Letters, 34(11). 10.1029/2007GL030178
      • Marani, M., D’Alpaos, A., Lanzoni, S., Carniello, L., & Rinaldo, A. (2010). The importance of being coupled: Stable states and catastrophic shifts in tidal biomorphodynamics. Journal of Geophysical Research: Earth Surface, 115(F4). 10.1029/2009JF001600
      • Stark, C. P., & Passalacqua, P. (2014). A dynamical system model of eco‐geomorphic response to landslide disturbance. Water Resources Research, 50(10), 8216-8226. 10.1002/2013WR014810
      • Stark, C. P. (2006), A self-regulating model of bedrock river channel geometry, Geophys. Res. Lett., 33, L04402, doi:10.1029/2005GL023193.
      • Limber, P. W., A.B. Murray, P. N. Adams and E.B. Goldstein, (2014), Unraveling the dynamics that scale cross-shore headland amplitude on rocky coastlines, Part 1: Model Development, Journal of Geophysical Research: Earth Surface, 119, doi: 10.1002/2013JF002950
      • Limber, P. W., & Murray, A. B. (2014). Unraveling the dynamics that scale cross‐shore headland relief on rocky coastlines: 2. Model predictions and initial tests. Journal of Geophysical Research: Earth Surface, 119(4), 874-891. 10.1002/2013JF002978
      • Mariotti, G., & Fagherazzi, S. (2013). Critical width of tidal flats triggers marsh collapse in the absence of sea-level rise. Proceedings of the National Academy of Sciences, 110(14), 5353-5356. 10.1073/pnas.1219600110
    • Ch. 7: Limit Cycles
      • Stark, C. P. (2010). Oscillatory motion of drainage divides. Geophysical Research Letters, 37(4). 10.1029/2009GL040851
    • Ch. 8: Bifurcations revisited
      • Mariotti, G., & Fagherazzi, S. (2013). A two‐point dynamic model for the coupled evolution of channels and tidal flats. Journal of Geophysical Research: Earth Surface, 118(3), 1387-1399. 10.1002/jgrf.20070
  • Part III: Chaos
    • Ch. 9: Lorenz Equations
    • Ch. 10: One-Dimensional Maps
      • Goldstein, E.B., and L.J. Moore, (2016) Stability and bistability in a one-dimensional model of coastal foredune height, J. Geophys. Res. Earth Surf., 121, 964-977, doi: 10.1002/2015JF003783
    • Ch. 11: Fractals
      • There are too many papers/books/issues to discuss here…
    • Ch. 12: Strange Attractors

The AGU EOS ‘Editorial Practices’ discussion of 1984

On May 15 1984, Russell and Reiff published a (jokey) flow chart of the AGU editorial and peer review process with several time delay terms and a ‘counting’ step for the multiple revisions. This set off 6 responses in EOS, similar to the episode in 2003-2004.

[Figure: the Russell and Reiff (1984) editorial flow chart]

  1. On Oct 23, 1984, Baum wrote in to discuss how peer review tended to filter out controversial new ideas. Baum recommended that authors be allowed to publish controversial new ideas even if reviewers protested, but reviewers should also be allowed to publish their criticisms. In addition Baum offered some mathematical changes to the Russell and Reiff flow chart.
  2. Dessler also wrote in on Oct 23, 1984, remarking that referees are often named and thanked by the editor or author, so anonymity is imperfect. As a result, reviewers may be more wary of supporting controversial ideas. Dessler also suggests that Comment—Reply pairs should be published more often (I have written about these in JGR-ES).
  3. On Dec. 25, 1984, Sonnerup (the editor of JGR-Space Physics) wrote to EOS in support of the idea that peer review should permit new and unorthodox ideas. Sonnerup also provided additional details regarding the review process at JGR-Space Physics.
  4. On Feb 19, 1985, Walker and van der Voo wrote in to EOS to discuss the editorial process at GRL. Choice quote: “Because of the importance attached to prompt publication in GRL we will generally use only one reviewer for each paper, communicating with this reviewer, when necessary, by telephone or telemail. More reviewers are used only when a paper seems likely to be particularly controversial or is otherwise difficult to deal with.”
  5. Baker wrote in on April 25, 1985 to suggest that JGR collect the rejected papers and publish them. Baker stated, in jest, that there is likely a “large body of unpublished papers out there which have been rejected by Neanderthal referees. I say let’s do something about it! I suggest that all of these brilliant, creative, earthshaking papers be collected into a special JGR issue each year.”
  6. Murdoch wrote in on March 10, 1987 to suggest that abstracts of rejected papers be published. If a scientist wanted to see the rejected paper, then the author could provide the paper AND the critical reviews.

 

These papers just highlight the role of editors, something still missing from my peer review agent model (pointed out by a commenter/Jazz legend).

Retaliation in the Peer Review Model

(The full motivation, rule set, and previous results for this model are collected here)

Today I am adding a new rule to my toy peer review model. Suppose some fraction of the population is set to ‘retaliate’ when called upon to be a reviewer. Specifically, these agents assign review scores based on their feelings (positive or negative) toward the author. This is an example of the biases that might influence a reviewer’s decision (e.g., Lee et al., 2012).

So the new rule is:

  • If a ‘retaliator’ is assigned a review, and they feel positively or negatively toward the author, the review is positive or negative, respectively (overriding a random review).

(n.b.: A gentler version of this rule could instead focus solely on ‘cliques’ — if a reviewer feels positively toward the author, the review is positive; if the reviewer feels negatively, the review is random.)
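In code, the new rule is just a small branch at review time. A minimal sketch (function and variable names are mine, not from the actual model code; A is the mood adjacency matrix described in the results posts below):

```julia
# Sketch of the retaliator rule. A[i, j] holds scientist i's feeling toward
# scientist j; the names here are illustrative, not from the real model.
function review_score(A, reviewer, author, is_retaliator::Bool)
    if is_retaliator && A[reviewer, author] != 0
        # Retaliators let their feelings set the score: positive feelings
        # give a good (+1) review, negative feelings a bad (-1) review.
        return sign(A[reviewer, author])
    end
    # Everyone else (and retaliators with neutral feelings) reviews at random.
    return rand([-1, 1])
end
```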

The issue is that there are now 4 types of people in the model:

  • Those who sign reviews, and do not retaliate
  • Those who sign reviews, and retaliate
  • Those who do not sign reviews, and do not retaliate
  • Those who do not sign reviews, and retaliate

Again I will use the range of incoming and outgoing node weights to visualize model results. As a reminder:

R_{i}^{in} is the maximum incoming weight minus the minimum incoming weight. This represents the range of feelings all other scientists have about scientist i.

R_{i}^{out} is the maximum outgoing weight minus the minimum outgoing weight. This represents the range of feelings scientist i has about all other scientists in the discipline.

So here are the results with 30% of all scientists being ‘retaliators’.

[Figure: R_{i}^{in} vs. R_{i}^{out} with 30% of scientists as retaliators]

  • Compared to the previous results, the same trends hold: R_{i}^{in} is larger for signed reviewers (blue), and R_{i}^{out} is roughly the same for signed vs. unsigned reviewers. (The ranges differ from the previous results because of a change in the number of model timesteps.)
  • Unsigned retaliators (empty orange markers) are similar to unsigned non-retaliators. If you never sign reviews, no author will end up knowing that you are a retaliator (the editor is a different story).
  • Signed retaliators (empty blue markers) have a large Rin — they are polarizing figures. Authors are either on the good side of these people (they are friends) or on the bad side (they are enemies).

Peer review model — results part 2

Backstory of the model:

This model is based on networks, so I’ll use some of the language and techniques from the study of networks to analyze the data. This peer review model creates a directed and weighted network. In other words, the ‘scientists’ (nodes) are connected (via edges) to other scientists (other nodes). The connections (edges) have a direction (how ‘scientist A’ feels toward ‘scientist B’) and a weight (e.g., -3, a negative feeling). The book-keeping for this model is an adjacency matrix:

A=\begin{pmatrix}0&5&-1&0\\ 1&0&2&0\\ 0&-3&0&0\\ 0&0&1&0\end{pmatrix}

Where A_{ij} denotes an edge from i to j with a given weight. In this model, it is the mood that scientist i has toward scientist j. (Some other texts use the reverse convention.)

A measurement for this sort of matrix is incoming and outgoing node strength. The outgoing strength of scientist i — how scientist i feels about all other scientists — can be denoted as:

s_{i}^{out}=\sum_{j \neq i} A_{ij}

And can be calculated by summing rows. The incoming strength of scientist i — how all other scientists feel about scientist i — can be denoted as:

s_{i}^{in}=\sum_{j \neq i} A_{ji}

And can be calculated by summing columns. (For reference, my previous post showed time series plots of the mean of the incoming weights — similar to the incoming strength s_{i}^{in} we are discussing here.)

Signed reviewers can be polarizing — weights can quickly become very negative and/or very positive. So the strengths (s_{i}^{in} and s_{i}^{out}) will be sums of extreme positives and negatives — not very descriptive, because the positives and negatives can cancel toward 0 strength. Instead I want to look at the range of incoming and outgoing weights:

R_{i}^{out}= \max\limits_{j \neq i} A_{ij} - \min\limits_{j \neq i} A_{ij} which denotes the maximum outgoing weight minus the minimum outgoing weight.

R_{i}^{in}=\max\limits_{j \neq i} A_{ji} - \min\limits_{j \neq i} A_{ji} which denotes the maximum incoming weight minus the minimum incoming weight.
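As a concrete sketch, here is how all four quantities could be computed from the example adjacency matrix above (plain Julia; the helper name is mine):

```julia
# Strengths and ranges from the example adjacency matrix above. Row i holds
# scientist i's outgoing feelings; column i holds the incoming ones.
A = [ 0  5 -1  0
      1  0  2  0
      0 -3  0  0
      0  0  1  0 ]

# Helper: row or column i with the diagonal entry removed (the j != i above).
offdiag(v, i) = [v[j] for j in eachindex(v) if j != i]

s_out = [sum(offdiag(A[i, :], i)) for i in axes(A, 1)]  # row sums
s_in  = [sum(offdiag(A[:, i], i)) for i in axes(A, 2)]  # column sums

R_out = [maximum(offdiag(A[i, :], i)) - minimum(offdiag(A[i, :], i)) for i in axes(A, 1)]
R_in  = [maximum(offdiag(A[:, i], i)) - minimum(offdiag(A[:, i], i)) for i in axes(A, 2)]
```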

Now let’s look at some model results — R_{i}^{out} and R_{i}^{in} for each scientist:

[Figure: R_{i}^{in} vs. R_{i}^{out} for signed and unsigned reviewers]

  • Both types of reviewers have similar R_{i}^{out} — they tend to have a similar range in their opinions about the scientists in the discipline.
  • Signed reviewers tend to have a larger R_{i}^{in} — the range of feelings that other scientists have toward the signed reviewers — compared to those who do not sign reviews. Scientists tend to either like or dislike signed reviewers more strongly than unsigned reviewers.

An added feedback mechanism is coming….

Some inspiration for this work comes from:

Initial results from the agent-based Peer Review ‘mood’ model

Today, the first results from my Peer Review agent model (description of the model and code). I’m going to build this model slowly, week by week, adding complexity (a good way to practice writing code in Julia).

On to the model. For these results, before the model runs, each scientist is set to sign or not sign their reviews (a roughly 50-50 split). Each paper submitted gets 2 reviews. Reviews can either be bad (-1) or good (+1). During each timestep, all reviews are randomly assigned to be good or bad (split 50-50 for the entire pool of reviewers). Remember that these good and bad reviews are returned to each writer, and each writer’s feelings toward the reviewers therefore change according to these rules:

  • signed good (+1) reviews result in +1 feelings toward the reviewer
  • signed bad (–1) reviews result in –1 feelings toward the reviewer
  • unsigned good (+1) reviews result in +1 feelings toward a random scientist
  • unsigned bad (–1) reviews result in –1 feelings toward a random scientist

Note that the model at this point is some modified version of a random walk.

Now, some results. For 10,000 time steps I am tracking the collective feeling of scientists toward one another — in other words, these are plots of the average feeling of all scientists toward a specific signed or unsigned reviewer. Signed reviewers (50) are in the top panel, and unsigned reviewers (49) are in the bottom panel. Each line represents the average mood toward a single scientist (signed or unsigned reviewer).

[Figure: time series of the mean feeling toward each signed (top panel) and unsigned (bottom panel) reviewer]
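For reference, each line in the plot tracks (roughly) the following quantity through time — a sketch, with A as the mood adjacency matrix described above and an illustrative function name:

```julia
# Mean feeling of all scientists toward scientist j, excluding the diagonal
# self-entry (which stays at zero). Recorded every timestep to build the
# time series above.
mean_feeling_toward(A, j) = sum(A[:, j]) / (size(A, 1) - 1)
```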

Signed reviewers are more polarizing — there is wider dispersion of emotions toward signed reviewers. This seems obvious: they get credit for both positive and negative reviews.

When review scores are random (between -1 and +1) and the change in feeling is random (equal to the review score), the results are the same as above, but with less difference (less spread in feelings toward signed reviewers).

There is no feedback mechanism in the model right now — reviewers don’t take into account their feelings for authors. This will change in the next version….

Rules for an agent-based peer review model

Last week I wrote about a set of AGU EOS articles from 2003 that focus on anonymity in peer review. A quote from one of the articles really stuck with me regarding the personal decision to sign reviews:

Okal (2003) states that, as an editor of GRL, ~40% of the reviews he sees are signed. As a reviewer, he signs 2/3 of his reviews. And as an author, 1/2 of the reviews he receives are signed. His experience suggests that:
“The above numbers — 40%; two-thirds; one-half — suggest that the community is divided, with no overwhelming majority in its attitude toward anonymous versus signed reviews. This diversity may indeed be precious and should be respected. Why not keep the system as it is now, leaving it to the individual reviewer to exercise a free decision regarding waiving anonymity?”

Over the course of the next few weeks I hope to build a fun little toy model of ‘peer reviewing’ agents to see if I can tease out something — is diversity in peer review behavior (re: signed vs. blind) in some way ‘precious’?

The rules of the model are:

Each agent (scientist) is set to either sign or blind their reviews.

For each time step:

  • Randomly pick ‘P’ scientists (out of ‘N’ total) who will each publish a single paper
  • Randomly assign ‘R’ reviewers for each paper
    • Nobody can review their own paper
    • Writing scientists can also review
    • Scientists can do multiple reviews
  • Each reviewer gives a random review score (good or bad)
  • Reviews are returned to each writer and each writer’s ‘mood’ changes
    • signed + reviews result in + feelings toward the reviewer
    • signed – reviews result in – feelings toward the reviewer
    • unsigned + reviews result in + feelings toward a random scientist
    • unsigned – reviews result in – feelings toward a random scientist

And we see how the feelings of the community (toward one another) develop through time.
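To make the loop concrete, here is a sketch of a single timestep in Julia (the names and default parameters are illustrative; the in-progress code linked below is the real thing):

```julia
using Random

# One timestep of the toy model. A is the N x N mood matrix
# (A[i, j] = scientist i's feeling toward scientist j), and signs[r] is
# true if scientist r signs their reviews.
function timestep!(A, signs; P = 10, R = 2)
    N = size(A, 1)
    for w in randperm(N)[1:P]              # P scientists each publish one paper
        pool = [s for s in 1:N if s != w]  # nobody reviews their own paper
        for r in shuffle(pool)[1:R]        # R distinct reviewers for this paper
            score = rand([-1, 1])          # a random good (+1) or bad (-1) review
            if signs[r]
                A[w, r] += score           # signed: feeling change targets the reviewer
            else
                A[w, rand(pool)] += score  # unsigned: targets a random scientist
            end
        end
    end
    return A
end
```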

The beginning of the code is already up on GitHub. Feel free to contribute or give an opinion.

The AGU EOS ‘Anonymous Peer Review’ debate of 2003-2004

This summer I stumbled upon a cache of EOS newsletters from 2003. Among the pages was a series of comments and letters about anonymous review, specifically problems and possible solutions. It’s nice to know that we struggle with the same issues 14 years later.

The original article, written on July 1, 2003 by Beck (2003), focused on the rejection of a paper by 2 anonymous reviewers and an anonymous AE. After listing and discussing potential reasons that a reviewer and/or AE would prefer to remain anonymous, Beck ends by writing:

“The only reviews I remember that left me permanently angry were anonymous. There is far too much unpleasantness in the world already to needlessly introduce even a little bit more. Anonymous reviews are unnecessary, unacceptable, and should not be permitted.”

Strong statement! I have my own opinions about anonymity in peer review (I’m sure everyone does), but what is most interesting to me is the fact that this article produced such a large reaction — I can find 15 letters and comments published in EOS in response to Beck (2003) — compared to the rare comment-reply pairs in JGR-ES.

On July 29th, 2003

  • Roninove (2003) writes in to support Beck (2003), having written a letter about problems with anonymous reviews (back in 1990).
  • Criss and Hofmeister (2003) suggest discounting anonymous reviews, and discuss the issues surrounding signed vs unsigned reviews for underrepresented groups.

On Sept 23rd, 2003

  • Geller (2003) writes in to suggest that AEs should always sign reviews because they often make the decision for the editor.
  • Goff (2003) writes in to suggest that AEs should sign reviews, that AGU should encourage signed reviews, and that newer journals should require signed reviews.
  • Walder (2003) writes in to suggest that AGU AEs should sign reviews and that we should collect data — reviewers should be asked why they choose to remain anonymous.

Sept 30th, 2003

  • Forel (2003) is an ‘advocate’ for anonymous reviewing, but believes editors should not be anonymous.
  • Fisher (2003) writes in to suggest double-blind reviewing.
  • Savov (2003) writes that science should be “…discussed in the open air” and suggests that the paper, reviews, and reviewer names should all be published together.
  • Okal (2003) writes that the current system should be preserved and personal preference (re: signed vs unsigned reviews) should be respected. Okal writes that this debate has been going on for decades with no clear solution:

“The debate on peer review has been going on for decades. It may be the worst possible system, but by and large it works. And to paraphrase Sir Winston Churchill, wait until you consider all the other ones….”

Dec 23rd, 2003

  • The editors of JGR-Atmospheres respond in O’Dowd et al. (2003). They discuss the editorial process at the journal and highlight the role of anonymity for the AEs and reviewers.

Dec 30th, 2003

  • Kirwan Jr. (2003) writes that peer reviews should not be signed because signing could be self-serving. Furthermore, authors should not speculate about the identities of their anonymous reviewers because of possible negative and counterproductive consequences.
  • Wesolowski (2003) writes that finding reviewers is difficult enough without requiring the identification of reviewers, and that forced signing of reviews may lead to overly positive reviews.

April 20th, 2004

  • Armstrong (2004) discusses the possibility of multiple review stages, some with and some without anonymity.
  • Sturrock (2004) presents a ‘Code of Ethics’ for peer review.

April 27th, 2004

  • Genereaux and Sen (2004) discuss the NSF proposal review process, specifically how proposers do not have an opportunity to respond to “Incorrect and Overly Negative Statements (IONS)”.


N.B. — There was an article on anonymous peer review in GSA Today by McBirney (2003) — here is a link to the issue — something must have been in the air. 

Debris Flow Experiments (Spring 2017)

This spring I taught an undergraduate Geomorphology class at Duke. For the last few weeks of class, I broke out my debris flow flume. I have written about this debris flow flume previously, and it is described here on the Sediment Experimentalist Network site. Also posted is a slo-mo video of a typical debris flow.

Students planned and executed an experiment of their choosing — an example of a ‘Course-based Undergraduate Research Experience’ (CURE). Though there has been some work done with ‘scaled down’ debris flows (e.g., de Haas et al. 2015), there seemed to be lots of room for the students to do something new.

Both groups ended up investigating various mitigation measures for slowing or stopping debris flows. This involved 3D printing several pieces as mitigation structures, from solid walls of various sizes and angles:


…to plates with various densities of upright rods/sticks to function as tree/vegetation mimics:


Each group ended up writing up their work as a paper (data and plots included), and I’m happy to share them here:

  • Paper 1 focused on solid walls.
  • Paper 2 focused on the ‘green infrastructure’ mimics.