
Twitter record of #AGU17

I missed the 2017 Fall AGU meeting, but I did follow along on twitter. However, the coverage was spotty — some sessions were mentioned, some not at all. This experience left me wondering about the digital traces of the meeting on twitter. Lo and behold, I saw this tweet from Dr. Christina K. Pikas (@cpikas) at the beginning of this year:

So let’s look at this awesome dataset that Dr. Pikas collected and published on figshare. The data was collected using TAGS and contains tweets from Nov. 4, 2017 to Jan. 4, 2018 that used the hashtag #AGU17 — 31,909 tweets in total. In this post I am subsetting the data to look only at the meeting (with a 1-day buffer, so Sunday, Dec. 10, 2017 to Saturday, Dec. 16, 2017) — a total of 25,531 tweets during those 7 days:

[Figure: Hourly.jpg — hourly #AGU17 tweet counts during the meeting week]

I noticed:

  • Twitter activity decays through the week (fatigue? do most people just tweet their arrival? Daily attendance variations?)
  • There is a noticeable lunch break on M, W, Th, and F
  • Each day twitter activity starts suddenly, but has a slow(er) decay at the end of the day (late night activities?)
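
For reference, here is a minimal, hypothetical sketch (in Julia) of how this kind of subsetting and hourly binning could be done. The file and column names are my own, and it assumes the TAGS archive has been exported to CSV with the timestamp already parsed into a DateTime column called :time; the actual analysis lives in the code linked at the end of this post.

```julia
# Hypothetical sketch: subset a TAGS archive to the meeting week and count tweets per hour.
# Assumes a CSV export with a pre-parsed DateTime column named :time (names are mine).
using CSV, DataFrames, Dates

tweets = CSV.read("agu17_tags_archive.csv", DataFrame)   # hypothetical filename

# keep only the meeting week: Sunday, Dec. 10 through Saturday, Dec. 16, 2017
week = filter(row -> Date(2017, 12, 10) <= Date(row.time) <= Date(2017, 12, 16), tweets)

# bin the tweets by hour and count them (this is the quantity plotted above)
week.hour = floor.(week.time, Hour(1))
hourly = combine(groupby(week, :hour), nrow => :tweets)
```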

Retweets account for 44% of the 25,531 tweets during the meeting. Removing RTs yields an almost identical plot, but there is a small peak that appears at the end of each day (pre-bedtime tweets?):

[Figure: HourlyNoRT.jpg — hourly #AGU17 tweet counts with retweets removed]

Lastly, the biggest #AGU17 twitter user is @theAGU (by far), which sent 1,063 tweets during the week. Here is the time series with only @theAGU tweets:

[Figure: HourlyAGU.jpg — hourly tweet counts for @theAGU only]

I still see the lunch break, but not as many late nights for the organization.

Thanks to @cpikas for collecting and publishing the data! It is available on figshare.

My code is on GitHub here.


Nonlinear Dynamics and Geomorphology

This is a list of geomorphology papers that map onto the chapter headings of Strogatz’s ‘Nonlinear Dynamics and Chaos’. This is a work in progress — some headings are left blank because I can’t find concrete examples (e.g., strange attractors), while others remain blank because there are too many examples (e.g., fractals). I envision this list being used when teaching or discussing nonlinear dynamics in a geomorphology setting.

  • Part I: One-Dimensional Flows
    • Ch. 2: Flows on a Line
    • Ch. 3: Bifurcations
      • Fagherazzi, S., Carniello, L., D’Alpaos, L., & Defina, A. (2006). Critical bifurcation of shallow microtidal landforms in tidal flats and salt marshes. Proceedings of the National Academy of Sciences, 103(22), 8337-8341. 10.1073/pnas.0508379103
      • Anderson, R. S. (2002). Modeling the tor-dotted crests, bedrock edges, and parabolic profiles of high alpine surfaces of the Wind River Range, Wyoming. Geomorphology, 46(1), 35-58. 10.1016/S0169-555X(02)00053-3
      • Pelak, N. F., Parolari, A. J., & Porporato, A. (2016). Bistable plant–soil dynamics and biogenic controls on the soil production function. Earth Surface Processes and Landforms, 41(8), 1011-1017. 10.1002/esp.3878
      • Yizhaq, H., Ashkenazy, Y., & Tsoar, H. (2007). Why do active and stabilized dunes coexist under the same climatic conditions?. Physical Review Letters, 98(18), 188001. 10.1103/PhysRevLett.98.188001
      • Yizhaq, H., Ashkenazy, Y., & Tsoar, H. (2009). Sand dune dynamics and climate change: A modeling approach. Journal of Geophysical Research: Earth Surface, 114(F1). 10.1029/2008JF001138
      • Bel, G., & Ashkenazy, Y. (2014). The effects of psammophilous plants on sand dune dynamics. Journal of Geophysical Research: Earth Surface, 119(7), 1636-1650. 10.1002/2014JF003170
      • Goldstein, E. B., & Moore, L. J. (2016). Stability and bistability in a one-dimensional model of coastal foredune height. Journal of Geophysical Research: Earth Surface, 121, 964-977. 10.1002/2015JF003783
    • Ch. 4: Flows on a Circle
  • Part II: Two-Dimensional Flows
    • Ch. 5: Linear Systems
      • Plant, N. G., Todd Holland, K., & Holman, R. A. (2006). A dynamical attractor governs beach response to storms. Geophysical Research Letters, 33(17). 10.1029/2006GL027105
    • Ch. 6: Phase Plane
      • Marani, M., D’Alpaos, A., Lanzoni, S., Carniello, L., & Rinaldo, A. (2007). Biologically‐controlled multiple equilibria of tidal landforms and the fate of the Venice lagoon. Geophysical Research Letters, 34(11). 10.1029/2007GL030178
      • Marani, M., D’Alpaos, A., Lanzoni, S., Carniello, L., & Rinaldo, A. (2010). The importance of being coupled: Stable states and catastrophic shifts in tidal biomorphodynamics. Journal of Geophysical Research: Earth Surface, 115(F4). 10.1029/2009JF001600
      • Stark, C. P., & Passalacqua, P. (2014). A dynamical system model of eco‐geomorphic response to landslide disturbance. Water Resources Research, 50(10), 8216-8226. 10.1002/2013WR014810
      • Stark, C. P. (2006), A self-regulating model of bedrock river channel geometry, Geophys. Res. Lett., 33, L04402, doi:10.1029/2005GL023193.
      • Limber, P. W., A.B. Murray, P. N. Adams and E.B. Goldstein, (2014), Unraveling the dynamics that scale cross-shore headland amplitude on rocky coastlines, Part 1: Model Development, Journal of Geophysical Research: Earth Surface, 119, doi: 10.1002/2013JF002950
      • Limber, P. W., & Murray, A. B. (2014). Unraveling the dynamics that scale cross‐shore headland relief on rocky coastlines: 2. Model predictions and initial tests. Journal of Geophysical Research: Earth Surface, 119(4), 874-891. 10.1002/2013JF002978
      • Mariotti, G., & Fagherazzi, S. (2013). Critical width of tidal flats triggers marsh collapse in the absence of sea-level rise. Proceedings of the National Academy of Sciences, 110(14), 5353-5356. 10.1073/pnas.1219600110
    • Ch. 7: Limit Cycles
      • Stark, C. P. (2010). Oscillatory motion of drainage divides. Geophysical Research Letters, 37(4). 10.1029/2009GL040851
    • Ch. 8: Bifurcations revisited
      • Mariotti, G., & Fagherazzi, S. (2013). A two‐point dynamic model for the coupled evolution of channels and tidal flats. Journal of Geophysical Research: Earth Surface, 118(3), 1387-1399. 10.1002/jgrf.20070
  • Part III: Chaos
    • Ch. 9: Lorenz Equations
    • Ch. 10: One-Dimensional Maps
      • Goldstein, E. B., & Moore, L. J. (2016). Stability and bistability in a one-dimensional model of coastal foredune height. Journal of Geophysical Research: Earth Surface, 121, 964-977. 10.1002/2015JF003783
    • Ch. 11: Fractals
      • There are too many papers/books/issues to discuss here…
    • Ch. 12: Strange Attractors

The AGU EOS ‘Editorial Practices’ discussion of 1984

On May 15, 1984, Russell and Reiff published a (jokey) flow chart of the AGU editorial and peer review process, complete with several time-delay terms and a ‘counting’ step for the multiple revisions. This set off six responses in EOS, similar to the episode in 2003-2004.

[Figure: RusellReiff1984.jpeg — the Russell and Reiff (1984) flow chart]

  1. On Oct 23, 1984, Baum wrote in to discuss how peer review tended to filter out controversial new ideas. Baum recommended that authors be allowed to publish controversial new ideas even if reviewers protested, but that reviewers should also be allowed to publish their criticisms. In addition, Baum offered some mathematical changes to the Russell and Reiff flow chart.
  2. Dessler also wrote in on Oct 23, 1984, with remarks that referees are often named and thanked by the editor or author. As a result, authors may be more wary of support for controversial ideas. Dessler also suggests that Comment—Reply pairs should be published more often (I have written about these in JGR-ES).  
  3. On Dec. 25, 1984, Sonnerup (the editor of JGR-Space Physics) wrote to EOS in support of the idea that peer review should permit new and unorthodox ideas. Sonnerup also provided additional details regarding the review process at JGR-Space Physics.
  4. On Feb 19, 1985, Walker and van der Voo wrote in to EOS to discuss the editorial process at GRL. Choice quote (bold type highlighted by me): “Because of the importance attached to prompt publication in GRL we will generally use only one reviewer for each paper, communicating with this reviewer, when necessary, by telephone or telemail. More reviewers are used only when a paper seems likely to be particularly controversial or is otherwise difficult to deal with.”
  5. Baker wrote in on April 25, 1985 to suggest that JGR collect the rejected papers and publish them. Baker stated, in jest, that there is likely a “large body of unpublished papers out there which have been rejected by Neanderthal referees. I say let’s do something about it! I suggest that all of these brilliant, creative, earthshaking papers be collected into a special JGR issue each year.”
  6. Murdoch wrote in on March 10, 1987 to suggest that abstracts of rejected papers be published. If a scientist wanted to see the rejected paper, then the author could provide the paper AND the critical reviews.

 

These papers just highlight the role of editors, something still missing from my peer review agent model (pointed out by a commenter/Jazz legend).

Retaliation in the Peer Review Model

(The full motivation, rule set, and previous results for this model are collected here)

Today I am adding a new rule to my toy peer review model. Suppose some fraction of the population is set to ‘retaliate’ when called upon to be a reviewer. Specifically, these agents assign review scores based on their feelings (positive or negative) toward the author. This is an example of the biases that might influence a reviewer’s decision (e.g., Lee et al., 2012).

So the new rule is:

  • If a ‘retaliator’ is assigned a review, and they feel positively or negatively toward the author, the review is positive or negative, respectively (overriding a random review).

(N.b.: a gentler version of this rule could instead focus solely on ‘cliques’ — if a reviewer feels positively toward the author, the review is positive; if the reviewer feels negatively, the review is random.)
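
As a rough illustration, here is a minimal sketch of what this rule could look like in Julia (the variable and function names are mine, not necessarily those in the actual model code):

```julia
# A rough sketch of the retaliation rule (names A, retaliator, and review_score are mine).
N = 100                        # number of scientists (illustrative)
A = zeros(Int, N, N)           # A[i, j] = feeling of scientist i toward scientist j
retaliator = rand(N) .< 0.3    # e.g., 30% retaliators, as in the results below

# score that `reviewer` gives a paper written by `author`
function review_score(A, reviewer, author, retaliator)
    if retaliator[reviewer] && A[reviewer, author] != 0
        return sign(A[reviewer, author])   # feelings override the random review
    end
    return rand([-1, 1])                   # otherwise a random good (+1) or bad (-1) review
end
```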

There are now four types of people in the model:

  • Those who sign reviews, and do not retaliate
  • Those who sign reviews, and retaliate
  • Those who do not sign reviews, and do not retaliate
  • Those who do not sign reviews, and retaliate

Again I will use the range of incoming and outgoing node weights to visualize model results. As a reminder:

R_{i}^{in} is the maximum incoming weight minus the minimum incoming weight. This represents the range of feelings all other scientists have about scientist i.

R_{i}^{out} is the maximum outgoing weight minus the minimum outgoing weight. This represents the range of feelings scientist i has about all other scientists in the discipline.

So here are the results with 30% of all scientists being ‘retaliators’.

[Figure: Figure_2.jpg — R_{i}^{in} vs. R_{i}^{out} with 30% of scientists as retaliators]

  • Compared to the previous results, the same trends hold: R_{i}^{in} is larger for signed reviewers (blue), and R_{i}^{out} is roughly the same for signed vs. unsigned reviewers. (The ranges differ from the previous results because the number of model timesteps changed.)
  • Unsigned retaliators (empty orange markers) are similar to unsigned non-retaliators. If you never sign reviews, no author will end up knowing that you are a retaliator (the editor is a different story).
  • Signed retaliators (empty blue markers) have a large Rin — they are polarizing figures. Authors are either on the good side of these people (they are friends) or on the bad side (they are enemies).

Peer review model — results part 2

Backstory of the model:

This model is based on networks, so I’ll use some of the language and techniques from the study of networks to analyze the data. This peer review model creates a directed, weighted network. In other words, the ‘scientists’ (nodes) are connected (via edges) to other scientists (other nodes). The connections (edges) have a direction (how ‘scientist A’ feels toward ‘scientist B’) and a weight (e.g., -3, a negative feeling). The bookkeeping for this model is done with an adjacency matrix:

A=\begin{pmatrix}0&5&-1&0\\ 1&0&2&0\\ 0&-3&0&0\\ 0&0&1&0\end{pmatrix}

Where A_{ij} denotes an edge from i to j with a given weight. In this model, it is the mood that scientist i has toward scientist j. (Some other texts use the reverse convention.)

A measurement for this sort of matrix is incoming and outgoing node strength. The outgoing strength of scientist i — how scientist i feels about all other scientists — can be denoted as:

s_{i}^{out}=\sum_{j \neq i} A_{ij}

And can be calculated by summing rows. The incoming strength of scientist i — how all other scientists feel about scientist i — can be denoted as:

s_{i}^{in}=\sum_{j \neq i} A_{ji}

And can be calculated by summing columns. (For reference, my previous post showed time series plots of the mean of the incoming weights, which is similar to the incoming strength s_{i}^{in} discussed here.)

Signed reviewers can be polarizing — weights can quickly become very negative and/or very positive. So the strengths (s_{i}^{in} and s_{i}^{out}) will be a sum of extreme positives and negatives — this is not very descriptive because the extremes can cancel out and leave a strength near 0. Instead I want to look at the range of incoming and outgoing weights, or:

R_{i}^{out}= \max\limits_{j \neq i} A_{ij} - \min\limits_{j \neq i} A_{ij} which denotes the maximum outgoing weight minus the minimum outgoing weight.

R_{i}^{in}=\max\limits_{j \neq i} A_{ji} - \min\limits_{j \neq i} A_{ji} which denotes the maximum incoming weight minus the minimum incoming weight.
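
For concreteness, here is a minimal sketch of this bookkeeping in Julia, using the small example matrix from above (the variable names are mine; since the diagonal is zero, the self-edges drop out of the sums automatically):

```julia
# Strengths and ranges for the example adjacency matrix above
# (same i -> j "mood" convention; zero diagonal).
A = [ 0  5 -1  0
      1  0  2  0
      0 -3  0  0
      0  0  1  0 ]
N = size(A, 1)

s_out = vec(sum(A, dims=2))   # row sums: s_i^out, how scientist i feels about everyone else
s_in  = vec(sum(A, dims=1))   # column sums: s_i^in, how everyone else feels about scientist i

# ranges of outgoing and incoming weights, excluding the self-edge A[i, i]
others(i) = [j for j in 1:N if j != i]
R_out = [maximum(A[i, others(i)]) - minimum(A[i, others(i)]) for i in 1:N]
R_in  = [maximum(A[others(i), i]) - minimum(A[others(i), i]) for i in 1:N]
```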

Now let’s look at some model results, R_{i}^{out} and R_{i}^{in}, for each scientist.

[Figure: rangenoretal.jpg — R_{i}^{in} vs. R_{i}^{out} for signed and unsigned reviewers]

  • Both types of reviewers have similar R_{i}^{out} — they tend to have a similar range in their opinions about the scientists in the discipline.
  • Signed reviewers tend to have a larger R_{i}^{in} — the range of feelings that other scientists have toward the signed reviewers — compared to those who do not sign reviews. Scientists tend to either like or dislike signed reviewers more strongly than unsigned reviewers.

An added feedback is coming….

Some inspiration for this work comes from:

Initial results from the agent-based Peer Review ‘mood’ model

Today, the first results from my Peer Review agent model (description of the model and code). I’m going to build this model slowly, week by week, adding complexity (a good way to practice writing code in Julia).

On to the model. For these results, before the model runs, each scientist is set to either sign (50%) or not sign (50%) their reviews. Each paper submitted gets 2 reviews. Reviews can either be bad (-1) or good (+1). During each timestep, all reviews are randomly assigned to be good or bad (split 50-50 for the entire pool of reviewers). Remember that these good and bad reviews are returned to each writer, and the writer’s feelings toward the reviewers then change according to these rules:

  • signed good (+1) reviews result in +1 feelings toward the reviewer
  • signed bad (–1) reviews result in –1 feelings toward the reviewer
  • unsigned good (+1) reviews result in +1 feelings toward a random scientist
  • unsigned bad (–1) reviews result in –1 feelings toward a random scientist

Note that the model at this point is some modified version of a random walk.

Now, some results. For 10,000 timesteps I am tracking the collective feeling of scientists toward one another — in other words, these are plots of the average feeling of all scientists toward a specific signed or unsigned reviewer. Signed reviewers (50) are in the top panel, and unsigned reviewers (49) are in the bottom panel. Each line represents the average mood toward a single scientist (signed or unsigned reviewer).

[Figure: PRbots.jpeg — mean feeling toward each signed (top panel) and unsigned (bottom panel) reviewer over 10,000 timesteps]

Signed reviewers are more polarizing — there is a wider dispersion of emotions toward signed reviewers. This seems obvious: they get credit for both their positive and negative reviews.

When review scores are random (between -1 and +1) and the change in feeling is random (equal to the review score), the results are the same as above, but with less difference (less spread in feelings toward signed reviewers).

There is no feedback mechanism in the model right now — reviewers don’t take into account their feelings for authors. This will change in the next version….

Rules for an agent-based peer review model

Last week I wrote about a set of AGU EOS articles from 2003 that focus on anonymity in peer review. A quote from one of the articles really stuck with me regarding the personal decision to sign reviews:

Okal (2003) states that, as an editor of GRL, ~40% of the reviews he sees are signed. As a reviewer, he signs 2/3 of his reviews. And as an author, 1/2 of the reviews he receives are signed. His experience suggests that:
“The above numbers — 40%; two-thirds; one-half — suggest that the community is divided, with no overwhelming majority in its attitude toward anonymous versus signed reviews. This diversity may indeed be precious and should be respected. Why not keep the system as it is now, leaving it to the individual reviewer to exercise a free decision regarding waiving anonymity?”

Over the course of the next few weeks I hope to build a fun little toy model of ‘peer reviewing’ agents to see if I can tease out something  — is diversity in peer review behavior (re: signed vs blind) in some way ‘precious’?

The rules of the model are:

Each agent (scientist) is set to either sign or blind their reviews.

For each time step:

  • Randomly pick the number of scientists (‘P’) out of ‘N’ total scientists who will publish a single paper
  • Randomly assign ‘R’ reviewers for each paper
    • Nobody can review their own paper
    • Writing scientists can also review
    • Scientists can do multiple reviews
  • Each reviewer gives a random review score (good or bad)
  • Reviews are returned to each writer and the writer’s ‘mood’ changes:
    • signed + reviews result in + feelings toward the reviewer
    • signed – reviews result in – feelings toward the reviewer
    • unsigned + reviews result in + feelings toward a random scientist
    • unsigned – reviews result in – feelings toward a random scientist

And we see how the feelings of the community (toward one another) develop through time.
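
To make the rules concrete, here is a hypothetical sketch of what a single timestep could look like in Julia. The names, the structure, and the illustrative defaults for P and R are my own and won’t necessarily match the actual code on GitHub:

```julia
# Hypothetical sketch of one model timestep under the rules above (names are mine).
using Random

N = 100                            # total number of scientists
signs = rand(Bool, N)              # true = this scientist signs their reviews
A = zeros(Int, N, N)               # A[i, j] = feeling of scientist i toward scientist j

function timestep!(A, signs; P=10, R=2)    # P and R are illustrative defaults
    N = size(A, 1)
    authors = randperm(N)[1:P]                           # P scientists publish a paper
    for author in authors
        reviewers = shuffle(setdiff(1:N, [author]))[1:R] # R reviewers; never the author
        for reviewer in reviewers
            score = rand([-1, 1])                        # random good (+1) or bad (-1) review
            # signed reviews change the author's feeling toward the reviewer;
            # unsigned reviews change the author's feeling toward a random other
            # scientist (excluding the author themselves, which is my assumption)
            target = signs[reviewer] ? reviewer : rand(setdiff(1:N, [author]))
            A[author, target] += score
        end
    end
    return A
end

timestep!(A, signs)   # one round of publishing and reviewing
```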

The beginning of the code is already up on GitHub. Feel free to contribute or give an opinion.