In the summer of 2013, I was coordinating a class meant to prime incoming graduate students on what it takes to succeed in graduate school. One session dealt with writing good abstracts. You have heard the usual advice: keep it short and simple, avoid jargon, write it for a general reader, etc.
I thought that it would be fun to test whether following this type of advice increases readership (citations). After a few months, I pitched this idea to my friend James Evans, and we decided to try it out with the help of Cody Weinberger, an undergraduate student in my laboratory.
We collected about 1M abstracts from 8 disciplines, and we tested the impact of following the usual advice on citations, once we accounted for obvious factors such as the age of the article, the journal where it was published, and so forth. To our surprise, we found that following some of the most common suggestions leads to a significant decrease in citations!
The short article starts with a quote from Boyle’s “Proemial Essay”. Robert Boyle was one of the main proponents of the use of “modern” scientific articles to disseminate science (i.e., instead of books). Amusingly, while describing the advantages of this approach, Boyle already states some guidelines on how the essays should be written: we’ve been told how we should write our science for at least 350 years!
I thought that this would be a great occasion to review the progress my laboratory has made on the study of the stability of large ecological systems. Even better, this article could outline a research program on this topic, listing the main challenges that we are facing.
My former student Si Tang (now pursuing a second PhD in Statistics) and I set to work with this idea in mind. You can now read this hybrid between a review and a list of “grand challenges”:
We live in a world dominated by rankings. Besides soccer teams, movies and restaurants, rankings of universities and researchers have become commonplace.
The Scientific Wealth of Nations has been measured in many ways, all centered on a very simple idea: if a country producing a certain proportion of papers (pp) accrues a much larger proportion of citations (pc), then the country is producing high-quality science. Conversely, countries for which pc < pp would produce lower-quality research.
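The comparison of the two proportions can be sketched in a few lines; the counts below are made up for illustration, not drawn from our data.

```python
# A minimal sketch of the classic "Scientific Wealth of Nations" comparison:
# a country whose share of citations (pc) exceeds its share of papers (pp)
# is deemed to produce above-average science. All numbers are hypothetical.

papers = {"A": 500, "B": 300, "C": 200}       # hypothetical paper counts
citations = {"A": 9000, "B": 2400, "C": 600}  # hypothetical citation counts

total_p = sum(papers.values())
total_c = sum(citations.values())

for country in papers:
    pp = papers[country] / total_p
    pc = citations[country] / total_c
    verdict = "above" if pc > pp else "below"
    print(f"{country}: pp={pp:.2f}, pc={pc:.2f} -> {verdict} average")
```

Country A, with half the papers but three quarters of the citations, comes out "above average"; B and C come out below.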
This appealing simplicity, however, conceals one of the most important factors determining the influence of a scientific article, the journal where it was published. Clearly, publishing a paper in Nature would guarantee a much wider audience than that reached by The Bulletin of Koala Research — even for papers of the same quality.
We thus took 1.25M articles in eight disciplines (from 1996 to 2012), and parsed the country of affiliation of all the authors. We then measured how the country(ies) of affiliation influenced journal placement (i.e., where the paper was published) and citation performance (i.e., whether the article received more or fewer citations than its “peers”). Unlike other studies, we kept a tally for each possible combination of countries, so that we could see which international collaborations are most effective.
Originally, we thought of measuring the effect of the institution (rather than country) of affiliation—how much is an Oxford affiliation worth? We’re sufficiently proficient in regular expressions to distinguish India from Indiana, but affiliations like The Miami University in Oxford, Ohio made us decide to stick with countries.
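The India/Indiana case is easy to handle with word boundaries; the pattern and affiliation strings below are illustrative examples, not our actual parsing code.

```python
import re

# Illustrative only: a word-boundary pattern matches "India" as a standalone
# token, so it does not fire inside "Indiana" (or "Indian"). The affiliation
# strings are made-up examples.
india = re.compile(r"\bIndia\b")

assert india.search("Indian Institute of Science, Bangalore, India")
assert not india.search("Indiana University, Bloomington, Indiana")
```

Of course, no regular expression rescues genuinely ambiguous strings like “The Miami University in Oxford, Ohio”, which is why we fell back on countries rather than institutions.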
In the paper, we start by talking about the 1982 study by Peters and Ceci. This is one of the most intriguing papers I’ve ever seen, and even the lengthy commentary (which you can find here) is a pleasure to read.
In hindsight, we should have changed our own affiliations to the wonderful ones used by Peters & Ceci. The Northern Plain Center for Human Potential sounds just right!
After studying the stability of large ecological networks, we wanted to try to describe the transient dynamics following a small perturbation of the equilibrium. We thus studied “reactivity”, which tells us whether perturbations of a stable equilibrium are going to be amplified before decaying.
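The standard definition (due to Neubert and Caswell) makes this concrete: for linearized dynamics around a stable equilibrium, reactivity is the largest eigenvalue of the symmetric part of the community matrix. A tiny sketch, with a made-up matrix:

```python
import numpy as np

# Sketch of the standard reactivity computation: for dx/dt = A x around a
# stable equilibrium, reactivity is the largest eigenvalue of the symmetric
# part H = (A + A^T) / 2. A stable system is "reactive" when this eigenvalue
# is positive: some perturbations grow before decaying. A is a toy example.

A = np.array([[-1.0,  0.0],
              [ 5.0, -2.0]])

# Stability: all eigenvalues of A have negative real part
assert np.all(np.linalg.eigvals(A).real < 0)

H = (A + A.T) / 2
reactivity = np.max(np.linalg.eigvalsh(H))
print(f"reactivity = {reactivity:.3f}")  # positive -> initially amplified
```

Here the equilibrium is stable (eigenvalues −1 and −2), yet the reactivity is positive: a perturbation along the right direction is transiently amplified by the strong off-diagonal coupling before it dies out.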
We published the resulting article in Frontiers in Ecology and Evolution, a new journal with an interesting peer-review scheme (a topic dear to my heart!). In fact, I am so happy that journals are trying new ways of doing peer review that I decided to join the editorial board.
I always joke in the lab that anybody who wants to propose a new measure in ecology (or index, etc.) should pay a fine of $1,000 ($2,000 if the new measure comes with an acronym). These funds could pay for graduate students to attend conferences.
The only exception to the rule is for studies showing that two seemingly different measures are in fact the same thing. This is the case of our recent study on nestedness, published today in Nature Communications:
The ghost of nestedness in ecological networks
Phillip P. A. Staniczenko, Jason C. Kopp & Stefano Allesina

Ecologists are fascinated by the prevalence of nestedness in biogeographic and community data, where it is thought to promote biodiversity in mutualistic systems. Traditionally, nestedness has been treated in a binary sense: species and their interactions are either present or absent, neglecting information on abundances and interaction frequencies. Extending nestedness to quantitative data facilitates the study of species preferences, and we propose a new detection method that follows from a basic property of bipartite networks: large dominant eigenvalues are associated with highly nested configurations. We show that complex ecological networks are binary nested, but quantitative preferences are non-nested, indicating limited consumer overlap of favoured resources. The spectral graph approach provides a formal link to local dynamical stability analysis, where we demonstrate that nested mutualistic structures are minimally stable. We conclude that, within the binary constraint of interaction plausibility, species preferences are partitioned to avoid competition, thereby benefiting system-wide resource allocation.
We uploaded the code needed for the analysis here.
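The spectral idea in the abstract can be illustrated in a few lines; the two small binary matrices below are toy examples I made up, not data or code from the paper.

```python
import numpy as np

# A minimal sketch of the spectral criterion: for a bipartite (e.g.,
# plant-pollinator) network, more nested configurations of the same number
# of links yield a larger dominant eigenvalue of the adjacency matrix.
# Both toy matrices below have six links.

def dominant_eigenvalue(B):
    """Spectral radius of the bipartite adjacency matrix [[0, B], [B.T, 0]]."""
    r, c = B.shape
    A = np.zeros((r + c, r + c))
    A[:r, r:] = B
    A[r:, :r] = B.T
    return np.max(np.linalg.eigvalsh(A))

nested = np.array([[1, 1, 1],
                   [1, 1, 0],
                   [1, 0, 0]])   # perfectly nested: rows are subsets
less_nested = np.array([[1, 1, 0],
                        [1, 1, 0],
                        [0, 1, 1]])  # same number of links, less nested

print(dominant_eigenvalue(nested), dominant_eigenvalue(less_nested))
```

The nested configuration indeed has the larger dominant eigenvalue, which is what the detection method exploits.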