My #365papers Experiment in 2018

This year, following the lead of other ecologists, I embarked on the #365papers challenge. The idea is that in academia we end up skimming a lot of papers: we jump to the figures, or hunt for the one part of the methods or the one line of the results we need. This challenge instead urges people to read more deeply – a whole paper, every day.

(Jacquelyn Gill and Meghan Duffy launched the initiative and wrote about their first years of it. #365papers has since spread beyond ecology to other academic fields. Some of the past recaps I read were by Anne Jefferson, Joshua Drew, Elina Mäntylä, and Caitlin MacKenzie. Caitlin’s was probably the post that catalyzed me to do the challenge.)

I knew that 365 papers was too ambitious for me, and that I wouldn’t (and didn’t want to!) read on the weekends, for example. I nevertheless decided to try to read a paper every weekday in 2018, which would be 261 days in total.

In the end, I clocked in at 217 papers (I read more than that, but see below for what I counted as a “paper” for this challenge) – not bad! I tweeted links to all the papers, so you can see my list via this Twitter search. I can confidently say that I have never read so many papers in a year.

In fact, I am guessing that this is more papers than I had read in their entirety (not skimmed or mined for specifics, as mentioned above) over my whole career before 2018. That’s embarrassing to admit, but I am guessing it’s not that unusual. (What do you think, if we’re all being honest here?)

This was a great exercise. I learned so much about writing, for one thing – there’s no better way to learn to write than to read a lot.

But the thing that was most exciting was that I read a lot more, and a lot of fun pieces. I had gotten to a place where there were so many papers I felt I had to read for my own work that I would just look at the pile, blanch, and put it off for later. Reading had become a chore, not something fun.

[Figure: Wordle of the paper titles]

A Wordle of the paper titles. On my website it says I am a community and ecosystem ecologist, and I guess my reading choices reflect that! (I’d be interested to make a Wordle based on the abstracts, to see if there are more diverse words than the ones we choose for titles – but I didn’t make time to extract all the abstracts for that.)
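
(If you want to make one yourself, the wordcloud Python package makes this a few-line job – a minimal sketch, assuming the titles are saved one per line in a hypothetical paper_titles.txt; the same script would work on a file of abstracts:)

```python
# Sketch: generate the title word cloud. Assumes a hypothetical
# "paper_titles.txt" with one title per line; point it at a file of
# abstracts instead and you get the abstract version.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

with open("paper_titles.txt") as f:
    text = f.read()

# WordCloud drops common English stopwords and sizes words by frequency.
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("titles_wordle.png", dpi=150)
```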

That’s not a great way to do research, and luckily the challenge changed my reading status quo. If I was reading every day, I reasoned, then not every paper had to be directly related to my work as a community ecologist. There would be ample time for that, but I could also read things that simply looked interesting. And I did! I devoured Table of Contents emails from journals with glee and read about all sorts of things – evolution, the physical science of climate change, remote sensing.

These papers, despite seeming like frivolous choices, taught me a lot about science. Just because they were not about exactly what I was researching does not mean they did not inform how I think about things. This was incredibly valuable. We get stuck in our subfields, on our PhD projects, in our own little bubbles. Seeing things from a different angle is great and can catalyze new ideas or different framing of results. Things that didn’t make sense might make sense in a different light.

But I also did read lots of papers directly related to what I was working on. I think I could only do that because it no longer felt like a chore, like a big stack of paper sitting on the corner of my desk glaring at me. This challenge freed me, as strange as that sounds given the time commitment!

And finally, I tweeted each paper, and tagged the authors if possible. This helped me make some new connections and, often, learn about even more cool research. It helped me put faces to names at conferences and gave me the courage to strike up conversations. The social aspect of this challenge was fun and also probably pretty useful in the long run.

For all of the reasons I just mentioned, I would highly recommend this challenge to other academics. (It’s not just for ecology – look at the #365papers hashtag on Twitter and you’ll find people in many different fields taking up the challenge.) Does 365 or 261 papers sound like too many? Set a different goal, but make it ambitious enough that you are challenging yourself. For me, making it a daily habit was key, because then it doesn’t feel like something you have to schedule (or something you can put off) – you just do it. Then sit down and read a whole paper, with focus and attention to detail. If you like it, why? Is the topic of interest to you? Is the writing good? Are the analyses particularly appropriate and well-explained? Do the visuals add a lot to the paper? Are the hypotheses (and alternative hypotheses) identified clearly, making the paper easier to follow? Or, if you don’t like it, why not? Is it the science, or the presentation? What would you do differently?

One thing I didn’t nail down was how to keep notes. I read on paper, so I would highlight important or relevant bits or references to look up. But I don’t have a great system for transferring this to Evernote (where I keep papers’ abstracts linked to their online versions, each tagged in topic categories). In the beginning I was adding photos of each highlighted part of the paper to its note, but this was too time-consuming and I gave up. In the end, if I had time I would manually re-type my reading notes into Evernote, and if not, I wouldn’t. I do think the notes are valuable and important to have searchable, so this probably limits the utility of all that reading a little bit. It’s something I will think about improving for next year. The biggest challenge is time.

In addition to reading a lot, I kept track of some minimal data about each paper I read. I’ll present that below, in a few sections:

  • Where (journals) and when the papers were published
  • Who wrote them – first authorship (gender, nationality, location)
  • A few thoughts about last authorship
  • Grades I assigned the papers when reading – potential biases (had I eaten lunch yet!?) and the three papers I thought were best at the time I read them

I plan to try this challenge again next year, and the data that I summarize will probably inform how I go about it. I’ll discuss that a little at the very end.

What Did I Count as One of My 365 → 261 → 217 Papers?

First, some methodological details. For this effort, I didn’t count drafts of papers that I was a co-author on, although that would have upped the number of papers quite a bit because I have been working on a lot of collaborative projects this year. I also didn’t count reading chapters of colleagues’ theses, or a chapter of a book draft. And I didn’t count book chapters, although I did read a few academic books, among them Dennis Chitty’s Do Lemmings Commit Suicide?, Mathew Leibold & Jonathan Chase’s Metacommunity Ecology, Andy Friedland and Carol Folt’s Writing Successful Science Proposals, and a book about R Markdown. I started but haven’t finished Mark McPeek’s Evolutionary Community Ecology.

I did count manuscripts that I read for peer review.

Where The Papers Were Published

I didn’t go into this challenge with a specific idea of what I wanted to read. I find papers primarily through Table of Contents alerts, but also through Twitter, references in other papers, and searches for specific topics while I was working on my dissertation or on research proposals. This biases the papers I read to be more likely to be by people I’m already aware of or in journals I already read. Not entirely, but substantially.

We also have a “journal club” in our Altermatt group lab meeting, which doesn’t function like a standard one: each person is assigned one or two journals to “follow”, and we rotate through, with each person summarizing the interesting papers in “their” journals once every few months (the cycle length depends on the number of people in the lab at a given time). That’s a good way to notice papers that might be worth reading, and since we are a pretty diverse lab in terms of research topics, it introduces some novelty. I think it’s a clever idea by my supervisor, Florian.

Given that I wasn’t seeking out papers in a very systematic way, I wasn’t really sure what the final balance between different journals and types of journals would be at the end of the year. The table below shows the number of papers for each of the 63 (!) journals that I read from. That’s more journals than I was expecting! (Alphabetical within each count category)
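
(A tally like that takes only a few lines to reproduce – a minimal sketch, assuming a hypothetical reading-log CSV with a “journal” column; the sort order mirrors “by count, alphabetical within each count category”:)

```python
# Sketch: tally papers per journal from a reading log and print them
# sorted by count, alphabetical within each count category.
# Assumes a hypothetical "reading_log.csv" with a "journal" column.
import csv
from collections import Counter

with open("reading_log.csv", newline="") as f:
    counts = Counter(row["journal"] for row in csv.DictReader(f))

# Descending by count, then alphabetical within ties.
for journal, n in sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])):
    print(f"{n:3d}  {journal}")
```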

In addition, I read one preprint on bioRxiv.

I don’t necessarily think that Nature papers are the best ecology out there; that’s not why it tops the list. Seeing Ecology, Oikos, and Ecology Letters as the next best-represented journals is probably a better representation of my interests.

But, I do think that Nature (and Science, which had just a few fewer papers) papers get a lot of attention and must have been chosen for a reason (am I naive there?). There are not so many of them in my field and I do try to read them to gauge what other people seem to see as the most important topics. I also read them because it exposes me to research tangential to my field or even entirely in other fields – which I wouldn’t find in ecology journals, but which are important to my overall understanding of my science.

I’m pleased that Ecology & Evolution is one of my top-read journals, because it indicates (along with the rest of the list) that I’m not only reading things for novelty/high-profile science, but also more mechanistic papers that are important to my work even if they aren’t so sexy per se. A lot of the journals pretty high up the list are just good ecology journals with a wide range of content.

There are a lot of aquatic-specific journals on the list, which reflects me trying to get background on my specific research. But there are also some plant journals on the list, either because I’m still interested in plant community ecology despite being in freshwater for the duration of my PhD, or because they are about community ecology topics that are useful to all ecology. It will be interesting to see if the aquatic journals stay well-represented when I shift to my next research project in a postdoc.

Society journals (from the Ecological Society of America, Nordic Society Oikos, British Ecological Society, and American Society of Naturalists, among others) are well represented. Thanks, scientific societies!

When The Papers Were Published

The vast, vast majority of papers I read were published very recently. Or, well, let’s be honest, because this is academic publishing: who knows when they were written? I didn’t systematically track this, but definitely noticed some were accepted months or maybe even a year before final paginated publication. And they were likely written long before that. But you get the point. As for the publication year, that’s below.

[Figure: number of papers by publication year]

This data was not a surprise to me, as a fair amount of my paper choices come from journal table of contents alerts. I probably should read more older papers, though.

Who Wrote the Papers: First Authors

Okay, on to the authors. Who were they? As with the journals, I didn’t systematically choose what I was reading, so I was curious what the gender and geographic breakdown of the authors would be. Since I didn’t consciously try to read lots of papers authored by women, people of color, or people outside of North America and Europe, I expected the first authors to be mostly male, white, and from those continents. I wasn’t actively trying to counteract bias in this part of the process, so I expected to see evidence of it.

I did my best to find the gender of all first authors. Of those for whom this was deducible based on homepages, Twitter profiles listing pronouns, in-person meetings at conferences, etc.:

  • 59 first authors were women
  • 155 first authors were men
  • 2 papers had joint first authors
  • 1 paper I peer-reviewed was double-blind (authorship unknown to me)

I’m fairly troubled by this. I certainly wasn’t going out of my way to read papers by men, and I didn’t think it would be this skewed when I did a final tally. If I want to support women scientists by reading their work – and then citing it, sharing it with a colleague, contacting them about it, starting discussions, etc. – I am going to have to be a lot more deliberate. I want to learn about other women scientists’ ideas! They have a lot of great ones. I’m going to try harder in the future. Or, really, I’m going to try in the future – as mentioned, I was not intentionally reading or not reading women this past year.

I initially tried to track whether authors were people of color, but it’s just too fraught for me to infer from Googling. I don’t want to misrepresent people. But I can say that the number of authors who were POC was certainly quite low.

I did, however, take some geographic stats: where (to the best of my Googling abilities) authors originally came from, and where their primary affiliation listed on the paper was located.

For the authors for whom I could identify nationality based on websites, CVs, etc., 31 countries were represented.

[Figure: first authors’ nationalities]

The authors were numerically dominated by native English speakers, though with some geographic diversity: the US, Canada, the UK, Ireland, Australia, and New Zealand (I’m not sure if English is the first language of the South African author). 15 different European nationalities were represented. There were a number of authors from Brazil, one each from Chile, Colombia, and Ecuador, and Central America was represented by a Guatemalan. One surprise, maybe: Chinese authors were underrepresented, whether at Chinese institutions (see below) or abroad – there were just five. Many countries that produce great scientists are not represented in this dataset.

When it came to institutional country, the field narrowed to 24 countries plus the Isle of Man.

[Figure: countries of first authors’ institutions]

While there were 78 American first authors, 90 first authors came from American universities/institutions. In Europe, Denmark, Sweden and Switzerland gobbled up some of the institutional affiliations despite having low numbers of first authors originally from those countries (this is very consistent with my experience in those places).

(Note: it would have been really nice to make a riverplot showing how authors moved between countries, but I was too lazy to build a transition matrix. Sorry.)
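
(If anyone is less lazy than me: the transition matrix itself is only a couple of lines with pandas – a sketch with hypothetical column names – and a Sankey/riverplot library could consume it directly.)

```python
# Sketch: the nationality -> institution-country transition matrix a
# riverplot would need. Assumes a hypothetical "first_authors.csv" with
# "nationality" and "institution_country" columns.
import pandas as pd

authors = pd.read_csv("first_authors.csv")

# Rows: where first authors come from; columns: where their institutions are.
transitions = pd.crosstab(authors["nationality"], authors["institution_country"])
print(transitions)
```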

This consolidation into fewer countries isn’t really surprising. It reflects that while small countries have great scientists, they often don’t have the resources for strong research funding or many universities. Some places, even those with traditionally strong academic institutions, are simply going through austerity measures. I think of many Europeans I know who decided that leaving their countries – Portugal, Spain, the Baltics and Balkans, and other places – was their best bet to be able to do the research they wanted to do, and have a job. I think of others, notably a friend in Slovenia, who is staying there because he loves it, but whose opportunities are probably curtailed because of that.

I’d like to read more widely in terms of institutional location and author nationality, but it’s a bit overwhelming to make a solid plan. Reading more women is fairly straightforward. But when I think of all the places with good science but where I didn’t read a single paper, there are a lot of them. I can only read so many papers! So part of it will be recognizing that I can make an effort to read more diversely but I’m not going to solve bias in science just with my reading project. I need to make an effort that is meaningful, and then be okay with what it doesn’t accomplish.

Also, I don’t always know the gender, race, or nationality of an author before I Google them – this past year, I only did that after I read the papers. I might need to sometimes reverse that process, perhaps?

Do you have other ideas of how to tackle this? I’d love suggestions if anyone has them.

One thought is to more deliberately read from the non-North American, non-European authors in the journals I already read from. I already know I like the papers those editorial teams select. This would probably be the least amount of extra work required to diversify my reading, because I could stick to the same method of choosing papers (table of contents alerts), but execute differently on those tables of contents.

And a Bit About the Last Authors

I did not collect as much information about the last authors of each paper, but I did collect some. A big topic in academia is that women become scarcer and scarcer the higher you go in the academic hierarchy. I wondered if that was true in the papers I was reading.

There were fewer last authors because some papers were single-author. Of those that were multi-author, I filtered the dataset to look at only those where last-authorship seemed to denote seniority (based on author contribution statements, lab composition and relationships between authors, etc.) rather than being alphabetical or based on something else (on some papers with very many authors, all the senior authors were listed at the front of the author list). Of these,

  • 19 senior last authors were women
  • 105 senior last authors were men

Yikes!

That’s all one can say! Yikes!

Like the first authors, the last authors came from 31 different countries… but some different ones were represented (Venezuela, Serbia, India). They represented institutions in a few more places than the first authors, 28 different countries vs. 24 for first/single authors. I’m not sure what to make of that, especially since this is from a smaller subset of papers (since the single-author papers were removed), but obviously collaborative research and writing is alive and well.

Ratings and Favorite Papers

Right after I read each paper, I assigned it a letter grade. Looking back through my record keeping, I am less and less convinced that this is really meaningful. I think it had to do a lot with my mindset at the time, among other things. Did I just have a stressful meeting? Was I impatient to finish my reading and go home? Was I tired? Maybe I was less receptive to what I was reading. Or conversely, maybe if I was tired and a little distracted I was less likely to notice flaws in the paper. Who knows. Anyway, “B” was the grade I most frequently assigned.

[Figure: distribution of grades I assigned]

I didn’t keep detailed notes of why I felt different grades were merited, but I can make a few generalizations. Quite a number of the papers I gave poor grades were because I didn’t find methods to be well enough explained. I either couldn’t follow what the authors did, or maybe important statistical information wasn’t even included (or only in the supplementary information when I thought it was so essential to understanding the work that it really needed to be brought to the center). In particular this included some papers using fancy and cutting edge methods… just using those statistics or analysis techniques doesn’t make your paper magic. You still need to say what those analyses show and what they mean ecologically, and convince me that the fancy stats actually lead to a better understanding of what’s going on!

In some ways this is not the authors’ fault – journals often press for shorter word counts, and some don’t even publish methods in the main text, which is a total pain if you’re a reader. Also, it’s one of the biggest things I struggle with when writing – you know perfectly well what you did, and it can be hard to see that to an outsider your methods description seems incomplete. I get it! Reading papers where you don’t understand the methods is always a good cue to think about how you present your own work.

I assigned three papers grades of “A+”. Were they better than the ones I deemed “A”s? I’m not sure, but at the time, whether because of my general mood or their true brilliance, I sure thought they were great. They were:

I read a lot of other great papers too! But looking back, I can say that these were among my favorites, all for different reasons. I could go and add more papers to a “best-of” list but I’ll just leave it at that.

Recap!

Besides all the great reasons to do this challenge that I mentioned in the opening, this was pretty interesting data to delve into. I think I will try to keep doing the challenge in 2019, and I am currently thinking about how I choose which papers to read and if there are good strategies to read more diverse authors. I’m happy with the diversity of research that I read, but I would be happier if the voices describing that research were more diverse, to reflect the diversity of scientists in our world.

Do you have ideas about that? Comment below.

This was the final year of my PhD, and so in some ways a great time to do a reading challenge. It probably would have been more helpful if I had done this in the first year of my PhD, but hey, too late now. This year I wasn’t doing lab work, just writing and analyzing, so it was easy to fit in a lot of reading. It’s not good to stare at a screen writing all day, and I prefer to read on paper, so it was often a welcome break.

I don’t know what my work life will be like next year, so I will see how many papers I end up reading. It could be more, as I start a new project and need to get up to speed on a new subfield. Or it could be fewer as my working habits change. I’ll just do my best and adapt.

Finally, I’m thinking about whether there’s additional data I should track for next year’s challenge. Whether there is a difference between first and corresponding authors might be interesting. I’d welcome other suggestions too, but only if they don’t take much work to extract!


part 2: suppressed results.

In part one of my writeup on survey results, I talked a lot about the file drawer effect and why we end up not publishing some potentially useful results because we don’t have time. In a high-pressure environment where publication in the best journals is important to advance our careers, we often focus that limited time on the manuscripts with the highest potential impact. In some unfortunate cases, that means that professors do not prioritize giving their students the support necessary to publish results from projects, theses, or dissertations.

There’s no doubt that this can hurt younger scientists’ careers. Helping a student aim high and write higher-quality papers is great… but it can go too far, too.

“There was no specific pressure NOT to publish, but rather my supervisor could not provide useful and supportive feedback and he was never satisfied with any draft I submitted to him for review,” wrote one respondent. “After numerous iterations of my projects over many years, I became discouraged and decided it wasn’t worth the effort to try and publish my results. Others in my lab have had the same or similar experience.”

Today I will talk about something more insidious: when you are discouraged from publishing for other reasons – politics, data that didn’t support your research group’s hypothesis, or external partners who did not understand the results or the underlying science.

(If you want to know more about the dataset I am working with, its small size, and its various biases, I discussed it in part one: click over here.)

As an ecologist, I didn’t think this happened a lot in our field, at least not compared to fields where there are more commercial connections and money at stake – perhaps if you are an environmental consultant or writing impact reports for governments or companies. But in a purely academic community, I assumed it was fairly rare for results to be kept out of publication.

One thing quickly became obvious: it does happen, sometimes. There are lots of reasons, some of them highly case-specific (e.g., the government of the researcher’s home country ultimately refused to let them import their samples), but there are some common patterns, too.

With such a small sample size – 40 of the 184 respondents reported this happening – and also the fact that I made it clear online that I really wanted to hear from people who had been discouraged from publishing, it’s impossible to say how prevalent such events actually are. The proportion of responses does not reflect the proportion of total scientists who have had this experience.

I can certainly say that comparatively, many fewer unpublished papers are due to these events than due to the self-created file drawer effect. Two thirds of survey respondents said they had at least one unpublished dataset, if not a handful or more, even though many were just in the first five years of their research careers.

The file drawer effect means that there are tens of thousands of unpublished datasets out there, maybe 100,000. Many probably have no significant results, since some of the most cited reasons for not publishing were inconclusive data, needing to collect more data, and doubting that the results would be accepted by a high impact journal.

Other pressures happen in a smaller number of cases, but primarily for the opposite reason: results did show something interesting, but maybe not what someone – a supervisor or a government employee – wanted to see.

And while I cannot draw any conclusions about prevalence, I can (hopefully) draw some conclusions about why this happens and who it happens to.

A brief table of contents:

First, student-specific challenges.

Second, government and, to a lesser extent, industry challenges.

Third, “internal” and interpersonal political challenges.

Students Bear The Brunt of It

“As a grad student, the concept of this is crazy to me,” wrote one respondent. “In many ways and instances, publications are the currency by which scientists are measured against one another. Thus, not publishing work seems counterintuitive to me. I’d like to hear the reasons behind why it happens.”

Well, dear student, there are many. And being discouraged from publishing in fact seems to happen mostly to students. Here’s who the 40 survey respondents who reported being pressured not to publish their work were:

[Figure: respondents’ titles at the time]

Mostly students. One explanation is that as we go along in our careers, we get a better concept of what is good and valuable science, and make some of the decisions to jettison a project ourselves rather than being told by a supervisor. We also become more and more crunched for time, meaning that we make more of these prioritizations before it gets to the point of having someone else weigh in.

But that’s not the only explanation. Let’s look first at when a direct supervisor was involved. With 32 responses of this type, it was about twice as common in my dataset as pressure from an external person. In these supervisor-related cases, it was most frequently a tenured or tenure-track professor discouraging a graduate student from publishing a chapter of their thesis or dissertation.

[Figure: respondents’ titles in supervisor-related cases]

And as discussed in part one, part of the issue was that these driven supervisors were strapped for time and transferred their own expectations about significance of results and journal quality onto their students, even if students would have been happy settling for a lower-impact publication.

Sometimes this is very appropriate, sometimes less so. Where this line is drawn probably depends on your goals in science.

“A paper was published, but it excluded the results that I found the most interesting because they were not in line with the story that my advisor wished to push,” one respondent wrote of bachelors thesis research. “Instead, results from the same project that I thought were not well thought-out were published in a way that made them seem flashy, which seemed to be the main goal for my tenure-track advisor.”

Another respondent had a similar story with a different ending, about work done as part of a masters thesis.

“The situation was not resolved; I just ended up not publishing,” he/she wrote. “I wanted to publish, as I considered the results to be high-quality science and the information very useful to disseminate, but I could not agree to change the research focus entirely to suit my supervisor’s personal interests.”

You can see both sides of the coin in some cases. What is the goal? To advance scientific theory and knowledge, or to share system-specific data that might help someone in the future? Ideally, a manuscript does both, but sometimes that’s not possible, and just the second is still a good aim. In some cases the supervisor is probably guiding the student towards using their data to address some question larger than the one they had initially considered. But, as the bachelors thesis respondent noted, it’s not always appropriate to do so – some people think that overreaching and drawing conclusions from data not really designed to support them is a big problem in some fields.

“Some datasets and analyses I have collected and analysed don’t tell a clear story that would be readily publishable given the current state of how research articles are assessed for impact thus I tend to move on to things that tell a better story,” wrote another respondent. “This feels disingenuous at times though perhaps it is how science moves forward more quickly.”

A surprising amount of the time, supervisors discouraged students from publishing because the results turned out to not support their hypothesis. This was actually the most common single reason that a supervisor told a student not to publish. I may be naive, but it’s hard for me to think of a situation in which this is not just straight-up bad.

[Figure: reasons cited in supervisor-related cases]

I was explicit in asking whether the results did not support “our” hypothesis, or whether they did not support a supervisor’s, department’s, or company’s hypothesis. Sometimes the two overlapped, but most of the time when this happened the respondent selected the second option: the researcher themself might not have been surprised by the results, but the supervisor, lab group, or company did not like them.

(About 60% of the 32 responses came from ecology and evolution, but many also came from other fields.)

[Figure: respondents’ fields in supervisor-related cases]

This really surprised me. In our training as scientists it is drilled into us that we might learn as much from a null result or a reversal of our hypothesis as we would if our hypothesis was supported – maybe even more, because it tells us that we have to carefully look at our assumptions and logic, and can lead us down new and more innovative paths.

In the U.S. at least, a substantial proportion of the population just has no respect for science. Whether it’s climate change deniers or anti-vaxxers, as a science community we tell them: go ahead, prove us wrong! Science is very open to accepting data that disproves something we had previously thought was true. We try to tell the public that we are not close-minded, that we are following evidence, and that if the evidence showed us something else, we’d still accept it.

On some small scale, that might not be true, and it’s very troubling. Without knowing more about the research in question here, it’s impossible to say much more. But it’s not a very inspiring trend. And again: this was happening coming from direct supervisors who were mostly in academia and shouldn’t have had a financial or political conflict of interest or anything like that.

And it also has potentially big implications for the sum of our community’s knowledge. Luckily there are so many researchers out there that probably someone else will ask the same question and publish it eventually, but this sort of attitude can delay learning important and valuable things.

“Unfortunately it’s hard to tell what could become interesting later, or what could be interesting to another researcher, so it’s too bad that these results never see the light of day,” wrote one early-career biologist. “What’s more concerning to me is the tendency of some researchers in my field to ignore or leave out results that they can’t explain, or worse, that contradict their pet hypothesis.”

When pressure came from an external source – someone not supervising the respondent – this reason for discouraging publication was even more prevalent: almost half of those respondents cited it.

[Figure: reasons cited in external-pressure cases]

And relatedly, the person doing the pressuring was afraid that the results would make them, their group, or the government look bad. In other words, these are classic cases of repressing research, the worst-case scenario that we think of!

Governments are Not Always Great (for Science)

Sometimes, this external pressure came from within academia, but it was also often from governments.

“Yes, the results were published, yes it created an public uproar, yes all authors were chastised by the agency and external company, and yes all subsequent follow-up research papers on the topic were expressly forbidden,” wrote one federal government employee. “There are considerable research accomplished by state and federal government agencies. Much of those data results never see the light of day because the results may be divergent from what the chain of command’s perspective or directive may be, I.e. support the head official’s alternative energy, logging harvest, endangered species delisting, stream restoration, etc. policy.”

It’s clear that one place where state, local, and federal government officials can be particularly destructive is Canada. Apart from the cuts to research funding which have been hitting many countries, it’s been discussed by people far more knowledgeable than I that the government literally muzzles its scientists by not allowing them to talk to the media, among other policies: see here, here, and here.

Here’s what one anonymous survey respondent had to say: “The Canadian government has been muzzling scientists for years…I was just the latest in their ‘Thou Shalt Not Publish’ scheme. If the research you’re doing will make them look bad in any way, you’re not allowed to publish the results without fear of massive repercussions: job loss, degree removal, job losses of your superiors if they can’t fire you, being blacklisted in the scientific community, being blacklisted for grants, etc.”

Multiple survey respondents cited the Canadian government. So, about those elections coming up….

Consultants and researchers in the corporate/industrial sectors are often muzzled as well, but many of them are aware of this from the time they are hired.

“It is simply understood that if the research results from work we do for clients are inconvenient, they will attempt to redact the reports as trade secrets,” wrote one consultant. “They own the data so they are often able to do this. But not always.”

But even if companies are upfront about data ownership policies, it can still feel tough. One person told me that it was discouraging not to be able to get a patent and get credit for his/her work because a company owned all the intellectual property rights and would use the discovery as proprietary and secret until it was no longer profitable to do so.

In a variety of fields, there’s also some crossover between the industrial and academic sectors of research. Companies often provide funding to students or research groups working in an essential location or on a related topic. The companies shouldn’t be able to use their influence to suppress results, but in some cases they do seem to.

This is actually what happened in the case that inspired me to create my survey: the International Association of Athletics Federations squashed survey results showing that a huge proportion of championships competitors were doping. They were not involved in the research itself, but had provided access to the athletes, and thus felt it was their prerogative to police the results.

One survey respondent said that he had been let go from his position after publishing research about the effects of pesticides, and had heard a researcher with industry ties imply that the same thing would happen to someone else publishing similar research.

Several people in environmental and earth sciences fields mentioned this happening to people that they knew or had talked to, but it’s hard to pin down other than in news stories.

We Can Be Our Own Worst Enemies

Finally, other politics are more about internal power dynamics, be it within a department or within a research field.

“A person, invited late to the project, was asked to provide simple review in return for coauthorship,” wrote one respondent. “They hijacked the project and it is still unpublished four years on.”

It’s pretty tragic to see a good experiment, or maybe a whole grant that some agency spent hundreds of thousands of dollars on and researchers spent years of their lives on, get derailed by interpersonal problems and arguments about data ownership or authorship.

In many fields the community of specific experts is fairly small, so you are likely to have to work with people again, or have them review your papers, and so on. The problems are hard to resolve once they begin.

It was also clear that sometimes people nixed manuscripts because they didn’t understand the science or its value. Sometimes this meant a bureaucrat at a funding agency, but sadly, sometimes it also came from within the scientific community itself.

“Because my scientific community is so small, in some cases only one review has been given by a local expert, and of course the editors don’t have time to fact-check, but my paper will not be accepted because these few experts are, as I perceive it, not wanting recent data contrary to results from their systems to be published, and assume that someone with an M.Sc. cannot be a diligent scientist, in many cases providing lots of evidence in reviews that they have not read the manuscript with care… possibly skipping entire sections,” wrote one student.

There’s even outright theft sometimes.

“The results were made partially public at a conference,” wrote one researcher. “Another researcher who has hard feelings towards my former supervisor, and vice versa, started to use the data as if it was ‘public domain information’ and later my supervisor considered that the publication is not worth going out. The problem has not been resolved yet.”

A Reminder

This has been, in some ways, a tour of the worst of the scientific research community. We all know someone who has had some terrible experience with their research.

But many of us have had relatively happy tenures in science and research. At least in my field, ecology, I can say that the vast majority of people are good people and fun to work with. It’s part of what I love about my job. If the only people around me were those who stole results, bullied me into not publishing, constantly asked me to change the focus of my research, or demeaned what I did because I was a graduate student, I would quit.

But here I am, and I’m happy! Such people do not make up the majority in our fields. But it’s worth remembering that even one major interaction like this can seriously discourage people from continuing to do research. There are lots of other jobs out there, and if the research environment is malevolent it’s easy to feel that the grass is greener on the other side.

So: with the knowledge that there is some scummy behavior going on, can we try to be nicer and kinder to one another? After all, our goals are to advance scientific knowledge and to create more capable, creative, and conscientious scientists.

Thanks to all who participated in the survey. I hope it has been interesting and helpful to read about.

the contagion of perfectionism & the scientific publication bias.

A few weeks ago I sent out a survey to many of my scientist friends. I wanted to know: why does some research stay unpublished? Those outside academia or research might think that science always proceeds in a linear fashion. A person does a study, they publish it, now it is out there for other scientists to reference. Once research is performed, it is a known quantity. But that’s not necessarily true.

For any number of reasons, a fair chunk of research never makes it to the publication stage. Sometimes it’s because it’s bad research: it is biased or the methods are bad, so during the peer review process the paper is rejected. This might not be through any real fault of the scientists. The problems might have only become apparent after the research was completed. This is pretty inevitable, and can lead a research group to design a second study that is much better and really gets at their question.

But does all the good research even get published? No, definitely not. There’s research out there that remains unpublished even though it probably could have been.

Some possible reasons for this are that the researchers ran out of time to write up the results, or the results just didn’t seem very interesting, or their hypothesis was rejected. For these reasons people might choose to focus on another project they had going at the same time. But that leaves a gap in the record of published science: results with bigger effect sizes are published proportionally more often than null results. Results with no effect might be left in a drawer to be published later, or never.

This phenomenon is called the “file drawer effect” and is a major contributor to publication bias, which is problematic for many reasons I’ll discuss later. Here’s a nice paper on the file drawer effect.

With my survey, I wanted to get at why people don’t publish. First I asked about how much research they leave in the file drawer, so to speak, out of their own choice. Then I asked how often other people pressured them to avoid publishing, and why. I’ll get to that second question in part two of this post.

First, a caveat before getting to what people told me. The responses certainly don’t represent the whole science community, and I can’t draw any conclusions about the frequency of the types of things I’m asking about. I had 182 responses, which is not a lot, and the majority were from ecologists and evolutionary biologists relatively early in their careers. The survey was spread by word of mouth, so this is just a function of who I know.

Here’s some data on who responded to my call:

[Figure: respondents’ fields]

[Figure: age of researchers]

(I’ll also add that most respondents were in academia, but there were also some who worked for government research institutes or companies. I’ll get more into that in part two, but for now I’m going to write primarily from the academia perspective. Just be aware that there are some non-academia responses in here as well.)

Now. One of the first questions I asked was, “How many papers or reports worth of results of your own work remain unpublished, by your own choice?”

It’s obvious now that I could have worded this a little better. Some people included all of their unpublished work in this answer, while others said that just because something hadn’t been published yet didn’t mean it never would be, and left some work out of the count. They may be right about that: some work does eventually get published years later, when researchers finally have a chunk of free time and nothing “more important” to do.

(“Pressure was not direct – just lack of support to move the paper forward,” one survey responder wrote of his/her supervisor. “Ultimately he approached me to finally publish the work – after more than 20 years!”)

But that data that you swear you will write up one day can also remain in the file drawer indefinitely.

In any case, here’s what I found:

[Figure: unpublished datasets (by own choice) vs. years in research]

The other thing that is unclear in the results is how realistic it is to publish all of these datasets – are they each an individual paper, for instance? Some people take a dataset and divide it into as many pieces as possible so that they can get the most publications out of it, when in reality publishing all the data together would have made a more interesting, meaningful, and high-impact single manuscript. So has someone doing research for six years really accumulated ten unpublished datasets? Perhaps they have. Meanwhile, I am impressed by the few people who had been doing research for 25 or even 40 years and had seemingly published every worthwhile dataset they had ever collected. These people must be writing machines. (And I say that as someone who writes quite a lot!)

Adding a regression line is probably inappropriate here, but let’s just say that in the first ten years of their career (depending on how they are counting, this is a bachelors thesis, a masters, a PhD, and maybe a postdoc or two), many people accumulate four or five studies that they could have published, but they didn’t. After that they might accumulate one every five or ten years. It makes sense that more of the unpublished papers come early in the career because people aren’t yet adept or fast at writing papers. They also don’t have as much experience doing research, so data from projects like bachelors theses often go unpublished because of flaws in study design or data collection. These mistakes are what eventually lead us to learn to do better science, but they can keep a piece of research out of a top journal.
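
(For the curious, the line itself is trivial to fit – a sketch assuming a hypothetical survey CSV – even if, as I said, a straight line probably isn’t the right model for counts that accumulate quickly early on and slowly later.)

```python
# Sketch: fit the (probably inappropriate) straight line. Assumes a
# hypothetical "survey_responses.csv" with columns "years_in_research"
# and "n_unpublished".
import numpy as np
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Ordinary least squares: unpublished datasets vs. career length.
slope, intercept = np.polyfit(df["years_in_research"], df["n_unpublished"], deg=1)
print(f"~{intercept:.1f} datasets at year zero, +{slope:.2f} per additional year")
```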

As of 2013, there were about 40,000 postdocs in the United States. If each has accumulated even a few unpublished datasets – and four or five in the first decade seems typical from these responses – that’s well over 100,000 from U.S. postdocs alone, before adding the rest of the world. There’s potentially a lot of unpublished research out there. (Is that good or bad? Both, and I’ll get to that later.)

The answers are partially biased, I am sure, by the differences in productivity and funding between different researchers. This might depend on what kind of appointment the researcher has – is their job guaranteed? – and how much funding they have. A bigger lab can generate a lot more results. But someone still needs to write them; labs might go through phases where the writing falls primarily on the PI (principal investigator, a.k.a. lab head) or other phases where there are highly productive postdocs or precocious PhD students who also get a lot of papers out the door.

And one of the biggest constraints is, of course, time. With pressure to publish your best research in order to get that postdoc position, to be competitive for a tenure-track job, and to eventually get tenure, if researchers have to choose between publishing a high-impact paper and low-impact one they will certainly focus their energies on the high-impact results. The results that were confusing or didn’t seem to show much effect of whatever was being investigated might stay in that file drawer.

One thing that was clear is that this problem of time as a limiting resource is contagious. Later, I asked people if they had ever been discouraged from publishing something which they had wanted to publish. Of the 32 cases where a supervisor discouraged publishing, two answers as to why emerged as particularly common.

[Figure: the most common reasons supervisors discouraged publication]

Let’s look at the “more data was needed” issue first. What I offered as a potential response in the multiple choice question was, “My supervisor thought that we needed to collect more data before publishing, even though I thought we could have published as-is.”

In some cases, the supervisor might be right. Maybe more data really was needed, maybe the experiment needed to be replicated to ensure the results were really true, maybe the team needed to do a follow-up experiment or correct some design flaws. After all, the supervisor should have more experience and be able to assess whether the research is really good science which will stand up to peer review.

“Simply, what I thought were publishable results were probably not worth the paper it would be printed in,” one responder wrote of research (s)he had done during a bachelors thesis, but which the supervisor had not supported publishing. “The results did serve as the basis for several other successful grant applications.”

But at the same time: as Meghan Duffy recently noted on Dynamic Ecology, perfect is the enemy of good. That can go for writing an email to your lab, and also for doing experiments. In the discussions of her blog post, someone noted that “perfect is the enemy of DONE” and Jeremy Fox wrote that often graduate students can get into the rut of wanting to just add one more experiment to their thesis or dissertation, so that it is complete, but at some point you just have to stop.

“I have not directly been pressed not to publish, but I have 2 paper drafts which have not been published yet,” wrote another respondent. “I wrote them as a PhD student and now I think they will be published, but I have the feeling that for some period, one of my supervisors did not want to publish them because it was just correct but not perfect enough.”

If more data should be collected, but probably never will be, does that mean that the whole study should sit in the file drawer? If it was done correctly, should it still be published so that other people can see the results, and maybe they can do the follow-up work? Different researchers might have different answers to this question depending on how possessive they are of data or an idea, or what level of publication they expect from themselves. But if a student, for example, is the primary person who did the research, their opinions should be taken into account too.

Why is this publication gap a problem?

That gets into the second idea: that as-is, the research won’t be accepted into a top journal.

Ideally, this shouldn’t matter, if the research itself is sound. There are plenty of journals, some of them highly ranked, which accept articles based more on whether the science is good and the methods correct, rather than whether the results are groundbreaking.

(Unfortunately, these journals often have big publication fees, whereas in many highly-ranked journals publication is free, just with a large time investment. PLOS ONE, one of the best-known journals that judges submissions on study quality rather than outcome, is raising its publication fee from $1350 to $1495, paid by the authors. For some labs this doesn’t matter, but for less-flush research groups the cost of open-access publishing can definitely deter publication.)

It is important to get well-done studies with null results out there in the world. Scientific knowledge is gathered in a stepwise fashion. Other scientists should know about null results, arguably just as much or more than they should know about significant results. We can’t move knowledge forward without also knowing when things don’t work or don’t have an effect.

Here are two quick examples. First, at least in ecology and evolution, we often rely on meta-analyses to tell us whether ideas and theories are correct or, for example, how natural systems respond to climate change or pollution. The idea is to gather all of the studies that have been done on a particular topic, standardize the way the responses were measured, and then do some statistics to see whether, overall, there is a significant trend in one direction or another. (Or, to get a little more sophisticated, to see why different systems might respond in different ways to similar experiments.) This both provides a somewhat definitive answer to a question and collects all of the work on a topic in one place, so that each scientist doesn’t have to scour the literature for every study that might be relevant.

If only 20% of the scientists studying a question find a significant effect, but theirs are the only results that get published, then literature searches and meta-analyses will show that there is, indeed, a significant effect – even if, across all the studies that have been done (including the unpublished ones), it’s a wash. Scientific knowledge is hindered and confounded when this happens.
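
(Here’s a toy simulation of that “wash” scenario – not my survey data, just an illustration: 200 studies of a true zero effect, where only significant results in the expected direction get “published”. Pooling the published subset with standard inverse-variance weights manufactures a large effect out of nothing.)

```python
# Toy simulation: true effect is zero, but only studies that are
# significant AND in the "expected" direction get published. A naive
# fixed-effect meta-analysis of the published subset then finds a
# strong effect that isn't there. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n = 200, 20
effects, ses, published = [], [], []

for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(0.0, 1.0, n)        # true effect is exactly zero
    diff = treatment.mean() - control.mean()
    se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
    p = stats.ttest_ind(treatment, control).pvalue
    effects.append(diff)
    ses.append(se)
    published.append(p < 0.05 and diff > 0)    # the publication filter

effects, ses, published = np.array(effects), np.array(ses), np.array(published)

def pooled(e, s):
    """Fixed-effect meta-analytic mean with inverse-variance weights."""
    w = 1.0 / s**2
    return (w * e).sum() / w.sum()

print("pooled effect, all studies:    ", round(pooled(effects, ses), 3))
print("pooled effect, published only: ",
      round(pooled(effects[published], ses[published]), 3))
```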

A second example. When you are designing a study, you search the literature to find out what has been done before. You want to know if someone else has already asked the same question, and if so, what results they found. You also might want to know what methods other people use, so that you can either use the same ones, or improve them. If research is never published, then you might bumble along and make the same mistakes which someone already has made. The same flawed study might be performed several times with each person realizing only later that they should have used a different design, but never bothering to disseminate that information. (And sure, you can ask around to find unpublished results, but if there’s no record of someone ever studying a topic, you’re unlikely to know to ask them!)

Almost everyone in the scientific community acknowledges that the publication bias towards positive or significant results is problematic. But that doesn’t really solve the problem. It’s just a fact that null results are often much harder to publish, and much harder to get into a good journal. And considering the pressure that researchers are under to always shoot for the highest journals, so that they can secure funding and jobs and advance their careers, they are likely to continue neglecting the null results.

“I think a lot of pressure comes from the community rather than individuals to avoid publishing negative results,” one early-career ecologist wrote in a comment. “I think negative results are useful to publish but there needs to be more incentives to do so!”

This pressure can be so great that, I was told in a recent discussion, having publications in low-impact journals can actually detract from your CV, even if you have high-impact publications as well. Two candidates with the same number of Ecology Letters or American Naturalist or even Nature papers (those are good) might be evaluated differently if one of them has a lot of papers in minor regional or topic-specific journals mixed in. Thus, some researchers opt for “quality not quantity” and publish only infrequently, and only their best results. Others continue to publish datasets that they feel are valuable even if they know a search or tenure committee might not see that value, but consider leaving some things off their CV.

One thing I’d like to mention here is that with the “contagion”, students are sometimes affected by their supervisors’ standards of journal quality. While a tenure-track supervisor may only consider a publication worthwhile if it’s in a top journal, a masters student may benefit greatly from having any publication (well, not any, but you see my point) on their CV when applying for PhD positions. I also know from my own experience that there is incredible value, as a student, in going through the publication process as the corresponding author: learning to write cover letters, respond to reviewer comments, prepare publication-quality figures, etc. Doing so at an early stage with a less-important manuscript might be highly beneficial when, a few years later, you have something with a lot of impact that you need to shepherd through the publication process.

There are many good supervisors who balance these two competing needs: to get top publications for themselves, but to also do what is needed to help their students and employees who might be at very different career stages. In many cases, of course, supervisors are indeed the ones pushing a reluctant graduate to publish their results!

Unfortunately, this is not always the case. Again, because of the low number and biased identity of survey participants I can’t say anything about how frequently supervisors hinder their students in publishing. But I think almost everyone has some friend who has experienced this, even if they haven’t themselves.

“I have been in the situation where a supervisor assumed that I would not publish and showed no interest in helping me publish,” wrote one responder. “As a student, being left hanging out to dry like that is rough – might as well have told me not to publish.”

“Depending on the lab I’ve been in, the supervisory filter is strong in that only things deemed interesting and important by them get the go ahead to go towards publication,” wrote another. “Thus, the independence to determine what to publish of the multiple threads of my research is lacking in certain labs and management structures.”

That obviously feeds into the publication bias. So how do we get past it, in the name of science? There aren’t a lot of answers.

Why is the publication gap maybe not so bad?

At the same time, it’s clear that if all this research (100,000 or more papers!) were submitted for publication, there would be some additional problems. Scientific output is roughly doubling every nine years. More and more papers are being published; there are more postdocs (though fewer tenure-track professor positions) in today’s scientific world, and I’m pretty sure the number of graduate students increased after the “Great Recession”, about the time I was finishing my bachelors degree, when many of my classmates’ seemingly guaranteed jobs suddenly disappeared.

This puts a lot of stress on the peer review system. Scientists are not paid to review research for journals, and reviewing may or may not be included as a performance metric in their evaluations (if it is, it’s certainly not as important as publishing or teaching). With more and more papers being submitted, more and more reviews are needed. That cuts time out of, you guessed it, reviewers doing their own research. It’s a problem lots of people talk about.


Others lament that with so many papers out there, it’s getting harder and harder to find the one you need. Science is swamped with papers.

Even without publishing in a journal, there are other ways to find data. For instance, masters theses and PhD dissertations are often published online by their institutions, even if the individual chapters never make it into a peer-reviewed journal (perhaps because the student leaves science and has no motivation to go through the grueling publication process). But this type of literature can be harder to find, and is not indexed in Web of Knowledge, for example. So if it’s the data or methods you need, you might never find them.

Reconciliation?

I’m not particularly convinced by the argument that there’s too much science out there. Research is still filtered by journal quality. Personally, I read the tables of contents for the best and most relevant journals in my field. I also have Google Scholar alerts set for a few topics relevant to my research, so that when someone publishes something in a place that would be harder to find, I still know about it. This has been useful: I’m glad the work was published, even if it’s in an obscure place.

With that in mind, I wonder if there is a way to publish datasets with a methods description and some metadata but without having to write a full paper.

There are, of course, many online data repositories. But I don’t believe people use them for this purpose as much as they could. It is now becoming common for journals to require that data be archived when a paper is published, so much of the data in these repositories has already been published alongside a paper. In other cases, people only bother to publish a standalone dataset if it is large, has taken a lot of time to curate, or might be of particular interest and use to the community. Smaller datasets from pilot projects or null results are not often given the same treatment.

And while published datasets are searchable within the individual repositories’ archives, they don’t show up in the most common literature search tools, because they aren’t literature: they are just data.

Is there a way that we could integrate the two? If you have five papers’ worth of data that you don’t think you’ll ever publish, why can’t we have a data repository system that includes a robust methods and metadata section, but skips the other parts of a traditional manuscript? If this were searchable like other kinds of literature, it could contribute to more accurate meta-analyses and a faster advancement of science, because people would be able to see what had been done before, whether it “worked” and was published with high impact or not. The peer review process could be minimal and, as with code or existing data archives, these data papers could have DOIs and be citable.
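To make that concrete, here is a rough sketch of what a single record in such a system might look like. To be clear, this is just my doodle: the field names and the whole structure are invented for illustration, not borrowed from any existing repository’s schema.

```python
import json

# A hypothetical "data-only publication" record. Every field name here is
# invented for illustration; no existing repository uses this exact schema.
record = {
    "doi": "10.9999/hypothetical.0001",  # a DOI, so the dataset is citable
    "title": "Pilot warming experiment on subalpine grassland (null result)",
    "authors": ["A. Researcher", "B. Student"],
    "year": 2015,
    # The robust methods description travels with the data...
    "methods": (
        "Open-top chambers on 20 paired plots, June-August 2014. "
        "Air temperature logged hourly at 10 cm; flowering counted weekly. "
        "No detectable treatment effect on flowering phenology."
    ),
    # ...and so does machine-readable metadata describing each variable.
    "variables": {
        "temp_c": "hourly air temperature at 10 cm height (deg C)",
        "flowers": "open flowers per plot per week",
    },
    "keywords": ["warming", "phenology", "null result"],
    "files": ["plots.csv", "temperatures.csv"],
    "license": "CC-BY-4.0",
}

# A repository could index exactly this text, which is what would let the
# record surface in literature searches alongside ordinary papers.
print(json.dumps(record, indent=2))
```

The point is simply that if the methods and variable descriptions are attached to the data in a structured way, a search tool can index them the same way it indexes an abstract – no introduction or discussion section required.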

But I’m not sure if this is realistic (and honestly, I haven’t thought through the specifics!). Science seems slow to change in a lot of ways. Methods change fast, and open access and online-only publishing have swept through to success. But creative ideas like post-publication review and preprints have been slower to catch on. These ideas tend to attract a group of fierce supporters, but have a difficult time really permeating the scientific “mainstream”.

The scientific community is big – how can we change the culture to prevent our large and growing file drawers full of unpublished results from biasing the literature?

Stay tuned for part two of this series, about other reasons that people are pressured not to publish results – for instance, internal or external politics, competing hypotheses, stolen data. Part two will be published later this week. If you want to take the survey before it goes up, click here.

open access for who?

Beautiful Uppsala.

A lot has been written about the push for open access publishing in academia. In case you’re not familiar with it, it means publishing in journals whose content is available online, free of charge, to everyone. This is very different from the traditional journal model, where libraries pay exorbitant fees to publishers for access to the journals, and if you aren’t working through one of those libraries you hit a paywall, where a single article is likely to cost $30-45 if not more.

In a lot of ways I feel like I can’t add much: it’s a great idea, it helps science be more accessible, it often helps data be more accessible, it opens the conversation. It’s still a high cost, just borne by authors instead of by libraries. And somehow the journals make all the money no matter which way we publish.

I fully support the idea of open access, and most of my papers so far have been published in open-access journals. That includes one about climate change effects on a seemingly unassuming (but actually ecologically and reproductively fascinating) arctic/alpine cushion plant, Silene acaulis. That paper went on to be one of the most highly accessed articles in Springer’s catch-all open-access journal, SpringerPlus. To date it has over 4,000 accesses, according to the article metrics. Would this have been more if it were published in a different journal? I have no idea, but it is much more popular than I had expected.

Based partly on this positive experience, my masters supervisor (Juha Alatalo) and I decided to publish primarily in open-access journals. (I did not make the same decision about my other work, and have a different manuscript based on my research in Davos submitted to a traditional journal.) Which brings me to my question: how do I pay for it?

In traditional journals, there might be no fee at all to get a manuscript published; more likely (at least in the better journals), there is a fee for printing color figures, or perhaps a per-page charge. In open access, that goes out the window. Because journals can’t charge libraries for access to these manuscripts, they charge the authors instead. Fees usually run over $1,000, sometimes up to $3,000.

Some departments and lab groups work this into their budgets. Some researchers include a category on their grant applications to cover publication fees; however, some funding agencies explicitly do not pay them. If you are a researcher between grants, money might be tight. Or, like me, you might be a graduate student working to publish your first first-authored paper. It would take more than a month’s worth of my masters scholarship payments just to cover the open access fees. And, like me, you might work in a small lab group that does not have extra funding to easily cover these sorts of things.

I looked around and found that many universities (not all, but a chunk of the R1 schools in the U.S.) have special funds to cover open-access publishing. Just via Google, here are a few examples: Harvard; University of Calgary; Cornell; University of Arizona.

The University of Heidelberg in Germany has a funny way of describing the rationale for their fund: “Heidelberg University supports researchers who are willing to publish articles in open access journals with a publishing fund to cover article processing charges.” Are willing. As if it’s some burden.

PLoS One even has a list of universities with funds to cover publishing in PLoS (the Public Library of Science) journals. That’s really nice on first read, but think about it more and it seems less “open”: the publishing house itself is pointing people toward ways to convince third parties to pay the publishing house.

It also, and I am being petty and jealous here, makes it much easier for some researchers to publish in open-access journals than for others. The university where I did my masters, Uppsala University in Sweden, does not have such a fund. During the time when I wrote the paper I am now seeking to publish, I was supported only by a small scholarship from my masters program. I received no funding from my supervisor or his lab. It’s not like I have leftover grant money with which I can pay publication fees.

Being in Sweden, home of Pirate Bay and the Pirate Party, Uppsala of course loves the idea of making science publicly available. Sweden has a program, OpenAccess.se, which promotes open access. Trawling through the Uppsala library archives, I was unable to find any evidence of funding to cover open access fees, but I did find a PowerPoint presentation which stated, awkwardly, that there was at the moment no funding available to cover these expenses, even though they really would like researchers to publish open-access.

Instead, Uppsala has a database called DiVA, which they call an open-access repository. This type of “repository” is listed as one of the main goals of OpenAccess.se. Until recently, students were required to submit their theses to DiVA so that they could be read by all; departments then realized that if a student later tried to publish part of that thesis, journals might balk, since it had technically already been published. When I finished my masters, we were first told to submit our theses, and then told not to because the university had to sort out some legal issues.

There are also published articles in DiVA, and researchers are encouraged to upload work that has been published in journals. There are a few problems with this: copyright on journal articles is complex, and you aren’t necessarily allowed to “make” an article open access by posting it online. The journal owns the copyright, even if you own the data. As a result, there are not many full-text articles in DiVA. If I do a keyword search for the major ecological concept I am studying in my PhD, dendritic networks, nothing comes up. If I search for “dendritic”, I get some clinical medicine articles.

And DiVA is Uppsala’s crowning library achievement, in some sense. It is heavily promoted within the university, and touted as their contribution to open access.

(It also has other functions. “All publications by researchers and staff at Uppsala University should be registered in DiVA,” the FAQ reads. “The reason for this is to produce a complete picture of what is being published by staff at the university. In addition departments can use this information to facilitate the evaluation and distribution of funds.” There are many records of publications which do not have the actual full-text articles attached to them.)

It’s pretty clear that while DiVA might be useful for many purposes, it is not the same thing as an open-access journal. And if you want your work to be accessible to all, Uppsala – consistently ranked among the top 100 universities in the world, and the second-best in Sweden – is not going to help you.

Here’s another example of how I’m stuck: the Ecological Society of America, which publishes multiple highly regarded journals, waives page fees for its flagship publications Ecology, Ecological Applications, and Ecological Monographs (for the first 15 pages per year, at least) for members who lack grant money. For its open-access journal Ecosphere, members get a reduced publication fee: $1,250 instead of $1,500. But there are no funds available to further cover costs for researchers who lack grant money.

And so, I’d like to ask: open access publishing is frequently discussed in very idealistic terms, with lofty goals for the future. But is it really so egalitarian? If you lack funding – for instance, if you are early in your career, not coincidentally the point where open access to your work might be most beneficial – there seems to be a clear message: open access is not for you. Finding a broader audience for your publication may be unattainable, and so is your hope of sharing knowledge with all.

beginnings.

Every time I talk to my mother (hi mom!) she asks me something like, “so what is your usual day like?” I’m the first one in my family to go to a research-based graduate program in the sciences – my cousin Jess is in med school and my uncles got PhDs in history and economics, but the routines of those lives are very different. There’s a certain amount of mystery and allure about what happens when you are a graduate student, beyond of course my mother’s general curiosity about what I’m doing with my life. I’m not going to class, so what is it that I’m doing?

Apparently I’m a professional now. Headshot for the institute website http://www.eawag.ch.

But there’s not a real answer. Days both fly by and drag past. There isn’t really so much to distinguish them from one another at this point – I haven’t started fieldwork or labwork for real. The main thing is that Fridays are filled with group meetings, department meetings, and seminars. Sometimes other days are, too. Those days it can feel like you get nothing done and are running around from one thing to the next all day. Other days it can feel like you get nothing done and are just reading all day. No matter what kind of day it is, it’s hard to measure progress or have any tangible outcome of what you’ve done.

For me, the biggest change is having a community, a structure, obligations, meetings. For more than a year before this, I worked pretty independently. In Davos at SLF, we were a tiny department and had a short meeting once a week. That was it, other than checking in with co-workers whenever it was convenient, by popping my head into their office or taking out my earbuds and striking up a conversation across my desk.

In Sweden, my supervisor was on paternity leave and only came into the office a few days a week. I didn’t have a real office – he offered me a place in the computer lab – so I worked in the library, which was a considerably nicer place, or from home. I checked in with him in his office a few times a week, but other than that it was up to me to make my own schedule.

No matter where you are in academia, there’s some flexibility in scheduling. People keep their own hours. Night owls hunch over their computers deep into the night; early birds cycle to work and have pumped out a few pages of writing or analysis before the rest of us even arrive. I don’t have to keep a timesheet, clock in and out, or tell anyone my schedule.

But compared to Davos and Gotland, it’s jarring to be back in an environment where people more or less arrive at 8 and leave at 5, taking a regular lunch break all together in the cafeteria. Where there are meetings and seminars and journal clubs that you will be shamed if you don’t go to. Where if you decide to leave early and spend the rest of the afternoon reading that book from the comfort of your sofa at home, probably there was some important obligation that you will have totally forgotten about and subsequently miss.

This is, of course, real life. I don’t dislike it. In fact, I do like it. I like our coffee breaks, our lunches, having other people. I can chat with the postdocs; I can turn my chair around and ask my fellow PhD student, Roman, how to go through the maze-like University matriculation process. Once a week I have a scheduled meeting with my supervisor Florian. We spend an hour or two talking about my project, ranging from experimental details to theory.

On Thursday, the ECO department had its annual Christmas party. A few wonderful people dressed the old teaching lab up with streamers, a disco ball, and other decorations. The department purchased more beer and wine than the 60 of us could possibly drink; they hired a pizza truck to park outside (I was initially alarmed to see an actual wood-fired pizza oven in the back of a truck, because it just seems dangerous, but on second thought it’s no crazier than a usual food truck) and make us pizzas to order. The rest of us brought salads and desserts; three students DJ’ed and everyone, from students to lab techs to the administrative staff, danced. It was great fun. It’s really nice to be part of a department with such a sense of community.

The hardest thing is that I have to try to get my daily run in before work, which is challenging when it doesn’t get light until after 7. I struggle to pry myself out of bed. But I shouldn’t complain. Millions of people manage to go running in the dark before work, why is it so hard for me?

The second hardest thing is that these meetings with Florian never have a concrete outcome. Embarking on a PhD in Switzerland is different from doing one in the U.S. because I only have three years to finish. That means that right off the bat, there is a certain sense of urgency to figure out what I’m doing and get started. But at the same time, mistakes can’t be made. Things have to be carefully planned and connected to theory. We have to make sure we have good questions that we are answering, that we’re not just collecting data willy-nilly. It’s a fine balance between making decisions and taking more time.

It can feel frustrating, but I think that is what a PhD, and indeed research itself, is all about. Still, it is easy to feel jealous of Roman, who is six months ahead of me and already busy with labwork, PCRs, and weeklong trips to other labs to learn new techniques. I’m still figuring out what it is I have to learn, and it can feel like everyone around me is leaving me behind as they move on with their projects.

None of that really answers the question of what I do all day. What I do all day is very different from what I will be doing in three months; in some ways it isn’t very representative of what PhD life is like.

But maybe it answers what I feel all day. I feel giddy if I make a GIS map; I feel sleepy if I read too many chapters of a book on ecological theory; I feel excited when I listen to a seminar about some cool research someone in my department is doing; I feel overwhelmed when I think about how to try to link all the pieces of my project together. I feel responsible for some important, long-term decision when I go to buy hip-waders for my stream work, even though actually this is probably the least-important decision I will make in the whole project.

And this combination of excitement, stress, and confusion is probably what will characterize my life for the next three years. One of the best things I am doing now is looking at the people around me and trying to glean information about how they manage their days, their projects, their home lives, their expectations. Luckily, I have a great set of mentors to learn from.

reality (of) settling in.


By now I’m a pro at traveling and moving. Over the last two years I have lived in four different countries; at this point I’m up to 16 straight months in which I haven’t stayed in one country for a whole month, whether because of a “permanent” move, a conference, or a vacation.

When I left the U.S. on Halloween to move to Zurich, things were easy. Packing went more smoothly than it ever has in the past. The travel was no problem; at JFK I talked my way into the U.S. Airways Lounge by charming the front desk lady. There, I enjoyed free breakfast and stocked up on snacks for my trip. I even managed to actually sleep on the overnight flight.

Moving in and settling down, I thought, would be just as easy. Paperwork? I’ve done that before. Bureaucratic rules? Check. Learning where the nearest grocery store is? Yeah, I’ve done that. (Not to mention, I now have an iPhone so I can actually get directions and maps on the fly, which is huge.)

After all, I have been looking forward to this so much: coming to Zurich, working in an amazing institute, living in one place for three years. Making myself a home and finding a community. Traveling with a backpack, instead of a duffel bag and a ski bag each weighing 50 pounds. Having a kitchen; having a living room. Having a bedroom that is separate from those rooms. Having a yard! Having plants. Being able to get into a routine, to have habits like going for a run in the morning. Having space to plan in.

I’m certainly well on my way to those goals.


But surprisingly, the moving-in and settling-down part is proving harder than I had expected. Way harder. And I don’t mean that it’s a struggle: most things were checked off my list in the first day. I have a phone number, a bank account, a train pass, some furniture. The main things remaining are university registration (all my documents are in, but the University of Zurich inexplicably takes up to three months to process them?) and, once that’s complete, insurance. And the migration board. Switzerland lets you in as long as you have a visa, but then waits to offer you an appointment to get a residence permit – so for weeks, potentially, you’re living with no Swiss-issued permission or ID. Which prevents you from checking a whole bunch of other things off your list.

What’s hard is that every day when I come home from work, I’m exhausted. Totally, completely done with the world. As I recently wrote in an e-mail to one friend, “I get home and I melt into a puddle of useless sofa-glop.” (At least I have a sofa.)

The things that I want to do in my evenings don’t get done. I don’t go for that jog or do that circuit workout. I don’t read that paper or work on that manuscript left over from my masters degree. I don’t even reply to e-mails from my friends – the act of typing out my thoughts is too much. I don’t write the article for FasterSkier.

Probably, I peruse the internet and, in a haze, read some pop culture news that doesn’t even absorb into my brain.

Why this is so, I can’t seem to explain.

There’s been a lot of discussion in academic circles recently claiming that we all complain about being busy and stressed only because we choose to see ourselves that way. There have been some fabulous rebuttals, my favorite of which comes from Timothée Poisot:

“the raw volume of things we have to do increases over time; so does our productivity, but with a delay. We are essentially in a Red-Queen dynamics with ourselves: more work to do means that we have to develop a new coping strategy, in the form of more productive habits. Then when we feel comfortable, we take on more work, and become overworked again.”

(If you’re not familiar with the Red Queen hypothesis, here’s a nice explanation of how a chapter in Through The Looking Glass is related to evolutionary theory.)

Looking back over the last few years, I totally see this in my life. That’s why I think it’s such a great explanation. I’ve gone from producing 50 to 75 to over 100 race reports (of increasing quality) for FasterSkier every winter, while simultaneously holding better and more serious jobs – hell, I did a masters degree which involved writing my own grants and administering a field season. I never feel totally comfortable, but as I pile on more things, they always seem to get done with no more stress than the previous, slightly smaller workload.

Do things fall by the wayside? Yeah. Personal relationships. But I still have good epistolary (ok, e-mail) relationships with some great friends, and things always fall back into place when I see them. I still wish I were better, though, and wish I were closer to some of the people I care most about. And I wish I had more time to exercise – that for sure gets lost. I’m in the worst shape of my life since high school, but on the other hand, I’m still certainly in better shape than most people. It’s just my personality and life experience that keep me saying that this isn’t good enough.

Business, and busy-ness, marches onward, to both ever-new heights and exactly the same height.

What I can’t reason my way around is my sudden crash once I moved to Zurich. If anything, I’m less “busy” than I have been: the grind of the PhD has barely started. I’m still reading papers and trying to feel my way out. I will start seminars and journal clubs for the first time this week; up until now I got out of many of the demands of my position by virtue of being “new” and “still settling in”. Compared to my classmates all around me, life is a breeze.

So perhaps it comes down to this. For the last two years, I have known that every move is more or less temporary. That I need to make friends, but maybe not worry about them too much because I’ll just leave them soon. That the main thing I need to do is keep myself happy from day to day. (And in the course of being happy, of course, you end up making friends who are much more than temporary.)

Now, there’s a lot more pressure. I have to find the things that can keep me happy for the next three years. I have to make better, lasting relationships. If I go for a run, I’m thinking, oh yes, this is how my morning run will be! Which means, wait, what if I don’t like that morning run?

Which is silly, of course.

Yesterday I went for a hike with my friend Timothée (not the same as the guy who wrote the blog post). I randomly picked a place on the map where there were nice-looking mountains (at least according to contour lines) that wasn’t too far from Zurich. We took the train for an hour and set out.

It turned out to be straight up, for 3,000+ feet. No breaks, no little flats or downhills as you head for the next ridge. It took impossibly long (well, just an hour and a half, with some breaks to look at chamois and sketchy cable cars) to reach the point I had marked on the map as the spot where we could decide which of several routes to take onward. Sweating profusely and out of breath, I’d look at my watch and realize that it had only been ten minutes since the last time I looked at my watch thinking, we must be getting somewhere by now.

Of course, we eventually got somewhere – somewhere with beautiful views. It was rewarding and I was thrilled to be in the mountains. All of the things that you remember when hiking as soon as the part that sucks stops.

And that’s a little bit what starting in Zurich is like. It’s uphill and I am more and more exhausted, and I keep thinking, somewhere up ahead is a trail that traverses across the side of the mountain. At some point it won’t be uphill anymore. It must be right around the next corner.

The bottom line is that moving takes a lot more out of you than you expect. Over the next few months, things will get easier. Routines will develop without me consciously thinking, “oh yes, this is a routine which is developing.” Days will become a blur of office, seminars, meetings, lunch with the lab group, German class, presentations. Weekends will be for skiing and reporting. I won’t notice so much the weird starkness of settling in before you are settled in.

And I will get back to being busy as a student, as a writer, as a crazy-ass human being. You know, like usual. For some reason, that doesn’t exhaust me.


up by the bootstraps.


Amid the recent economic downturn, there has been a lot of criticism of my generation. We say we have no jobs? Well, we’re lazy; we expect everything to be handed to us; we don’t plan for the future and then complain when the future is not good. That’s why we don’t have jobs. One common refrain is “well you should have majored in a STEM field, not the humanities, and you’d have a reliable job.”

It’s certainly true that there are jobs in the STEM fields. It’s true that a degree in biology, for instance, teaches you skills like data management and statistics which may transfer to normal jobs. But recent census data shows that only 1 in 4 people with a bachelors degree in a STEM field go on to a job in those fields. So: there aren’t that many jobs available, actually. Good luck getting one. (Also, no shock here, most of the people who do get these jobs are men.)

It’s also interesting to hear people’s reactions when I say that I am about to start a PhD program. Getting a PhD is still so respected, so mythical: people tell me that they could never imagine receiving a PhD offer, much less doing the work to get the degree. Maybe I’ll learn otherwise, but I disagree. Do you have to be smart? Yes. Do you have to have done good work in your career up to this point? Yes. But in my mind, the biggest challenge to a long-term research degree is working really hard, working long hours, and staying motivated when there is no end in sight and things aren’t going well. It can be a very discouraging slog; many people hate their PhD project by the time they finish.

But more and more people are finishing. More and more degrees are being handed out. According to the NSF, the number of doctorates awarded increases by 3.4% annually – at that rate, the annual count doubles roughly every 21 years. Clearly, achieving this is not an impossible feat: it just takes a lot of sacrifices and hard work.

Hard work is something I’m good at. That’s why I feel comfortable taking on a PhD. I think I can muscle my way through. But will it matter, in the long run?

For people in our grandparents’ generation, becoming a college professor was a good, secure, respected life. By no means easy or necessarily affluent, but solidly middle-class. Things have changed. In 1969, 78% of faculty positions were tenure-track; today that number has dipped to just 33%. The good jobs are disappearing. There are three times as many part-time faculty jobs as there were then, and reports of adjunct professors who have to work at multiple schools to pay the bills, or go on food stamps. Obviously it’s not like that for every person, but it’s not good.

So once you have gotten a doctorate, the future is really no clearer or rosier than it was before. Friends and family might consider the degree some sort of pinnacle; the labor market might not agree. The same NSF report states,

“The proportion of doctorate recipients with definite commitments for employment or postdoctoral (postdoc) study fell in 2012 for life sciences, physical sciences, and engineering, the third consecutive year of decline in these fields. In every broad science and engineering (S&E) field, the proportion of 2012 doctorate recipients who reported definite commitments for employment or postdoc study was at or near the lowest level of the past 10 years, 2 to 11 percentage points lower than the proportion of 2002 doctorate recipients reporting such commitments.”

Only two-thirds of PhD recipients in science fields had definite employment commitments after graduation in 2012; the proportion was lower (closer to 60%) for life sciences, my field, than for others. If you want to stay in academia, the costs are high: a postdoctoral position pays less than half the average salary of an industry job.

I’m still not sure if I want to go down this road all the way, to become a PI (Principal Investigator: usually a professor, someone who gets a grant and is in charge of a research project). But it’s clear to me that if I do, it will be challenging. I’m confident in my ability to finish a PhD, do good work, and continue to publish papers. But is that enough to be able to have the type of life I eventually want? Being smart is not enough to guarantee a professorship.

The journal/magazine Science recently made a widget for early-career scientists to figure out their chances of becoming a PI. I filled in the information on a whim. I feel like I’m doing pretty well compared to many people at a similar point in their masters: I have a lot of research experience, both from my masters and as a technician; I have a few papers published; I have made connections. I have applied for and received my own grant funding, something which is rare for students in my cohort.

Even if I don’t become a PI (or don’t want to become a PI), I think of other dream jobs – “reach goals” – and imagine that they might have similar requirements. Want to get a National Geographic Explorers Grant? Work for a nonprofit nature research group you admire in a cool part of the world? In order to choose your own future, you need to be pretty much a badass.

So I was discouraged when I saw this.

My current chances.

I’m a girl, so lucky me, I’m sitting at 6% with my chance of becoming a PI.

I have written, maybe on this blog or maybe just on Facebook, about why there are fewer women in science. At early career stages, there are actually more women than men in the life sciences. But that changes over time. There was a great article in the New York Times last fall about why. Read it. But regardless, I’m pissed: it’s not fair that I’m down at 6% while the boys are so much higher.

6%!? With all the hard work I’ve already put in? It felt like an insult.

The cool thing about the widget is that you can toggle the different variables and see how the line changes. The most important variable, it seems, is the number of first-author publications you have. I began thinking about what I could change in the next year. I will move to a more prestigious university and start a PhD program. But hopefully, I’ll also publish more. Publication is a long process, so the odds that anything from my work this summer is published by Christmas are zero. There’s hope, though. We have one paper in review which has come back from reviewers and is sitting on the editor’s desk: hopefully, with revisions, they will take it. I have two other papers in progress where drafts are already circulating among co-authors. On those two, I would be first author – the coveted position, which I haven’t occupied so far.

The thing that makes a difference.

If all goes well, by the end of the year my chances could look more like this.

goal by the end of September
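Out of curiosity about the widget’s behavior, here’s a toy version of the pattern I saw while toggling: a logistic curve where first-author papers dominate. The coefficients are completely made up by me for illustration; this is a guess at the general shape, not the actual model behind the Science widget.

```python
import math

def toy_pi_probability(first_author_papers: int, is_female: bool) -> float:
    """A made-up logistic model of the chance of becoming a PI.

    The coefficients are invented for illustration only and have no
    connection to the real model behind the Science widget."""
    x = -3.0 + 0.45 * first_author_papers - 0.4 * int(is_female)
    return 1.0 / (1.0 + math.exp(-x))

# Toggling the publication count, the way the widget lets you do:
for n in [0, 2, 4, 6]:
    print(f"{n} first-author papers -> {toy_pi_probability(n, True):.0%}")
```

Even in a toy version, each additional first-author paper moves the probability more than anything else I could change in a single year.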

So I have to get to work. And that’s science for you: there is no relaxing. You’re in the field in a remote location? Doesn’t matter. You have to be working on papers. Haven’t seen your family in months? Too bad, keep working on papers. On vacation? Keep working on papers.

That’s the future I maybe have to look forward to – if I’m lucky and make it through.

I knew that it was tough to become a PI. Over the last decade, getting a tenure-track job has ceased to be the most common outcome for PhD recipients (and to be clear, I don’t mean the most common immediate outcome: I mean any outcome at all, even after one or a few postdoctoral positions). In the biological sciences, only 8% of PhDs receive a tenure-track job within 5 years of getting their doctorate.

If you get a postdoc, the future isn’t much better. 10% of postdocs are unemployed: that’s actually higher than our country’s unemployment rate. And in 2012, only 20% of postdocs landed a faculty position.

There are many pros and cons to getting a faculty position, let alone a tenure-track one. I’m years away from deciding if that’s right for me. But the knowledge that despite everything I’m doing, I might not be able to? It’s frustrating. It motivates me to work harder, but also to hate the system a little bit.

I have pride in the work that I do. I’m not asking anyone to hand anything to me. What irks me is that for myself and the many very talented, motivated students I work with, our work is not valued.

I hate being told I can’t do something, when I know that if I was just given a chance, I’d hit the ball out of the park.

It’s the worst feeling.

Science today is not the same as science used to be.

***

On a final strange yet amusing note, here’s a predatory publishing e-mail I received today! Good for some laughs – until you remember that people actually fall for this scheme. The paper they are asking about has already been published in an established journal, downloaded hundreds of times, and is under copyright. So, nope, I’m not interested in paying you to publish it in paperback… just no.

I guess to one extent, you know you’ve made it when you start receiving predatory publishing e-mails.

(Screenshot of the e-mail.)