How to Give a Great Presentation

A few weeks ago now, I hosted the Biotweeps Twitter account for a week. It’s an account with rotating hosts: every week, a different biologist takes over and posts about, well, whatever they want, but usually at least partly about their research.

I had a lot of fun hosting. I talked about research in the Arctic and climate change; I talked about stream biodiversity and why freshwater conservation is important; I talked about what an ecosystem is and how ecosystems are connected in cool ways; and I also talked about work-life balance and what I do for fun. I had great discussions with the account’s followers about what other jobs they’ve had, if they always knew they were interested in science, and how life experience shapes us into scientists.

(You can see my whole week of posts here – it will show up with someone else’s name because someone else is hosting now, but these are my posts from June.)

But some of the threads that got the most attention were about organization and what we might consider “transferable skills”: the non-research part of how to do good science (these skills are very important for many other jobs, too, not just science!).

I had the idea to post about this because last year, Kevin Burgio shared his organizational system when he was hosting BioTweeps. I have since adopted part of his system and it has been immensely helpful. I wondered if some of the things I’ve learned could help others.

So I posted about how to organize and manage data, from pre-experiment planning to analysis, both to make your life easier and to promote reproducibility. I talked about general organization, and how to get the most out of going to conferences. These posts generated a lot of discussion and I learned a few things too.

Hosting BioTweeps was a lot of work, and it took more of my time than I would have expected.

At the same time, I was in a bit of a blogging rut. I had resolved to blog more this year, and for a while I did, but then I sort of ran out of steam.

I realized that I was talking about a lot of things on Twitter that earlier I might have turned into blog posts. This wasn’t only true during BioTweeps week, although it certainly hit a peak then. I have been trying to decide which venue is more worth putting my time into for communicating. There are a lot of stories, especially ones about travel or my outdoor adventures, that I don’t think I could tell well on Twitter. I want the space of a blog post to compose and tell them.

But there are other things that could be either. It’s a bit easier to make a Twitter thread than a blog post, so maybe I have been leaning in that direction, and I now have enough Twitter followers that I feel like I might reach more people that way.

But blog posts have their own plus sides. They are a bit more permanent, and they are easier to find with a search or refer back to. I don’t think I should stop blogging.

And so, here I am: I’m going to make a blog post out of one of my BioTweeps advice threads, about how to give a good presentation.

How to give a good talk

Why do I think I’m qualified to give advice about this? First, I really enjoy giving presentations. What’s more fun than talking about work you’re excited about? Second, I’ve won a few “best presentation” prizes at conferences, so I think I don’t suck at it.

Those prizes are not all because of me. Especially in the last 4 ½ years, I have gotten a lot of advice from colleagues. In my (now-former) research group, we had a culture of giving practice talks and getting extensive feedback on them. It was sometimes brutal but we all gave as good as we got. As a result, everyone in our lab gave great presentations.

But a lot of what goes into making a good talk is work I do on my own. I was motivated to share tips after realizing last summer that people thought giving a good talk came easily to me. Haha. Nope.

I first saw this mismatch in expectations when I attended a conference with my boss. He was shocked that the night before I was scheduled to give my talk, I locked myself in my room and practiced for hours. He thought that I was just good at presenting and didn’t need to practice.

I’m sure that there are people who can wing it and do great. But that’s not most of us, and it’s also not me. Quite the opposite. One way to make your delivery seem effortless is to practice it until it is, at which point, the hard work becomes partly invisible.

Practice is a basic, but very important, tip. Here are some others that will help make a great presentation.

  1. How should you structure your talk?

I’ve recently moved away from structuring my talks like research papers, with sections for introduction, methods, results, and then discussion. Now I just try to tell a good story. My talks are better for it – and I think this would be true for any kind of presentation, on any topic.

I try to think a bit about what I learned about constructing a story as a journalist: how do you draw the reader in, and then keep them reading?

Just as in a written story, start with a lead that grabs the audience’s attention. This probably shouldn’t take more than a minute or two, although it could vary depending on how long the presentation is.

Then, you get to what in journalism we would call the “nut graf”. This follows the lead and should be one slide, if you’re using PowerPoint. It’s the thesis and motivation of your whole presentation: what question are you trying to answer and what are you going to do? Give this to the audience early enough that they know where you’re going with the rest of the talk.

From here, there are lots of possible structures depending on the length of your talk and the material. You could have a bit more introduction after the nut graf, or you could get into the meat of your presentation.

Supplement your story with technical details, but not too many. You want to include enough that people trust that you are an expert and did things right, but you want them to remember the big picture, not that you had a lot of equations on your slides. Once you’ve demonstrated your expertise, just highlight the things the audience needs to know. Don’t distract the audience with some number or result that isn’t important.

(If you really think someone will want more detail, make extra slides that you can go to in Q&A. But don’t overwhelm or bore the 95% of your audience that doesn’t want that level of detail.)

For scientists, I find that using the traditional paper structure can lead to a lot of repetition, and if you have several analyses or results to present, the audience might have a hard time remembering which methods went with which results, or why you are jumping from one topic to the next. Remember that this is a talk. It’s not a paper where they can flip back and remind themselves which method you used.

So if you have several sub-questions/results, explain each one separately. For three research questions, this would go: methods, results; methods, results; methods, results. Use narrative to link them together: “based on result x, we were interested to follow up with experiment y.”

Then, make sure to have a discussion and conclusion that ties all those sub-questions together.

  2. Content: what do you put on your slides?

This is not original advice, but it’s advice I swear by: don’t put too much on your slides. I like to have some that are just a photo, a figure, or a few words. I leave them up as an anchor/background while I talk about something. You want people listening to you, not reading your slides.

I also use visuals because they give the audience a second channel for the information. Any text on your slides will probably be very similar to what you say out loud, so if someone doesn’t understand your spoken explanation, additional text might not help very much.

There are a few more things you can do to make it easy for your audience to follow the story. For example, choose a consistent font throughout the presentation and make the text big enough to read from the back of the room. You think it’s big enough? Make it bigger.

Do you have charts or graphs? Make the text big. Don’t just recycle figures from a research paper or take them straight out of Excel. Make the labels and text bigger and easier to interpret from far away, and if possible use the same font as your slides’ text.

Choose a nice color scheme. I make my graphs in R (a statistical programming language), and I like to use the online tool ColorBrewer or the ‘viridis’ R package to choose color schemes that are more pleasing than the defaults. If you have multiple charts showing the same set of variables, keep the color scheme consistent across all of them – it makes the figures easier to follow.

No matter what software you are using to produce figures, make sure that they are easy to interpret for people who are color-blind, of which there will probably be at least a few in your audience. ColorBrewer indicates whether a color scheme is colorblind-friendly.
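To make that concrete, here is a minimal sketch in R – the data frame and variable names are invented for illustration, and the viridis scale shown is one of the colorblind-friendly options:

```r
# A minimal sketch: consistent, colorblind-friendly colors in ggplot2.
# The data frame and variable names are invented for illustration.
library(ggplot2)
library(RColorBrewer)

# Show only the ColorBrewer palettes that are colorblind-friendly:
display.brewer.all(colorblindFriendly = TRUE)

dat <- data.frame(
  stream  = rep(c("A", "B", "C"), each = 10),
  day     = rep(1:10, times = 3),
  biomass = runif(30, 0, 5)
)

# The discrete viridis scale (built into ggplot2) is also colorblind-friendly.
# Reuse the same scale in every figure so colors stay consistent across charts.
ggplot(dat, aes(x = day, y = biomass, color = stream)) +
  geom_line(linewidth = 1) +
  scale_color_viridis_d() +
  theme_minimal(base_size = 18)  # big text, readable from the back of the room
```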

In PowerPoint, you can use the same tools to choose a color scheme to apply to your layout.

There are also some good sources of artwork you can use to make your slides nice. Pixabay has some free images, or search Wikipedia or Creative Commons. Phylopic is great for free images of plants and animals. Government agencies often have free imagery too. IMPORTANT: attribute any images that are not your own!

Whether it’s a chart or diagram, I often go through visuals sequentially. For example, for the first graph, I will often start with a slide that has just the axes, and I will explain what they are before adding the data to the plot. I will sometimes add the data in a few different steps if it is a complicated figure. This can make your results much easier to understand. The same goes for a complicated diagram: adding elements sequentially can allow you to highlight and explain what’s important about each one.
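If your figures come from R, one way to produce those sequential builds is to save the same plot in stages – a sketch reusing the invented data from above (in PowerPoint, duplicating a slide and deleting elements works just as well):

```r
# Sketch: export one figure in stages, one image per slide.
library(ggplot2)  # 'dat' is the invented data frame from the earlier sketch

axes_only <- ggplot(dat, aes(x = day, y = biomass, color = stream)) +
  scale_color_viridis_d() +
  theme_minimal(base_size = 18)

ggsave("slide_1_axes.png", axes_only)                             # slide 1: just the axes
ggsave("slide_2_data.png", axes_only + geom_line(linewidth = 1))  # slide 2: add the data
```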

Finally, don’t forget to have a slide thanking people who helped with your research (including funders). I don’t like to end on this slide, because during question time I want to leave a slide about my conclusions up. Recently I’ve tried putting my thank-you slide second, or in the middle.

  3. Delivery: you’ve made your slides, now how do you do the talking?

As mentioned above, my best advice is simple: practice, practice, then practice some more.

I’ve been working in Europe for seven years, and people often tell me that presentations must be easy for me because I’m a native English speaker. And yes, it’s obviously an advantage to give a talk in your first language!

But they’re surprised when I say, “Well, I practiced this talk 10 times…”

I am incredibly lucky that I don’t have to do science in a foreign language, and I don’t want to downplay this advantage. But remember: we’ve all heard terrible talks by people presenting in their native language. This might be because they haven’t practiced, or they simply don’t care very much, or they are nervous and uncomfortable with public speaking.

Practice and enthusiasm can go a long way. If you aren’t quite done with your project but have to present about it anyway, you can absolutely give a good talk despite lacking a finished conclusion. If your results are disappointing, your talk can still be great – it’s all about how you present and deliver it. If you aren’t completely comfortable giving a talk in a non-native language, copious practice can help. If your slides are clear and your demeanor is positive and enthusiastic, the audience will almost definitely be on your side.

Practice also helps me fit a lot of content into a short time slot. My talks are dense with information, but most of the feedback I receive is that they are still clear and easy to understand. By practicing repeatedly, I can pare my language down, cut minutes off the length of the talk, and settle on the most concise, clear wording.

So, practice. One colleague said he didn’t want to memorize his talk because he didn’t want to sound like a robot. I told him if you memorize a talk well, you can actually be quite dynamic. The trick is to practice until your delivery is more or less automatic, and then keep going. Once you know it 100%, you will become so comfortable you will begin riffing a little bit, and you won’t sound like a robot at all.

Once you’ve practiced a bit on your own, practice in front of colleagues and friends. Even if they aren’t familiar with the content, they can still give great advice on delivery and slide design. Scheduling this days or weeks ahead of your presentation will also force you to finish your slides before the last possible moment.

Two reminders that should go without saying, but apparently are needed. First: no matter whether you think your voice is loud or not, use a microphone. Your audience might not be able to hear you, and you aren’t the best judge of whether they can.

And finally, keep to time. If you don’t, you are inconveniencing everyone else, whether it’s by making a meeting run long or running into the next slot at a conference and possibly costing someone else the chance to communicate their project.

I hope these tips can help you nail it next time you need to talk to an audience about whatever you’ve been up to!

From #fieldworkfail to Published Paper

Amphipods are, unfortunately, not very photogenic. But here you can see some of my study organisms swimming around in a mesocosm in the laboratory, shredding some leaf litter like it’s their business (because it is).

It can be intimidating to try to turn your research into an academic paper. I think that sometimes we have the idea that a project has to go perfectly, or reveal some really fascinating new information, in order to be worth spending the time and effort to publish.

This is the story of not that kind of project.

One of my dissertation chapters was just published in the journal Aquatic Ecology. You can read it here.

The project originated from a need to show that the results of my lab experiments were relevant to real-world situations. To start out my PhD, I had done several experiments with amphipods – small crustacean invertebrates common to central European streams – in containers, which we call mesocosms. I filled the mesocosms with water and different kinds of leaves, then added different species and combinations of amphipods. After a few weeks, I saw how much leaf litter the amphipods had eaten.

We found that there were some differences between amphipod species in how much they ate, and in their preferences for different kinds of leaves based on nutrient content or toughness (that work is here). But the lab setting was quite different from real streams.

So I worked with two students from our limnoecology course (which includes both bachelors and masters students) to develop a field experiment that would test the same types of amphipod-leaf combinations in streams.

We built “cages” out of PVC pipe with 1-mm mesh over the ends. We would put amphipods and leaf litter inside the cages, zip tie them to a cement block, and place the cement block in a stream. We did this in two places in Eastern Switzerland, and with two different species of amphipod.

After two weeks, we pulled half the cement blocks and cages out. After four weeks, we pulled the other half out. Moving all those cement blocks around was pretty tough. I think of myself as strong and the two students were burly Swiss guys, but by the time we pulled the last cement block up a muddy stream bank I was ready to never do this type of experiment again.

Elvira and our two students, Marcel and Denis, with an experimental block in the stream. This was the stream with easy access; the other had a tall, steep bank that was a real haul to get in and out of.

Unfortunately, when I analyzed the data, it was clear that something had gone wrong. The data made no sense.

The control cages, with no amphipods in them, had lost more leaf litter than the ones with amphipods – which shouldn’t be the case since they only had bacteria and fungi decomposing them, whereas the amphipod cages had shredding invertebrates. And the cages we had removed after two weeks had lost more leaf litter than the ones we left in the stream for four weeks.

These are not the “results” you want to see.

Somewhere along the way, we must have made a mistake in labeling the cages or in filling them, though I couldn’t see how. I tried to reconstruct what could have gone wrong – whether labels could have gotten swapped or material misplaced. I don’t have an answer, but the data weren’t reliable. I couldn’t be sure that there was some ecological meaning behind the strange pattern; it could have just been human error.

I felt bad for the students I was working with, because it can be discouraging to do your first research project and not find any interesting results. It wasn’t the experience I wanted to have given them.

My supervisor and I agreed, with regret, that we had to redo the experiment. I was NOT HAPPY. I wasn’t mad at him, because I knew he was right, but I really didn’t want to do it. I’ve never been less excited to go do fieldwork.

But back out into the field I went with my cages and concrete blocks (and no students this time). To guard against more mistakes, we designed the experiment a bit differently. We had one really well-replicated timepoint instead of two timepoints with fewer replicates, and worked in one stream instead of two.

Begrudgingly, we hauled the blocks to the stream and then hauled them back out again.

Cages zip-tied to cement blocks and deployed in the stream. You can see the brown leaf litter inside the enclosure.

And then for 2 ½ years I ignored the data, until my dissertation was due, at which point I frantically analyzed it and turned it into a chapter.

The draft that I initially submitted (to the journal and in my dissertation) was not what I would call my best work. My FasterSkier colleague Gavin generously offered to do some copy-editing, and I was ashamed at how many mistakes he found. I hope he doesn’t think less of me. A fellow PhD student, Moritz, also read it for me and had a lot of very perceptive criticisms.

But through all of that and peer review, the paper improved. Even though it is not going to change the course of history, I’m glad that I put together the analyses and published it, because we found two rather interesting things.

The first was about species differences. I had used two amphipod species in the experiment (separately, not mixed together). Per capita, one species ate a lot more/faster than the other… but that species was also twice as big as the other! So per biomass, the species had nearly identical consumption rates.

The metabolic theory of ecology is a powerful framework that explains a lot of patterns we see in the world. One of its rules is that metabolism does not scale linearly with body size (here’s a good blog post explainer of the theory and data and here’s the Wikipedia article). That is, an organism twice as big shouldn’t have twice the metabolic needs of a smaller organism. It should need some more energy, but not double.

This relates to my results because the consumption of leaf litter was directly fueling the amphipods’ metabolism. They may have gotten some energy and resources from elsewhere in the cages, but we didn’t put any plant material or other food sources in there. So we could expect to roughly substitute “consumption” for “metabolism” in this body size-metabolism relationship.

Metabolic theory was originally developed looking across all of life, from tiny organisms to elephants, so our twofold size difference among the two amphipod species isn’t that big. That makes it less surprising that the two species have the same per-biomass food consumption rates. But it’s still interesting.
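To put rough numbers on that – a back-of-envelope sketch, assuming the canonical 3/4-power (Kleiber) scaling exponent from metabolic theory:

```r
# Back-of-envelope: 3/4-power metabolic scaling for a twofold size difference
mass_ratio <- 2
mass_ratio^0.75        # whole-organism rate: ~1.68x the smaller species, not 2x
mass_ratio^(0.75 - 1)  # per-unit-biomass rate: ~0.84x, i.e. roughly equal
```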

The second interesting result had to do with how the two species fed when they were offered mixtures of different kinds of leaves. Some leaves are “better”, with higher nutrient contents, for example. Both species had consumed these leaves at high rates when they were offered those leaves alone, and had comparatively lower consumption rates when offered only poor-quality leaves.

In the mixtures, one species ate the “better” leaves even faster than would be expected based on its rates when that leaf type was offered alone. That is, when offered better and worse food sources, they preferentially ate the better ones. The other species did not exhibit this preferential feeding behavior.

I thought this was mildly interesting, but I realized it was even cooler thanks to a comment from one of our peer reviewers. (S)he pointed out that streams inhabited by one species or the other might have different nutrient cycling patterns, depending on whether the resident species preferentially eats the high-nutrient leaves. We could link this to neat research by some other scientists. It was a truly helpful nudge in the peer review process.

So, while I had hated this project at one point, it’s finally published. And I think it was worth pushing through.

It was not a perfect project, but projects don’t have to be perfect for their stories to be worth telling and their data worth sharing.

My #365papers Experiment in 2018

This year, based on initiatives by some other ecologists in the past, I embarked on the #365papers challenge. The idea of the challenge is that in academia, we end up skimming a lot of material in papers: we jump to the figures, or look for a specific part of the methods or one line of the results we need. Instead, this challenge urged people to read deeper. Every day, they should read a whole paper.

(Jacquelyn Gill and Meghan Duffy launched the initiative and wrote about their first years of it. But #365papers is now not just in ecology, but in other academic fields too. Some of the past recaps I read were by Anne Jefferson, Joshua Drew, Elina Mäntylä, and Caitlin MacKenzie. Caitlin’s was probably the post that catalyzed me to do the challenge.)

I knew that 365 papers was too ambitious for me, and that I wouldn’t (and didn’t want to!) read on the weekends, for example. I decided to try nevertheless to read a paper every weekday in 2018, which would be 261 days total.
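(A quick sanity check on that count in R:)

```r
# 2018 had 261 weekdays: 52 full weeks of 5, plus Monday, December 31
days <- seq(as.Date("2018-01-01"), as.Date("2018-12-31"), by = "day")
sum(!format(days, "%u") %in% c("6", "7"))  # %u: Monday = 1 ... Sunday = 7; gives 261
```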

In the end, I clocked in at 217 papers (I read more than that, but see below for what I counted as a “paper” for this challenge) – not bad! I tweeted links to all the papers, so you can see my list via this Twitter search. I can confidently say that I have never read so many papers in a year.

In fact, I am guessing that this is more papers than I have read in their entirety (not skimming or extracting, as mentioned above), in my total career before 2018. That’s embarrassing to admit but I am guessing it’s not that unusual. (What do you think, if we’re all being honest here?)

This was a great exercise. I learned so much about writing, for one thing – there’s no better way to learn to write than to read a lot.

But the thing that was most exciting was that I read a lot more, and a lot of fun pieces. I had gotten to a place where there were so many papers that I felt I had to read for my own work that I would just look at the pile, blanch, and put it off for later. Reading had become a chore, not something fun.


A Wordle of the paper titles. On my website it says I am a community and ecosystem ecologist, and I guess my reading choices reflect that! (I’d be interested to make a Wordle based on the abstracts, to see if there are more diverse words than the ones we choose for titles – but I didn’t make time to extract all the abstracts for that.)
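(If you wanted to try that yourself, here is a minimal sketch in R, assuming a plain-text file with one title – or abstract – per line:)

```r
# A minimal word-cloud sketch, assuming "titles.txt" holds one title per line
library(tm)
library(wordcloud)

titles <- readLines("titles.txt")
corpus <- VCorpus(VectorSource(titles))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

tdm   <- TermDocumentMatrix(corpus)
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
wordcloud(names(freqs), freqs, max.words = 100)
```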

That’s not a great way to do research, and luckily the challenge changed my reading status quo. If I was reading every day, I reasoned, then not every paper had to be directly related to my work as a community ecologist. There would be ample time for that, but I could also read things that simply looked interesting. And I did! I devoured Table of Contents emails from journals with glee and read about all sorts of things – evolution, the physical science of climate change, remote sensing.

These papers, despite seeming like frivolous choices, taught me a lot about science. Just because they were not about exactly what I was researching does not mean they did not inform how I think about things. This was incredibly valuable. We get stuck in our subfields, on our PhD projects, in our own little bubbles. Seeing things from a different angle is great and can catalyze new ideas or different framing of results. Things that didn’t make sense might make sense in a different light.

But I also did read lots of papers directly related to what I was working on. I think I could only do that because it no longer felt like a chore, like a big stack of paper sitting on the corner of my desk glaring at me. This challenge freed me, as strange as that sounds given the time commitment!

And finally, I tweeted each paper, and tagged the authors if possible. This helped me make some new connections and, often, learn about even more cool research. It helped me put faces to names at conferences and gave me the courage to strike up conversations. The social aspect of this challenge was fun and also probably pretty useful in the long run.

For all of the reasons I just mentioned, I would highly recommend this challenge to other academics. (It’s not just in ecology – if you look at the #365papers hashtag on Twitter, there are a lot of different people in different fields taking up the challenge.) Does 365 or 261 papers sound like too many? Set a different goal. But make it ambitious enough that you are challenging yourself. For me, I found that making it a daily habit was key, because then it doesn’t feel like something you have to schedule (or something you can put off) – you just do it.

Then sit down and read a whole paper, with focus and attention to detail. If you like it, why is that? Is the topic of interest to you? The writing good? The analyses particularly appropriate and well-explained? Is it that the visuals add a lot to the paper? Are the hypotheses (and alternative hypotheses) identified clearly, making it easier to follow? Or, if you don’t like it, why is that? Is it the science, or the presentation? What would you do differently?

One thing I didn’t nail down was how to keep notes. I read on paper, so I would highlight important or relevant bits, or references to look up. But I don’t have a great system for transferring this to Evernote (where I keep papers’ abstracts linked to their online versions, each tagged in topic categories). In the beginning I was adding photos of each highlighted passage to the paper’s note, but this was too time-consuming and I gave up. In the end, if I had time I would manually re-type my reading notes into Evernote, and if not, I wouldn’t. I do think the notes are valuable and important to have searchable, so this probably limits the utility of all that reading a little bit. It’s something I will think about how to improve for next year. The biggest challenge is time.

In addition to reading a lot, I kept track of some minimal data about each paper I read. I’ll present that below – after a quick sketch of how such a log can be tallied – in a few sections:

  • Where (journals) and when the papers were published
  • Who wrote them – first authorship (gender, nationality, location)
  • A few thoughts about last authorship
  • Grades I assigned the papers when reading – potential biases (had I eaten lunch yet!?) and the three papers I thought were best at the time I read them
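First, the quick sketch. If you keep the log as a spreadsheet, a few lines of R cover every tally below – the file and column names here are hypothetical:

```r
# Hypothetical sketch: tallying a reading log kept as a CSV,
# one row per paper (file and column names invented for illustration)
library(dplyr)

papers <- read.csv("papers_2018.csv")
# assumed columns: journal, year_published, fa_gender, fa_nationality, grade

papers %>% count(journal, sort = TRUE)  # papers per journal
papers %>% count(year_published)        # when the papers were published
papers %>% count(fa_gender)             # first-author gender tally
papers %>% count(grade)                 # distribution of grades I assigned
```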

I plan to try this challenge again next year, and the data that I summarize will probably inform how I go about it. I’ll discuss that a little at the very end.

What Did I Count as One of My 365 → 261 → 217 Papers?

First, some methodological details. For this effort, I didn’t count drafts of papers that I was a co-author on, although that would have upped the number of papers quite a bit because I have been working on a lot of collaborative projects this year. I also didn’t count reading chapters of colleagues’ theses, or a chapter of a book draft. And I didn’t count book chapters, although I did read a few academic books, among them Dennis Chitty’s Do Lemmings Commit Suicide?, Mathew Leibold & Jonathan Chase’s Metacommunity Ecology, Andy Friedland and Carol Folt’s Writing Successful Science Proposals, and a book about R Markdown. I started but haven’t finished Mark McPeek’s Evolutionary Community Ecology.

I did count manuscripts that I read for peer review.

Where The Papers Were Published

I didn’t go into this challenge with a specific idea of what I wanted to read. I find papers primarily through Table of Contents alerts, but also through Twitter, references in other papers, and searches for specific topics while working on my dissertation or on research proposals. This biases the papers I read toward people I’m already aware of and journals I already follow. Not entirely, but substantially.

We also have a “journal club” in our Altermatt group lab meeting, which doesn’t function like a standard one. Instead, each person is assigned one or two journals to “follow”, and we rotate through each person summarizing the interesting papers in “their” journals once every few months (the cycle length depends on the number of people in the lab at a given time). That’s a good way to notice papers that might be worth reading, and since we are a pretty diverse lab in terms of research topics, it introduces some novelty. I think it’s a clever idea by my supervisor, Florian.

Given that I wasn’t seeking out papers in a very systematic way, I wasn’t really sure what the final balance between different journals and types of journals would be at the end of the year. The table below shows the number of papers for each of the 63 (!) journals that I read from. That’s more journals than I was expecting! (Alphabetical within each count category)

In addition, I read one preprint on bioRxiv.

I don’t necessarily think that Nature papers are the best ecology out there; that’s not why it tops the list. Seeing Ecology, Oikos, and Ecology Letters as the next best-represented journals is probably a better reflection of my interests.

But I do think that papers in Nature (and Science, which had just a few fewer) get a lot of attention and must have been chosen for a reason (am I naive there?). There are not so many of them in my field, and I do try to read them to gauge what other people seem to see as the most important topics. I also read them because they expose me to research tangential to my field or even entirely in other fields – which I wouldn’t find in ecology journals, but which is important to my overall understanding of my science.

I’m pleased that Ecology & Evolution is one of my top-read journals, because it indicates (along with the rest of the list) that I’m not only reading things for novelty/high-profile science, but also more mechanistic papers that are important to my work even if they aren’t so sexy per se. A lot of the journals pretty high up the list are just good ecology journals with a wide range of content.

There are a lot of aquatic-specific journals on the list, which reflects me trying to get background on my specific research. But there are also some plant journals on the list, either because I’m still interested in plant community ecology despite being in freshwater for the duration of my PhD, or because they are about community ecology topics that are useful to all ecology. It will be interesting to see if the aquatic journals stay well-represented when I shift to my next research project in a postdoc.

Society journals (from the Ecological Society of America, Nordic Society Oikos, British Ecological Society, and American Society of Naturalists, among others) are well represented. Thanks, scientific societies!

When The Papers Were Published

The vast, vast majority of papers I read were published very recently. Or, well, let’s be honest, because this is academic publishing: who knows when they were written? I didn’t systematically track this, but definitely noticed some were accepted months or maybe even a year before final paginated publication. And they were likely written long before that. But you get the point. As for the publication year, that’s below.

[Figure: papers read, by year of publication]

This data was not a surprise to me, since a fair number of my paper choices come from journal table of contents alerts. I probably should read more older papers, though.

Who Wrote the Papers: First Authors

Okay, on to the authors. Who were they? As I mentioned for journals, I didn’t systematically choose what I was reading, so I was curious what the gender and geographic breakdown of the authors would be. Since I wasn’t consciously trying to read lots of papers authored by women, people of color, or people outside of North America and Europe, I expected the first authors to skew male, white, and North American or European. I wasn’t actively trying to counteract bias in this part of the process, so I expected to see evidence of it.

I did my best to find the gender of all first authors. Of those for whom this was deducible based on homepages, Twitter profiles listing pronouns, in-person meetings at conferences, etc.:

  • 59 first authors were women
  • 155 first authors were men
  • 2 papers had joint first authors
  • 1 paper I peer-reviewed was double-blind (authorship unknown to me)

I’m fairly troubled by this. I certainly wasn’t going out of my way to read papers by men, and I didn’t think it would be this skewed when I did a final tally. If I want to support women scientists by reading their work – and then citing it, sharing it with a colleague, contacting them about it, starting discussions, etc. – I am going to have to be a lot more deliberate. I want to learn about other women scientists’ ideas! They have a lot of great ones. I’m going to try harder in the future. Or, really, I’m going to try in the future – as mentioned, I was not intentionally reading or not reading women this past year.

I initially tried to track whether authors were people of color, but it’s just too fraught for me to infer from Googling. I don’t want to misrepresent people. But I can say that the number of authors who were POC was certainly quite low.

I did, however, take some geographic stats: where (to the best of my Googling abilities) authors originally came from, and where their primary affiliation listed on the paper was located.

For the authors for whom I could identify nationality based on websites, CVs, etc., 31 countries were represented.

[Figure: first authors’ nationalities]

The authors were numerically dominated by native English speakers, though with relative geographic diversity: they came from the US, Canada, the UK, Ireland, Australia, and New Zealand (I’m not sure if English is the first language of the South African author). 15 different European nationalities were represented. There were a number of authors from Brazil, one each from Chile, Colombia, and Ecuador, and Central America was represented by a Guatemalan author. Maybe a surprise was that Chinese authors were underrepresented, whether at Chinese institutions (see below) or outside China; there were just five. Many countries with great scientists are not represented in this dataset at all.

When it came to institutional country, the field narrowed to 24 countries plus the Isle of Man.

[Figure: countries of first authors’ institutions]

While there were 78 American first authors, 90 first authors came from American universities/institutions. In Europe, Denmark, Sweden and Switzerland gobbled up some of the institutional affiliations despite having low numbers of first authors originally from those countries (this is very consistent with my experience in those places).

(Note: it would have been really nice to make a riverplot showing how authors moved between countries, but I was too lazy to build a transition matrix. Sorry.)
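(In case anyone else is less lazy: the matrix itself is only a line of R – a hypothetical sketch with invented example vectors; the riverplot aesthetics are the real work.)

```r
# Hypothetical sketch: nationality -> institution-country transition matrix
origin      <- c("USA", "Portugal", "USA", "China", "Brazil")   # invented examples
institution <- c("USA", "Switzerland", "USA", "USA", "Brazil")

transitions <- table(origin, institution)  # rows: nationality; columns: institution country
transitions
# Packages like 'riverplot' or 'ggalluvial' can draw a flow diagram from these counts.
```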

This consolidation into fewer countries isn’t really surprising. It reflects that while small countries have great scientists, they often don’t have the resources for strong research funding or many universities. Some places, even those with traditionally strong academic institutions, are going through austerity measures. I think of many Europeans I know who decided that leaving their countries – Portugal, Spain, the Baltics and Balkans, and other places – was their best bet to be able to do the research they wanted to do, and to have a job. I think of others, notably a friend in Slovenia, who is staying there because he loves it, but whose opportunities are probably curtailed because of that.

I’d like to read more widely in terms of institutional location and author nationality, but it’s a bit overwhelming to make a solid plan. Reading more women is fairly straightforward. But when I think of all the places with good science but where I didn’t read a single paper, there are a lot of them. I can only read so many papers! So part of it will be recognizing that I can make an effort to read more diversely but I’m not going to solve bias in science just with my reading project. I need to make an effort that is meaningful, and then be okay with what it doesn’t accomplish.

Also, I don’t always know the gender, race, or nationality of an author before I Google them – this past year, I only did that after I read the papers. I might need to sometimes reverse that process, perhaps?

Do you have other ideas of how to tackle this? I’d love suggestions if anyone has them.

One thought is to more deliberately read from the non-North American, non-European authors in the journals I already read from. I already know I like the papers those editorial teams select. This would probably be the least amount of extra work required to diversify my reading, because I could stick to the same method of choosing papers (table of contents alerts), but execute differently on those tables of contents.

And a Bit About the Last Authors

I did not collect as detailed information about the last authors of each paper, but I did collect some. A big topic in academia is that women become fewer and fewer the higher you go in the academic hierarchy. I wondered if that was true in the papers I was reading.

There were fewer last authors because some papers were single-author. Of those that were multi-author, I filtered the dataset to look at only those where last-authorship seemed to denote seniority (based on author contribution statements, lab composition and relationships between authors, etc.) rather than being alphabetical or based on something else (on some papers with very many authors, all the senior authors were listed at the front of the author list). Of these,

  • 19 senior last authors were women
  • 105 senior last authors were men

Yikes!

That’s all one can say! Yikes!

Like the first authors, the last authors came from 31 different countries… but some different ones were represented (Venezuela, Serbia, India). They represented institutions in a few more places than the first authors, 28 different countries vs. 24 for first/single authors. I’m not sure what to make of that, especially since this is from a smaller subset of papers (since the single-author papers were removed), but obviously collaborative research and writing is alive and well.

Ratings and Favorite Papers

Right after I read each paper, I assigned it a letter grade. Looking back through my record keeping, I am less and less convinced that this is really meaningful. I think it had to do a lot with my mindset at the time, among other things. Did I just have a stressful meeting? Was I impatient to finish my reading and go home? Was I tired? Maybe I was less receptive to what I was reading. Or conversely, maybe if I was tired and a little distracted I was less likely to notice flaws in the paper. Who knows. Anyway, “B” was the grade I most frequently assigned.

[Figure: distribution of grades assigned]

I didn’t keep detailed notes of why I felt different grades were merited, but I can make a few generalizations. Quite a number of the papers I gave poor grades to were ones where I didn’t find the methods well enough explained. Either I couldn’t follow what the authors did, or important statistical information wasn’t included at all (or appeared only in the supplementary information when I thought it was so essential to understanding the work that it really needed to be brought to the center). In particular this included some papers using fancy, cutting-edge methods… just using those statistics or analysis techniques doesn’t make your paper magic. You still need to say what those analyses show and what they mean ecologically, and convince me that the fancy stats actually lead to a better understanding of what’s going on!

In some ways this is not the authors’ fault – journals are often pressing for shorter word counts, and some don’t even publish methods in the main text, which is a total pain if you’re a reader. Also, it’s one of the biggest things I struggle with when writing: you know perfectly well what you did, and it can be hard to see that to an outsider your methods description seems incomplete. I get it! Reading papers where you don’t understand the methods is always a good cue to think about how you present your own work.

I assigned three papers grades of “A+”. Were they better than the ones I deemed “A”s? I’m not sure, but at the time, whether because of my general mood or their true brilliance, I sure thought they were great. They were:

I read a lot of other great papers too! But looking back, I can say that these were among my favorites, all for different reasons. I could go and add more papers to a “best-of” list but I’ll just leave it at that.

Recap!

Besides all the great reasons to do this challenge that I mentioned in the opening, this was pretty interesting data to delve into. I think I will try to keep doing the challenge in 2019, and I am currently thinking about how I choose which papers to read and if there are good strategies to read more diverse authors. I’m happy with the diversity of research that I read, but I would be happier if the voices describing that research were more diverse, to reflect the diversity of scientists in our world.

Do you have ideas about that? Comment below.

This was the final year of my PhD, and so in some ways a great time to do a reading challenge. It probably would have been more helpful if I had done this in the first year of my PhD, but hey, too late now. This year I wasn’t doing lab work, just writing and analyzing, so it was easy to fit in a lot of reading. It’s not good to stare at a screen writing all day, and I prefer to read on paper, so it was often a welcome break.

I don’t know what my work life will be like next year, so I will see how many papers I end up reading. It could be more, as I start a new project and need to get up to speed on a new subfield. Or it could be less as my working habits change. I’ll just do my best and adapt.

Finally, I’m thinking about whether there’s additional data I should track for next year’s challenge. Whether there is a difference between first and corresponding authors might be interesting. I’d welcome other suggestions too, but only if they don’t take much work to extract!

part 2: suppressed results.

In part one of my writeup on survey results, I talked a lot about the file drawer effect and why we end up not publishing some potentially useful results because we don’t have time. In a high-pressure environment where publication in the best journals is important to advance our careers, we often focus that limited time on the manuscripts with the highest potential impact. In some unfortunate cases, that means that professors do not prioritize giving their students the support necessary to publish results from projects, theses, or dissertations.

There’s no doubt that this can hurt younger scientists’ careers. Helping a student aim high and write higher-quality papers is great… but it can go too far, too.

“There was no specific pressure NOT to publish, but rather my supervisor could not provide useful and supportive feedback and he was never satisfied with any draft I submitted to him for review,” wrote one respondent. “After numerous iterations of my projects over many years, I became discouraged and decided it wasn’t worth the effort to try and publish my results. Others in my lab have had the same or similar experience.”

Today I will talk about something more insidious: when you are discouraged from publishing something for other reasons, like politics, that your data didn’t support your research group’s hypothesis, or that external partners did not understand the results or the underlying science.

(If you want to know more about the dataset I am working with, its small size, and its various biases, I discussed it in part one: click over here.)

As an ecologist, I didn’t think that this happened a lot in our field, at least not compared to fields with more commercial connections and money at stake. Perhaps it happens if you are an environmental consultant or doing impact reports for the government or companies. But in a purely academic community, I assumed it was fairly rare for results to be kept out of publication.

One thing quickly became obvious: it does happen, sometimes. There are lots of reasons, some of which are highly case-specific (e.g., the government of the researcher’s native country didn’t allow them to import their samples in the end, after all), but there are some common patterns, too.

With such a small sample size – 40 of the 184 respondents reported this happening – and also the fact that I made it clear online that I really wanted to hear from people who had been discouraged from publishing, it’s impossible to say how prevalent such events actually are. The proportion of responses does not reflect the proportion of total scientists who have had this experience.

I can certainly say that comparatively, many fewer unpublished papers are due to these events than due to the self-created file drawer effect. Two thirds of survey respondents said they had at least one unpublished dataset, if not a handful or more, even though many were just in the first five years of their research careers.

The file drawer effect means that there are tens of thousands of unpublished datasets out there, maybe 100,000. Many probably have no significant results, since some of the most cited reasons for not publishing were inconclusive data, needing to collect more data, and doubting that the results would be accepted by a high impact journal.

Other pressures happen in a smaller number of cases, but primarily for the opposite reason: results did show something interesting, but maybe not what someone – a supervisor or a government employee – wanted to see.

And while I cannot draw any conclusions about prevalence, I can (hopefully) draw some conclusions about why this happens and who it happens to.

A brief table of contents:

First, student-specific challenges.

Second, government and, to a lesser extent, industry challenges.

Third, “internal” and interpersonal political challenges.

Students Bear The Brunt of It

“As a grad student, the concept of this is crazy to me,” wrote one respondent. “In many ways and instances, publications are the currency by which scientists are measured against one another. Thus, not publishing work seems counterintuitive to me. I’d like to hear the reasons behind why it happens.”

Well, dear student, there are many. And being discouraged from publishing in fact seems to happen mostly to students. Here’s who the 40 survey respondents who reported being pressured not to publish their work were:

[Figure: respondents’ job titles at the time]

Mostly students. One explanation is that as we go along in our careers, we get a better concept of what is good and valuable science, and make some of the decisions to jettison a project ourselves rather than being told to by a supervisor. We also become more and more crunched for time, meaning that we make more of these prioritizations before it gets to the point of having someone else weigh in.

But that’s not the only explanation. Let’s look first at cases where a direct supervisor was involved. With 32 responses of this type, it was about twice as common in my dataset as pressure from an external person. In these supervisor-related cases, it was most frequently a tenured or tenure-track professor discouraging a graduate student from publishing a chapter of their thesis or dissertation.

[Figure: job titles in the direct-supervisor cases]

And as discussed in part one, part of the issue was that these driven supervisors were strapped for time and transferred their own expectations about the significance of results and journal quality onto their students, even if the students would have been happy settling for a lower-impact publication.

Sometimes this is very appropriate, sometimes less so. Where this line is drawn probably depends on your goals in science.

“A paper was published, but it excluded the results that I found the most interesting because they were not in line with the story that my advisor wished to push,” one respondent wrote of bachelors thesis research. “Instead, results from the same project that I thought were not well thought-out were published in a way that made them seem flashy, which seemed to be the main goal for my tenure-track advisor.”

Another respondent had a similar story with a different ending, about work done as part of a masters thesis.

“The situation was not resolved; I just ended up not publishing,” he/she wrote. “I wanted to publish, as I considered the results to be high-quality science and the information very useful to disseminate, but I could not agree to change the research focus entirely to suit my supervisor’s personal interests.”

You can see both sides of the coin in some cases. What is the goal? To advance scientific theory and knowledge, or to share system-specific data that might help someone in the future? Ideally, a manuscript does both, but sometimes that’s not possible, and just the second is still a good aim. In some cases the supervisor is probably guiding the student towards using their data to address some question larger than the one they had initially considered. But, as the bachelors thesis respondent noted, it’s not always appropriate to do so – some people think that overreaching and drawing conclusions from data not really designed to support them is a big problem in some fields.

“Some datasets and analyses I have collected and analysed don’t tell a clear story that would be readily publishable given the current state of how research articles are assessed for impact thus I tend to move on to things that tell a better story,” wrote another respondent. “This feels disingenuous at times though perhaps it is how science moves forward more quickly.”

A surprising amount of the time, supervisors discouraged students from publishing because the results turned out to not support their hypothesis. This was actually the most common single reason that a supervisor told a student not to publish. I may be naive, but it’s hard for me to think of a situation in which this is not just straight-up bad.

[Figure: reasons cited in the direct-supervisor cases]

I was careful to ask whether the results did not support “our” hypothesis, or whether they did not support a supervisor’s, department’s, or company’s hypothesis. Sometimes the two overlapped, but most of the time when this happened the respondent selected the second option: the researcher themself might not have been surprised by the results, but the supervisor, lab group, or company did not like them.

(About 60% of the 32 responses came from ecology and evolution, but many also came from other fields.)

[Figure: respondents’ fields in the direct-supervisor cases]

This really surprised me. In our training as scientists it is drilled into us that we might learn as much from a null result or a reversal of our hypothesis as we would if our hypothesis was supported – maybe even more, because it tells us that we have to carefully look at our assumptions and logic, and can lead us down new and more innovative paths.

In the U.S. at least, a substantial proportion of the population just has no respect for science. Whether it’s climate change deniers or anti-vaxxers, as a science community we tell them: go ahead, prove us wrong! Science is very open to accepting data that disproves something we had previously thought was true. We try to tell the public that we are not closed-minded, that we are following evidence, and that if the evidence showed us something else, we’d still accept it.

On some small scale, that might not be true, and it’s very troubling. Without knowing more about the research in question here, it’s impossible to say much more. But it’s not a very inspiring trend. And again: this pressure was coming from direct supervisors who were mostly in academia and shouldn’t have had a financial or political conflict of interest or anything like that.

And it also has potentially big implications for the sum of our community’s knowledge. Luckily there are so many researchers out there that probably someone else will ask the same question and publish it eventually, but this sort of attitude can delay learning important and valuable things.

“Unfortunately it’s hard to tell what could become interesting later, or what could be interesting to another researcher, so it’s too bad that these results never see the light of day,” wrote one early-career biologist. “What’s more concerning to me is the tendency of some researchers in my field to ignore or leave out results that they can’t explain, or worse, that contradict their pet hypothesis.”

When pressure came from an external source – someone not supervising the respondent – the prevalence of this reason for discouraging publication was even higher, rising from almost half of respondents citing it in the supervisor cases to roughly two-thirds in the external cases.

[Figure: reasons cited in the external-pressure cases]

And relatedly, the person doing the pressuring was often afraid that the results would make them, their group, or the government look bad. In other words, these are classic cases of suppressing research – the worst-case scenario that we think of!

Governments are Not Always Great (for Science)

Sometimes, this external pressure came from within academia, but it was also often from governments.

“Yes, the results were published, yes it created a public uproar, yes all authors were chastised by the agency and external company, and yes all subsequent follow-up research papers on the topic were expressly forbidden,” wrote one federal government employee. “There is considerable research accomplished by state and federal government agencies. Much of those data results never see the light of day because the results may be divergent from what the chain of command’s perspective or directive may be, i.e. support the head official’s alternative energy, logging harvest, endangered species delisting, stream restoration, etc. policy.”

It’s clear that one place where state, local, and federal government officials can be particularly destructive is Canada. Apart from the cuts to research funding which have been hitting many countries, it’s been discussed by people far more knowledgeable than I am that the government literally muzzles its scientists by not allowing them to talk to the media, among other policies: see here, here, and here.

Here’s what one anonymous survey respondent had to say: “The Canadian government has been muzzling scientists for years…I was just the latest in their ‘Thou Shalt Not Publish’ scheme. If the research you’re doing will make them look bad in any way, you’re not allowed to publish the results without fear of massive repercussions: job loss, degree removal, job losses of your superiors if they can’t fire you, being blacklisted in the scientific community, being blacklisted for grants, etc.”

Multiple survey respondents cited the Canadian government. So, about those elections coming up….

Consultants and researchers in the corporate/industrial sectors are often muzzled as well, but many of them are aware of this from the time they are hired.

“It is simply understood that if the research results from work we do for clients are inconvenient, they will attempt to redact the reports as trade secrets,” wrote one consultant. “They own the data so they are often able to do this. But not always.”

But even if companies are upfront about data ownership policies, it can still feel tough. One person told me that it was discouraging not to be able to get a patent and receive credit for their work, because a company owned all the intellectual property rights and would keep the discovery proprietary and secret until it was no longer profitable to do so.

In a variety of fields, there’s also some crossover between the industrial and academic sectors of research. Companies often provide funding to students or research groups working in an essential location or on a related topic. The companies shouldn’t be able to use their influence to suppress results, but in some cases they do seem to.

This is actually what happened in the case that inspired me to create my survey: the International Association of Athletics Federations quashed survey results showing that a huge proportion of championships competitors were doping. They were not involved in the research itself, but had provided access to the athletes, and thus felt it was their prerogative to police the results.

One survey respondent said that he had been let go from his position after publishing research about the effects of pesticides, and had heard a researcher with industry ties imply that the same thing would happen to someone else publishing similar research.

Several people in environmental and earth sciences fields mentioned this happening to people that they knew or had talked to, but it’s hard to pin down other than in news stories.

We Can Be Our Own Worst Enemies

Finally, other politics are more about internal power dynamics, be it within a department or within a research field.

“A person, invited late to the project, was asked to provide a simple review in return for coauthorship,” wrote one respondent. “They hijacked the project and it is still unpublished four years on.”

It’s pretty tragic to see a good experiment, or maybe a whole grant that some agency spent hundreds of thousands of dollars on and researchers spent years of their lives on, get derailed by interpersonal problems and arguments about data ownership or authorship.

In many fields the community of specific experts is fairly small, so you are likely to have to work with people again, or have them review your papers, and so on. These problems are hard to resolve once they begin.

It was also clear that sometimes people nixed manuscripts because they didn’t understand the science or its value. Sometimes this meant a bureaucrat at a funding agency, but sadly, sometimes it also came from within the scientific community itself.

“Because my scientific community is so small, in some cases only one review has been given by a local expert, and of course the editors don’t have time to fact-check, but my paper will not be accepted because these few experts are, as I perceive it, not wanting recent data contrary to results from their systems to be published, and assume that someone with an M.Sc. cannot be a diligent scientist, in many cases providing lots of evidence in reviews that they have not read the manuscript with care… possibly skipping entire sections,” wrote one student.

There’s even outright theft sometimes.

“The results were made partially public at a conference,” wrote one researcher. “Another researcher who has hard feelings towards my former supervisor, and vice versa, started to use the data as if it was ‘public domain information’, and later my supervisor decided that the publication was not worth pursuing. The problem has not been resolved yet.”

A Reminder

This has been, in some ways, a tour of the scientific research community at its worst. We all know someone who has had some terrible experience with their research.

But many of us have had relatively happy tenures in science and research. At least in my field, ecology, I can say that the vast majority of people are good people and fun to work with. It’s part of what I love about my job. If the only people around me were those who stole results, bullied me into not publishing, constantly asked me to change the focus of my research, or demeaned what I did because I was a graduate student, I would quit.

But here I am, and I’m happy! Such people do not make up the majority in our fields. But it’s worth remembering that even one major interaction like this can seriously discourage people from continuing to do research. There are lots of other jobs out there, and if the research environment is malevolent it’s easy to feel that the grass is greener on the other side.

So: with the knowledge that there is some scummy behavior going on, can we try to be nicer and kinder to one another? After all, our goals are to advance scientific knowledge and to create more capable, creative, and conscientious scientists.

Thanks to all who participated in the survey. I hope it has been interesting and helpful to read about.