A lab roadmap (an open science manifesto, part 2)

Following my ‘conversion’ to open science practices, detailed in my previous post, I have put together a roadmap for making research in my lab more open. Whilst many of these changes won’t happen immediately, we’re at a point where we’re wrapping up many studies before starting new ones. So my intention is that all new studies will follow as many of these guidelines as possible, and for studies that are completed but not yet published we will apply as much of the roadmap as is practical. Within 2-3 years this backlog of current work should be cleared, paving the way for a more open future.

Study design: Routinely pilot new paradigms

Pilot work is an important part of the scientific process. Without adequate piloting, experimental paradigms will be poorly understood, stimuli may not be optimised, and much time can be wasted conducting full studies that contain a basic (avoidable) error. We are lucky in psychology and cognitive neuroscience that our participants are humans, as are we. So it’s completely reasonable to run ourselves through several iterations of a pilot experiment, to get a feel for what is happening and to make sure that the experimental code is working and producing sensible results. This isn’t “cheating”; it’s just good scientific practice.

Study design: Larger sample sizes and internal replication

My background is in low-level vision, where it is quite common to run a very small number of participants (usually about N=3) on a very large number of trials (usually several thousand per experiment, taking many hours across multiple sessions). For some types of study, this is the only realistic way of completing an experiment – we simply cannot expect dozens of volunteers to each do 40 hours of psychophysics. But even in these sorts of studies, it is often possible to confirm our main results using a subset of critical conditions, with a much larger sample size (see e.g. this paper). This constitutes a form of internal replication, where the basic effect is confirmed, and sometimes the findings are extended to additional conditions. Other types of internal replication we have used in the past include running a similar experiment using a different imaging modality (EEG and MRI, or EEG and MEG), or replicating in a different population (e.g. adults vs children). These types of study design help us to move away from the single-experiment paper, and guard against false positives and other statistical anomalies, to make sure that the work we publish is robust and reproducible. I’m not sure it’s helpful to specify minimum sample sizes, as this will vary depending on the paradigm and the relevant effect sizes, but I’d expect to at least double the sample size of all new studies going forwards. A few years back we ran an EEG project where we tested 100 people using a steady-state paradigm. The average data you get with such a large sample size are incredibly clean, and can be used for answering any number of secondary questions.
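
To give a feel for how the numbers scale, here is a minimal power-calculation sketch in Python; the effect sizes are arbitrary examples, not estimates from any of our paradigms:

```python
# Minimal power-analysis sketch (Python/statsmodels).
# The Cohen's d values below are arbitrary examples, not from our studies.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.5, 0.3):  # hypothetical effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                             alternative='two-sided')
    print(f"d = {d}: ~{n:.0f} participants per group for 80% power")
```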

Study design: Include meta-analyses where appropriate

In many areas of research, there are already lots of studies investigating the same basic phenomenon. A meta-analysis is a systematic, empirical method for summarising all of these results. Sometimes this will answer a research question for you, without requiring any new data collection. In other situations, the meta-analysis can make explicit what is currently unknown. In visual psychophysics, we have a huge body of existing research using consistent methods, and spanning many decades. Yet meta-analyses are rarely used. Where appropriate, we will conduct meta-analyses to complement empirical work.
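
To make the basic idea concrete, here is a minimal fixed-effect (inverse-variance) pooling sketch in Python, with made-up effect sizes and standard errors; a real meta-analysis would usually use a random-effects model and a dedicated package:

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and standard errors are invented for illustration.
import numpy as np

effects = np.array([0.42, 0.31, 0.58, 0.25])   # hypothetical study effect sizes
ses     = np.array([0.12, 0.09, 0.20, 0.15])   # hypothetical standard errors

weights   = 1.0 / ses**2                        # inverse-variance weights
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI half-width)")
```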

Study design: Preregister all studies

Preregistration requires very similar information to an ethics proposal, so these two steps should be done at around the same time, before any formal data collection begins. Importantly, the preregistration documents should detail both the design of the experiment and how the data will be analysed (and what hypotheses are being tested). This helps to guard against the common but problematic practice of “HARKing” (Hypothesising After the Results are Known), though of course it is still acceptable to perform additional, exploratory analyses after the data have been collected. The Open Science Framework provides a straightforward platform for preregistration, and will also host analysis scripts and data once the study has been conducted.

Study design: Move from Matlab/PTB to Python/PsychoPy

For over a decade, I have used a combination of Matlab and Psychtoolbox to run most experiments. This works well, and has many advantages, not least that my familiarity with these tools makes setting up new experiments very quick. But Matlab is a closed, commercial language, and Psychtoolbox development has slowed in recent years. In contrast, Jon Peirce’s PsychoPy is completely open source, and is under active development by many people. It also benefits from a graphical interface, making it easier for project students to set up their own experiments. I’ve made some inroads into learning Python, though I’m aware that I still have a very long way to go on that front. But Rome wasn’t built in a day, as they say, and I’m sure I’ll get there in a few years.
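
To give a flavour of what a PsychoPy script looks like, here is a minimal sketch of a single trial; this is an illustrative example, not one of our actual experiments:

```python
# Minimal PsychoPy sketch: draw a grating for 500 ms and collect a keypress.
# Illustrative single trial only, not a real experiment from the lab.
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), units='pix', color='grey')
grating = visual.GratingStim(win, tex='sin', sf=0.02, size=256, contrast=0.5)

grating.draw()
win.flip()
core.wait(0.5)          # present the stimulus for 500 ms
win.flip()              # return to a blank screen

keys = event.waitKeys(keyList=['left', 'right', 'escape'])
print('Response:', keys)

win.close()
core.quit()
```

(The Builder interface generates Python scripts of this general kind automatically, which is part of the appeal for project students.)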

Analysis: Script all data analysis in R

Although graphical interfaces in packages such as SPSS are intuitive and straightforward, statistical analyses performed that way are not easy to reproduce. Scripting the analysis in R means that others (including your future self) can reproduce exactly what you did and get the same results, and since every aspect of R is open source, sharing analysis scripts (and data, see below) lets anyone rerun the analyses. R also produces excellent figures, and copes well with alpha transparency, meaning it can be used for the entire analysis pipeline. There are some downsides to doing this – R is sometimes slower than Matlab, it has less provision for parallel computing, and far fewer specialised toolboxes exist (e.g. for EEG, MRI or MEG analysis). So the intention to do all analyses in R might not be realised for every single study, but it will become increasingly possible as more tools become available.

Analysis: Create a lab R toolbox

Some things need doing the same way every time, and it makes sense to write some robust functions to perform these operations. In R you can create a custom package and share it through GitHub with others in the lab (or elsewhere). I’ve already started putting one together and will post it online when the first iteration is finished.

Analysis: Level-up data visualisation

I’ve been working pretty hard on improving data visualisation over the past few years. I like including distributions wherever possible, and am a fan of using things like violin plots, raincloud plots and the like to replace bar graphs. Showing individual participant data is pretty standard in threshold psychophysics (where often N=3!), and I think this is generally worthwhile in whatever form is appropriate for a given data set. In many studies we measure some sort of function, either by parametrically varying an independent variable, or because measures are made at multiple time points. Superimposing each participant’s function in a single plot, along with the average (as is typical for grand mean ERPs), or showing individual data in a supplementary figure are both good ways to present data of this type. Of course sometimes there will be outliers, and sometimes data are noisy, but that’s OK!
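
As a minimal sketch of the ‘superimpose everyone’s function plus the mean’ idea, here is an example in Python/matplotlib with simulated data (the same plot is straightforward in R with ggplot2):

```python
# Sketch: plot each participant's function faintly, with the group mean on top.
# The data are simulated purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)                      # e.g. levels of the independent variable
subjects = np.array([x * rng.uniform(0.5, 1.5) + rng.normal(0, 0.05, x.size)
                     for _ in range(20)])      # 20 simulated participants

for s in subjects:
    plt.plot(x, s, color='grey', alpha=0.3)    # individual functions
plt.plot(x, subjects.mean(axis=0), color='black', linewidth=2, label='Mean')
plt.xlabel('Independent variable')
plt.ylabel('Response')
plt.legend()
plt.show()
```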

Analysis: Use Bayesian statistics

Being from a visual psychophysics background, I probably use fewer traditional statistical tests than a lot of researchers. But I do still use them, and with them come all the well-established problems with false positives and an over-reliance on p-values. I think the Bayesian approach is more rational, and I’d like to use it more in the future. At the moment I’m only really comfortable using the basic features of packages like BayesFactor, but over time I’d like to learn how to create more complex Bayesian models to understand our results. So the plan for the moment is to use Bayesian versions of tests where possible, and to embrace the Bayesian philosophy and approach to data analysis.
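
For anyone working in Python rather than R, the pingouin package reports a default Bayes factor alongside a standard t-test; here is a minimal sketch with simulated data (in R, BayesFactor::ttestBF does the equivalent job):

```python
# Sketch: default Bayes factor for a two-sample comparison, using pingouin.
# Data are simulated for illustration.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 40)
group_b = rng.normal(0.5, 1.0, 40)

result = pg.ttest(group_a, group_b)
print(result[['T', 'p-val', 'BF10']])   # BF10: evidence for H1 over H0
```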

Publication: Always post preprints

There’s just no reason not to do this anymore. Preprints increase the citations and visibility of publications, and they’re a form of green open access. bioRxiv is good for neuroscience papers, PsyArXiv for more psychology-related work. Everything should be posted as a preprint before submission to a journal. A really neat idea I saw recently was to include the DOI of the preprint in the abstract of the submitted paper – that way there is a direct link to an open access version of the work in all versions of the abstract that get indexed by services like PubMed.

Publication: Always share raw data

At the same time as a preprint is posted, all data involved in the study will also be made available online. In the past I’ve sometimes posted data (and all studies with available data now have a link on the Publications page), but this has often been partly processed data – detection thresholds, or steady-state amplitudes. For open data to be truly useful, it should be as comprehensive as possible, so wherever possible we will post the raw data, along with the scripts used to perform the analyses reported in the study.

There are several potential hurdles to this. Most importantly, the intention to make data available should be stated explicitly in the initial ethics proposal, as well as in all information and consent materials, so that participants in the study are agreeing for their data to become public (albeit in anonymised form). Next, the data should be stored in an open format, which is particularly problematic for EEG data, as EEG systems often use proprietary file formats; solving this problem will be the topic of a future post. Finally, large data files need to be stored somewhere accessible online. Fortunately, sites such as the Open Science Framework and Figshare offer free storage for publicly available files in perpetuity. Meta-data should be included with all data files to make them comprehensible.
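
As a minimal sketch of what I mean by meta-data, here is one simple pattern: keep the data in a plain open format (CSV) with a JSON ‘sidecar’ file describing each column (file names and fields are invented for illustration):

```python
# Sketch: write data to an open format (CSV) with a JSON metadata sidecar.
# File names, columns and units are invented for illustration.
import csv, json

rows = [
    {"participant": "P01", "condition": "low_contrast", "threshold": 0.021},
    {"participant": "P02", "condition": "low_contrast", "threshold": 0.034},
]

with open("thresholds.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

metadata = {
    "description": "Contrast detection thresholds (example data)",
    "columns": {
        "participant": "anonymised participant code",
        "condition": "stimulus condition label",
        "threshold": "detection threshold (Michelson contrast)",
    },
}
with open("thresholds.json", "w") as f:
    json.dump(metadata, f, indent=2)
```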

Publication: Aim for true open access

Where possible I’ll aim to publish in pure open access journals. There are lots of good examples of these, including eLife, Nature Communications, Scientific Reports, the PLoS journals, Journal of Vision, i-Perception and Vision. Unfortunately there are also many dubious journals which should be avoided. Paying the article processing charges can get expensive, so this might not always be possible, particularly for work that is not grant funded. In that situation, green OA is an acceptable substitute, particularly when a preprint has already been posted making the work available. But I find it morally dubious at best to pay gold OA charges to publish in a hybrid subscription journal, so I will try my hardest to avoid doing so.

Publication: Disseminate work through Twitter and blog posts

I think it’s important to try to disseminate work widely, so each new publication should be publicised on Twitter, usually with its own blog post. This allows us to link the final published article with all of the related resources, and also to include an accessible summary of the work and part of the story about how it came about. When papers generate media interest, a blog post is also a useful place to collate links to the coverage.

Overall ethos: Aim for fewer, larger studies

As discussed in the sections on sample size and replication, going forward I’m planning to aim for much larger sample sizes. A lot of work these days seems to test ‘just enough’ participants to be publishable. But data sets are so much richer and more informative, and less likely to generate spurious findings, when the sample size is larger. The cost here is that bigger studies take more time and resources, and so this means that probably fewer experiments can be done in total. I feel OK about that though. I think I’ve reached a point in my career where I’ve published plenty of papers, and going forward I’d prefer to aim for quality over quantity (not to say that any of my existing work is of low quality!).

This will mean changing the way some studies are conducted. Traditional threshold psychophysics experiments involve testing a small number of participants on many conditions. It might be necessary to flip this paradigm around, and test many participants on a small subset of conditions each (which I’ve done in the past). For studies that can be run outside of a highly controlled lab setting, online recruitment tools will be worth investigating. Student projects can be run in groups and across multiple years to increase the sample size. And when writing grant proposals, funds can be requested to cover the costs of testing a larger sample. Many funders (for example the MRC) explicitly support reproducible, open science, and would rather fund something that will deliver a definitive answer, even if it costs a bit more.
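
Coming back to the ‘many participants, few conditions each’ idea, here is a sketch of how the subsets could simply be rotated across participants so that all conditions are covered evenly (condition names and numbers are invented for illustration):

```python
# Sketch: assign each participant a rotating subset of the full condition list,
# so all conditions are covered evenly across a large sample.
# Condition names and numbers are invented for illustration.
from itertools import cycle, islice

conditions = [f"cond_{i:02d}" for i in range(12)]   # full design: 12 conditions
per_participant = 3                                 # each person runs only 3
n_participants = 8

rotation = cycle(conditions)
for p in range(1, n_participants + 1):
    subset = list(islice(rotation, per_participant))
    print(f"P{p:02d}: {subset}")
```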

Overall ethos: Pass on these habits to the next generation

All of the above activities should, in time, become part of the lab culture. This means that all project students, PhD students and postdocs working in the lab should pick up the general habit of openness, and take this with them to whatever they do next. I’ll also try to spread these practices to people I collaborate with, so hopefully this series of blog posts will help make the case for why it’s important and worth the effort!


3 Responses to A lab roadmap (an open science manifesto, part 2)

  1. […] I don’t post very often on Twitter, but I do read things that others post, especially over the past couple of years since I stopped using Facebook. I can’t remember when I first became aware of the open science ‘movement’ as it’s often called. I suppose that I read about the various components separately at different times. Much like religious extremists who ‘self-radicalise’ by reading material online and watching videos, over the past couple of years I’ve gradually come around to this point of view. I now intend to dramatically change how we do research in my lab, and have put together a ‘roadmap’, detailed in the companion post. […]

  2. Daniel,

    I’m very impressed by this roadmap and your justification of it. It’s one thing to ‘convert’ to the idea of Open Science, but quite another to make a detailed plan for changing the way you do things. There is a bit of a hysterical debate in the media about Plan S and it seems to be driven entirely by old, male, white, senior professors who seem to not actually understand what Open Access means or how it works. I can’t help but think that their “contributions” to the debate are mainly driven by not wanting to change the habits of a lifetime and get used to a new way of working. There is a lot of nonsense in the Norwegian press about how it is an attack on academic freedom, how all OA journals are predatory, how it is (I kid you not) “illegal”, ad nauseam.

    I think we’ve been lucky to work in a field where we had an online-only, OA journal very, very early (JoV). In other fields such as economics and business… well. It’s like stepping back 20 years.

    Kudos to you for practicing what other people only preach.

    • bakerdh says:

      Hi Craig,

      Good to hear from you, and glad you liked the post. Indeed we are lucky to have had JoV as an example of a high quality OA journal (apparently conceived in the 1980s! http://dx.doi.org/10.1167/11.5.ii ). I think in other fields people still equate OA with low quality, predatory or ‘vanity’ type journals, and persistently confuse green and gold OA. Interestingly it’s seeming like there might be a loophole in Plan S whereby green OA preprints of papers in traditional journals would still be permitted, but that will still depend on the details of the copyright arrangements.

      Hope all’s well with you!
