Marmite and the spread of misinformation


Last week we published a study about Marmite affecting brain function in the Journal of Psychopharmacology. Perhaps unsurprisingly, this got a huge amount of media attention, with coverage on radio, television and in print. Anika and I did a range of interviews, which was an interesting and exhausting experience!

What was really striking was watching how the echo chamber of the internet handled the story. We were very careful in our press release and interviews not to name any specific diseases or disorders that might be affected by our intervention. What we think is happening is that the high levels of vitamin B12 in Marmite are stimulating the production of GABA in the brain, leading to a reduction of neural activity in response to visual stimuli. Now it happens that GABA deficits are implicated in a whole range of neurological diseases and disorders, but since we haven’t tested any patients we can’t say whether eating Marmite could be a good thing, a bad thing, or have no effect on any diseases at all.

But to the media, this somehow became a study about trying to prevent dementia! Headlines like “Marmite may boost brain and help stave off dementia” (Telegraph) were exactly what we wanted to avoid, particularly because of the risk that some patient somewhere might stop taking their medication and eat Marmite instead, which could be very dangerous. We even stated very clearly in our press release:

“Although GABA is involved in various diseases we can make no therapeutic recommendations based on these results, and individuals with a medical condition should always seek treatment from their GP.”

But these cautions were roundly ignored by most of the reporters who covered the piece (even those who interviewed us directly), as amusingly and irreverently explained in an article from Buzzfeed. I think a big part of the problem is that it is not routine practice for scientists whose work is covered in the media to see, let alone approve, the final version of a story before it is published. Perhaps a mechanism needs to be developed by which authors can grant some sort of stamp of approval to a story, to prevent this sort of thing and limit the spread of misinformation. In the meantime, it’s been a striking example of how, despite our best efforts, the media will report whatever they want to, however tenuously it’s linked to the underlying findings.

The paper:
Smith, A.K., Wade, A.R., Penkman, K.E.H. & Baker, D.H. (2017). Dietary modulation of cortical excitation and inhibition. Journal of Psychopharmacology, in press, [DOI].

Repository version (open access)

University of York press release

A selection of media coverage:

The Independent
The Telegraph
The Times
Sky News
Sky News Facebook Live
The Mirror
The Express
The Sun
The Jersey Evening Post
The Daily Maverick
Japan Times
Yorkshire Post
Eagle FM
Stray FM
New Zealand Herald
Huffington Post
Science Focus
Science Media Centre
Neuroscience News
Daily Star
Boots WebMD
Pakistan Today
Washington Times
Men’s Health
South China Morning Post
Good Housekeeping
Medical News Today
Daily Mail



Estimating Oculus Rift pixel density


A few months ago I bought an Oculus Rift DK2. Although these are designed for VR gaming, they’re actually pretty reasonable stereo displays. They have several desirable features, particularly that the OLED display is pulsed stroboscopically each frame to reduce motion blur. This also means that every pixel is updated at the same time, unlike on most LCD panels, so they can be used for timing-sensitive applications. As of a recent update they are also supported by Psychtoolbox, which we use to run the majority of experiments in the lab. Lastly, they’re reasonably cheap, at about £300.

In starting to set up an experiment using the goggles I thought to check what their effective pixel resolution was in degrees of visual angle. Because the screens are a fixed distance from the wearer’s eye, I (foolishly) assumed that this would be a widely available value. Quite a few people simply took the monocular resolution (1080 x 1200) and divided this by the nominal field of view (110° vertically), producing an estimate of about 10.9 pixels per degree. As it turns out, this is pretty much bang on, but that wasn’t necessarily the case, because the lenses produce increasing levels of geometric distortion (bowing) at more eccentric locations. This might have the effect of concentrating more pixels in the centre of the display, increasing the number of pixels per degree.
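For concreteness, that back-of-the-envelope estimate is just the vertical pixel count divided by the nominal vertical field of view, using the figures from the paragraph above:

```python
monocular_res = (1080, 1200)  # per-eye resolution in pixels (width, height)
nominal_fov_deg = 110         # nominal vertical field of view in degrees

# naive pixels-per-degree estimate, ignoring lens distortion
ppd_estimate = monocular_res[1] / nominal_fov_deg
print(round(ppd_estimate, 1))  # 10.9
```

As noted, this ignores the lens distortion entirely, which is why it needed checking.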

Anyway, I decided it was worth verifying these figures myself. Taking a cue from methods we use to calibrate mirror stereoscopes, here’s what I did…

First I created two calibration images, consisting of a black background, and either one central square, or two lateralised squares. All the squares were 200 pixels wide (though this isn’t crucial), and the one with two squares was generated at the native resolution of the Oculus Rift (2160×1200). Here’s how the first one looks:


And here’s how the other one, with only one square, looked:


These images were created with a few lines of Matlab code:

ORw = 2160; % full width of the Oculus Rift display in pixels
ORh = 1200; % height of the Oculus Rift display in pixels
CSw = 1440; % width of the other computer's display in pixels
CSh = 900;  % height of the other computer's display in pixels
ORs = 200;  % width of the squares shown on the Rift
CSs = 200;  % width of the square shown on the computer's display

% two-square image at the Rift's native resolution (one square per eye)
a = zeros(ORh,ORw);
a((1+ORh/2-ORs/2):(ORh/2+ORs/2),(1+ORw/4-ORs/2):(ORw/4+ORs/2)) = 1;
a((1+ORh/2-ORs/2):(ORh/2+ORs/2),(1+3*ORw/4-ORs/2):(3*ORw/4+ORs/2)) = 1;

% single central square for the comparison display
b = zeros(CSh,CSw);
b((1+CSh/2-CSs/2):(CSh/2+CSs/2),(1+CSw/2-CSs/2):(CSw/2+CSs/2)) = 1;
I then plugged in the Rift, and displayed the two-square image on it, and the one-square image on an iPad (though in principle this could be any screen, or even a printout). Viewed through the Rift, each square goes to only one eye, and the binocular percept is of a single central square.

Now comes the clever bit. The rationale behind this method is that we match the perceived size of a square shown on the Rift with one shown on the iPad. We do this by holding the goggles up to one eye, with the other eye looking at the iPad. It’s necessary to do this at a bit of an angle, so the square gets rotated to be a diamond, but we can rotate the iPad too to match the orientation. I found it pretty straightforward to get the sizes equal by moving the iPad forwards and backwards, and using the pinch-to-zoom operation.

Once the squares appeared equal in size I put the Rift down, but kept the iPad position fixed. I then measured two things: the distance from the iPad to my eye, and the width of the square on the iPad screen. The rest is just basic maths:

The iPad square was 7.5cm wide, and matched the Rift square at 24cm from the eye. At that distance an object 1cm wide subtends 2.4° of visual angle (because at 57cm, 1cm=1°). [Note, for the uninitiated, the idea of degrees of visual angle is that you imagine a circle that goes all the way around your head, parallel to your eyes. You can divide this circle into 360 degrees, and each individual degree will be about the size of a thumbnail held at arm’s length. The reason people use this unit is that it can be calculated for a display at any distance, allowing straightforward comparison of experimental conditions across labs.] That means the square is 2.4*7.5=18° wide. Because this is matched with the square on the Rift, the Rift square is also 18° wide. We know the square on the Rift is 200 pixels wide, so that means 18° = 200 pix, and 1° = 11 pixels. So, the original estimates were correct, and the pixel density at the centre of the screen is indeed 11 pixels/deg.
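For anyone wanting to check the arithmetic, here is the same calculation in a few lines of Python, using both the exact visual-angle formula and the small-angle “57cm rule” shortcut from the text:

```python
import math

square_cm = 7.5   # measured width of the matched square on the iPad
dist_cm = 24.0    # eye-to-iPad distance at the point of match
square_px = 200   # width of the square drawn on the Rift

# exact visual angle subtended by the square, in degrees
angle_deg = 2 * math.degrees(math.atan(square_cm / (2 * dist_cm)))

# small-angle shortcut: at 57 cm, 1 cm subtends about 1 degree
approx_deg = (57.0 / dist_cm) * square_cm

pix_per_deg = square_px / angle_deg

print(round(angle_deg, 1))    # exact angle, ~17.8 degrees
print(round(approx_deg, 1))   # small-angle approximation, ~17.8 degrees
print(round(pix_per_deg, 1))  # ~11.3 pixels per degree
```

The exact and approximate angles agree to within about a twentieth of a degree here, so the rounding to 11 pixels/deg in the text is safe.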

This is actually quite a low resolution, which isn’t surprising since the screen is close to the eye, individual pixels are easily visible, and the whole point of the Rift is to provide a wide field of view rather than a high central resolution. But it’s sufficient for some applications, and its small size makes it a much more portable stereo display than either a 3D monitor or a stereoscope. I’m also pleased I was able to independently verify other people’s resolution estimates, and have developed a neat method for checking the resolution of displays that aren’t as physically accessible as normal monitors.

Conferences and various things


I’ve just finished a very busy term. It was my first term of proper teaching, which took up a lot more time than I was expecting. It seemed to go well though – I got some very positive feedback from the students, as well as some hilarious comments (“Iron your shirts!!!” being my favourite…).

I’ve also been planning two conferences that we’re holding at York this year. The AVA meeting is in a few weeks’ time, on the 11th of April. We’ve got some excellent talks lined up, and have just finalised the programme. The BACN meeting is taking place in September, so there’s still plenty of time to submit abstracts for that.

Last week I was up in Glasgow, where I gave a talk in the optometry department at Glasgow Caledonian. I also went to an excellent gig at King Tut’s Wah Wah Hut, where my old band played a few times about 15 years ago. We saw two bands from the 90s: Republica and Space. It looked like this:

Space at King Tut's


Other than that, I’ve had a couple of papers out this year, and am working on some more. I’m also anxiously awaiting news of a BBSRC grant proposal I put in back in September. I got some very positive reviews in January, and should hear next month whether it’s been funded. Fingers crossed!

New job, first month


So, at the beginning of the month I started working at York. It’s been a busy few weeks, meeting lots of people and finding out how things work. The department is great, and people have been very friendly and welcoming. So far I’ve mostly been walking in to work, and the winter mornings have been (occasionally) lovely:

Mist on the Ouse


My new computers showed up this week and are nearly set up the way I want them. I managed to install Grace under OS X Mountain Lion, which was rather easier than I was expecting. The lab setup is coming together: I’ve now got two very sturdy Headspot chin rests all the way from Houston, and a couple of height-adjustable tables are on their way.

Also, excitingly, I ordered a small EEG device (the EEG-SMT) from a Bulgarian company called Olimex. It’s a very basic open-source design, but should hopefully be good enough for measuring VEPs. It arrived on Friday, and I’ve managed to stream data from it into Matlab over the USB port (using the Psychtoolbox IOPort command). Working out how to interpret that data might take a little work, and I’ll likely produce a blog post once I’ve cracked it. I think the device has lots of potential if it produces clean enough data, and at well under £200 it’s easily affordable for anyone interested.
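For what it’s worth, once the bytes are streaming in, most of the interpretation work is finding packet boundaries. Here’s a minimal Python sketch of a parser, assuming the 17-byte ModularEEG “P2” frame layout (0xA5 0x5A sync bytes, a version byte, a packet counter, six 10-bit channel values as high/low byte pairs, and a switches byte) that the EEG-SMT firmware reportedly uses – treat that layout as an assumption until checked against the Olimex documentation:

```python
SYNC0, SYNC1 = 0xA5, 0x5A
PACKET_LEN = 17  # assumed ModularEEG "P2" frame length

def parse_p2(buf):
    """Scan a byte buffer for complete P2 frames; return a list of
    (packet_counter, [ch1..ch6]) tuples of 10-bit sample values."""
    packets = []
    i = 0
    while i + PACKET_LEN <= len(buf):
        if buf[i] == SYNC0 and buf[i + 1] == SYNC1:
            frame = buf[i:i + PACKET_LEN]
            count = frame[3]  # wraps at 256; useful for spotting dropped frames
            # six channels, each a big-endian high/low byte pair
            channels = [frame[4 + 2*c] << 8 | frame[5 + 2*c] for c in range(6)]
            packets.append((count, channels))
            i += PACKET_LEN
        else:
            i += 1  # resync byte by byte if we start mid-stream

    return packets

# synthetic example frame: counter 7, channel 1 = 512, the rest zero
frame = bytes([SYNC0, SYNC1, 2, 7, 2, 0] + [0]*10 + [0])
print(parse_p2(frame))  # [(7, [512, 0, 0, 0, 0, 0])]
```

The same byte-level logic would translate directly into a Matlab loop over the values returned by IOPort.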

At the start of this month I went to the EPS meeting in London. It was the first one I’ve been to and I really enjoyed it. There was much more vision on the program than I was expecting, and I met some really interesting people. I’ll definitely start going to more of the meetings and maybe also join the society.

Last day at Aston


So, today is my last day on campus at Aston. I’m amazed at how quickly the last three and a half years have passed; it feels like no time at all since I was starting back here after postdoccing in Southampton. Still, it’s been a productive time, and on balance I’m glad I chose to come back here rather than do something else.

This morning I made a Wordle from the text of all the papers I’ve published. I might put it on my new website:


The other day we went for some leaving drinks at the Bull, which went like this:

Lots of people at the Bull to celebrate me leaving.


At the moment our house is full of boxes. Laura is off work today to do some last minute packing (mostly of her craft materials) and hopefully sell her car. Then tomorrow some burly men will arrive and load everything into a lorry and take it to York. I’ll be there already (hopefully) with the cats to tell them where to put everything.

Next week I’m giving a talk at the AVA Christmas meeting in London, and then I’m back in London again just after New Year for the EPS meeting. It’s the first one I’ve been to, and I’m looking forward to going to a more general psychology conference. I’ll need to make my talk less geeky though!

Lastly, John Cass and I submitted our first collaborative paper yesterday. We sent it to a journal that will probably reject it without even bothering to review it, but hey, it doesn’t hurt to aim high!

Clearing the decks


In the last few weeks running up to Christmas there’s lots to get done. At the moment I’m trying to tie up lots of projects, and get data collected on a few experiments. One of those is a collaboration with John Cass, who came to visit the lab in September. We’re looking at consciousness during binocular rivalry, and it looks like it’s going to be an interesting study. I’ve got a few more subjects to run, and then we can start analysing and writing it up.  Here’s John savouring some beer:

John drinking beer

Our lab moved from Aston’s monolithic main building into the Vision Sciences building a few weeks ago. The new space is really great, with a big open area to have meetings and conversations in. We’ve had a few teething problems with the heating and lighting, but we’re getting things sorted now.  Here’s the open area on move-in day – it’s a bit tidier now!

A wide angle shot of the new lab space on move-in day.

We’ve also finally sorted out a place to live in York. It’s big, near the station, and they’re happy with our cats living there, so it fits all our criteria. The move should happen in about five weeks’ time, which means we’ll be moved in ready for Christmas. I’m really looking forward to starting the new job in January – I even have a temporary new website up in my new department.

In tying up lots of projects, we’ve had a few new papers published, some of which have been in the pipeline for quite a while. There are a couple in the final stages of review, and here are some that are already out:

This is the first thing we’ve published from Alex Baldwin’s PhD. Alex measured sensitivity to small grating patches across the visual field, in much greater detail than people had attempted before. It turns out that sensitivity falls off as a bilinear function of eccentricity, which has important ramifications for models of spatial vision.

Another study by Wallis et al looked at the slope of the psychometric function for a range of different stimuli. We wanted to see if slopes varied with spatial frequency or pattern size (they don’t), and also to work out the most accurate method for estimating slopes over many sessions.

Finally, in a collaborative paper with Pi-Chun Huang and Robert Hess, we looked at the temporal properties of interocular suppression, in both normal and amblyopic subjects. We explain all of our findings with a simple model that assumes that signals are blurred and delayed slightly in time before they have a masking effect on the opposite eye. Surprisingly, the amblyopes don’t show greater suppression than the normal observers, once you take into account the difference in sensitivity between their eyes.

Thoughts on academic social media websites


In recent years, numerous social media websites geared towards academics have been introduced. Many of these aim to become a ‘Facebook for researchers’, with varying degrees of success. I find this interesting, so I tend to sign up for sites when I become aware of them. I thought I’d summarise my thoughts on all of the ones I’m aware of, partly because several people have asked me about them, but also to make it easier to keep track of them all myself!

Academia.edu                                            Rating: 4/5 Worth a go

Probably the most widely used, and also the most similar to Facebook. You create a profile and populate it with your publication list, as well as job history, talks and so on. You can ‘follow’ other researchers and see their updates in a feed. I found the process of manually adding publications fairly straightforward, whereas the automated method was poor, probably because I have a fairly high-frequency name. One unique feature is that it tells you if someone has searched Google and then clicked on your profile. This happens occasionally, and it’s interesting to see the search terms used and the searcher’s location.

Biomed Experts                                           Rating: 3/5 OK

One of the earliest websites I signed up to, and still going. It is interesting, as it creates profiles by using an algorithm to group PubMed entries which appear to be by the same person. You can then claim your profile, and fix any errors. A huge advantage of this is that it automatically adds new publications as they come out, and it also suggests relevant papers you might find interesting (much more successfully than other websites). It makes pretty, if pointless, network diagrams of people who collaborate with each other. I sometimes find it hard to tell if other people have actually signed up, or if their profiles are just auto-generated.

ResearchGate                                               Rating: 2/5 Waste of time

Kind of a mixture of the previous two websites. It calculates ‘impact points’, which turn out to be just the impact factor (from 2009) of the journal an article was published in. It adds these up for a ‘total impact’, and averages them too. Not terribly useful, and it doesn’t do anything better than the alternatives.

IAmScientist                                                 Rating: 1/5 Not worth it

Again, similar to the above three sites, with little to make it stand out. I actually don’t know anyone else on this one, and there doesn’t seem to be a mechanism for ‘connecting’ with other people. There used to be a horrendous bug in the publication-searching feature where it would add thousands of unwanted papers, which were then difficult to remove. Hopefully they’ve sorted that out by now. Possibly created by a non-native English speaker, as there are various peculiar turns of phrase (such as “Manage you publications”).

SciLink                                                           Rating: 1/5 Gone anyway

A primitive version of the above websites. It seems to have disappeared entirely, so I assume the founders went bankrupt. Basically the same concept, but with virtually no subscribers (I knew no one else on there). Also the founder of the site had a weird habit of trawling through message boards and leaving asinine comments, presumably to try and ‘stimulate conversation’, or maybe to make it look like someone gave a shit. Seemed to be ad-funded. Can’t say I’ve missed it.

NeuroTree                                                     Rating: 4/5 Simple, original, cool

A clever and informative ‘family tree’ for neuroscience researchers, with around 36,000 people added. It shows who your academic ‘parents’ (PhD and postdoc supervisors) and ‘children’ (students) are, and goes back several generations. It’s pretty heavily dominated by vision researchers. You can update your own, or other people’s, information, and add new people too, alive or dead. There is a feature for calculating the distance between two people, or finding your nearest Nobel prize winner. Similar things exist for other disciplines, such as the famous Erdős number for mathematicians.

ResearcherID                                                  Rating: 3/5 Alright

Not exactly a networking website, this is Thomson’s tool for keeping track of citations to all your publications. Originally it was invite-only, but now anyone can sign up. You locate your papers in Web of Knowledge/Web of Science, assuming you have an institutional subscription; subsequent papers have to be added manually through the same process. It then counts your cites, produces a pretty graph and tells you your H-index. It used to be the best way of doing this, but has since been eclipsed by Google Scholar (see below).

There is a nice feature for producing a ‘badge’ to stick on your website, to ‘pimp your H’. My main dislike is the constant signing in it seems to insist on. Often I just want to see the ‘public’ version of my profile without signing in. However, if it detects that you have previously signed in on the machine you’re using, it forces you through the laborious (and often malfunctioning) Shibboleth/Athens sign-in rigmarole. This is such a pain that I usually don’t bother.

Google scholar citations                                     Rating: 5/5 Great

Very similar to ResearcherID, but it works better. Setting the whole thing up took literally under a minute – it found all my papers with only one false positive, which was easily discarded. It automatically adds new papers within a few days of being published. It’s more inclusive for citations, as it indexes the whole of the internet, rather than Thomson’s more limited database. This means my H-index is slightly higher than in ResearcherID. You can add your co-authors if they’re signed up too, and because it’s Google it’s free, and doesn’t require endless sign ins. The only thing I can think of to improve it would be a facility to ‘follow’ other people’s citations and H-index. This sort of exists, but it’s for email notifications – I’d prefer a summary on a web page instead. Otherwise, it’s great – Google have really outdone themselves with this and their journal citation tool (see my previous post).

Microsoft Academic Search (beta)                    Rating: 2/5 Not worth it

Essentially a poor copy of the previous two sites. Records are auto-generated, and (in my case) wildly wrong. You don’t claim your own profile (as with BioMed Experts); instead you submit change requests on your own, or anyone else’s, profile. These then have to be approved, so they take around a week to go live. It calculates an H-index, but seems to miss about 50% of legitimate cites from what I can make out. Hopefully it will get better, but I don’t see what it adds to any of the others.

Labome                                                           Rating: 1/5 Pointless

I stumbled across this the other day. It appears to be just a list of publications, which is mostly correct for me but has a few missing. I don’t see what it’s for; there’s no way to ‘join’ or actually do anything with the information. I’m not even sure how to pronounce it – is it ‘ome’ to rhyme with ‘dome’, or a contracted version of “Lab of me”? Possibly an oblique scheme for hawking antibodies, as it seems to be a company that sells the same things as lots of other companies who spam me all the time. They are the science version of “Canadian pharmacies” selling Viagra.


With lots of these websites trying to be like Facebook, part of me thinks: why bother? Facebook already exists, and virtually everyone who owns a computer already has an account. So, if you want to interact with your colleagues, maybe ask them something, why not do it on Facebook? In practice, this happens quite a lot these days – people post about conferences, plug their papers, and ask each other questions. Google+ also seems to have a fair number of people I know through work (more than non-work friends, actually), and so that’s had some work-related chat recently. Of course, there’s no way of showing off all your papers, but you can always add a link to your website for anyone interested.

I guess I’d be surprised if many of these websites are still around in 5 years. They are trying to fill a gap that doesn’t really exist, and most of them aren’t doing it particularly well.