Psychophysical meta-analysis

13/08/2018

Much of our understanding of sensory systems comes from psychophysical studies conducted over the past century. This work provides us with an enormous body of information that can guide contemporary research. Meta-analysis is a widely used method in biomedical research that aims to quantitatively summarise the effects from a collection of studies on a given topic, often producing an aggregate estimate of effect size. Yet whilst these tools are commonplace in some areas of psychology, they are rarely employed to understand sensory perception. This may be because psychophysics has some idiosyncratic properties that make generalisation difficult: many studies involve very few participants (frequently N<5), and most use esoteric methods and stimuli aimed at answering a single question. Here I suggest that in some domains, the tools of meta-analysis can be employed to overcome these problems to unlock the knowledge of the past.

In previous publications, I have occasionally aggregated data across earlier studies to address a specific question. For example, in 2012 I published a paper that plotted the slope of the psychometric function with and without external noise, collated from 18 previous studies. This revealed a previously unreported effect of the dimensionality of the noise on the extent to which psychometric functions are linearised. Then in 2013 I aggregated contrast discrimination ‘dipper’ functions from 18 studies and 63 observers, in an attempt to understand individual differences in detection threshold. This data set was also averaged to characterise discrimination performance in terms of the placement of the dip and the steepness of the handle.

These examples added value to the papers they were included in by reanalysing existing data in a novel way. But they are not traditional examples of meta-analysis, as they focussed on the data (thresholds and slopes) of individual participants from the included studies, rather than averaging measures of effect size across studies.

An excellent example of a study that collates effect size measures (Cohen’s d) across multiple psychophysical studies is an authoritative and detailed meta-analysis by Hedger et al. (2016). This paper investigates how visually threatening stimuli (such as fearful faces) are processed in the absence of awareness, when the stimuli are rendered invisible by manipulations such as masking and binocular rivalry. This is a heavily researched area, and the included studies contained a total of 2696 participants. Overall, the meta-analysis concludes that masking paradigms produce convincing effects, that binocular rivalry produces medium effects, and that effects are inconsistent under continuous flash suppression. Additional analyses drill down into the specifics of each study, exploring how stimuli and experimental designs influence outcomes.

Inspired by this exemplary work, my collaborators and I recently undertook a meta-analysis of binocular summation – the improvement in contrast sensitivity when stimuli are viewed with two eyes instead of one. This is also a heavily investigated topic because of its clinical utility as an index of binocular health and function, and we included 65 studies with a total sample size of 716 participants. Our central question was whether the summation ratio (an index of the binocular advantage) significantly exceeded the canonical value of √2 first reported by Campbell and Green (1965). Many individual studies reported ratios higher than this, but sample sizes were often small (median N=5 across the 65 studies), meaning that individual variability could have a substantial effect. We averaged the mean summation ratios using three different weighting schemes (giving equal weight to studies, weighting by sample size, and weighting by the inverse variance). Regardless of weighting, the lower bound of the 95% confidence interval on the mean summation ratio always exceeded √2, conclusively overturning a long-established psychophysical finding, with implications for our understanding of nonlinearities early in the visual system.
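
To make the three weighting schemes concrete, here is a minimal Matlab sketch using made-up study-level summaries (means, standard deviations and sample sizes); these values are purely illustrative, not the data from the 65 studies:

ratio = [1.55 1.42 1.68 1.47 1.61];   % made-up study means (NOT the real values from the 65 studies)
sd    = [0.20 0.15 0.25 0.18 0.22];   % made-up study standard deviations
n     = [5 4 12 6 8];                 % made-up sample sizes
sem   = sd./sqrt(n);                  % standard error of each study mean

weights = {ones(size(n)), n, 1./sem.^2};   % equal, sample-size and inverse-variance weighting
for s = 1:3
    w  = weights{s}./sum(weights{s});      % normalise the weights to sum to 1
    m  = sum(w.*ratio);                    % weighted mean summation ratio
    se = sqrt(sum(w.^2.*sem.^2));          % standard error of the weighted mean
    fprintf('Scheme %d: mean = %.3f, 95%% CI = %.3f to %.3f\n',s,m,m-1.96*se,m+1.96*se);
end
% the question of interest is whether the lower CI limit exceeds sqrt(2) = 1.414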

We also performed additional analyses to explore the effect of stimulus spatiotemporal frequency, and of the difference in sensitivity across the eyes, confirming our findings with new data. This work reveals an effect of stimulus speed (the ratio of temporal to spatial frequency), suggesting that neural summation varies with stimulus properties: there is no single ‘true’ value for binocular summation, but rather a range of possible values between √2 and 2. Our analysis of monocular sensitivity differences also informs how the data of future studies can best be analysed.

Although the summation meta-analysis was conducted using the summation ratio as the outcome variable, it is possible to convert the aggregate values to more traditional measures of effect size. Doing this revealed an unusually large effect size (Cohen’s d=31) for detecting the presence of binocular summation, and another large effect size (Cohen’s d=3.22) when comparing to the theoretical value of √2. These very large effects mean that even studies with very few participants (N=3) have substantial power (>0.95). In many ways, this can be considered a validation of the widespread psychophysical practice of extensively testing a small number of observers using very precise methods.
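
To give a flavour of how those numbers work out, here is a minimal sketch using made-up summary values (a hypothetical mean and SD of individual summation ratios, not the meta-analytic estimates), with a normal approximation standing in for the noncentral t distribution that a proper power calculation would use:

mu    = 1.6;                      % hypothetical mean summation ratio (NOT the meta-analytic estimate)
sigma = 0.06;                     % hypothetical SD of individual observers' ratios
d     = (mu - sqrt(2))/sigma;     % Cohen's d for the comparison with sqrt(2)
N     = 3;                        % a typically tiny psychophysical sample
z     = d*sqrt(N) - 1.96;         % normal approximation to the test statistic (alpha = 0.05, two-tailed)
pwr   = 0.5*erfc(-z/sqrt(2));     % Phi(z): approximate power of the test
fprintf('d = %.2f, approximate power with N = %d observers: %.3f\n',d,N,pwr);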

Overall, meta-analysis can reveal important psychophysical effects that were previously obscured by the limitations of individual studies. This provides opportunities to uncover findings based on large aggregate sample sizes that will inspire new experiments and research directions. The binocular summation meta-analysis is now available online, published in Psychological Bulletin [DOI].


Marmite, and the spread of misinformation

09/04/2017

Last week we published a study in the Journal of Psychopharmacology showing that Marmite affects brain function. Perhaps unsurprisingly, this got a huge amount of media attention, with coverage on radio, television and in print. Anika and I did a range of interviews, which was an interesting and exhausting experience!

What was really striking was watching how the echo chamber of the internet handled the story. We were very careful in our press release and interviews not to name any specific diseases or disorders that might be affected by our intervention. What we think is happening is that the high levels of vitamin B12 in Marmite are stimulating the production of GABA in the brain, leading to a reduction of neural activity in response to visual stimuli. Now it happens that GABA deficits are implicated in a whole range of neurological diseases and disorders, but since we haven’t tested any patients we can’t say whether eating Marmite could be a good thing, a bad thing, or have no effect on any diseases at all.

But to the media, this somehow became a study about trying to prevent dementia! Headlines like “Marmite may boost brain and help stave off dementia” (Telegraph) were exactly what we wanted to avoid, particularly because of the risk that some patient somewhere might stop taking their medication and eat Marmite instead, which could be very dangerous. We even stated very clearly in our press release:

“Although GABA is involved in various diseases we can make no therapeutic recommendations based on these results, and individuals with a medical condition should always seek treatment from their GP.”

But these cautions were roundly ignored by most of the reporters who covered the piece (even those who interviewed us directly), as amusingly and irreverently explained in an article from Buzzfeed. I think a big part of the problem is that it is not routine practice for scientists whose work is covered in the media to approve (or even see) the final version of a story before it is published. Perhaps a mechanism needs to be developed by which authors can grant some sort of stamp of approval to a story, to prevent this sort of thing and avoid the spread of misinformation. In the meantime, it’s been an amazing example of how, despite our best efforts, the media will just report whatever they want to, however tenuously it’s linked to the underlying findings.

The paper:
Smith, A.K., Wade, A.R., Penkman, K.E.H. & Baker, D.H. (2017). Dietary modulation of cortical excitation and inhibition. Journal of Psychopharmacology, in press, [DOI].

Repository version (open access)

University of York press release

A selection of media coverage:

The Independent
The Telegraph
The Times
Sky News
Sky News Facebook Live
Wired
The Mirror
The Express
Metro
The Sun
Netdoctor
SBS
The Jersey Evening Post
The Daily Maverick
Japan Times
Heart
Yorkshire Post
Eagle FM
Stray FM
ENCA
Buzzfeed
New Zealand Herald
Huffington Post
Science Focus
ITV
Nouse
Science Media Centre
Neuroscience News
iNews
NHS UK
Daily Star
Spectator
Boots WebMD
Pakistan Today
Grazia
Washington Times
Men’s Health
South China Morning Post
Good Housekeeping
Inverse
Medical News Today
Daily Mail


Estimating Oculus Rift pixel density

14/10/2015

A few months ago I bought an Oculus Rift DK2. Although these are designed for VR gaming, they’re actually pretty reasonable stereo displays. They have several desirable features, particularly that the OLED display is pulsed stroboscopically each frame to reduce motion blur. This also means that every pixel is updated at the same time, unlike on most LCD panels, so the display can be used for timing-sensitive applications. As of a recent update they are also supported by Psychtoolbox, which we use to run the majority of experiments in the lab. Lastly, they’re reasonably cheap, at about £300.

In starting to set up an experiment using the goggles, I wanted to check what their effective pixel resolution was in degrees of visual angle. Because the screens are a fixed distance from the wearer’s eye, I (foolishly) assumed that this would be a widely available value. Quite a few people had simply taken the monocular resolution (1080 x 1200) and divided the vertical pixel count by the nominal field of view (110° vertically), producing an estimate of about 10.9 pixels per degree. As it turns out, this is pretty much bang on, but that wasn’t necessarily the case, because the lenses produce increasing levels of geometric distortion (bowing) at more eccentric locations. This might have the effect of concentrating more pixels in the centre of the display, increasing the number of pixels per degree there.

Anyway, I decided it was worth verifying these figures myself. Taking a cue from methods we use to calibrate mirror stereoscopes, here’s what I did…

First I created two calibration images, consisting of a black background, and either one central square, or two lateralised squares. All the squares were 200 pixels wide (though this isn’t crucial), and the one with two squares was generated at the native resolution of the Oculus Rift (2160×1200). Here’s how the first one looks:

[Image: ORimage.jpg]

And here’s how the other one, with only one square, looked:

[Image: CSimage.jpg]

These images were created with a few lines of Matlab code:

ORw = 2160; % full width of the oculus rift in pixels
ORh = 1200; % height of the oculus rift in pixels
CSw = 1440; % width of other computer's display in pixels
CSh = 900;  % height of other computer's display in pixels
ORs = 200;  % width of the squares shown on the rift
CSs = 200;  % width of the square shown on the computer's display

a = zeros(ORh,ORw);
a((1+ORh/2-ORs/2):(ORh/2+ORs/2),(1+ORw/4-ORs/2):(ORw/4+ORs/2)) = 1;
a((1+ORh/2-ORs/2):(ORh/2+ORs/2),(1+3*ORw/4-ORs/2):(3*ORw/4+ORs/2)) = 1;
imwrite(a,'ORimage.jpg','jpeg')

a = zeros(CSh,CSw);
a((1+CSh/2-CSs/2):(CSh/2+CSs/2),(1+CSw/2-CSs/2):(CSw/2+CSs/2)) = 1;
imwrite(a,'CSimage.jpg','jpeg')

I then plugged in the Rift, and displayed the two-square image on it, and the one-square image on an iPad (though in principle this could be any screen, or even a printout). Viewed through the Rift, each square goes to only one eye, and the binocular percept is of a single central square.

Now comes the clever bit. The rationale behind this method is that we match the perceived size of a square shown on the Rift with one shown on the iPad. We do this by holding the goggles up to one eye, with the other eye looking at the iPad. It’s necessary to do this at a bit of an angle, so the square gets rotated to be a diamond, but we can rotate the iPad too to match the orientation. I found it pretty straightforward to get the sizes equal by moving the iPad forwards and backwards, and using the pinch-to-zoom operation.

Once the squares appeared equal in size I put the Rift down, but kept the iPad position fixed. I then measured two things: the distance from the iPad to my eye, and the width of the square on the iPad screen. The rest is just basic maths:

The iPad square was 7.5cm wide, and matched the Rift square at 24cm from the eye. At that distance an object 1cm wide subtends 2.4° of visual angle (because at 57cm, 1cm=1°). [Note, for the uninitiated, the idea of degrees of visual angle is that you imagine a circle that goes all the way around your head, parallel to your eyes. You can divide this circle into 360 degrees, and each individual degree will be about the size of a thumbnail held at arm’s length. The reason people use this unit is that it can be calculated for a display at any distance, allowing straightforward comparison of experimental conditions across labs.] That means the square is 2.4*7.5=18° wide. Because this is matched with the square on the Rift, the Rift square is also 18° wide. We know the square on the Rift is 200 pixels wide, so that means 18° = 200 pix, and 1° = 11 pixels. So, the original estimates were correct, and the pixel density at the centre of the screen is indeed 11 pixels/deg.
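
For anyone who wants to check the numbers, the same calculation in Matlab looks something like this (using the values measured above, and comparing the small-angle approximation with the exact trigonometric version):

squarecm  = 7.5;    % width of the matched square on the iPad (cm)
distcm    = 24;     % viewing distance of the iPad at the match point (cm)
squarepix = 200;    % width of the square shown on the Rift (pixels)

degapprox = squarecm*(57/distcm);           % small-angle approximation: 1cm = 1deg at 57cm
degexact  = 2*atand((squarecm/2)/distcm);   % exact version using trigonometry
ppd       = squarepix/degexact;             % pixels per degree at the centre of the display
fprintf('%.1f deg (approx), %.1f deg (exact), %.1f pixels per degree\n',degapprox,degexact,ppd);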

This is actually quite a low resolution, which isn’t surprising since the screen is close to the eye, individual pixels are easily visible, and the whole point of the Rift is to provide a wide field of view rather than a high central resolution. But it’s sufficient for some applications, and its small size makes it a much more portable stereo display than either a 3D monitor or a stereoscope. I’m also pleased I was able to independently verify other people’s resolution estimates, and have developed a neat method for checking the resolution of displays that aren’t as physically accessible as normal monitors.


Aesthetically pleasing, publication quality plots in R

19/07/2015

I spend a lot of my time making graphs. For a long time I used a Unix package called Grace. This had several advantages, including the ability to create grids of plots very easily. However it also had plenty of limitations, and because it is GUI-based, one had to create each plot from scratch. Although I use Matlab for most data analysis, I’ve always found its plotting capabilities disappointing, so a couple of years ago I bit the bullet and started learning R, using the RStudio interface.

There are several plotting packages for R, including things like ggplot2, which can automate the creation of some plots. Provided your data are in the correct format, this can make plotting really quick, and tends to produce decent results. However, for publication purposes I usually want to have more control over the precise appearance of a graph. So, I’ve found it most useful to construct graphs using the ‘plot’ command, but customising almost every aspect of the graph. There were several things that took me a while to work out, as many functions aren’t as well documented as they could be. So I thought it would be helpful to share my efforts. Below is some code (which you can also download here) that demonstrates several useful techniques for plotting, and should create something resembling the following plot when executed.

Example plot created by the script.

My intention is to use this script myself as a reminder of how to do different things (at the moment I always have to search through dozens of old scripts to find the last time I did something), and copy and paste chunks of code into new scripts each time I need to make a graph. Please feel free to use parts of it yourself, to help make the world a more beautiful place!

# this script contains examples of the following:
# outputting plots as pdf and eps files
# creating plots with custom tick mark positioning
# drawing points, bars, lines, errorbars, polygons, legends and text (including symbols)
# colour ramps, transparency, random numbers and density plots


# Code to output figures as either an eps or pdf file. Note that R's eps files appear not to cope well with transparency, whereas pdfs are fine
outputplot <- 0
if(outputplot==1){postscript("filename.eps", horizontal = FALSE, onefile = FALSE, paper = "special", height = 4.5, width = 4.5)}
if(outputplot==2){pdf("filename.pdf", bg="transparent", height = 5.5, width = 5.5)}
# all the code to create the plot goes here
if(outputplot>0){dev.off()}  # this line goes after you've finished plotting (to output the example below, move it to the bottom of the script)


# set up an empty plot with user-specified axis labels and tick marks
plotlims <- c(0,1,0,1)  # define the x and y limits of the plot (minx,maxx,miny,maxy)
ticklocsx <- (0:4)/4    # locations of tick marks on x axis
ticklocsy <- (0:5)/5    # locations of tick marks on y axis
ticklabelsx <- c("0","0.25","0.5","0.75","1")        # set labels for x ticks
ticklabelsy <- c("0","0.2","0.4","0.6","0.8","1")    # set labels for y ticks


par(pty="s")  # make axis square
plot(x=NULL,y=NULL,axes=FALSE, ann=FALSE, xlim=plotlims[1:2], ylim=plotlims[3:4])   # create an empty axis of the correct dimensions
axis(1, at=ticklocsx, tck=0.01, lab=F, lwd=2)     # plot tick marks (no labels)
axis(2, at=ticklocsy, tck=0.01, lab=F, lwd=2)
axis(3, at=ticklocsx, tck=0.01, lab=F, lwd=2)
axis(4, at=ticklocsy, tck=0.01, lab=F, lwd=2)
mtext(text = ticklabelsx, side = 1, at=ticklocsx)     # add the tick labels
mtext(text = ticklabelsy, side = 2, at=ticklocsy, line=0.2, las=1)  # 'line' moves the labels away from the axis, and 'las=1' rotates them to horizontal
box(lwd=2)      # draw a box around the graph
title(xlab="X axis title", col.lab=rgb(0,0,0), line=1.2, cex.lab=1.5)    # titles for axes
title(ylab="Y axis title", col.lab=rgb(0,0,0), line=1.5, cex.lab=1.5)


# create some synthetic data to plot as points and lines
datax <- sort(runif(10,min=0,max=1))
datay <- sort(runif(10,min=0.2,max=0.8))
SEdata <- runif(10,min=0,max=0.1)
lines(datax,datay, col='red', lwd=3, cex=0.5)     # draw a line connecting the points
arrows(datax,datay,x1=datax, y1=datay-SEdata, length=0.015, angle=90, lwd=2, col='black')  # add lower error bar
arrows(datax,datay,x1=datax, y1=datay+SEdata, length=0.015, angle=90, lwd=2, col='black')  # add upper error bar
points(datax,datay, pch = 21, col='black', bg='cornflowerblue', cex=1.6, lwd=3)   # draw the data points themselves


# create some more synthetic data to plot as bars
datax <- 0.1*(1:10)
datay <- runif(10,min=0,max=0.2)
SEdata <- runif(10,min=0,max=0.05)
ramp <- colorRamp(c("indianred2", "cornflowerblue"))  # create a ramp from one colour to another
colmatrix <- rgb(ramp(seq(0, 1, length = 10)), max = 255)   # index the ramp at ten points
barplot(datay, width=0.1, col=colmatrix, space=0, xlim=1, add=TRUE, axes=FALSE, ann=FALSE)  # add some bars to an existing plot
arrows(datax-0.05,datay,x1=datax-0.05, y1=datay-SEdata, length=0.015, angle=90, lwd=2, col='black')  # add lower error bar
arrows(datax-0.05,datay,x1=datax-0.05, y1=datay+SEdata, length=0.015, angle=90, lwd=2, col='black')  # add upper error bar


coltrans=rgb(1,0.5,0,alpha=0.3)             # create a semi-transparent colour (transparency is the alpha parameter, from 0-1)
a <- density(rnorm(100,mean=0.75,sd=0.1))   # make a density distribution from some random numbers
a$y <- 0.2*(a$y/max(a$y))                   # rescale the y values for plotting
polygon(a$x, 1-a$y, col=coltrans,border=NA) # plot upside down hanging from the top axis with our transparent colour


# create a legend that can contain lines, points, or both
legend(0, 1, c("Lines","Points","Both"), cex=1, col=c("darkgrey","black","black"), pt.cex=c(0,1.8,1.8), pt.bg=c("black","violet","darkgreen"),lty=c(1,0,1), lwd=c(5,3,3), pch=21, pt.lwd=3, box.lwd=2)
# add text somewhere, featuring symbols and formatting
text(0.8,0.95,substitute(paste(italic(alpha), " = 1" )),cex=1.2,adj=0)


Why can some people’s brains see better than others’?

28/07/2013

On Friday I had a new paper published in the open access journal PLoS ONE. It addresses the question of why some people have better sensitivity to contrast (variations in light levels across an image) than others, sometimes by quite substantial amounts. Unlike differences in eyesight (acuity) that can usually be optically corrected, contrast sensitivity differences occur even for large (low frequency) stimuli that aren’t affected much by optical blur. Presumably then, the sensitivity differences are neural in origin. I was surprised that nobody had really tried to answer this question before, so thought I should give it a go.

The paper is divided into two parts. The first section uses an equivalent noise technique to assess whether sensitivity differences are due to different amounts of noise, or a difference in the efficiency with which stimuli are processed. Although I rule out the latter explanation, the noise masking method cannot tease apart a difference in internal noise from a difference in contrast gain. So, the second part of the study looks at a large corpus of contrast discrimination data, collated from 18 studies in the literature. By looking at the between-subject differences in discrimination performance, I conclude that individual differences at threshold are primarily a consequence of differences in contrast gain. Whether this is due to differences in structure, anatomy, neurotransmitter levels or developmental factors is unclear at the moment.
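
For context, the basic logic of equivalent noise masking can be sketched in a few lines. In the standard linear amplifier formulation (this is a generic illustration with arbitrary parameter values, not the exact model fitted in the paper), squared threshold grows linearly with external noise variance; the equivalent internal noise determines where the function flattens out at low noise levels, whereas efficiency scales the whole function:

% generic equivalent noise (linear amplifier) sketch - illustrative values only
Next = logspace(-4,-1,20);            % external noise variances
Neq  = 0.001;                         % equivalent internal noise variance (arbitrary)
eff  = 0.5;                           % calculation efficiency (arbitrary)
cthresh = sqrt((Next + Neq)./eff);    % predicted contrast thresholds
loglog(Next,cthresh,'o-');
xlabel('External noise variance'); ylabel('Contrast threshold');
% raising Neq mainly lifts the flat (low-noise) portion, whereas lowering eff
% lifts the entire function, including the high-noise asymptote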

Since I spent quite a long time putting together all of the dipper function data, I thought I should make it available online. Most of the data were extracted from the published figures using the excellent GraphClick program. The data can be downloaded here in Matlab format. They are organised into a cell array, with each of the 22 cells containing data from one experiment. Each cell is further divided into separate cells for each individual observer, with the ‘data’ array containing the x- and y-values used to produce these plots. I hope these data become a useful resource for other researchers interested in basic visual processes.
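
As a rough guide to looping over that structure, something like the following Matlab sketch should work; the file, variable and field names here are assumptions on my part, so check the contents of the .mat file for the actual names:

S = load('dipperdata.mat');        % hypothetical filename - use the name of the downloaded file
alldata = S.alldata;               % assumed name of the top-level cell array (22 experiments)
figure; hold on;
for ex = 1:numel(alldata)                  % loop over experiments
    for obs = 1:numel(alldata{ex})         % loop over observers within each experiment
        d = alldata{ex}{obs}.data;         % assumed: columns of pedestal contrast and threshold
        plot(d(:,1),d(:,2),'o-');
    end
end
set(gca,'XScale','log','YScale','log');    % dipper functions are conventionally plotted on log axes
xlabel('Pedestal contrast'); ylabel('Threshold contrast');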


A first look at the Olimex EEG-SMT

31/01/2013

Last week I ordered and received a small EEG device manufactured by a Bulgarian company called Olimex. Called the EEG-SMT, it is part of the OpenEEG project, and is a small USB device that looks like this:

The Olimex EEG device.

The Olimex EEG device.

It has five audio jacks for connecting custom electrodes. The ground electrode is passive, and the other four electrodes are active and comprise two bipolar channels. The system is very basic, and at around €150 (including the electrodes) is obviously not going to compete with high-end multi-channel EEG rigs. But I’m interested in running some steady-state VEP experiments, which need only a single channel and in principle are quite robust to the lower signal-to-noise ratios of cheaper equipment. Given the price, I thought it was worth a shot.

Although there are several PC packages capable of reading data from the device, I ideally want to integrate EEG recording into the Matlab code I use for running experiments. So, I decided to try and directly poll the USB interface.

The first stage was to install a driver for the device. I’m using a Mac running OS X 10.8, so I went with the FTDI virtual COM port driver. I also found it useful to check the device was working with this serial port tool. The driver creates a virtual serial port, the location of which can be discovered by opening a Terminal window and entering:

    ls -l /dev/tty.*

On my machine this lists a couple of bluetooth devices, as well as the serial address of the Olimex device:

    /dev/tty.usbserial-A9014SQP

Matlab has its own tool for polling serial ports (the serial function). I was able to read from the device this way, but I found it less flexible than the IOPort function that comes with Psychtoolbox 3. The rest of this post uses that function.

First we open the serial port and give it a handle:

    [h,e] = IOPort('OpenSerialPort','/dev/tty.usbserial-A9014SQP');

Then we can set a few parameters, including the baud rate for data transmission, buffer size etc:

    IOPort('ConfigureSerialPort',h,'BaudRate=57600');
    IOPort('ConfigureSerialPort',h,'BlockingBackgroundRead=0');
    IOPort('ConfigureSerialPort',h,'InputBufferSize=65536');
    IOPort('ConfigureSerialPort',h,'PollLatency=0.0039');

To start recording, we purge the buffer and then send the following commands:

    IOPort('Purge',h);
    IOPort('ConfigureSerialPort',h,'StartBackgroundRead=1');

We wait for a while, then we check how much data is waiting for us in the buffer and read it out into a vector:

    WaitSecs(3);
    bytestoget = IOPort('BytesAvailable',h)
    [longdata,when,e] = IOPort('Read',h,1,bytestoget);

Finally, we stop recording, purge the buffer and close the port:

    IOPort('ConfigureSerialPort',h,'StopBackgroundRead');
    IOPort('Purge',h);
    IOPort('Close',h);

I had some trouble initially streaming data from the device. If you forget to purge the buffer it can cause your entire system (not just Matlab) to hang and restart. This is very annoying, and slows development progress.

Now that we have some data, we need to process it. The vector is a stream of bytes in packets of 17. We can separate it out like this:

    for n = 1:17
        parseddata(n,:) = longdata(n:17:end);
    end

And plot each signal separately:

Outputs from the Olimex serial interface.

According to the device’s firmware, the first two plots are control lines that always output values of 165 and 90. This provides an anchor that lets us know the order of the signals. The next plot tells us the firmware version (version 2), and the fourth plot is a sample counter that increases by 1 each time the device samples the electrodes. The sampling happens at a fixed frequency of 256Hz, so 256 samples represent one second of activity. Plots 5-16 are the outputs of the electrodes (this is what we’re interested in), and I don’t really understand plot 17 yet.
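
One practical point: the parsing loop above assumes that the byte stream begins exactly at a packet boundary. A minimal (and possibly over-cautious) way of aligning it first, using the 165/90 sync bytes, would be something like this:

    offset = 0;
    for i = 1:(length(longdata)-17)
        if longdata(i)==165 && longdata(i+1)==90 && longdata(i+17)==165
            offset = i;     % found the sync bytes, which repeat 17 bytes later
            break;
        end
    end
    if offset==0, error('No sync bytes found in the buffer'); end
    longdata = longdata(offset:end);                        % discard any partial leading packet
    longdata = longdata(1:17*floor(length(longdata)/17));   % ...and any partial trailing packet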

Each channel gets 2 bytes (i.e. 16 bits), but only uses 10 of those bits. This means that to get the actual output, we need to combine the data from two adjacent bytes (paired by colour in the above plots). The data are in big-endian format, which means that the first byte contains the most significant bits, and the second byte the least significant. We can combine them by converting each byte to binary notation, sticking them together, and then converting back:

    lineID = [5 6; 7 8; 9 10; 11 12; 13 14; 15 16];  % adjacent pairs of bytes (5-16) holding the six channel outputs
    for l = 1:6
        for m = 1:length(parseddata)
            % pad each byte to 8 bits so that leading zeros are not lost when concatenating
            trace(l,m) = bin2dec(strcat(dec2bin(parseddata(lineID(l,1),m),8),dec2bin(parseddata(lineID(l,2),m),8)))./1023;
        end
    end

We now have six ten-bit signals, which we can plot as follows:

Channel outputs.

Although the waveforms look exciting, they aren’t very informative because most of what we’re seeing is an artefact from the ‘hum’ of AC mains electricity. We can see this if we examine the Fourier spectrum of one of our waveforms:

Example EEG Fourier spectrum.

It is clear that much of the energy is concentrated at 0Hz (the DC offset) and at 50Hz. We can remove these using a bandpass filter that includes only frequencies between (approximately) 1 and 49Hz. Taking the inverse Fourier transform then gives us a more sensible waveform:

Bandpass filtered waveform.
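
In case it is useful, here is a minimal sketch of that Fourier-domain filtering step (the variable names are mine, and the pass band edges are approximate):

    fs = 256;                                   % sampling rate of the device (Hz)
    signal = trace(1,:);                        % one channel from the parsing step above
    L = length(signal);
    f = (0:L-1)*fs/L;                           % frequency of each Fourier component (Hz)
    spectrum = fft(signal);
    keep = (f>1 & f<49) | (f>fs-49 & f<fs-1);   % pass band, plus its mirror at the negative frequencies
    spectrum(~keep) = 0;                        % zero out the DC offset, 50Hz mains hum and high frequencies
    filtered = real(ifft(spectrum));            % back to the time domain
    figure; plot(filtered);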

Actually though, I’m more interested in what is happening in the frequency domain. This is because I want to run experiments to measure the response of visual cortex to gratings flickering at a particular frequency. However, there are some problems to overcome first. Critically, I don’t understand how the four active electrodes on the device map onto the six channel outputs that I read over the serial connection. They all seem to produce a signal, and my initial thought was that the first four must be the outputs of individual electrodes, and the final two the differences between positive and negative electrodes for channels 1 & 2. As far as I can tell, that isn’t what’s actually happening though. I have posted on the OpenEEG mailing list, so hopefully someone with experience of using these devices will get back to me.

If anyone is interested, I have put a version of the code outlined above here (with a few extra bells and whistles). Note that it may require some modifications on your system, particularly the serial address of the device. You will also need to have Matlab (or maybe Octave), Psychtoolbox and the driver software installed. Finally, your system may hang if there are problems, and I hereby absolve myself of responsibility for any damage, loss, electrocution etc. that results from using my code. However, I’d be very interested to hear from anyone else using one of these devices!


Last day at Aston

13/12/2012

So, today is my last day on campus at Aston. I’m amazed at how quickly the last three and a half years have passed; it feels like no time at all since I was starting back here after postdoccing in Southampton. Still, it’s been a productive time, and on balance I’m glad I chose to come back here rather than do something else.

This morning I made a Wordle from the text of all the papers I’ve published. I might put it on my new website:

[Image: Wordle made from the text of my papers]

The other day we went for some leaving drinks at the Bull, which went like this:

Lots of people at the Bull to celebrate me leaving.

At the moment our house is full of boxes. Laura is off work today to do some last minute packing (mostly of her craft materials) and hopefully sell her car. Then tomorrow some burly men will arrive and load everything into a lorry and take it to York. I’ll be there already (hopefully) with the cats to tell them where to put everything.

Next week I’m giving a talk at the AVA Christmas meeting in London, and then I’m back in London again just after New Year for the EPS meeting. It’s the first one I’ve been to, and I’m looking forward to going to a more general psychology conference. I’ll need to make my talk less geeky though!

Lastly, John Cass and I submitted our first paper together yesterday. We sent it to a journal that will probably reject it without even bothering to send it out for review, but hey, it doesn’t hurt to aim high!