Estimating Oculus Rift pixel density

14/10/2015

A few months ago I bought an Oculus Rift DK2. Although these are designed for VR gaming, they’re actually pretty reasonable stereo displays. They have several desirable features, particularly that the OLED display is pulsed stroboscopically each frame to reduce motion blur. A helpful side effect is that every pixel is updated at the same time, unlike on most LCD panels, so the display can be used for timing-sensitive applications. As of a recent update the Rift is also supported by Psychtoolbox, which we use to run the majority of experiments in the lab. Lastly, it’s reasonably cheap, at about £300.

When I started setting up an experiment using the goggles, I thought I should check their effective pixel resolution in degrees of visual angle. Because the screens sit at a fixed distance from the wearer’s eye, I (foolishly) assumed that this would be a widely available value. Quite a few people simply took the monocular resolution (1080 × 1200) and divided the vertical pixel count (1200) by the nominal vertical field of view (110°), producing an estimate of about 10.9 pixels per degree. As it turns out, this is pretty much bang on, but that wasn’t guaranteed, because the lenses produce increasing levels of geometric distortion (bowing) at more eccentric locations. This could concentrate more pixels in the centre of the display, increasing the number of pixels per degree there.
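
For reference, that back-of-the-envelope estimate takes only a couple of lines of Matlab, using nothing but the quoted specifications:

vertPixels = 1200;               % monocular vertical resolution in pixels
vertFOV    = 110;                % nominal vertical field of view in degrees
naivePPD   = vertPixels/vertFOV  % ~10.9 pixels per degree, ignoring lens distortion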

Anyway, I decided it was worth verifying these figures myself. Taking a cue from methods we use to calibrate mirror stereoscopes, here’s what I did…

First I created two calibration images, consisting of a black background, and either one central square, or two lateralised squares. All the squares were 200 pixels wide (though this isn’t crucial), and the one with two squares was generated at the native resolution of the Oculus Rift (2160×1200). Here’s how the first one looks:

ORimage: the Rift calibration image, with two lateralised 200-pixel squares on a black background.

And here’s how the other one, with only one square, looks:

CSimage: the calibration image for the other display, with a single central square.

These images were created with a few lines of Matlab code:

ORw = 2160; % full width of the oculus rift in pixels
ORh = 1200; % height of the oculus rift in pixels
CSw = 1440; % width of other computer's display in pixels
CSh = 900;  % height of other computer's display in pixels
ORs = 200;  % width of the squares shown on the rift
CSs = 200;  % width of the square shown on the computer's display

a = zeros(ORh,ORw);                  % black background for the Rift image
a((1+ORh/2-ORs/2):(ORh/2+ORs/2),(1+ORw/4-ORs/2):(ORw/4+ORs/2)) = 1;     % left-eye square
a((1+ORh/2-ORs/2):(ORh/2+ORs/2),(1+3*ORw/4-ORs/2):(3*ORw/4+ORs/2)) = 1; % right-eye square
imwrite(a,'ORimage.jpg','jpeg')

a = zeros(CSh,CSw);                  % black background for the other display's image
a((1+CSh/2-CSs/2):(CSh/2+CSs/2),(1+CSw/2-CSs/2):(CSw/2+CSs/2)) = 1;     % single central square
imwrite(a,'CSimage.jpg','jpeg')

I then plugged in the Rift and displayed the two-square image on it, with the one-square image on an iPad (though in principle this could be any screen, or even a printout). Viewed through the Rift, each square goes to only one eye, and the binocular percept is of a single central square.

Now comes the clever bit. The rationale behind this method is that we match the perceived size of a square shown on the Rift with one shown on the iPad. We do this by holding the goggles up to one eye, with the other eye looking at the iPad. It’s necessary to do this at a bit of an angle, so the square gets rotated to be a diamond, but we can rotate the iPad too to match the orientation. I found it pretty straightforward to get the sizes equal by moving the iPad forwards and backwards, and using the pinch-to-zoom operation.

Once the squares appeared equal in size I put the Rift down, but kept the iPad position fixed. I then measured two things: the distance from the iPad to my eye, and the width of the square on the iPad screen. The rest is just basic maths:

The iPad square was 7.5cm wide, and matched the Rift square at 24cm from the eye. At that distance an object 1cm wide subtends about 2.4° of visual angle (because at 57cm, 1cm subtends roughly 1°, so at 24cm it subtends 57/24 ≈ 2.4°). [Note, for the uninitiated, the idea of degrees of visual angle is that you imagine a circle that goes all the way around your head, parallel to your eyes. You can divide this circle into 360 degrees, and each individual degree will be about the size of a thumbnail held at arm’s length. The reason people use this unit is that it can be calculated for a display at any distance, allowing straightforward comparison of experimental conditions across labs.] That means the iPad square was 2.4 × 7.5 = 18° wide. Because it was matched in size with the square on the Rift, the Rift square was also 18° wide. We know the square on the Rift was 200 pixels wide, so 200 pixels = 18°, which gives roughly 11 pixels per degree. So, the original estimates were correct, and the pixel density at the centre of the screen is indeed about 11 pixels/deg.
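
For anyone who wants to rerun the numbers, here’s the same calculation as a few lines of Matlab, using the measurements above (the exact trigonometric version gives essentially the same answer as the 57cm rule of thumb):

squareCM  = 7.5;    % measured width of the matched square on the iPad (cm)
distCM    = 24;     % measured distance from the iPad to the eye (cm)
squarePix = 200;    % width of the square shown on the Rift (pixels)

degApprox = squareCM*57/distCM;            % rule of thumb: ~17.8 deg (the ~18 deg above)
degExact  = 2*atand(squareCM/(2*distCM));  % exact trigonometry: ~17.8 deg
ppd       = squarePix/degExact             % ~11 pixels per degree at the screen centre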

This is actually quite a low resolution (individual pixels are easily visible), which isn’t surprising: the screen is close to the eye, and the whole point of the Rift is to provide a wide field of view rather than high central resolution. But it’s sufficient for some applications, and its small size makes it a much more portable stereo display than either a 3D monitor or a stereoscope. I’m also pleased that I was able to independently verify other people’s resolution estimates, and that I now have a neat method for checking the resolution of displays that aren’t as physically accessible as normal monitors.


Why can some people’s brains see better than others’?

28/07/2013

On Friday I had a new paper published in the open access journal PLoS ONE. It addresses the question of why some people have better sensitivity to contrast (variations in light levels across an image) than others, sometimes by quite substantial amounts. Unlike differences in eyesight (acuity), which can usually be optically corrected, contrast sensitivity differences occur even for large (low spatial frequency) stimuli that aren’t affected much by optical blur. Presumably, then, the sensitivity differences are neural in origin. I was surprised that nobody had really tried to answer this question before, so I thought I should give it a go.

The paper is divided into two parts. The first section uses an equivalent noise technique to assess whether sensitivity differences are due to different amounts of noise, or a difference in the efficiency with which stimuli are processed. Although I rule out the latter explanation, the noise masking method cannot tease apart a difference in internal noise from a difference in contrast gain. So, the second part of the study looks at a large corpus of contrast discrimination data, collated from 18 studies in the literature. By looking at the between-subject differences in discrimination performance, I conclude that individual differences at threshold are primarily a consequence of differences in contrast gain. Whether this is due to differences in structure, anatomy, neurotransmitter levels or developmental factors is unclear at the moment.
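
To give a feel for the logic of the equivalent noise approach, here is a sketch of the standard linear amplifier model often fitted to such data (a textbook illustration with arbitrary parameter values, not necessarily the exact model used in the paper): squared thresholds grow with the sum of external and equivalent internal noise, scaled by efficiency.

% Textbook linear amplifier model, for illustration only:
%   threshold^2 = (Next + Neq)/efficiency
Next   = logspace(-4,0,50);        % external noise power (arbitrary units)
Neq    = 1e-3;                     % equivalent internal noise
eff    = 0.5;                      % calculation efficiency

thresh = sqrt((Next + Neq)./eff);
loglog(Next,thresh,'-');
xlabel('External noise power'); ylabel('Contrast threshold');
% Raising Neq lifts only the flat, low-noise part of the function;
% lowering eff shifts the whole function upwards.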

Since I spent quite a long time putting together all of the dipper function data, I thought I should make it available online. Most of the data were extracted from the published figures using the excellent GraphClick program. The data can be downloaded here in Matlab format. They are organised into a cell array, with each of the 22 cells containing data from one experiment. Each cell is further divided into separate cells for each individual observer, with the ‘data’ array containing the x- and y-values for that observer’s dipper function. I hope these data become a useful resource for other researchers interested in basic visual processes.
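
As a rough sketch of how the file might be read and plotted in Matlab — note that the file name, variable name and field layout below are guesses for illustration, so check the names in the actual download:

% The names below ('dipperData.mat', 'dipperdata', '.data') are assumptions
% for illustration -- check the downloaded file for the real ones.
load('dipperData.mat');
for e = 1:numel(dipperdata)                 % one cell per experiment (22 in total)
    for s = 1:numel(dipperdata{e})          % one cell per observer
        xy = dipperdata{e}{s}.data;         % assumed: column 1 = pedestal contrast, column 2 = threshold
        loglog(xy(:,1),xy(:,2),'o-'); hold on;
    end
end
xlabel('Pedestal contrast'); ylabel('Threshold');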


Last day at Aston

13/12/2012

So, today is my last day on campus at Aston. I’m amazed at how quickly the last three and a half years have passed; it feels like no time at all since I was starting back here after postdoccing in Southampton. Still, it’s been a productive time, and on balance I’m glad I chose to come back here rather than do something else.

This morning I made a Wordle from the text of all the papers I’ve published. I might put it on my new website:

paperswordle

The other day we went for some leaving drinks at the Bull, which went like this:

Lots of people at the Bull to celebrate me leaving.

At the moment our house is full of boxes. Laura is off work today to do some last minute packing (mostly of her craft materials) and hopefully sell her car. Then tomorrow some burly men will arrive and load everything into a lorry and take it to York. I’ll be there already (hopefully) with the cats to tell them where to put everything.

Next week I’m giving a talk at the AVA Christmas meeting in London, and then I’m back in London again just after New Year for the EPS meeting. It will be the first one I’ve been to, and I’m looking forward to going to a more general psychology conference. I’ll need to make my talk less geeky though!

Lastly, John Cass and I submitted our first collaborative paper yesterday. We sent it to a journal that will probably reject it without even bothering to review it, but hey, it doesn’t hurt to aim high!


Clearing the decks

10/11/2012

In the last few weeks running up to Christmas there’s lots to get done. At the moment I’m trying to tie up lots of projects, and get data collected on a few experiments. One of those is a collaboration with John Cass, who came to visit the lab in September. We’re looking at consciousness during binocular rivalry, and it looks like it’s going to be an interesting study. I’ve got a few more subjects to run, and then we can start analysing and writing it up.  Here’s John savouring some beer:

John drinking beer

Our lab moved from Aston’s monolithic main building into the Vision Sciences building a few weeks ago. The new space is really great, with a big open area to have meetings and conversations in. We’ve had a few teething problems with the heating and lighting, but we’re getting things sorted now.  Here’s the open area on move-in day – it’s a bit tidier now!

A wide angle shot of the new lab space on move-in day.

We’ve also finally sorted out a place to live in York. It’s big, near the station, and they’re happy with our cats living there, so it fits all our criteria. The move should happen in about five weeks’ time, which means we’ll be moved in ready for Christmas. I’m really looking forward to starting the new job in January; I even have a temporary new website up in my new department.

In tying up lots of projects, we’ve had a few new papers published, some of which have been in the pipeline for quite a while. There are a couple in the final stages of review, and here are some that are already out:

This is the first thing we’ve published from Alex Baldwin’s PhD. Alex measured sensitivity to small grating patches across the visual field, in much greater detail than people had attempted before. It turns out that sensitivity falls off as a bilinear function of eccentricity, which has important ramifications for models of spatial vision.

Another study, by Wallis et al., looked at the slope of the psychometric function for a range of different stimuli. We wanted to see if slopes varied with spatial frequency or pattern size (they don’t), and also to work out the most accurate method for estimating slopes over many sessions.

Finally, in a collaborative paper with Pi-Chun Huang and Robert Hess, we looked at the temporal properties of interocular suppression, in both normal and amblyopic subjects. We explain all of our findings with a simple model that assumes that signals are blurred and delayed slightly in time before they have a masking effect on the opposite eye. Surprisingly, the amblyopes don’t show greater suppression than the normal observers, once you take into account the difference in sensitivity between their eyes.


All about noise masking

02/10/2012

So, a few years ago I got interested in noise masking. It’s a widely used paradigm, where you measure detection thresholds for targets buried in various amounts of external noise (kind of like the ‘snow’ on an untuned TV set).

Some 2D white noise, increasing in contrast from left to right. Stronger noise (right) will impede detection of a target.
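
Just to illustrate the kind of stimulus involved (this is not the exact noise from our experiments, and the values are arbitrary), a patch of 2D Gaussian white noise at a chosen RMS contrast takes only a few lines of Matlab:

% Illustration only -- not the exact stimuli used in the experiments.
imSize   = 256;                        % patch size in pixels
noiseRMS = 0.1;                        % RMS contrast of the noise
noise    = noiseRMS.*randn(imSize);    % zero-mean Gaussian white noise
patch    = 0.5.*(1 + noise);           % scale around mid-grey (mean luminance 0.5)
imagesc(patch,[0 1]); colormap gray; axis image off;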

The technique is useful because it allows sensitivity deficits, sometimes due to clinical conditions like amblyopia, to be characterised in more detail. For example, it might show that internal noise is greater in some condition, and this in turn could inform new models, treatments or therapies.

For a long time people have sort of half realised that there are some funny things about noise masking. The data don’t always come out the way the dominant model of noise masking says they should, and different labs sometimes come up with quite different results. This has been noted in passing in several papers, but there was clearly a need for someone to really nail the problem and get to the bottom of things.

After a few years of pilot experiments (this has never been my main focus, more of an exciting side project), Tim and I came up with a series of experiments that we thought would shed some light on what’s going on in noise masking. Our hypothesis was that the white noise masks people typically use might actually cause most of their masking through a process of suppression (and not by injecting noise into the decision variable, as they are supposed to). We devised four tests of this idea, each of which appeared to support our conjecture.

What really convinced us, though, was when we developed a new type of noise that was much ‘purer’, and didn’t produce any suppressive masking on our four tests. It’s actually a very simple idea that appeared in the literature about 40 years ago, but in experiments on detecting luminance rather than contrast. All you do is take your target stimulus and add or subtract a little bit of its contrast, randomly, in both intervals of a 2IFC experiment. Here’s an example trial sequence:

Demo of 0D noise: Each column represents a trial, with the upper row showing the stimulus presented in the first interval, and the lower row the stimulus in the second interval. Each stimulus is a log-Gabor, with its contrast drawn from a zero-mean Gaussian distribution. When the contrast is positive, the stimulus has a bright bar in the centre; when it is negative, it has a dark bar in the centre. The top row has also had a positive target contrast added (in this example, always 30%). An observer in the experiment selects the interval with the highest positive contrast, as this is the one most likely to contain the target. In the first column this is Interval 1, as it has high positive contrast whereas Interval 2 has high negative contrast, so the observer’s choice is correct. However, in trial 2 the displayed contrast in Interval 1 is actually negative (despite the target contrast being added to it) whereas in Interval 2 it is positive. In this situation the observer will pick the ‘wrong’ (i.e. non-target) interval, even though they are behaving sensibly. But, on average over many trials, they will tend to choose the target interval when the target contrast is high enough to overcome the variance of the ‘noise’.

From the perspective of a mechanism tuned for detecting the target, this ‘contrast jitter’ behaves in exactly the same way as the more widely used ‘snow’ type white noise. Crucially though, it doesn’t activate any other nearby mechanisms that might cause suppression. Because it has no spatial or temporal dimensions, we call this new stimulus zero-dimensional (0D) noise.
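
To make that concrete, here is a minimal simulation of a 2IFC experiment with 0D noise. The numbers are arbitrary and the ‘observer’ is just the decision rule described in the caption above, not a model from the paper:

% Minimal sketch: arbitrary values, ideal decision rule (not the paper's model).
nTrials  = 1000;
targetC  = 0.30;                          % target contrast added to the target interval (30%)
jitterSD = 0.20;                          % SD of the zero-mean contrast jitter (the 0D noise)

jitter    = jitterSD.*randn(nTrials,2);   % independent jitter in both intervals
interval1 = targetC + jitter(:,1);        % target interval: target contrast plus jitter
interval2 = jitter(:,2);                  % null interval: jitter only

% The observer picks whichever interval has the higher (more positive) contrast
pCorrect = mean(interval1 > interval2)    % approaches 1 as targetC grows relative to jitterSD
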
I think the 0D noise technique is a much cleaner way of doing noise masking experiments. What we’re saying is quite controversial though, so whether anyone else agrees remains to be seen! But at least the paper is out there now. It’s probably my favourite project, despite (or because of) the level of nerdiness involved in parts of it. It’s open access, and you can read the paper here:
Zero-dimensional noise

New paper in PLoS ONE

09/04/2012

Last week I had a new paper out in PLoS ONE. It’s about how perceived contrast depends on the phase relationship between the images shown to the two eyes. We were inspired to do this work by another paper, which seemed to show that there was no relationship between phase and perceived contrast. We investigated a wider range of phase and contrast conditions, and found a clear effect, which we explain using a computational model:

A figure from the paper

This is the first paper I’ve published in a PLoS journal. Although the review process for PLoS ONE is supposed to focus on methodological soundness, I found it comparable to other places I’ve published, like Journal of Vision, which is also open access. Given the recent attention to the problems with for-profit publishers, I think it’s important to support not-for-profit open-access journals like these. It also has a decent impact factor, which doesn’t hurt.


Paper accepted

17/05/2011

Heard last night that we’ve had a paper accepted at iPerception, which is nice. Should be out in a few weeks, open access.

At the moment I’m working on a poster for a conference next week in Cardiff. Conference review and PDF of poster coming soon.