Much of our understanding of sensory systems comes from psychophysical studies conducted over the past century. This work provides us with an enormous body of information that can guide contemporary research. Meta-analysis is a widely used method in biomedical research that aims to quantitatively summarise the effects from a collection of studies on a given topic, often producing an aggregate estimate of effect size. Yet whilst these tools are commonplace in some areas of psychology, they are rarely employed to understand sensory perception. This may be because psychophysics has some idiosyncratic properties that make generalisation difficult: many studies involve very few participants (frequently N<5), and most use esoteric methods and stimuli aimed at answering a single question. Here I suggest that in some domains, the tools of meta-analysis can be employed to overcome these problems to unlock the knowledge of the past.
In my own previous publications, I have occasionally aggregated data across earlier studies to address a specific question. For example, in 2012 I published a paper that plotted the slope of the psychometric function with and without external noise, collated from 18 previous studies. This revealed a previously unreported effect of the dimensionality of the noise on the extent to which psychometric functions are linearised. Then in 2013 I aggregated contrast discrimination ‘dipper’ functions from 18 studies and 63 observers to attempt to understand individual differences in detection threshold. This data set was also averaged to characterise discrimination performance in terms of the placement of the dip and the steepness of the handle.
These examples added value to the papers they were included in by reanalysing existing data in a novel way. But they are not traditional examples of meta-analysis, as they focussed on the threshold and slope data of individual participants from the included studies, rather than averaging measures of effect size across studies.
An excellent example of a study that collates effect size measures (Cohen’s d) across multiple psychophysical studies is an authoritative and detailed meta-analysis by Hedger et al. (2016). This paper investigates how visually threatening stimuli (such as fearful faces) are processed in the absence of awareness, when the stimuli are rendered invisible by manipulations such as masking and binocular rivalry. This is a heavily researched area, and the studies included contained a total of 2696 participants. Overall, the study concludes that masking paradigms produce convincing effects, binocular rivalry produces medium effects, and that effects are inconsistent using a continuous flash suppression paradigm. Additional analyses drill down into the specifics of each study, exploring how stimuli and experimental designs influence outcomes.
Inspired by this exemplary work, my collaborators and I recently undertook a meta-analysis of binocular summation – the improvement in contrast sensitivity when stimuli are viewed with two eyes instead of one. This is also a heavily investigated topic because of its clinical utility as an index of binocular health and function, and we included 65 studies with a total sample size of 716 participants. Our central question was whether the summation ratio (an index of the binocular advantage) significantly exceeded the canonical value of √2 first reported by Campbell and Green (1965). Many individual studies reported ratios higher than this, but sample sizes were often small (median N=5 across the 65 studies), meaning that individual variability could have a substantial effect. We averaged the mean summation ratios using three different weighting schemes (giving equal weight to studies, weighting by sample size, and weighting by the inverse variance). Regardless of weighting, the lower bound of the 95% confidence interval on the mean summation ratio always exceeded √2, conclusively overturning a long-established psychophysical finding, with implications for our understanding of nonlinearities early in the visual system.
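As a sketch of how this kind of weighted aggregation works, the three schemes can be implemented in a few lines of Python. The study-level numbers below are invented for illustration only (they are not values from our dataset):

```python
import numpy as np
from scipy import stats

# Hypothetical per-study values (illustrative, NOT the real dataset):
# mean summation ratio, sample size, and variance for each study.
ratios    = np.array([1.6, 1.7, 1.5, 1.8])
ns        = np.array([5, 10, 3, 20])
variances = np.array([0.04, 0.02, 0.09, 0.01])

# Three weighting schemes for the aggregate mean summation ratio.
equal_wt = ratios.mean()                                # equal weight per study
n_wt     = np.average(ratios, weights=ns)               # weight by sample size
iv_wt    = np.average(ratios, weights=1.0 / variances)  # inverse-variance weight

# 95% confidence interval on the equal-weight mean (t distribution).
sem = stats.sem(ratios)
half_width = stats.t.ppf(0.975, df=len(ratios) - 1) * sem
ci_lower, ci_upper = equal_wt - half_width, equal_wt + half_width

print(equal_wt, n_wt, iv_wt)
print(ci_lower, np.sqrt(2))  # the decisive test: does the lower bound exceed sqrt(2)?
```

In the actual meta-analysis the conclusion held under all three weighting schemes; the comparison of the lower confidence bound against √2 is the step that carries the inference.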
We also performed additional analyses to explore the effect of stimulus spatiotemporal frequency, and the difference in sensitivity across the eyes, confirming our findings with new data. This work reveals an effect of stimulus speed (the ratio of temporal to spatial frequency), suggesting that neural summation varies according to stimulus properties, and that there is no single ‘true’ value for binocular summation, but rather a range of possible values between √2 and 2. Our analysis of monocular sensitivity differences leads to a deeper understanding of how best to analyse the data of future studies.
Although the summation meta-analysis was conducted using the summation ratio as the outcome variable, it is possible to convert the aggregate values to more traditional measures of effect size. Doing this revealed an unusually large effect size (Cohen’s d=31) for detecting the presence of binocular summation, and another large effect size (Cohen’s d=3.22) when comparing to the theoretical value of √2. These very large effects mean that even studies with very few participants (N=3) have substantial power (>0.95). In many ways, this can be considered a validation of the widespread psychophysical practice of extensively testing a small number of observers using very precise methods.
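To illustrate the power claim, the figures quoted above (d = 3.22 for the comparison against √2, and N = 3) can be fed into a standard power calculation for a one-sample t-test using the noncentral t distribution. Treating the test as one-sided is an assumption made here for the sketch:

```python
import numpy as np
from scipy import stats

def one_sample_power(d, n, alpha=0.05):
    """Power of a one-sided, one-sample t-test with effect size d and sample size n."""
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha, df)  # critical value under H0
    nc = d * np.sqrt(n)                  # noncentrality parameter under H1
    return stats.nct.sf(t_crit, df, nc)  # P(reject H0 | H1 true)

# Effect size from the meta-analysis for the comparison against sqrt(2).
power = one_sample_power(d=3.22, n=3)
print(power)  # comfortably above conventional targets despite N = 3
```

With effects this large, power stays high even at very small sample sizes, which is the sense in which the traditional small-N psychophysical design is vindicated.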
Overall, meta-analysis can reveal important psychophysical effects that were previously obscured by the limitations of individual studies. This creates opportunities to establish findings based on large aggregate sample sizes that will inspire new experiments and research directions. The binocular summation meta-analysis is now available online, published in Psychological Bulletin [DOI].