Toward the validation of crowdsourced experiments for lightness perception

by Emily N. Stark, Terece L. Turton, Jonah Miller, Elan Barenholtz, Sang Hong, Roxana Bujack

Crowdsourcing platforms have been used to study a range of perceptual stimuli, such as the graphical perception of scatterplots and various aspects of human color perception. Given the lack of control over a crowdsourced participant's experimental setup, there are valid concerns about the use of crowdsourcing for color studies, because perception of the stimuli depends strongly on how they are presented. Here, we propose that the error introduced by a crowdsourced experimental design can be effectively averaged out: the crowdsourced experiment can be accommodated by the Thurstonian model as the convolution of two normal distributions, one perceptual in nature and one capturing the error due to variability in stimulus presentation. Based on this, we provide a mathematical estimate of the sample size needed for a crowdsourced experiment to have the same power as the corresponding in-person study. We tested this claim by replicating a large-scale crowdsourced study of human lightness perception, which drew on a diverse sample, with a highly controlled in-person study whose sample was drawn from psychology undergraduates. Our claim was supported: the crowdsourced results replicated those of the in-person study. These findings suggest that, with sufficient sample size, color vision studies may be completed online, giving access to a larger and more representative sample. With this framework in hand, experimentalists have validation that choosing either many online participants or fewer in-person participants will not sacrifice the impact of their results.
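The abstract does not give the sample-size estimate itself, but the convolution-of-normals framing suggests a simple back-of-the-envelope version. The sketch below is an illustrative assumption, not the authors' formula: it supposes in-person judgments distributed as N(mu, sigma_p^2) and online judgments picking up independent presentation noise sigma_e, so that matching the in-person standard error sigma_p / sqrt(n_lab) requires inflating the sample by (sigma_p^2 + sigma_e^2) / sigma_p^2. The function name and parameters are hypothetical.

```python
import math

def required_online_sample(n_lab: int, sigma_p: float, sigma_e: float) -> int:
    """Hypothetical sample-size estimate under a convolution-of-normals assumption.

    If in-person judgments are ~ N(mu, sigma_p^2) and online judgments add
    independent presentation noise sigma_e (so they are ~ N(mu, sigma_p^2 + sigma_e^2)),
    then matching the in-person standard error sigma_p / sqrt(n_lab) requires
    n_online = n_lab * (sigma_p^2 + sigma_e^2) / sigma_p^2.
    """
    inflation = (sigma_p**2 + sigma_e**2) / sigma_p**2
    return math.ceil(n_lab * inflation)

# Example: presentation noise equal in size to perceptual noise doubles the sample.
print(required_online_sample(n_lab=30, sigma_p=1.0, sigma_e=1.0))  # -> 60
```

Under this reading, the online penalty is purely a variance inflation factor, which is why the abstract can frame the trade-off as "many online participants or fewer in-person participants" without a loss of statistical power.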