The first results from the SpeedPerception challenge are in! More than 5,000 sessions were completed, yielding over 50,000 valid data points (77,000+ votes overall).
We tested three initial hypotheses, of which two were confirmed:
✔ No single existing webperf metric explains human choices with 90%+ accuracy (see the sketch after this list)
✔ Users did not wait until “visual complete” to make their choice
✗ Visual metrics did NOT perform better than non-visual/network metrics
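To make the first finding concrete, here is a minimal sketch of what “accuracy” means in this context: a metric predicts that the side of an A/B pair with the lower value was perceived as faster, and that prediction is scored against the majority human vote. The column names below are hypothetical placeholders, not the dataset’s actual schema.

```python
# Minimal sketch of per-metric accuracy against human A/B votes.
# All column names are hypothetical placeholders, not the actual
# schema of the SpeedPerception dataset.
import pandas as pd

def metric_accuracy(pairs: pd.DataFrame, metric: str) -> float:
    """Fraction of A/B pairs where the metric agrees with the majority vote.

    Expects one row per video pair, with columns:
      f"{metric}_a", f"{metric}_b"  -- metric values for sides A and B
      "votes_a", "votes_b"          -- human vote counts for each side
    """
    predicted_a_faster = pairs[f"{metric}_a"] < pairs[f"{metric}_b"]  # lower value = faster
    voted_a_faster = pairs["votes_a"] > pairs["votes_b"]
    return float((predicted_a_faster == voted_a_faster).mean())
```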
For those of you who haven’t heard of it, SpeedPerception is a free, open-source benchmark dataset capturing how people perceive above-the-fold rendering and the webpage loading process, which can be used to better understand the perceptual aspects of the end-user web experience. The benchmark we’ve posted on GitHub provides a quantitative basis for comparing different algorithms. Our hope is that this data will spur computer scientists and web performance engineers to make progress in quantifying perceived web performance.
We’ve posted the initial findings on GitHub, and we will be releasing additional results. We welcome feedback on both the study and its results, as well as suggestions for next steps. If you want to analyze the data yourself and test your own hypotheses, the data and code are all available on GitHub; please share any results and conclusions with us.
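As a starting point for your own analysis, here is a hedged sketch of testing a composite hypothesis: whether a simple combination of two metrics predicts the majority vote better than either metric alone. The file name and column names are placeholders, not the repo’s actual layout; adapt them to the published schema.

```python
# Hedged sketch: does a linear combination of two metrics predict the
# majority vote better than either metric alone? The file path and
# column names are placeholders; adapt them to the actual dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

pairs = pd.read_csv("speedperception_pairs.csv")  # placeholder path

# Features: per-pair differences (side A minus side B) in two candidate metrics.
X = pd.DataFrame({
    "d_speed_index": pairs["speed_index_a"] - pairs["speed_index_b"],
    "d_ttfb": pairs["ttfb_a"] - pairs["ttfb_b"],
})
y = pairs["votes_a"] > pairs["votes_b"]  # True when side A won the majority vote

# 5-fold cross-validated accuracy of the combined model.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"combined-metric accuracy: {scores.mean():.3f}")
```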
And if you were one of the 5,000+ challenge participants: thank you.
Parvez Ahammad, Clark Gao, Prasenjit Dey, Estelle Weyl, Pat Meenan.