VizNet: Towards a Large-Scale Visualization Learning and Benchmarking Repository


Researchers currently rely on ad hoc datasets to train automated visualization tools and evaluate the effectiveness of visualization designs. These exemplars often lack the characteristics of real-world datasets, and their one-off nature makes it difficult to compare different techniques. In this paper, we present VizNet: a large-scale corpus of over 31 million datasets compiled from open data repositories and online visualization galleries. On average, these datasets comprise 17 records over 3 dimensions, and across the corpus we find that 51% of the dimensions record categorical data, 44% quantitative, and only 5% temporal. VizNet provides the necessary common baseline for comparing visualization design techniques and for developing benchmark models and algorithms that automate visual analysis. To demonstrate VizNet's utility as a platform for conducting online crowdsourced experiments at scale, we replicate a prior study assessing the influence of user task and data distribution on visual encoding effectiveness, and extend it by considering an additional task: outlier detection. To contend with running such studies at scale, we demonstrate how a metric of perceptual effectiveness can be learned from experimental results, and show its predictive power across test datasets.



Kevin Hu, Neil Gaikwad, Michiel Bakker, Madelon Hulsebos, Emanuel Zgraggen, César Hidalgo, Tim Kraska, Guoliang Li, Arvind Satyanarayan, and Çağatay Demiralp. 2019. VizNet: Towards a large-scale visualization learning and benchmarking repository. In Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI). ACM.
@inproceedings{viznet,
  title={VizNet: {T}owards a large-scale visualization learning and benchmarking repository},
  author={Hu, Kevin and Gaikwad, Neil and Bakker, Michiel and Hulsebos, Madelon and Zgraggen, Emanuel and Hidalgo, C\'{e}sar and Kraska, Tim and Li, Guoliang and Satyanarayan, Arvind and Demiralp, {\c{C}}a{\u{g}}atay},
  booktitle={Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI)},
  year={2019},
  publisher={ACM}
}

