Datasaurus dozen


The Datasaurus dozen comprises thirteen data sets that have nearly identical simple descriptive statistics to two decimal places, yet have very different distributions and appear very different when graphed. It was inspired by Anscombe's quartet, a smaller collection of four such data sets created in 1973.

Data

The following table contains summary statistics for all thirteen data sets.
Property                                               Value          Accuracy
Number of elements                                     142            exact
Mean of x                                              54.26          to 2 decimal places
Sample variance of x: sx²                              16.76          to 2 decimal places
Mean of y                                              47.83          to 2 decimal places
Sample variance of y: sy²                              26.93          to 2 decimal places
Correlation between x and y                            −0.06          to 3 decimal places
Linear regression line                                 y = 53 − 0.1x  to 0 and 1 decimal places, respectively
Coefficient of determination of the linear regression  0.004          to 3 decimal places
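These statistics can be reproduced for any of the data sets with a short helper function. The sketch below (a minimal illustration; the data passed to it is up to the caller) computes every quantity in the table using the standard sample formulas:

```python
import math

def summary_stats(xs, ys):
    """Compute the descriptive statistics listed in the table above."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # sample variance and covariance use the n - 1 denominator
    var_x = sum((x - mean_x) ** 2 for x in xs) / (n - 1)
    var_y = sum((y - mean_y) ** 2 for y in ys) / (n - 1)
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, ys)) / (n - 1)
    corr = cov / math.sqrt(var_x * var_y)
    slope = cov / var_x                  # least-squares slope
    intercept = mean_y - slope * mean_x  # least-squares intercept
    return {
        "n": n,
        "mean_x": mean_x, "var_x": var_x,
        "mean_y": mean_y, "var_y": var_y,
        "correlation": corr,
        "regression": (intercept, slope),
        "r_squared": corr ** 2,  # coefficient of determination
    }
```

Running this over each of the thirteen data sets and rounding as indicated in the table would yield the identical values shown above.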

The thirteen data sets are labeled as follows:
  • away
  • bullseye
  • circle
  • dino
  • dots
  • h_lines
  • high_lines
  • slant_down
  • slant_up
  • star
  • v_line
  • wide_lines
  • x_shape
Like Anscombe's quartet, the Datasaurus dozen illustrates the importance of graphing a data set before analyzing it under an assumed type of relationship, and the inadequacy of basic summary statistics for describing realistic data sets.

Creation

The first data set, drawn in the shape of a Tyrannosaurus, was constructed in 2016 by Alberto Cairo and inspired the rest of the "datasaurus" collection. Maarten Lambrechts proposed that this data set also be called the "Anscombosaurus".
This data set was then accompanied by twelve others created by Justin Matejka and George Fitzmaurice at Autodesk. Unlike Anscombe's quartet, whose method of construction is not known, these data sets were generated with simulated annealing: the algorithm makes small, random changes to individual points, biased toward the desired target shape, while keeping the summary statistics unchanged. Each shape took 200,000 iterations of such perturbations to complete.
The pseudocode for this algorithm is as follows:

current_ds ← initial_ds
for x iterations, do:
    test_ds ← perturb(current_ds, temp)
    if similar_enough(test_ds, initial_ds):
        current_ds ← test_ds

function perturb(ds, temp):
    loop:
        test ← move_random_points(ds)
        if fit(test) > fit(ds) or temp > random:
            return test

where
  • initial_ds is the seed data set
  • current_ds is the latest version of the data set
  • fit is a function used to check whether moving the points gets closer to the desired shape
  • temp is the temperature of the simulated annealing algorithm
  • similar_enough is a function that checks whether the statistics for the two given data sets are similar enough
  • move_random_points is a function that randomly moves data points
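In Python, the loop above might look like the following sketch. The cooling schedule, step size, and the reduced statistics check are assumptions made for illustration, not the authors' actual implementation; in particular, similar_enough here compares only the means and sample variances, where the real algorithm also preserves the correlation:

```python
import random

def perturb(points, fit, temp):
    """Move one random point slightly; accept the move if it improves the
    fit to the target shape, or occasionally anyway (the annealing step)."""
    step = 0.1  # maximum displacement per move (assumed)
    while True:
        test = [p[:] for p in points]
        i = random.randrange(len(test))
        test[i][0] += random.uniform(-step, step)
        test[i][1] += random.uniform(-step, step)
        if fit(test) > fit(points) or temp > random.random():
            return test

def similar_enough(ds_a, ds_b, decimals=2):
    """Check that the summary statistics agree to the given number of
    decimal places (only means and sample variances in this sketch)."""
    def stats(pts):
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        n = len(pts)
        mx, my = sum(xs) / n, sum(ys) / n
        vx = sum((x - mx) ** 2 for x in xs) / (n - 1)
        vy = sum((y - my) ** 2 for y in ys) / (n - 1)
        return tuple(round(v, decimals) for v in (mx, my, vx, vy))
    return stats(ds_a) == stats(ds_b)

def anneal(initial, fit, iterations=1000):
    """Drift the data set toward the shape rewarded by `fit` while keeping
    its summary statistics fixed to two decimal places."""
    current = [p[:] for p in initial]
    for i in range(iterations):
        temp = 0.4 * (1 - i / iterations)  # linear cooling (assumed schedule)
        test = perturb(current, fit, temp)
        if similar_enough(test, initial):
            current = test
    return current
```

With fit defined as, for example, the negative total distance of the points from a target outline, repeated iterations gradually reshape the cloud; the authors report 200,000 iterations per shape.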