ImageNet


The ImageNet project is a large visual database designed for use in visual object recognition software research. More than 14 million images have been hand-annotated by the project to indicate what objects are pictured, and in at least one million of the images, bounding boxes are also provided. ImageNet contains more than 20,000 categories, with a typical category, such as "balloon" or "strawberry", consisting of several hundred images. The database of annotations of third-party image URLs is freely available directly from ImageNet, though the actual images are not owned by ImageNet. Since 2010, the ImageNet project has run an annual software contest, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which software programs compete to correctly classify and detect objects and scenes. The challenge uses a "trimmed" list of one thousand non-overlapping classes.

History

AI researcher Fei-Fei Li began working on the idea for ImageNet in 2006. At a time when most AI research focused on models and algorithms, Li wanted to expand and improve the data available to train AI algorithms. In 2007, Li met with Princeton professor Christiane Fellbaum, one of the creators of WordNet, to discuss the project. As a result of this meeting, Li went on to build ImageNet starting from the roughly 22,000 nouns of WordNet and using many of its features. She was also inspired by a 1987 estimate that the average person recognizes roughly 30,000 different kinds of objects.
As an assistant professor at Princeton, Li assembled a team of researchers to work on the ImageNet project. They used Amazon Mechanical Turk to help with the classification of images. Labeling started in July 2008 and ended in April 2010; filtering and labeling the more than 160 million candidate images took 49,000 workers from 167 countries. There was enough budget to have each of the final 14 million images labelled three times.
The original plan called for 10,000 images per category across 40,000 categories, for 400 million images in total, each verified three times. They found that humans can classify at most two images per second; at that rate, the plan would require roughly 600 million seconds of labeling (400 million images × 3 verifications ÷ 2 images per second), an estimated 19 human-years of labor.
They presented their database for the first time as a poster at the 2009 Conference on Computer Vision and Pattern Recognition in Florida, titled "ImageNet: A Preview of a Large-scale Hierarchical Dataset". The poster was reused at Vision Sciences Society 2009.
In 2009, Alex Berg suggested adding object localization as a task. Li approached the PASCAL VOC contest team in 2009 for a collaboration. The result was the ImageNet Large Scale Visual Recognition Challenge, starting in 2010, which had 1,000 classes and an object localization task, as compared to PASCAL VOC's 20 classes and 19,737 images.

Significance for deep learning

On 30 September 2012, a convolutional neural network called AlexNet achieved a top-5 error of 15.3% in the ImageNet 2012 Challenge, more than 10.8 percentage points lower than that of the runner-up. Using convolutional neural networks was feasible due to the use of graphics processing units during training, an essential ingredient of the deep learning revolution. According to The Economist, "Suddenly people started to pay attention, not just within the AI community but across the technology industry as a whole."
In 2015, AlexNet was outperformed by Microsoft's ResNet, a very deep CNN with over 100 layers, which won the ImageNet 2015 contest with a 3.57% error rate on the test set.
Andrej Karpathy estimated in 2014 that, with concentrated effort, he could reach a 5.1% error rate, while around ten people from his lab reached roughly 12-13% with less effort. It was estimated that, with maximal effort, a human could reach 2.4%.

Dataset

ImageNet crowdsources its annotation process. Image-level annotations indicate the presence or absence of an object class in an image, such as "there are tigers in this image" or "there are no tigers in this image". Object-level annotations provide a bounding box around the indicated object. ImageNet uses a variant of the broad WordNet schema to categorize objects, augmented with 120 categories of dog breeds to showcase fine-grained classification.
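The two granularities can be pictured with a small sketch. The following Python structure is purely illustrative; the field names are hypothetical and this is not ImageNet's actual file format (the released bounding boxes ship as PASCAL VOC-style XML):

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ImageAnnotation:
        """Illustrative only; not ImageNet's actual annotation format."""
        wnid: str      # the annotated class, e.g. "n02129604" (tiger)
        present: bool  # image-level label: is the class in the image?
        # Object-level labels: one (xmin, ymin, xmax, ymax) box per instance.
        boxes: List[Tuple[int, int, int, int]] = field(default_factory=list)

    # "There are tigers in this image", with two localized instances:
    ann = ImageAnnotation(wnid="n02129604", present=True,
                          boxes=[(12, 30, 200, 180), (210, 40, 380, 190)])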
In 2012, ImageNet was the world's largest academic user of Mechanical Turk. The average worker identified 50 images per minute.
The original plan for the full ImageNet called for roughly 50 million clean, diverse, full-resolution images spread over approximately 50,000 synsets. This was not achieved.
The summary statistics as of April 30, 2010:
  • Total number of non-empty synsets: 21,841
  • Total number of images: 14,197,122
  • Number of images with bounding box annotations: 1,034,908
  • Number of synsets with SIFT features: 1000
  • Number of images with SIFT features: 1.2 million

Categories

The categories of ImageNet were filtered from the WordNet concepts. Because each concept can contain multiple synonyms, a concept is called a "synonym set" or "synset". There were more than 100,000 synsets in WordNet 3.0, the majority of them nouns. The ImageNet dataset filtered these down to 21,841 synsets that are countable nouns and can be visually illustrated.
Each synset in WordNet 3.0 has a "WordNet ID" (wnid), which is a concatenation of its part of speech and an "offset". Every wnid starts with "n" because ImageNet only includes nouns. For example, the wnid of the synset "dog, domestic dog, Canis familiaris" is "n02084071".
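This mapping can be reproduced with NLTK's WordNet interface; a minimal sketch, assuming the nltk package and its WordNet corpus (nltk.download('wordnet')) are installed:

    from nltk.corpus import wordnet as wn

    # Build the wnid from the synset's part of speech and zero-padded offset.
    synset = wn.synset('dog.n.01')
    wnid = synset.pos() + str(synset.offset()).zfill(8)
    print(wnid)                  # n02084071
    print(synset.lemma_names())  # ['dog', 'domestic_dog', 'Canis_familiaris']

    # The reverse direction, from a wnid back to a synset:
    print(wn.synset_from_pos_and_offset('n', 2084071))  # Synset('dog.n.01')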
The categories in ImageNet fall into 9 levels, from level 1 to level 9.

Image format

The images were scraped from online image search engines, using each synset's synonyms in multiple languages as queries. For example: German shepherd, German police dog, German shepherd dog, Alsatian, ovejero alemán (Spanish), pastore tedesco (Italian), 德国牧羊犬 (Chinese).
ImageNet consists of RGB images at varying resolutions. For example, in the ImageNet 2012 "fish" category, resolutions range from 4288 x 2848 down to 75 x 56 pixels. In machine learning, these are typically preprocessed to a standard constant resolution and whitened before further processing by neural networks.
For example, in PyTorch, ImageNet images are by default normalized by dividing the pixel values by 255 so that they fall between 0 and 1, then subtracting the per-channel means [0.485, 0.456, 0.406] and dividing by the per-channel standard deviations [0.229, 0.224, 0.225]. These are the channel means and standard deviations computed over ImageNet, so this approximately whitens the input data.
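A minimal sketch of this pipeline using torchvision (the resize and crop sizes shown are the conventional ones and vary by model):

    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),          # scale the shorter side to 256 px
        transforms.CenterCrop(224),      # crop to the conventional 224 x 224 input
        transforms.ToTensor(),           # uint8 HWC [0, 255] -> float CHW [0, 1]
        transforms.Normalize(            # per-channel whitening with the
            mean=[0.485, 0.456, 0.406],  # ImageNet channel means
            std=[0.229, 0.224, 0.225],   # and standard deviations
        ),
    ])
    # Usage: tensor = preprocess(pil_image), where pil_image is a PIL.Image.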

Labels and annotations

Each image is labelled with exactly one wnid.
Dense SIFT features for ImageNet-1K were available for download, intended for bag-of-visual-words models.
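In the bag-of-visual-words approach, local descriptors from a corpus are clustered into a "visual vocabulary", and each image is then represented as a histogram over those clusters. A rough sketch with OpenCV and scikit-learn (not ImageNet's released pipeline; the file names and vocabulary size are placeholders):

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    sift = cv2.SIFT_create()

    def sift_descriptors(path):
        """128-dimensional SIFT descriptors for one grayscale image."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        return desc if desc is not None else np.empty((0, 128), np.float32)

    paths = ["img0.jpg", "img1.jpg"]  # placeholder corpus
    vocab = KMeans(n_clusters=1000)   # 1,000 "visual words"
    vocab.fit(np.vstack([sift_descriptors(p) for p in paths]))

    def bovw_vector(path):
        """Histogram of visual-word occurrences: the image's feature vector."""
        words = vocab.predict(sift_descriptors(path))
        return np.bincount(words, minlength=1000)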
The bounding boxes of objects were available for about 3,000 popular synsets, with an average of 150 images per synset.
Furthermore, some images have attributes. They released 25 attributes for ~400 popular synsets:
  • Color: black, blue, brown, gray, green, orange, pink, red, violet, white, yellow
  • Pattern: spotted, striped
  • Shape: long, round, rectangular, square
  • Texture: furry, smooth, rough, shiny, metallic, vegetation, wooden, wet

ImageNet-21K

The full original dataset is referred to as ImageNet-21K. It contains 14,197,122 images divided into 21,841 classes; some papers round this up and call it ImageNet-22K.
The full ImageNet-21K was released in the fall of 2011 as fall11_whole.tar. There is no official train-validation-test split for ImageNet-21K. Some classes contain only 1-10 samples, while others contain thousands.

ImageNet-1K

There are various subsets of the ImageNet dataset used in various contexts, sometimes referred to as "versions".
One of the most highly used subsets of ImageNet is the "ImageNet Large Scale Visual Recognition Challenge 2012–2017 image classification and localization dataset". This is also referred to in the research literature as ImageNet-1K or ILSVRC2017, reflecting the original ILSVRC challenge that involved 1,000 classes. ImageNet-1K contains 1,281,167 training images, 50,000 validation images and 100,000 test images.
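torchvision ships a loader for this split layout; a minimal sketch, assuming the official ILSVRC 2012 archives have already been placed in the root directory (the dataset is not auto-downloadable):

    from torchvision import datasets

    # Parses the archives/metadata under root into (image, class-index) pairs.
    train_set = datasets.ImageNet(root="path/to/imagenet", split="train")
    val_set = datasets.ImageNet(root="path/to/imagenet", split="val")
    print(len(train_set), len(val_set))  # 1281167 50000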
Each category in ImageNet-1K is a leaf category, meaning that it has no child nodes below it, unlike ImageNet-21K. For example, ImageNet-21K contains some images categorized as simply "mammal", whereas ImageNet-1K only contains images in categories like "German shepherd", which has no child words below it.
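The leaf property can be checked directly in WordNet; a minimal sketch with NLTK, under the same assumptions as the earlier snippet:

    from nltk.corpus import wordnet as wn

    # A leaf synset has no hyponyms (child concepts) below it.
    print(wn.synset('german_shepherd.n.01').hyponyms())  # [] -- a leaf
    print(len(wn.synset('mammal.n.01').hyponyms()) > 0)  # True -- an internal node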

Later developments

In the version of WordNet that ImageNet was built on, there were 2,832 synsets in the "person" subtree. During the 2018-2020 period, the team disabled downloads of ImageNet-21K while these person synsets went through extensive filtering. Of the 2,832 synsets, 1,593 were deemed "potentially offensive". Of the remaining 1,239, another 1,081 were deemed not truly "visual", leaving only 158 synsets. Of these, only 139 contained more than 100 images for "further exploration".
In the winter of 2021, ImageNet-21K was updated: 2,702 categories in the "person" subtree were removed to prevent "problematic behaviors" in trained models, leaving only 130 synsets in that subtree. Also in 2021, ImageNet-1K was updated by blurring out faces appearing in the 997 non-person categories. Of all 1,431,093 images in ImageNet-1K, 243,198 were found to contain at least one face, for a total of 562,626 faces. Training models on the dataset with these faces blurred caused minimal loss in performance.
ImageNet-C, constructed in 2019, is a version of ImageNet with common corruptions (such as noise, blur, and weather and digital artifacts) applied to the images, used to benchmark model robustness.
ImageNetV2 was a new dataset containing three test sets of 10,000 images each, constructed by the same methodology as the original ImageNet.
ImageNet-21K-P was a filtered and cleaned subset of ImageNet-21K, with 12,358,688 images from 11,221 categories. All images were resized to 224 x 224 pixels.
Name            Published  Classes  Training    Validation  Test     Size
PASCAL VOC      2005       20       -           -           -        -
ImageNet-1K     2009       1,000    1,281,167   50,000      100,000  130 GB
ImageNet-21K    2011       21,841   14,197,122  -           -        1.31 TB
ImageNetV2      2019       -        -           -           30,000   -
ImageNet-21K-P  2021       11,221   11,797,632  561,052     -        250 GB