LeNet


LeNet is a series of convolutional neural network architectures created by a research group at AT&T Bell Laboratories between 1988 and 1998, centered around Yann LeCun. They were designed to read small grayscale images of handwritten digits and letters, and were deployed in automated teller machines (ATMs) for reading cheques.
Convolutional neural networks are feed-forward neural networks whose artificial neurons respond to overlapping regions of the input (their receptive fields), and they perform well in large-scale image processing. LeNet-5 was one of the earliest convolutional neural networks and was historically important in the development of deep learning.
In general, when LeNet is referred to without a number, it means the 1998 version, the best-known one, also called LeNet-5.

Development history

In 1988, LeCun joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, headed by Lawrence D. Jackel. That same year, LeCun et al. published a neural network design that recognized handwritten zip codes; however, its convolutional kernels were hand-designed.
In 1989, Yann LeCun et al. at Bell Labs first applied the backpropagation algorithm to practical applications, arguing that a network's ability to generalize could be greatly enhanced by providing constraints from the task's domain. He combined convolutional neural networks trained by backpropagation with the task of reading handwritten numbers, and successfully applied them to identifying handwritten zip code numbers provided by the US Postal Service. This was the prototype of what later came to be called LeNet-1. In the same year, LeCun described a small handwritten digit recognition problem in another paper and showed that, even though the problem is linearly separable, single-layer networks exhibited poor generalization. When shift-invariant feature detectors were used in a multi-layered, constrained network, the model performed very well. He argued that these results demonstrated that minimizing the number of free parameters in a neural network can enhance its generalization ability.
In 1990, their paper again described the application of backpropagation networks to handwritten digit recognition. They performed only minimal preprocessing on the data, and the model was carefully designed and highly constrained for the task. The input data consisted of images, each containing a digit; on the zip code digit data provided by the US Postal Service, the model had an error rate of only 1% and a rejection rate of about 9%.
Their research continued for the next four years, and in 1994 the MNIST database was developed. LeNet-1 was too small for it, so a new network, LeNet-4, was trained on it.
A year later, the AT&T Bell Labs group published a review of various methods for handwritten character recognition, comparing them on a standard handwritten digit recognition benchmark. The results showed that their latest network outperformed the other models.
By 1998, Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner were able to provide examples of practical applications of neural networks, such as two systems for recognizing handwritten characters online and models that could read millions of cheques per day; the same paper includes a description of LeNet-5.
The research achieved great success and aroused the interest of scholars in the study of neural networks. While the architectures of today's best-performing neural networks are not the same as that of LeNet, the network was the starting point for a large number of neural network architectures and brought inspiration to the field.
  • 1989: Yann LeCun et al. proposed the original form of LeNet.
  • 1989: Yann LeCun demonstrated that minimizing the number of free parameters in neural networks can enhance their generalization ability.
  • 1990: Backpropagation applied to LeNet-1 for handwritten digit recognition.
  • 1994: MNIST database and LeNet-4 developed.
  • 1995: Various methods for handwritten character recognition reviewed and compared on a standard handwritten digit recognition benchmark; the results showed that convolutional neural networks outperformed all other models.
  • 1998: LeNet-5 presented in a paper about practical applications.

Architecture

LeNet has several common motifs of modern convolutional neural networks, such as convolutional layers, pooling layers, and fully connected layers.
  • Every convolutional layer includes three parts: convolution, pooling, and a nonlinear activation function
  • Convolution is used to extract spatial features
  • Subsampling by average pooling layers
  • The tanh activation function
  • Fully connected layers at the end for classification
  • Sparse connections between layers to reduce computational complexity
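These motifs can be illustrated with a minimal PyTorch sketch of a LeNet-5-style network. The layer sizes below follow the commonly cited 1998 description (6 and 16 feature maps with 5×5 kernels, 2×2 average pooling, 120 and 84 fully connected units); the original additionally used sparsely connected C3 feature maps and RBF output units, which are omitted here for simplicity.

```python
import torch
from torch import nn

# A minimal LeNet-5-style sketch (assumed shapes: 1x32x32 input, 10 classes).
# This illustrates the motifs above, not the exact 1998 implementation.
class LeNet5(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # C1: 6 feature maps, 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # S2: subsampling to 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # C3: 16 feature maps, 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # S4: subsampling to 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),       # C5
            nn.Tanh(),
            nn.Linear(120, 84),               # F6
            nn.Tanh(),
            nn.Linear(84, num_classes),       # output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch containing one 32x32 grayscale image.
logits = LeNet5()(torch.randn(1, 1, 32, 32))
```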
In 1989, LeCun et al. published a report, which contained "Net-1" to "Net-5". There were many subsequent refinements, up to 1998, and the naming is inconsistent. Generally, when people speak of "LeNet" they refer to the 1998 LeNet, also known as "LeNet-5".
LeNet-1, LeNet-4, and LeNet-5 have been referred to in the literature, but it is unclear what LeNet-2 or LeNet-3 might refer to.

1988 Net

The LeCun research group published its first neural network in 1988. It was a hybrid approach: the first stage scaled, deskewed, and skeletonized the input image; the second stage was a convolutional layer with 18 hand-designed kernels; the third stage was a fully connected network with one hidden layer.
The dataset was a collection of handwritten digit images extracted from actual U.S. Mail, which was the same dataset used in the famed 1989 report.

Net-1 to Net-5

Net-1 to Net-5 were published in a 1989 report. The last layer of each network was fully connected. The original paper does not explain the padding strategy. All cells have an independent bias, including the output cells of convolutional layers.
  • Net-1: No hidden layer; fully connected.
  • Net-2: One fully connected hidden layer with 12 hidden units.
  • Net-3: Two hidden layers, both locally connected with stride 2.
  • Net-4: Two hidden layers; the first is convolutional, the second locally connected. The convolutional layer has 2 kernels with stride 2; the locally connected layer has stride 1.
  • Net-5: Two convolutional hidden layers. The first has 2 kernels with stride 2; the second has 4 kernels with stride 1.
The dataset contained 480 binary images, each sized 16×16 pixels. Originally, 12 examples of each digit were hand-drawn on a 16×13 bitmap using a mouse, resulting in 120 images. Each image was then shifted horizontally across 4 consecutive positions within a 16×16 frame, yielding the 480 images.
From these, 320 images were randomly selected for training and the remaining 160 images were used for testing. Performance on the training set was 100% for all networks, but they differed in test-set performance.
Name     Connections   Independent parameters   % correct
Net-1    2570          2570                     80.0
Net-2    3214          3214                     87.0
Net-3    1226          1226                     88.5
Net-4    2266          1132                     94.0
Net-5    5194          1060                     98.4
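As a sanity check on the table, the connection counts of the two fully connected networks follow directly from the 256-unit (16×16) input, the 10 output units, and one bias per cell; a short Python sketch of the arithmetic:

```python
# Connection counts for the fully connected networks Net-1 and Net-2,
# assuming a 16x16 = 256-unit input, 10 output units, and one bias per cell.
inputs, outputs, hidden = 16 * 16, 10, 12

# Net-1: no hidden layer, input connected directly to the outputs.
net1 = outputs * (inputs + 1)                          # 10 * 257 = 2570

# Net-2: one fully connected hidden layer with 12 units.
net2 = hidden * (inputs + 1) + outputs * (hidden + 1)  # 3084 + 130 = 3214

print(net1, net2)  # 2570 3214, matching the table; with no weight sharing,
                   # the independent-parameter counts are the same numbers.
```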

1989 LeNet

The LeNet published in 1989 has 3 hidden layers and an output layer. It has 1256 units, 64660 connections, and 9760 independent parameters.
  • H1: 12 feature maps of size 8×8, each unit connected to a 5×5 neighborhood of the input (stride 2).
  • H2: 12 feature maps of size 4×4, each unit connected to 5×5 neighborhoods in 8 of the 12 H1 feature maps (stride 2).
  • H3: 30 units fully connected to H2.
  • Output: 10 units fully connected to H3, representing the 10 digit classes.
The connection pattern between H1 and H2 is described in the original paper. There was no separate pooling layer, as it was deemed too computationally expensive.
The dataset was called the "US Postal Service database": 9298 grayscale images of resolution 16×16, digitized from handwritten zip codes that appeared on U.S. mail passing through the Buffalo, New York post office. The training set had 7291 data points and the test set had 2007. Both the training and test sets contained ambiguous, unclassifiable, and misclassified data, making the task rather difficult. On the test set, two humans made errors at an average rate of 2.5%.
Training took 3 days on a Sun-4/260 using a diagonal Hessian approximation of Newton's method. It was implemented in the SN Neural Network Simulator. It took 23 epochs over the training set.
Compared to the previous 1988 architecture, there was no skeletonization, and the convolutional kernels were learned automatically by backpropagation.
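Assuming the layer shapes listed above, with one bias per unit and weight sharing only within each feature map, the stated unit, connection, and parameter counts can be reproduced with a short Python check:

```python
# Reproduces the 1256 units / 64660 connections / 9760 free parameters of the
# 1989 network, assuming the shapes above: 16x16 input, H1 = 12 maps of 8x8
# (5x5 kernels), H2 = 12 maps of 4x4 (5x5 kernels over 8 of the 12 H1 maps),
# H3 = 30 units, 10 outputs; one bias per unit.
h1_units, h2_units = 12 * 8 * 8, 12 * 4 * 4

units = 16 * 16 + h1_units + h2_units + 30 + 10                # 1256
connections = (h1_units * (5 * 5 + 1)       # H1: one 5x5 kernel + bias
               + h2_units * (8 * 5 * 5 + 1) # H2: 5x5 kernels over 8 maps
               + 30 * (h2_units + 1)        # H3: fully connected
               + 10 * (30 + 1))             # output: fully connected
parameters = (12 * 5 * 5 + h1_units         # H1: shared kernels + biases
              + 12 * 8 * 5 * 5 + h2_units   # H2: shared kernels + biases
              + 30 * (h2_units + 1)         # H3: no weight sharing
              + 10 * (30 + 1))              # output: no weight sharing

print(units, connections, parameters)  # 1256 64660 9760
```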

1990 LeNet

A later version of the 1989 LeNet has four hidden layers and an output layer. It takes a 28×28 pixel image as input, though the active region is 16×16 to avoid boundary effects.
  • H1: 4 feature maps of size 24×24, with 5×5 kernels. This layer has 104 trainable parameters.
  • H2: 4 feature maps of size 12×12, by 2×2 average pooling.
  • H3: 12 feature maps of size 8×8, with 5×5 kernels. Some kernels take input from 1 feature map, while others take inputs from 2 feature maps.
  • H4: 12 feature maps of size 4×4, by 2×2 average pooling.
  • Output: 10 units fully connected to H4, representing the 10 digit classes.
The network has 4635 units, 98442 connections, and 2578 trainable parameters. Its architecture was designed by beginning with the 1989 LeNet, then pruning the parameter count by 4x via Optimal Brain Damage. One forward pass requires about 140,000 multiply-add operations. Its size is 50 kB in memory. It was also called LeNet-1. On a SPARCstation 10, it took 0.5 weeks to train, and 0.015 seconds to classify one image.
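A minimal PyTorch sketch of this architecture, assuming the layer shapes above and using dense connectivity between H2 and H3 (the original connected each H3 map to only one or two H2 maps and used trainable per-map pooling coefficients), could look like this:

```python
import torch
from torch import nn

# A minimal sketch of the 1990 LeNet-1 architecture, assuming the shapes above.
# Dense H2-to-H3 connectivity is used here for simplicity, so the parameter
# count is somewhat higher than the original 2578.
lenet1 = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=5),    # H1: 4 feature maps, 24x24
    nn.Tanh(),
    nn.AvgPool2d(2),                   # H2: subsampling to 12x12
    nn.Conv2d(4, 12, kernel_size=5),   # H3: 12 feature maps, 8x8
    nn.Tanh(),
    nn.AvgPool2d(2),                   # H4: subsampling to 4x4
    nn.Flatten(),
    nn.Linear(12 * 4 * 4, 10),         # output: 10 digit classes
)

logits = lenet1(torch.randn(1, 1, 28, 28))  # one 28x28 grayscale image
```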