Region Based Convolutional Neural Networks


Region-based Convolutional Neural Networks (R-CNN) are a family of machine learning models for computer vision, and specifically for object detection and localization. The original goal of R-CNN was to take an input image and produce a set of bounding boxes as output, where each bounding box contains an object and also identifies the category of that object. In general, R-CNN architectures perform selective search over feature maps output by a CNN.
R-CNN has been extended to perform other computer vision tasks, such as: tracking objects from a drone-mounted camera, locating text in an image, and enabling object detection in Google Lens.
Mask R-CNN is also one of seven tasks in the MLPerf Training Benchmark, which is a competition to speed up the training of neural networks.

History

The following covers some of the versions of R-CNN that have been developed.
  • November 2013: R-CNN.
  • April 2015: Fast R-CNN.
  • June 2015: Faster R-CNN.
  • March 2017: Mask R-CNN.
  • December 2017: Cascade R-CNN is trained with increasing Intersection over Union thresholds, making each stage more selective against nearby false positives.
  • June 2019: Mesh R-CNN adds the ability to generate a 3D mesh from a 2D image.

Architecture

Selective search

Given an image, selective search first segments the image using the graph-based method of Felzenszwalb and Huttenlocher, then performs the following:
Input: image
Output: Set of object location hypotheses L

Segment image into initial regions R = {r1, ..., rn} using Felzenszwalb and Huttenlocher
Initialise similarity set S = ∅
foreach neighbouring region pair (ri, rj) do
    Calculate similarity s(ri, rj)
    S = S ∪ {s(ri, rj)}
while S ≠ ∅ do
    Get highest similarity s(ri, rj) = max(S)
    Merge corresponding regions rt = ri ∪ rj
    Remove similarities regarding ri: S = S \ {s(ri, r*)}
    Remove similarities regarding rj: S = S \ {s(r*, rj)}
    Calculate similarity set St between rt and its neighbours
    S = S ∪ St
    R = R ∪ {rt}
Extract object location boxes L from all regions in R
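The greedy merging loop above can be sketched in Python. The `similarity` function and the initial neighbour pairs are hypothetical stand-ins here; the real algorithm combines colour, texture, size, and fill measures.

```python
# Minimal sketch of selective search's greedy region-merging loop.
# Regions are frozensets of pixel ids; `similarity` and the initial
# `neighbours` pairs are assumed given (illustrative stand-ins for
# the colour/texture/size/fill measures of the real algorithm).

def selective_search_merge(regions, neighbours, similarity):
    # S: similarity of every neighbouring region pair
    S = {pair: similarity(*pair) for pair in neighbours}
    R = list(regions)
    while S:
        # Get the highest-similarity pair
        ri, rj = max(S, key=S.get)
        rt = ri | rj                          # merge corresponding regions
        # Remove all similarities involving ri or rj
        touched = [p for p in S if ri in p or rj in p]
        for p in touched:
            del S[p]
        # New similarities between rt and the former neighbours of ri, rj
        partners = {q for p in touched for q in p} - {ri, rj}
        for q in partners:
            S[(rt, q)] = similarity(rt, q)
        R.append(rt)
    return R  # location boxes L would be extracted from every region in R
```

Because every intermediate merged region stays in R, the final hierarchy yields proposals at all scales, which is the point of the algorithm.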

R-CNN

Given an input image, R-CNN begins by applying selective search to extract regions of interest (ROIs), where each ROI is a rectangle that may represent the boundary of an object in the image. Depending on the scenario, there may be as many as two thousand ROIs. After that, each ROI is fed through a neural network to produce output features. For each ROI's output features, an ensemble of support-vector machine classifiers is used to determine what type of object is contained within the ROI.
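The per-ROI pipeline can be sketched as follows. The names `cnn`, `svms`, and the nearest-neighbour `resize` are illustrative placeholders, not the original implementation, which warped each crop to the fixed input size its CNN expected.

```python
import numpy as np

def resize(crop, size):
    # Nearest-neighbour warp to a fixed size x size grid (a crude
    # stand-in for the anisotropic warping used in practice).
    h, w = crop.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return crop[np.ix_(ys, xs)]

def rcnn_detect(image, proposals, cnn, svms, size=224):
    # `cnn` maps a fixed-size crop to a feature vector; `svms` is one
    # scoring function per class, mirroring the per-class SVM ensemble.
    detections = []
    for (x0, y0, x1, y1) in proposals:
        crop = image[y0:y1, x0:x1]
        feats = cnn(resize(crop, size))
        scores = {cls: svm(feats) for cls, svm in svms.items()}
        cls, score = max(scores.items(), key=lambda kv: kv[1])
        detections.append(((x0, y0, x1, y1), cls, score))
    return detections
```

The key cost to notice: the CNN runs once per proposal, so with ~2000 ROIs the forward pass dominates inference time, which is what Fast R-CNN later removes.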

Fast R-CNN

While the original R-CNN independently computed the neural network features on each of as many as two thousand regions of interest, Fast R-CNN runs the neural network once on the whole image.
At the end of the network is an ROIPooling module, which slices out each ROI from the network's output tensor, reshapes it to a fixed size, and passes it on for classification. As in the original R-CNN, Fast R-CNN uses selective search to generate its region proposals.
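A minimal numpy sketch of the pooling step, assuming the ROI is already given in feature-map coordinates. The integer bin boundaries here are a simplification of the coordinate quantization the real layer performs:

```python
import numpy as np

def roi_pool(features, roi, bins=2):
    # Slice the ROI out of a 2D feature map and max-pool it into a
    # fixed (bins x bins) grid, so every ROI yields the same shape
    # regardless of its size.
    x0, y0, x1, y1 = roi              # ROI in feature-map coordinates
    patch = features[y0:y1, x0:x1]
    h, w = patch.shape
    out = np.zeros((bins, bins))
    for i in range(bins):
        for j in range(bins):
            # Integer bin edges; guarantee each bin is at least 1 cell wide
            ys = slice(i * h // bins, max((i + 1) * h // bins, i * h // bins + 1))
            xs = slice(j * w // bins, max((j + 1) * w // bins, j * w // bins + 1))
            out[i, j] = patch[ys, xs].max()
    return out
```

The fixed output shape is what lets arbitrarily sized proposals feed into the same fully connected classifier head.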

Faster R-CNN

While Fast R-CNN used selective search to generate ROIs, Faster R-CNN integrates the ROI generation into the neural network itself, via a Region Proposal Network that shares features with the detection head.
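The Region Proposal Network classifies and refines a dense grid of "anchor" boxes laid over the feature map. This sketch builds only the anchor grid; the stride, size, and aspect-ratio values are illustrative, not the paper's exact configuration.

```python
import numpy as np

def make_anchors(fh, fw, stride=16, sizes=(64, 128), ratios=(0.5, 1.0, 2.0)):
    # One set of anchors per feature-map cell: every combination of
    # size and aspect ratio, centred at that cell's image position.
    anchors = []
    for y in range(fh):
        for x in range(fw):
            cx, cy = x * stride, y * stride   # centre in image coordinates
            for s in sizes:
                for r in ratios:
                    w, h = s * r ** 0.5, s / r ** 0.5
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return np.array(anchors)
```

The RPN then predicts, for each anchor, an objectness score and box offsets; the top-scoring refined anchors replace the selective-search proposals.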

Mask R-CNN

While previous versions of R-CNN focused on object detection, Mask R-CNN adds instance segmentation. Mask R-CNN also replaced ROIPooling with a new method called ROIAlign, which can represent fractions of a pixel.
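ROIAlign avoids ROIPooling's coordinate rounding by sampling the feature map at fractional positions with bilinear interpolation. A minimal sketch of that interpolation step:

```python
import numpy as np

def bilinear(features, y, x):
    # Sample a 2D feature map at a fractional (y, x) position by
    # interpolating between the four surrounding cells. This is the
    # sampling primitive ROIAlign applies inside each output bin.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, features.shape[0] - 1)
    x1 = min(x0 + 1, features.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * features[y0, x0]
            + (1 - dy) * dx * features[y0, x1]
            + dy * (1 - dx) * features[y1, x0]
            + dy * dx * features[y1, x1])
```

Because nothing is rounded to integer coordinates, the pooled features stay aligned with the input at sub-pixel precision, which matters for the pixel-level masks that instance segmentation produces.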