Partition (database)
A partition is a division of a logical database or its constituent elements into distinct independent parts. Database partitioning refers to intentionally breaking a large database into smaller ones for scalability, as distinct from a network partition, which is a type of network fault between nodes. In a partitioned database, each piece of data belongs to exactly one partition, effectively making each partition a small database of its own. Database partitioning is normally done for manageability, performance, or availability reasons, or for load balancing. It is popular in distributed database management systems, where each partition may be spread over multiple nodes, with users at each node performing local transactions on the partition. This improves performance for sites with regular transactions involving certain views of the data, while maintaining availability and security.
Partitioning enables datasets to be distributed across multiple disks and query loads across multiple processors. For queries that operate on a single partition, each node executes the query independently on its local partition, so query throughput can scale linearly with additional nodes. More complex queries can be parallelized across multiple nodes, though doing so is considerably harder.
History
Database partitioning emerged in the 1980s with systems like Teradata and NonStop SQL. The approach was later adopted by NoSQL databases and Hadoop-based data warehouses. While implementations vary between transactional and analytical workloads, the core principles of partitioning remain consistent across both use cases.
Terminology
Different databases use varying terminology for partitioning:
- Shard in MongoDB, Elasticsearch, and SolrCloud
- Region in HBase
- Tablet in Bigtable
- vnode in Cassandra and Riak
- vBucket in Couchbase
Partitioning and Replication
Load Balancing and Hot Spots
Partitioning aims to distribute data and query load evenly across nodes. With ideal distribution, system capacity scales linearly with added nodes: ten nodes should store ten times the data and handle ten times the throughput of a single node. Uneven distribution, termed skew, reduces the efficiency of partitioning. Partitions with disproportionately high load are called hot spots.
Several strategies address hot spots:
- Random record assignment to nodes, at the cost of retrieval complexity (see the sketch after this list)
- Key-range partitioning with optimized boundaries
- Hash-based partitioning for even load distribution
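A minimal sketch of the random-assignment strategy, assuming a toy in-memory store with a fixed number of partitions (all names here are hypothetical): writes land on a random partition, so no single partition becomes hot, but a point read must consult every partition.

```python
import random

class RandomlyPartitionedStore:
    """Toy store illustrating random record assignment."""

    def __init__(self, num_partitions: int):
        self.partitions = [{} for _ in range(num_partitions)]

    def write(self, key, value):
        # Any partition may receive the record, so write load stays even
        # regardless of how skewed the keys are.
        random.choice(self.partitions)[key] = value

    def read(self, key):
        # The record's location is unknown, so every partition must be
        # checked: this is the retrieval complexity mentioned above.
        return [p[key] for p in self.partitions if key in p]

store = RandomlyPartitionedStore(num_partitions=4)
store.write("sensor-42", 19.5)
print(store.read("sensor-42"))  # e.g. [19.5]
```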
Partitioning criteria
- Range partitioning: assigns continuous key ranges to partitions, analogous to the volumes of an encyclopedia. Known range boundaries enable direct request routing (see the range-partitioning sketch after this list). Boundaries can be set manually or automatically for balanced distribution. While this enables efficient range scans, certain access patterns create hot spots: in a sensor network using timestamp keys, for instance, writes concentrate in the partition for the current time period. Using compound keys, such as prefixing timestamps with sensor identifiers, can distribute this load. An example would be a partition for all rows where the "zipcode" column has a value between 70000 and 79999.
- List partitioning: a partition is assigned an explicit list of values; a row is placed in the partition whose list contains the value of its partitioning key. For example, all rows where the column "Country" is Iceland, Norway, Sweden, Finland, or Denmark could form a partition for the Nordic countries (see the sketch after this list).
- Composite partitioning: allows certain combinations of the above partitioning schemes, for example by first applying a range partitioning and then a hash partitioning. Consistent hashing can be considered a composite of hash and list partitioning, where the hash reduces the key space to a size that can be listed (also sketched after this list).
- Round-robin partitioning: the simplest strategy, it ensures uniform data distribution. With n partitions, the i-th tuple in insertion order is assigned to partition (i mod n). This strategy enables sequential access to a relation to be done in parallel; however, direct access to individual tuples based on a predicate requires accessing the entire relation.
- Hash partitioning: applies a hash function to convert skewed data into a uniform distribution, spreading load evenly across partitions. While this effectively prevents hot spots, it sacrifices range-query efficiency, as adjacent keys scatter across partitions. Common implementations include MD5, used by Cassandra and MongoDB. Some systems, such as Cassandra, combine approaches using compound primary keys: the first component is hashed to choose the partition, while the remaining components keep rows sorted within the partition (see the final sketch after this list).
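A minimal sketch of range partitioning, assuming manually chosen, hypothetical boundaries; the standard library's bisect finds the partition whose range covers a key, which is what makes direct request routing possible.

```python
import bisect

# Hypothetical upper bounds (exclusive) of each partition's key range:
# partition 0 holds keys below 40000, partition 2 holds 70000-79999, etc.
BOUNDARIES = [40000, 70000, 80000, 100000]

def range_partition(zipcode: int) -> int:
    """Return the index of the partition whose range contains the key."""
    # bisect_right counts the boundaries at or below the key, which is
    # exactly the index of the covering partition.
    return bisect.bisect_right(BOUNDARIES, zipcode)

print(range_partition(75123))  # -> 2, the 70000-79999 partition
```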
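List partitioning reduces to membership in an explicit value set, as in this sketch of the Nordic-countries example (the partition names are hypothetical):

```python
# Explicit value list for one partition; values not listed fall through
# to a default partition.
NORDIC = {"Iceland", "Norway", "Sweden", "Finland", "Denmark"}

def list_partition(country: str) -> str:
    """Choose a partition by the value of the partitioning key."""
    return "nordic_countries" if country in NORDIC else "other"

print(list_partition("Norway"))  # -> nordic_countries
print(list_partition("Japan"))   # -> other
```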
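Consistent hashing, mentioned under composite partitioning, can be sketched as a ring of hashed positions; the node names and vnode count here are hypothetical, and MD5 merely stands in for any stable hash:

```python
import hashlib
from bisect import bisect_right

def ring_hash(s: str) -> int:
    # Any stable hash works; MD5 is used only as an example.
    return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

class ConsistentHashRing:
    def __init__(self, nodes, vnodes_per_node=16):
        # Each node takes several positions ("vnodes") on the ring; the
        # ring is the listable reduction of the key space noted above.
        self.ring = sorted(
            (ring_hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes_per_node)
        )
        self.points = [point for point, _ in self.ring]

    def node_for(self, key: str) -> str:
        # A key belongs to the first ring position at or after its hash,
        # wrapping around past the last position.
        idx = bisect_right(self.points, ring_hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:1001"))  # one of node-a / node-b / node-c
```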
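Round-robin and hash placement are both one-line rules, sketched below together with the compound-key idea from the hash-partitioning item: the first key component is hashed to pick the partition, while the second remains an in-partition sort key. MD5 again stands in for the systems' hash functions, and the sensor names are hypothetical.

```python
import hashlib

NUM_PARTITIONS = 8

def round_robin_partition(i: int) -> int:
    # The i-th tuple in insertion order goes to partition (i mod n).
    return i % NUM_PARTITIONS

def hash_partition(key: str) -> int:
    # A stable hash turns skewed keys into a uniform distribution, at
    # the cost of scattering adjacent keys across partitions.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

def compound_key_placement(sensor_id: str, timestamp: int):
    # Cassandra-style compound primary key: hash the first component to
    # choose a partition; the timestamp keeps rows sorted within it.
    return hash_partition(sensor_id), timestamp

print(round_robin_partition(10))                    # -> 2
print(compound_key_placement("sensor-42", 1700000000))
```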