Computer data storage


Computer data storage or digital data storage is the retention of digital data via technology consisting of computer components and recording media. Digital data storage is a core function and fundamental component of computers.
Generally, the faster but volatile storage components are referred to as "memory", while slower, persistent components are referred to as "storage". This distinction was extended in the von Neumann architecture, where the central processing unit consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. In practice, almost all computers use a memory hierarchy, which puts fast memory close to the CPU and slower storage further away.
In modern computers, hard disk drives or solid-state drives are usually used as storage.

Data

A modern digital computer represents data using the binary numeral system. The memory cell is the fundamental building block of computer memory, storing one bit of binary information that can be set to store a 1, reset to store a 0, and accessed by reading the cell.
Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. Digital data comprises the binary representation of a piece of information, often being encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding.
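As a brief illustration in Python (the sample string, the choice of UTF-8, and the variable names are arbitrary), the sketch below encodes a short text into bytes and prints the bit pattern of each 8-bit byte:

    # A minimal sketch: encoding text into bytes using a standard encoding (UTF-8 here)
    # and viewing the underlying bit pattern of each 8-bit byte.
    text = "Hi"
    encoded = text.encode("utf-8")                  # two bytes: 0x48, 0x69
    bits = " ".join(f"{byte:08b}" for byte in encoded)
    print(bits)                                     # 01001000 01101001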

Encryption

For security reasons, certain types of data may be encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Encryption in transit protects data as it is being transmitted.
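As a rough sketch of encryption at rest, the example below uses the third-party Python "cryptography" package (an assumption; any authenticated encryption scheme would serve): data is encrypted before being written to storage and can be reconstructed only with the key.

    # A minimal sketch, assuming the "cryptography" package is installed
    # (pip install cryptography); the literal data and variable names are illustrative.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the key itself must be stored securely elsewhere
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"sensitive record")  # what would actually land on disk
    plaintext = cipher.decrypt(ciphertext)            # recoverable only with the key
    assert plaintext == b"sensitive record"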

Compression

Compression methods allow, in many cases, a string of bits to be represented by a shorter bit string and the original string to be reconstructed when needed. This uses substantially less storage for many types of data at the cost of more computation. Analysis of the trade-off between the storage cost saving and the costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not.
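The trade-off can be seen in a small Python sketch using the standard zlib module (the sample data is illustrative): compression spends CPU time to shrink redundant data, and decompression reconstructs the original bit string exactly.

    # A minimal sketch of lossless compression with Python's standard zlib module.
    import zlib

    original = b"abc" * 1000                  # highly redundant data compresses well
    compressed = zlib.compress(original)      # costs computation, saves storage
    restored = zlib.decompress(compressed)    # exact reconstruction when needed

    assert restored == original
    print(len(original), "->", len(compressed), "bytes")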

Vulnerability and reliability

Distinct types of data storage have different points of failure and various methods of predictive failure analysis. Vulnerabilities that can instantly lead to total loss include head crashes on mechanical hard drives and failure of electronic components on flash storage.

Redundancy

Redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. The cyclic redundancy check method is typically used in communications and storage for error detection. Redundancy solutions include storage replication, disk mirroring, and RAID.
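A hedged Python sketch of the idea, using the standard-library CRC-32 routine (the payload is illustrative): a checksum stored alongside the data lets a later read detect, though not correct, corruption.

    # A minimal sketch of CRC-based error detection with zlib.crc32.
    import zlib

    data = b"payload written to storage"
    stored_crc = zlib.crc32(data)              # computed and stored at write time

    corrupted = b"payload written to st0rage"  # simulated single-character corruption
    print(zlib.crc32(data) == stored_crc)      # True  - data intact
    print(zlib.crc32(corrupted) == stored_crc) # False - corruption detected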

Error detection

Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed. The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.
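As a rough sketch only, the snippet below assumes the smartmontools utility smartctl is installed, that /dev/sda is the drive of interest, and that the process has sufficient privileges; it prints the power-on-hours and start/stop-count attributes from the S.M.A.R.T. table.

    # A minimal sketch; smartctl, the device path, and the attribute names appearing
    # in its output are external assumptions, not guaranteed on every system.
    import subprocess

    output = subprocess.run(
        ["smartctl", "-A", "/dev/sda"],        # -A prints the S.M.A.R.T. attribute table
        capture_output=True, text=True, check=True,
    ).stdout

    for line in output.splitlines():
        if "Power_On_Hours" in line or "Start_Stop_Count" in line:
            print(line)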

Architecture

Without a significant amount of memory, a computer would only be able to perform fixed operations and immediately output the result, requiring hardware reconfiguration for each new program. Such a fixed-program design is still used in devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which operating instructions and data are stored; they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions. They also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
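The stored-program idea can be sketched in a few lines of Python (a toy machine, not any real instruction set): instructions and data share one memory, so running a different program means writing different values into that memory rather than rewiring hardware.

    # A minimal toy sketch of a stored-program (von Neumann-style) machine.
    memory = [
        ("LOAD", 4),      # accumulator <- memory[4]
        ("ADD", 5),       # accumulator += memory[5]
        ("PRINT", None),  # output the accumulator
        ("HALT", None),
        7, 35,            # data lives in the same memory as the instructions
    ]

    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "PRINT":
            print(acc)        # prints 42
        elif op == "HALT":
            break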

Storage and memory

In contemporary usage, the term "storage" typically refers to a subset of computer data storage that comprises storage devices and their media not directly accessible by the CPU, that is, secondary or tertiary storage. Common forms of storage include hard disk drives, optical disc drives, and other non-volatile devices. The term "memory", on the other hand, refers to semiconductor read-write data storage, typically dynamic random-access memory. Dynamic random-access memory (DRAM) is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed; static RAM is similar, although it never needs to be refreshed as long as power is applied.
In some contemporary usages, the terms primary storage and secondary storage refer to what was historically called, respectively, secondary storage and tertiary storage.

Primary

Primary storage, often referred to simply as memory, is storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory, which is small-sized and light, but relatively expensive. RAM used for primary storage is volatile, meaning that it loses the information when power is removed. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching as long as it is not needed by running software. Spare memory can be utilized as a RAM drive for temporary high-speed data storage. Besides the main large-capacity RAM, there are two more sub-layers of primary storage:
  • Processor registers are the fastest of all forms of data storage, being located inside the processor, with each register typically holding a word of data. CPU instructions direct the arithmetic logic unit to perform various calculations or other operations on this data.
  • Processor cache is an intermediate stage between faster registers and slower main memory, being faster than main memory but with much less capacity. Multi-level hierarchical cache setup is also commonly used, such that primary cache is the smallest and fastest, while secondary cache is larger and slower.
Primary storage, including ROM, EEPROM, NOR flash, and RAM, is usually byte-addressable. Such memory is directly or indirectly connected to the central processing unit via a memory bus, comprising an address bus and a data bus. The CPU first sends a number called the memory address through the address bus to indicate the desired location of data, then reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit is a small device between the CPU and RAM that recalculates the actual memory address; it may, for example, provide an abstraction of virtual memory, among other tasks.
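Byte addressing can be pictured with a small Python sketch (the sizes, addresses, and values are arbitrary, and no real bus or MMU is modeled): an integer address selects the single byte-sized cell to read or write.

    # A minimal sketch of byte-addressable memory modeled as an array of cells.
    memory = bytearray(16)          # sixteen one-byte cells, all initialized to zero

    address = 0x03                  # the number the CPU would place on the address bus
    memory[address] = 0x2A          # write cycle: the value travels over the data bus

    value = memory[address]         # read cycle: the cell's contents come back
    print(f"byte at address {address:#04x} = {value:#04x}")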
The BIOS, containing a small startup program, is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage into RAM and start executing it. A non-volatile technology used for this purpose is called read-only memory (ROM). Most types of "ROM" are not literally read-only but are difficult and slow to write to. Some embedded systems run programs directly from ROM, because such programs are rarely changed. Standard computers do not store many programs in ROM, apart from firmware, and instead use large capacities of secondary storage.

Secondary

Secondary storage differs from primary storage in that it is not directly accessible by the CPU. Computers use input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile, retaining data when its power is shut off. Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives or solid-state drives are usually used as secondary storage. The access time per byte for HDDs is typically measured in milliseconds, and for SSDs in microseconds, while the access time per byte for primary storage is measured in nanoseconds. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
To reduce seek time and rotational latency, data are transferred to and from secondary storage devices, including HDDs, ODDs, and SSDs, in large contiguous blocks. Secondary storage is addressable by block; once the read/write head of an HDD reaches the proper placement and the data of interest, subsequent data on the track are very fast to access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory, for example with RAID.
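Block-sized transfers can be sketched in Python as follows (the file name and block size are assumptions; real devices and file systems choose their own sizes): data is read in large contiguous chunks rather than one byte at a time.

    # A minimal sketch of reading secondary storage in fixed-size blocks.
    BLOCK_SIZE = 4096                        # a common block-size multiple, device-dependent

    with open("example.bin", "rb") as f:     # "example.bin" is a hypothetical file
        while True:
            block = f.read(BLOCK_SIZE)       # one large contiguous transfer per call
            if not block:
                break
            # process the block here (checksum it, copy it, parse it, ...)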
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information. Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks to a swap file or page file on secondary storage, retrieving them later when needed.
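File-system metadata of the kind described above can be inspected with a short Python sketch (the file name is a placeholder; the exact fields available depend on the operating system):

    # A minimal sketch: querying per-file metadata exposed by the file system.
    import os, stat, time

    info = os.stat("report.txt")                         # "report.txt" is hypothetical
    print("owner uid:  ", info.st_uid)
    print("last access:", time.ctime(info.st_atime))
    print("permissions:", stat.filemode(info.st_mode))   # e.g. "-rw-r--r--"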