String (computer science)


In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed. A string is often implemented as an array data structure of bytes that stores a sequence of elements, typically characters, using some character encoding. More generally, a string may also denote a sequence of data other than just characters.
Depending on the programming language and precise data type used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employ dynamic allocation to allow it to hold a variable number of elements.
When a string appears literally in source code, it is known as a string literal or an anonymous string.
In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols that are chosen from a set called an alphabet.

Purpose

A primary purpose of strings is to store human-readable text, like words and sentences. Strings are used to communicate information from a computer program to the user of the program. A program may also accept string input from its user. Further, strings may store data expressed as characters yet not intended for human reading.
Example strings and their purposes:
  • A message like "file upload complete" is a string that software shows to end users. In the program's source code, this message would likely appear as a string literal.
  • User-entered text, like "I got a new job today" as a status update on a social media service. Instead of a string literal, the software would likely store this string in a database.
  • Alphabetical data, like "AGATGCCGT" representing nucleic acid sequences of DNA.
  • Computer settings or parameters, like "?action=edit" as a URL query string. Often these are intended to be somewhat human-readable, though their primary purpose is to communicate to computers.
The term string may also designate a sequence of data or computer records other than characters, like a "string of bits", but when used without qualification it refers to strings of characters.

History

Use of the word "string" to mean any items arranged in a line, series or succession dates back centuries. In 19th-century typesetting, compositors used the term "string" to denote a length of type printed on paper; the string would be measured to determine the compositor's pay.
Use of the word "string" to mean "a sequence of symbols or linguistic elements in a definite order" emerged from mathematics, symbolic logic, and linguistic theory to speak about the formal behavior of symbolic systems, setting aside the symbols' meaning.
For example, logician C. I. Lewis wrote in 1918:

A mathematical system is any set of strings of recognisable marks in which some of the strings are taken initially and the remainder derived from these by operations performed according to rules which are independent of any meaning assigned to the marks. That a system should consist of 'marks' instead of sounds or odours is immaterial.

According to Jean E. Sammet, "the first realistic string handling and pattern matching language" for computers was COMIT in the 1950s, followed by the SNOBOL language of the early 1960s.

String datatypes

A string datatype is a datatype modeled on the idea of a formal string. Strings are such an important and useful datatype that they are implemented in nearly every programming language. In some languages they are available as primitive types and in others as composite types. The syntax of most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called a literal or string literal.
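For example, a minimal C sketch of a string literal (a hedged illustration; the variable name greeting is arbitrary): the quoted text in the source is the literal, and greeting is a string-typed variable referring to it.

    #include <stdio.h>

    int main(void) {
        /* "Hello, world" appearing in the source code is a string literal. */
        const char *greeting = "Hello, world";
        printf("%s\n", greeting);   /* prints the string held by the variable */
        return 0;
    }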

String length

Although formal strings can have an arbitrary finite length, the length of strings in real languages is often constrained to an artificial maximum. In general, there are two types of string datatypes: fixed-length strings, which have a fixed maximum length determined at compile time and which use the same amount of memory whether this maximum is needed or not, and variable-length strings, whose length is not arbitrarily fixed and which can use varying amounts of memory depending on the actual requirements at run time. Most strings in modern programming languages are variable-length strings. Of course, even variable-length strings are limited in length by the amount of available memory. The string length can be stored as a separate integer or implicitly through a termination character, usually a character value with all bits zero such as in the C programming language. See also "Null-terminated" below.
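As a hedged C sketch of the two conventions just described (the struct and its field names are illustrative, not taken from any particular implementation), the length of a null-terminated string must be found by scanning for the terminator, while an explicit length is simply read from a separate integer:

    #include <stdio.h>
    #include <string.h>

    /* Explicit length stored alongside the characters (illustrative layout). */
    struct counted_string {
        size_t length;
        const char *data;
    };

    int main(void) {
        /* Implicit length: the terminating '\0' marks the end, so strlen
           must scan the array to find it. */
        const char *c_style = "banana";
        printf("null-terminated length: %zu\n", strlen(c_style));   /* 6 */

        /* Explicit length: stored as a separate integer, no scan needed. */
        struct counted_string counted = { 6, "banana" };
        printf("counted length: %zu\n", counted.length);            /* 6 */
        return 0;
    }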

Character encoding

String datatypes have historically allocated one byte per character, and, although the exact character set varied by region, character encodings were similar enough that programmers could often get away with ignoring this, since characters a program treated specially were in the same place in all the encodings a program would encounter. These character sets were typically based on ASCII or EBCDIC. If text in one encoding was displayed on a system using a different encoding, text was often mangled, though it was often somewhat readable, and some computer users learned to read the mangled text.
Logographic languages such as Chinese, Japanese, and Korean need far more than 256 characters for reasonable representation. The normal solutions involved keeping single-byte representations for ASCII and using two-byte representations for CJK ideographs. Use of these with existing code led to problems with matching and cutting of strings, the severity of which depended on how the character encoding was designed. Some encodings such as the EUC family guarantee that a byte value in the ASCII range will represent only that ASCII character, making the encoding safe for systems that use those characters as field separators. Other encodings such as ISO-2022 and Shift-JIS do not make such guarantees, making matching on byte codes unsafe. These encodings also were not "self-synchronizing", so that locating character boundaries required backing up to the start of a string, and pasting two strings together could result in corruption of the second string.
Unicode has simplified the picture somewhat. Most programming languages now have a datatype for Unicode strings. Unicode's preferred byte stream format UTF-8 is designed not to have the problems described above for older multibyte encodings. UTF-8, UTF-16 and UTF-32 require the programmer to know that the fixed-size code units are different from the "characters"; the main difficulty currently is incorrectly designed APIs that attempt to hide this difference.
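As a rough C illustration of the property described above (a sketch that assumes the string is UTF-8 encoded; the query text is arbitrary), a byte-level search for an ASCII delimiter with strchr is safe in UTF-8 because bytes in the ASCII range never occur inside a multibyte sequence, whereas in some older multibyte encodings the same byte value could appear as part of a two-byte character:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* A UTF-8 query string containing the multibyte character 'é' (0xC3 0xA9). */
        const char *query = "name=caf\xC3\xA9&action=edit";

        /* Byte-level search for the ASCII delimiter '&': in UTF-8 the byte
           0x26 can only ever encode '&' itself. */
        const char *amp = strchr(query, '&');
        if (amp != NULL)
            printf("first parameter: %.*s\n", (int)(amp - query), query);
        return 0;
    }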

Implementations

Some languages, such as C++, Perl and Ruby, normally allow the contents of a string to be changed after it has been created; these are termed mutable strings. In other languages, such as Java, JavaScript, Lua, Python, and Go, the value is fixed and a new string must be created if any alteration is to be made; these are termed immutable strings. Some of these languages with immutable strings also provide another type that is mutable, such as Java and .NET's StringBuilder, the thread-safe Java StringBuffer, and the Cocoa NSMutableString. Immutability brings advantages and disadvantages: while immutable strings may require inefficiently creating many copies, they are simpler and fully thread-safe.
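The distinction can be sketched even in C (an illustration only; C has no dedicated string type of its own): a writable character array behaves like a mutable string and can be changed in place, while the characters behind a string literal must be treated as read-only, so any "change" means building a new string.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* A writable array: its contents may be changed in place ("mutable"). */
        char changeable[] = "cat";
        changeable[0] = 'h';                  /* now "hat" */
        printf("%s\n", changeable);

        /* A pointer to a string literal: the pointed-to characters must not be
           modified, so an altered value requires a fresh copy ("immutable"). */
        const char *fixed = "cat";
        char copy[4];
        strcpy(copy, fixed);                  /* copy, then modify the copy */
        copy[0] = 'h';
        printf("%s\n", copy);
        return 0;
    }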
Strings are typically implemented as arrays of bytes, characters, or code units, to allow fast access to individual units or substrings, including characters when they have a fixed length. A few languages such as Haskell implement them as linked lists instead.
Many high-level languages, such as JavaScript and PHP, provide strings as a primitive data type, while most others provide them as a composite data type, some with special language support for writing literals, for example Java and C#.
Some languages, such as C, Prolog and Erlang, avoid implementing a dedicated string datatype at all, instead adopting the convention of representing strings as arrays or lists of character codes. Even in programming languages having a dedicated string type, strings can usually be iterated as a sequence of character codes, like lists of integers or other values.
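For instance, a minimal C sketch of iterating a string as a sequence of character codes (the values shown assume an ASCII-compatible execution character set):

    #include <stdio.h>

    int main(void) {
        const char *s = "ABC";
        /* Each element of the string is itself a small integer, its character code. */
        for (size_t i = 0; s[i] != '\0'; i++)
            printf("%zu: %d\n", i, (int)(unsigned char)s[i]);   /* 65, 66, 67 */
        return 0;
    }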

Representations

Representations of strings depend heavily on the choice of character repertoire and the method of character encoding. Older string implementations were designed to work with repertoire and encoding defined by ASCII, or more recent extensions like the ISO 8859 series. Modern implementations often use the extensive repertoire defined by Unicode along with a variety of complex encodings such as UTF-8 and UTF-16.
The term byte string usually indicates a general-purpose string of bytes, rather than strings of only characters, strings of bits, or such. Byte strings often imply that bytes can take any value and any data can be stored as-is, meaning that there should be no value interpreted as a termination value.
Most string implementations are very similar to variable-length arrays with the entries storing the character codes of corresponding characters. The principal difference is that, with certain encodings, a single logical character may take up more than one entry in the array. This happens, for example, with UTF-8, where single codes can take anywhere from one to four bytes, and single characters can take an arbitrary number of codes. In these cases, the logical length of the string differs from the physical length of the array. UTF-32 avoids the first part of the problem.
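A minimal C sketch of this difference, assuming the string is UTF-8 encoded: the physical length is the number of array entries (bytes), while the logical length counts code points by skipping UTF-8 continuation bytes.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* "héllo": the 'é' is stored as the two bytes 0xC3 0xA9 in UTF-8. */
        const char *s = "h\xC3\xA9llo";

        size_t physical = strlen(s);                    /* array entries (bytes): 6 */
        size_t logical = 0;
        for (size_t i = 0; s[i] != '\0'; i++)
            if (((unsigned char)s[i] & 0xC0) != 0x80)   /* skip continuation bytes */
                logical++;                              /* code points: 5 */

        printf("physical: %zu, logical: %zu\n", physical, logical);
        return 0;
    }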

Dope vectors

The length of a string can be stored in a dope vector, separate from the storage holding the actual characters. The IBM PL/I compiler used a string dope vector (SDV) for variable-length strings and for passing string parameters. The SDV contains a current length and a maximum length, and is not adjacent to the string proper. After PL/I, IBM dropped the SDV in favor of length-prefixed strings.
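A hedged C sketch of the kind of descriptor a string dope vector holds (the struct and its field names are illustrative and do not reproduce the actual PL/I layout): the descriptor records the current and maximum lengths and points at character storage kept elsewhere.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative string dope vector: the descriptor lives apart from the
       character storage it describes. */
    struct string_dope_vector {
        size_t current_length;   /* length of the string currently stored */
        size_t maximum_length;   /* capacity of the separate character storage */
        char  *storage;          /* pointer to the characters themselves */
    };

    int main(void) {
        char buffer[16];                          /* separate character storage */
        struct string_dope_vector sdv = { 0, sizeof buffer, buffer };

        /* Store a value shorter than the maximum length. */
        memcpy(sdv.storage, "PL/I", 4);
        sdv.current_length = 4;

        printf("current %zu of maximum %zu: %.*s\n",
               sdv.current_length, sdv.maximum_length,
               (int)sdv.current_length, sdv.storage);
        return 0;
    }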