Character (computing)

[Figure: the word "example" shown as seven boxes, one letter per box; the label "String" refers to the whole row and "Character" points to a single box. Caption: A string of seven characters.]

In computer and machine-based telecommunications terminology, a character is a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.[1]

Examples of characters include letters, numerical digits, common punctuation marks (such as "." or "-"), and whitespace. The concept also includes control characters, which do not correspond to visible symbols but rather to instructions to format or process text, such as carriage return and tab, as well as other instructions to printers or other devices that display or otherwise process text.

Characters are typically combined into strings.

Historically, the term character was used to denote a specific number of contiguous bits. While a character is most commonly assumed to refer to 8 bits (one byte) today, other sizes, such as the 6-bit character code, were once popular,[2][3] and the term has even been applied to 4 bits,[4] with only 16 possible values. All modern systems instead use varying-size sequences of these fixed-sized pieces: UTF-8 uses a varying number of 8-bit code units to define a "code point", and Unicode uses a varying number of those to define a "character".

Encoding

Computers and communication equipment represent characters using a character encoding that assigns each character to something (typically an integer quantity represented by a sequence of digits) that can be stored or transmitted through a network. Two examples of usual encodings are ASCII and the UTF-8 encoding for Unicode. While most character encodings map characters to numbers and/or bit sequences, Morse code instead represents characters using a series of electrical impulses of varying length.
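
As a minimal illustration (not taken from any standard), the following C program prints the code units that an ASCII-compatible, UTF-8 environment uses for the character 'A' (one byte) and for 'é', U+00E9 (two bytes); the UTF-8 bytes are spelled out as hex escapes so the example does not depend on the source file's own encoding.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *a = "A";         /* ASCII and UTF-8: one code unit, 0x41 */
        const char *e = "\xC3\xA9";  /* UTF-8 encoding of U+00E9 (e with acute accent) */

        printf("'A'    uses %zu byte(s):", strlen(a));
        for (size_t i = 0; i < strlen(a); i++)
            printf(" 0x%02X", (unsigned char)a[i]);
        printf("\n");

        printf("U+00E9 uses %zu byte(s):", strlen(e));
        for (size_t i = 0; i < strlen(e); i++)
            printf(" 0x%02X", (unsigned char)e[i]);
        printf("\n");
        return 0;
    }

Compiled with any C99 compiler, this prints 0x41 for 'A' and 0xC3 0xA9 for U+00E9.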

Terminology

Historically, the term character has been widely used by industry professionals to refer to an encoded character, often as defined by the programming language or API. Likewise, character set has been widely used to refer to a specific repertoire of characters that have been mapped to specific bit sequences or numerical codes. The term glyph is used to describe a particular visual appearance of a character. Many computer fonts consist of glyphs that are indexed by the numerical code of the corresponding character.

With the advent and widespread acceptance of Unicode and bit-agnostic coded character sets, a character is increasingly being seen as a unit of information, independent of any particular visual manifestation. The ISO/IEC 10646 (Unicode) International Standard defines character, or abstract character, as "a member of a set of elements used for the organization, control, or representation of data". Unicode's definition supplements this with explanatory notes that encourage the reader to differentiate between characters, graphemes, and glyphs, among other things. Such differentiation is an instance of the wider theme of the separation of presentation and content.

For example, the Chinese character for water ("水") may have a slightly different appearance in Japanese texts than it does in Chinese texts, and local typefaces may reflect this. But nonetheless in Unicode they are considered the same character, and share the same code point.

The Unicode standard also differentiates between these abstract characters and coded characters or encoded characters that have been paired with numeric codes that facilitate their representation in computers.

Combining character

Combining characters are also addressed by Unicode. For instance, Unicode allocates a code point to each of

  • 'i' (U+0069),
  • the combining diaeresis (U+0308), and
  • 'ï' (U+00EF).

This makes it possible to code the middle character of the word 'naïve' either as a single character 'ï' or as a combination of the character 'i' with the combining diaeresis (U+0069 LATIN SMALL LETTER I + U+0308 COMBINING DIAERESIS); this is also rendered as 'ï'.

These are considered canonically equivalent by the Unicode standard.
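
A minimal C sketch of this distinction, assuming UTF-8 byte strings: the precomposed and decomposed spellings of 'ï' are different byte sequences, so a plain byte-wise comparison such as strcmp reports them as unequal; recognising their canonical equivalence would require Unicode normalization, which the C standard library does not provide.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *precomposed = "\xC3\xAF";     /* U+00EF LATIN SMALL LETTER I WITH DIAERESIS */
        const char *decomposed  = "\x69\xCC\x88"; /* U+0069 'i' + U+0308 COMBINING DIAERESIS    */

        printf("precomposed: %zu bytes, decomposed: %zu bytes\n",
               strlen(precomposed), strlen(decomposed));      /* 2 bytes vs. 3 bytes */
        printf("byte-wise equal? %s\n",
               strcmp(precomposed, decomposed) == 0 ? "yes" : "no");   /* prints "no" */
        return 0;
    }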

char

A char in the C programming language is a data type with the size of exactly one byte,[6][7] which in turn is defined to be large enough to contain any member of the "basic execution character set". The exact number of bits can be checked via the CHAR_BIT macro. By far the most common size is 8 bits, and the POSIX standard requires it to be 8 bits.[8] In newer C standards char is required to hold UTF-8 code units,[6][7] which requires a minimum size of 8 bits.
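
As a small illustration of these definitions, the following C program reports sizeof(char), which is 1 by definition, and CHAR_BIT from <limits.h>, which gives the number of bits one char occupies (8 on POSIX systems).

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* sizeof(char) is 1 by definition; CHAR_BIT gives the bits per byte. */
        printf("sizeof(char) = %zu, CHAR_BIT = %d\n", sizeof(char), CHAR_BIT);
        return 0;
    }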

A Unicode code point may require as many as 21 bits.[9] This will not fit in a char on most systems, so more than one char is used for some code points, as in the variable-length encoding UTF-8, where each code point takes 1 to 4 bytes. Furthermore, a "character" may require more than one code point (for instance with combining characters), depending on what is meant by the word "character".
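
To make the upper end of that range concrete, here is a short sketch assuming UTF-8 byte strings: the code point U+1F600, whose value needs 17 bits, occupies four char-sized code units.

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* U+1F600 does not fit in a char, so UTF-8 spreads it over 4 bytes. */
        const char *emoji = "\xF0\x9F\x98\x80";  /* UTF-8 for U+1F600 */
        printf("U+1F600 occupies %zu char-sized code units\n", strlen(emoji));  /* 4 */
        return 0;
    }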

The fact that a character was historically stored in a single byte led to the two terms ("char" and "character") being used interchangeably in most documentation. This often makes the documentation confusing or misleading when multibyte encodings such as UTF-8 are used, and has led to inefficient and incorrect implementations of string manipulation functions (such as computing the "length" of a string as a count of bytes or code units when a count of characters was intended). Modern POSIX documentation attempts to fix this, defining "character" as a sequence of one or more bytes representing a single graphic symbol or control code, and attempts to use "byte" when referring to char data.[10][11] However, it still contains errors such as defining an array of char as a character array (rather than a byte array).[12]
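
The following C sketch illustrates the pitfall for well-formed UTF-8 input: strlen counts char code units (bytes), a simple scan that skips UTF-8 continuation bytes counts code points, and neither figure necessarily equals the number of characters a reader perceives. The helper count_code_points is illustrative, not a standard library function.

    #include <stdio.h>
    #include <string.h>

    /* Count UTF-8 code points by skipping continuation bytes (10xxxxxx). */
    static size_t count_code_points(const char *s) {
        size_t n = 0;
        for (; *s != '\0'; s++)
            if (((unsigned char)*s & 0xC0) != 0x80)
                n++;
        return n;
    }

    int main(void) {
        /* "naïve" with 'ï' spelled as 'i' + U+0308 COMBINING DIAERESIS */
        const char *word = "nai\xCC\x88ve";
        printf("bytes: %zu, code points: %zu\n",
               strlen(word), count_code_points(word));  /* 7 bytes, 6 code points */
        return 0;
    }

Here the reader perceives five characters, the string holds six code points, and strlen reports seven bytes.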

Unicode can also be stored in strings made up of code units that are larger than char. These are called "wide characters"; the original C type for them is wchar_t. Because some platforms define wchar_t as 16 bits and others define it as 32 bits, recent versions of the C standard have added char16_t and char32_t. Even then the objects being stored might not be characters, for instance the variable-length UTF-16 is often stored in arrays of char16_t.
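
A brief C11 sketch (char16_t is declared in <uchar.h>, and u"" literals produce char16_t strings): a code point above U+FFFF cannot fit in one 16-bit code unit, so UTF-16 stores it as a surrogate pair of two char16_t values.

    #include <stdio.h>
    #include <uchar.h>

    int main(void) {
        /* U+1F600 is above U+FFFF, so UTF-16 needs a surrogate pair. */
        const char16_t s[] = u"\U0001F600";
        size_t units = sizeof s / sizeof s[0] - 1;   /* exclude the terminating 0 */

        printf("code units: %zu\n", units);          /* prints 2 */
        for (size_t i = 0; i < units; i++)
            printf("  0x%04X\n", (unsigned)s[i]);    /* 0xD83D, 0xDE00 */
        return 0;
    }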

Other languages also have a char type. Some, such as C++, use at least 8 bits like C.[7] Others, such as Java, use 16 bits for char in order to represent UTF-16 values.

References

  1. ^ "Definition of CHARACTER". Merriam-Webster. Retrieved 2018-04-01.
  2. Webster's although it does not in French. Webster's definition of the word catena is, "a connected series;" therefore, a 24-bit information item. The word catena will be used hereafter.
    The internal code, therefore, has been defined. Now what are the external data codes? These depend primarily upon the information handling device involved. The Gamma 60 [fr
    ]
    is designed to handle information relevant to any binary coded structure. Thus an 80-column punched card is considered as a 960-bit information item; 12 rows multiplied by 80 columns equals 960 possible punches; is stored as an exact image in 960 magnetic cores of the main memory with 2 card columns occupying one catena. […]
  3. […] catena was coined for this purpose by the designers of the Bull GAMMA 60 [fr] computer.)
    Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. […]
  4. Intel Corporation. December 1973. pp. v, 2-6. MCS-030-1273-1. Archived (PDF) from the original on 2020-03-01. Retrieved 2020-03-02. […] Bit - The smallest unit of information which can be represented. (A bit may be in one of two states, 0 or 1). […] Byte - A group of 8 contiguous bits occupying a single memory location. […] Character - A group of 4 contiguous bits of data. […] (NB. This Intel 4004 manual uses the term character to refer to 4-bit rather than 8-bit data entities. Intel switched to the more common term nibble for 4-bit entities in its documentation for the succeeding processor, the 4040, in 1974.)
  5. ^ Davis, Mark (2008-05-05). "Moving to Unicode 5.1". Google Blog. Retrieved 2008-09-28.
  6. ^ a b "§5.2.4.2.1 Sizes of integer types <limits.h> / §6.2.5 Types / §6.5.3.4 The sizeof and _Alignof operators". ISO/IEC 9899:2018 - Information technology -- Programming languages -- C.
  7. ^ a b c "§1.7 The C++ memory model / §5.3.3 Sizeof". ISO/IEC 14882:2011.
  8. ^ "<limits.h>". pubs.opengroup.org. Retrieved 2018-04-01.
  9. ^ "Glossary of Unicode Terms – Code Point". Retrieved 2019-05-14.
  10. ^ "POSIX definition of Character".
  11. ^ "POSIX strlen reference".
  12. ^ "POSIX definition of Character Array".
