Bit

Source: Wikipedia, the free encyclopedia.

The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit.[1] The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also widely used.

The relation between these values and the physical states of the underlying device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device.

A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined.[2] Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually a nibble.
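These groupings can be illustrated with a short sketch. The helper names below (`to_nibbles`, `to_byte_value`) are illustrative, not standard library functions:

```python
def to_nibbles(bits: str):
    """Split a bit string into 4-bit groups (nibbles)."""
    return [bits[i:i + 4] for i in range(0, len(bits), 4)]

def to_byte_value(bits: str) -> int:
    """Interpret an 8-bit string as an unsigned byte value."""
    assert len(bits) == 8
    return int(bits, 2)

word = "01000001"           # one byte, i.e. two nibbles
print(to_nibbles(word))     # ['0100', '0001']
print(to_byte_value(word))  # 65
```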

In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information, the bit is also known as a shannon, named after Claude E. Shannon.

The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte.

History

The encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), and the encoding of text by bits was used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870).

Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication". He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit".[8]

Physical representation

A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc.

Bits can be implemented in several forms. In most modern computing devices, a bit is usually represented by an electrical voltage or current pulse, or by the electrical state of a flip-flop circuit.

For devices using positive logic, a digit value of 1 (or a logical value of true) is represented by a more positive voltage relative to the representation of 0. Different logic families require different voltages, and variations are allowed to account for component aging and noise immunity. For example, in transistor–transistor logic (TTL) and compatible circuits, digit values 0 and 1 at the output of a device are represented by no higher than 0.4 volts and no lower than 2.6 volts, respectively; while TTL inputs are specified to recognize 0.8 volts or below as 0 and 2.2 volts or above as 1.
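The TTL input thresholds quoted above can be sketched as a small decision function. This is purely illustrative (the function name is invented); real receivers also have hysteresis and timing constraints:

```python
def ttl_input_level(volts: float):
    """Classify a voltage per the TTL input thresholds in the text."""
    if volts <= 0.8:
        return 0
    if volts >= 2.2:
        return 1
    return None  # forbidden region: neither a valid 0 nor a valid 1

print(ttl_input_level(0.3))  # 0
print(ttl_input_level(3.0))  # 1
print(ttl_input_level(1.5))  # None
```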

Transmission and processing

Bits are transmitted one at a time in serial transmission, and by a multiple number of bits simultaneously in parallel transmission. A bitwise operation optionally processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s.
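The serial view of a byte can be sketched as follows; this sends the least-significant bit first, one common convention. It is an illustration only, since real serial links add framing, clocking, and error detection:

```python
def serialize_lsb_first(byte: int):
    """Yield the 8 bits of a byte, least-significant bit first."""
    for i in range(8):
        yield (byte >> i) & 1

bits = list(serialize_lsb_first(0b1010_0001))
print(bits)  # [1, 0, 0, 0, 0, 1, 0, 1]
```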

Storage

In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical Engine, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card or tape. The first electrical devices for discrete logic represented bits as the states of electrical relays, which could be either "open" or "closed"; later machines stored bits in vacuum-tube circuits and, eventually, in semiconductor devices manufactured with photolithographic techniques.

In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film. The same principle is still found in various magnetic strip items such as metro tickets and some credit cards.

In modern semiconductor memory, such as dynamic random-access memory, the two values of a bit may be represented by two levels of electric charge stored in a capacitor. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines.

Unit and symbol

The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit.[11] However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter 'B' is the standard and customary symbol for byte.

Decimal (SI) multiples of the bit:

  Value    Symbol  Name
  1000     kbit    kilobit
  1000^2   Mbit    megabit
  1000^3   Gbit    gigabit
  1000^4   Tbit    terabit
  1000^5   Pbit    petabit
  1000^6   Ebit    exabit
  1000^7   Zbit    zettabit
  1000^8   Ybit    yottabit
  1000^9   Rbit    ronnabit
  1000^10  Qbit    quettabit

Binary (IEC) multiples of the bit, with the customary "memory" prefixes:

  Value    IEC symbol  IEC name  Memory symbol  Memory name
  1024     Kibit       kibibit   Kbit or Kb     kilobit
  1024^2   Mibit       mebibit   Mbit or Mb     megabit
  1024^3   Gibit       gibibit   Gbit or Gb     gigabit
  1024^4   Tibit       tebibit
  1024^5   Pibit       pebibit
  1024^6   Eibit       exbibit
  1024^7   Zibit       zebibit
  1024^8   Yibit       yobibit
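The difference between the decimal and binary prefixes above can be sketched numerically; note that the gap between corresponding units grows with each successive power:

```python
KBIT = 1000        # kilobit (SI decimal prefix)
KIBIT = 1024       # kibibit (IEC binary prefix)
MBIT = 1000 ** 2   # megabit
MIBIT = 1024 ** 2  # mebibit

print(KIBIT - KBIT)   # 24 bits of difference at the "kilo" scale
print(MIBIT - MBIT)   # 48576 bits of difference at the "mega" scale
print(MIBIT / MBIT)   # ratio ~1.048576
```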

Multiple bits

Multiple bits may be expressed and represented in several ways. For convenience of representing commonly recurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer[2][12][13][14][15] and for this reason it was used as the basic addressable element in many computer architectures. Hardware design converged on eight bits per byte, which is the size in near-universal use today. However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits.

Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the 21st century, retail personal or server computers have a word size of 32 or 64 bits.
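The grouping of bits into bytes (octets) can be sketched as follows, with the most-significant bit first within each byte. The helper name is illustrative:

```python
def pack_octets(bits):
    """Pack a list of 0/1 values into a list of 8-bit byte values."""
    assert len(bits) % 8 == 0, "input must be a whole number of octets"
    out = []
    for i in range(0, len(bits), 8):
        value = 0
        for b in bits[i:i + 8]:
            value = (value << 1) | b  # shift in one bit at a time
        out.append(value)
    return out

# Two octets encoding the ASCII characters 'A' (65) and 'B' (66):
print(pack_octets([0,1,0,0,0,0,0,1, 0,1,0,0,0,0,1,0]))  # [65, 66]
```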

Multiples of the bit are formed with the SI decimal prefixes and the IEC binary prefixes, as in the kilobit (kbit) or the yottabit (Ybit).

Information capacity and information compression

When the information capacity of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, which is a computer hardware capacity to store binary data rather than the quantity of information actually stored. The information capacity of a storage system is only an upper bound: if the two possible values of one bit of storage are not equally likely, that bit of storage contains less than one bit of information, and a file that uses n bits of storage but contains only m < n bits of information can, in principle, be encoded in about m bits. This principle is the basis of data compression technology. Using an analogy, the hardware binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer (when information is more compressed), the same bucket can hold more.
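The claim that an unevenly distributed bit carries less than one bit of information can be checked with the binary entropy formula H(p) = -p·log2(p) - (1-p)·log2(1-p):

```python
from math import log2

def bit_entropy(p: float) -> float:
    """Information, in bits, carried by a binary source with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0  # completely predictable: no information at all
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(bit_entropy(0.5))  # 1.0: a fair bit carries a full bit of information
print(bit_entropy(0.9))  # ~0.469: a biased bit carries less than one bit
```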

For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits. However, when this storage space is filled and the corresponding content is optimally compressed, this represents only 295 exabytes of information; when optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.[16]

Bit-based computing

Certain bitwise computer processor instructions (such as bit set) operate at the level of manipulating bits rather than manipulating data interpreted as an aggregate of bits.

In the 1980s, when bitmapped computer displays became popular, some computers provided specialized bit block transfer instructions to set or copy the bits that corresponded to a given rectangular area on the screen.

In most computers and programming languages, when a bit within a group of bits, such as a byte or word, is referred to, it is usually specified by a number from 0 upwards corresponding to its position within the byte or word. However, 0 can refer to either the most or least significant bit depending on the context.
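The two numbering conventions can be sketched side by side; the helper names are illustrative:

```python
def bit_lsb0(byte: int, i: int) -> int:
    """Bit i, counting from the least significant end (LSB-0)."""
    return (byte >> i) & 1

def bit_msb0(byte: int, i: int, width: int = 8) -> int:
    """Bit i, counting from the most significant end (MSB-0)."""
    return (byte >> (width - 1 - i)) & 1

b = 0b1000_0001
print(bit_lsb0(b, 0))  # 1: bit 0 is the rightmost bit in LSB-0
print(bit_msb0(b, 0))  # 1: bit 0 is the leftmost bit in MSB-0
print(bit_lsb0(b, 1))  # 0: the conventions diverge for inner bits
```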

Other information units

Similar to torque and energy in physics, information-theoretic information and data storage size have the same dimensionality of units of measurement, but there is in general no meaning to adding, subtracting or otherwise combining the units mathematically, although one may act as a bound on the other.

Units of information used in information theory include the shannon (Sh), the natural unit of information (nat) and the hartley (Hart). One shannon is the maximum amount of information needed to specify the state of one bit of storage. These are related by 1 Sh ≈ 0.693 nat ≈ 0.301 Hart.
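These conversion factors follow directly from the change-of-base rule for logarithms, since the three units use bases 2, e, and 10 respectively:

```python
from math import log, log10

NAT_PER_SH = log(2)      # nats per shannon, ln(2) ≈ 0.693
HART_PER_SH = log10(2)   # hartleys per shannon, log10(2) ≈ 0.301

print(round(NAT_PER_SH, 3))   # 0.693
print(round(HART_PER_SH, 3))  # 0.301
```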

Some authors also define a binit as an arbitrary information unit equivalent to some fixed but unspecified number of bits.[18]


References

  1. (PDF) from the original on May 26, 2016. Retrieved August 25, 2019.
  2. IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's "byte" caught on everywhere. I myself did not like the name for many reasons. […]
  3. Anderson, John B.; Johnnesson, Rolf (2006), Understanding Information Transmission.
  4. Haykin, Simon (2006), Digital Communications.
  5. IEEE Std 260.1-2004.
  6. "Units: B". Archived from the original on 2016-05-04.
  7. McGraw-Hill.
  8. J. W. Tukey.
  9. .
  10. (PDF) on 1998-07-15.
  11. National Institute of Standards and Technology (2008), Guide for the Use of the International System of Units. Online version. Archived 3 June 2016 at the Wayback Machine.
  12. alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. […]
  13. System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. […]
  14. LCCN 61-10466, archived from the original (PDF) on 2017-04-03, retrieved 2017-04-03.
  15. .
  16. Information in Small Bits, a book produced as part of a non-profit outreach project of the IEEE Information Theory Society. The book introduces Claude Shannon and basic concepts of information theory to children 8 and older using relatable cartoon stories and problem-solving activities.
  17. "The World's Technological Capacity to Store, Communicate, and Compute Information", Martin Hilbert and Priscila López (2011), Science, 332(6025), 60–65. Archived 2013-07-27 at the Wayback Machine, especially Supporting online material, Archived 2011-05-31 at the Wayback Machine; free access to the article through martinhilbert.net/WorldInfoCapacity.html.
  18. from the original on 2017-03-27.

External links

  • Bit Calculator – a tool providing conversions between bit, byte, kilobit, kilobyte, megabit, megabyte, gigabit, gigabyte
  • BitXByteConverter – a tool for computing file sizes, storage capacity, and digital information in various units