"tribit" redirects here. Not to be confused with tibit or trit.

In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the information contained in messages and the entropy of random variables.

The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes).

Primary units

[Figure: Comparison of units of information: bit, trit, nat, ban. Quantity of information is the height of each bar; the dark green level marks the "nat" unit.]

In 1928, Ralph Hartley observed a fundamental storage principle,[1] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of its possible states, denoted log_b N. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c N = (log_c b) log_b N. Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.

When b is 2, the unit is the shannon, equal to the information content of one "bit" (a portmanteau of binary digit[2]). A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information. Other units that have been named include:

Base b = 3: the unit is called the "trit", and is equal to log_2 3 (≈ 1.585) bits.[3]
Base b = 10: the unit is called the decimal digit, hartley, ban, decit, or dit, and is equal to log_2 10 (≈ 3.322) bits.[1][4][5][6]
Base b = e, the base of natural logarithms: the unit is called the nat, nit, or nepit (from Neperian), and is worth log_2 e (≈ 1.443) bits.[1]

The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.

Units derived from bit

Several conventional names are used for collections or groups of bits.

Byte

Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today it almost always means eight bits – that is, an octet. A byte can represent 256 (2^8) distinct values, such as the non-negative integers from 0 to 255, or the signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French,[nb 1] but also allows "B" in English, which is what is actually used). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
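The change-of-base rule above is easy to verify numerically. Below is a minimal sketch in Python (the helper name information_content is hypothetical, not from the article or any standard) that expresses the capacity of a 256-state system, i.e. one byte, in each of the named units:

```python
import math

def information_content(num_states: int) -> dict:
    """Information held by a system of `num_states` equally likely states,
    in several units, via the change-of-base rule log_b N = ln N / ln b."""
    nats = math.log(num_states)              # base e gives the nat directly
    return {
        "bits (shannons)": nats / math.log(2),
        "trits":           nats / math.log(3),
        "nats":            nats,
        "hartleys (dits)": nats / math.log(10),
    }

# One byte: 256 states -> exactly log_2(256) = 8 bits.
for unit, amount in information_content(256).items():
    print(f"{unit}: {amount:.3f}")
```

Running it prints 8.000 bits, about 5.047 trits, 5.545 nats, and 2.408 hartleys, reflecting that a symbol from a larger alphabet carries more information.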
Nibble

A group of four bits, or half a byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same amount of information as one hexadecimal digit[7] (a short code sketch below demonstrates these bit groups).

Crumb

A group of two bits, or a quarter byte, was called a crumb,[8] often used in early 8-bit computing (see Atari 2600, ZX Spectrum).[citation needed] It is now largely defunct.

Word, block, and page

Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 16 bits, but other past and current architectures have used words of 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, or 72[9] bits, among others. Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").

Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks or, in CPU caches, cache lines. Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.

Systematic multiples

Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10^3 = 1000 (as in kilobit or kbit), mega = 10^6 = 1000000 (as in megabit or Mbit) and giga = 10^9 = 1000000000 (as in gigabit or Gbit). These prefixes are more often used for multiples of bytes, as in kilobyte (1 kB = 8000 bit), megabyte (1 MB = 8000000 bit), and gigabyte (1 GB = 8000000000 bit).

However, for technical reasons, the capacities of computer memories and some storage units are often multiples of some large power of two, such as 2^28 = 268435456 bytes. To avoid such unwieldy numbers, people have often repurposed the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2^10 = 1024, mega for 2^20 = 1048576, giga for 2^30 = 1073741824, and so on. For example, a random-access memory chip with a capacity of 2^28 bytes would be referred to as a 256-megabyte chip. The discrepancy between the two interpretations grows with each prefix: about 2.4% at kilo, 4.9% at mega, and 7.4% at giga.
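Returning to the bit groups named earlier in this section, the crumb, nibble, and byte are easy to demonstrate with bitwise operations. In the sketch below (split_into_groups is a hypothetical helper of my own), each 4-bit group of a 16-bit word maps to one hexadecimal digit:

```python
def split_into_groups(value: int, total_bits: int, group_bits: int) -> list:
    """Split `value` into bit groups of `group_bits`, most significant first."""
    mask = (1 << group_bits) - 1
    shifts = range(total_bits - group_bits, -1, -group_bits)
    return [(value >> s) & mask for s in shifts]

word = 0xBEEF                          # a 16-bit word
print(split_into_groups(word, 16, 8))  # bytes:   [190, 239]        (0xBE, 0xEF)
print(split_into_groups(word, 16, 4))  # nibbles: [11, 14, 14, 15]  (hex B, E, E, F)
print(split_into_groups(word, 16, 2))  # crumbs:  [2, 3, 3, 2, 3, 2, 3, 3]
```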
In the past, uppercase K has been used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was never consistently applied. On the other hand, for external storage systems (such as optical discs), the SI prefixes are commonly used with their decimal values (powers of 10). There have been many attempts to resolve the confusion by providing alternative notations for power-of-two multiples. In 1998 the International Electrotechnical Commission (IEC) issued a standard for this purpose, namely a series of binary prefixes that use 1024 instead of 1000 as the main radix: kibi (Ki) = 2^10, mebi (Mi) = 2^20, gibi (Gi) = 2^30, tebi (Ti) = 2^40, and so on.[10]
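A minimal sketch contrasting the two conventions, assuming the SI and IEC prefix values given above (the function name format_size is my own, not a library routine):

```python
def format_size(num_bytes: int, binary: bool = False) -> str:
    """Render a byte count with SI (kB, MB, ...) or IEC (KiB, MiB, ...) prefixes."""
    base = 1024 if binary else 1000
    prefixes = ["", "Ki", "Mi", "Gi", "Ti"] if binary else ["", "k", "M", "G", "T"]
    value = float(num_bytes)
    for prefix in prefixes:
        if value < base or prefix == prefixes[-1]:
            return f"{value:.2f} {prefix}B"
        value /= base

n = 2**28                    # a typical memory-chip capacity
print(format_size(n))        # 268.44 MB  (decimal, SI prefixes)
print(format_size(n, True))  # 256.00 MiB (binary, IEC prefixes)
```

The same byte count thus reads about 4.9% larger under the decimal convention at the mega level, which is exactly the ambiguity the IEC prefixes were introduced to remove.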
The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage.[11]

Size examples
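Two well-known cases illustrate how the conventions are mixed in practice; the figures (a 2^28-byte memory chip and the 1,474,560-byte "1.44 MB" floppy disk) are standard, while the snippet itself is only a sketch:

```python
# A "256-megabyte" RAM chip actually holds a power of two:
chip = 2**28
print(chip)                # 268435456 bytes = 256 * 2**20 (256 MiB)

# The "1.44 MB" floppy famously mixes both conventions:
floppy = 1440 * 1024       # 1440 KiB of formatted capacity
print(floppy)              # 1474560 bytes
print(1.44 * 1000 * 1024)  # 1474560.0: 1.44 x (decimal kilo) x (binary Ki)
```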
Obsolete and unusual units

Several other units of information storage have been named over the years; some of these names are jargon, obsolete, or used only in very restricted contexts.