Bytes vs. bits


Definition: A byte is a sequence of bits. In computer networking, some network protocols send and receive data in the form of byte sequences. These are called byte-oriented protocols. Examples of byte-oriented protocols include TCP and Telnet.

The order in which bytes are sequenced in a byte-oriented network protocol is called the network byte order. The maximum size of a single unit of transmission for these protocols, the Maximum Transmission Unit (MTU), is also measured in bytes. Network programmers routinely work both with network byte ordering and MTUs.

Bytes are used not only in networking, but also for computer disks, memory, and central processing units (CPUs). In all modern network protocols, a byte contains eight bits. A few (generally obsolete) computers may use bytes of different sizes for other purposes.

The sequence of bytes in other parts of the computer may not follow the network byte order. Part of the job of the networking subsystem of a computer is to convert between the host byte order and network byte order when needed.
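The conversion between host and network byte order can be sketched with Python's standard library. This is a minimal illustration, not part of the original article; the value 0x0A0B0C0D is an arbitrary example.

```python
import socket
import struct

# An arbitrary 32-bit value to demonstrate byte ordering.
host_value = 0x0A0B0C0D

# socket.htonl converts a 32-bit integer from host to network byte order.
# On a big-endian machine this is a no-op; on little-endian it swaps bytes.
net_value = socket.htonl(host_value)

# struct makes the actual byte sequences explicit:
host_bytes = struct.pack("=I", host_value)  # native (host) byte order
net_bytes = struct.pack("!I", host_value)   # network (big-endian) order
print(net_bytes.hex())  # "0a0b0c0d" on every platform
```

The `!` format prefix in `struct` always means network (big-endian) order, which is why `net_bytes` is identical no matter which machine runs the code.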


Definition: A bit is the smallest unit of data transfer on a computer network. Bits represent the two binary values "on" or "off." Bits are often stored on computers as the digital numbers '1' and '0', but in networking, bits can also be "encoded" by electrical signals and pulses of light.

In computer networking, some network protocols send and receive data in the form of bit sequences. These are called bit-oriented protocols. Examples of bit-oriented protocols include PPP.

Though sometimes written in decimal or byte form, network addresses like IP addresses and MAC addresses are ultimately represented as bits in network communications.

Finally, special digital numbers called "keys" are often used to encrypt data on computer networks. The length of these keys is expressed in terms of number of bits. Generally, the greater the number of bits, the more effective that key is in protecting data. In wireless network security, for example, 40-bit WEP keys proved to be relatively insecure; longer 128-bit keys resist brute-force attacks better, although WEP as a whole has since been superseded by stronger standards such as WPA and WPA2.

Also Known As: binary digit

The terms bit and byte are common in computer networking. Both terms refer to digital data transmitted over a network connection. For example, bits and bytes both may represent network addresses or port numbers.

A bit, short for binary digit, is the smallest unit of measurement used for information storage in computers. A bit is represented by a 1 or a 0 with a value of true or false, sometimes expressed as on or off. Eight bits form a single byte of information, also known as an octet. Thus, the difference between a bit and a byte is size, or the amount of information stored.

A bit is a single numeric value, either '1' or '0', that encodes a single unit of digital information. A byte is a sequence of bits; usually eight bits equal one byte.

For example, in Internet Protocol (IP) networking, IP addresses contain 32 bits or 4 bytes. The bits encode the network address so that it can be shared on the network. The bytes divide the bits into groups.

The IP address 192.168.0.1, for instance, is encoded with the following bits and bytes:

    11000000 10101000 00000000 00000001
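The same conversion can be sketched in a few lines of Python; this is just an illustration of the grouping above, using the article's example address.

```python
# Print the four octets of an IPv4 address as 8-bit binary groups.
address = "192.168.0.1"
octets = [int(part) for part in address.split(".")]
bits = " ".join(f"{octet:08b}" for octet in octets)
print(bits)  # 11000000 10101000 00000000 00000001
```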
Bits are grouped into bytes to, generally speaking, increase the efficiency of computer hardware, including network equipment, disks and memory.

For example, it takes eight bits (1 byte) to store a single character. The capital letter “A” is expressed digitally as 01000001. A lowercase “a” is represented in binary code as 01100001. Notice that the third bit differs between the two octets. Because each of the eight bits can independently be 0 or 1, a byte can take on 256 unique values, enough to represent letters, numbers, special characters and symbols.
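These character codes are easy to verify with Python's built-in `ord`; this small check is an addition, not part of the original article.

```python
# "A" is character code 65, "a" is 97; their 8-bit patterns differ
# in a single bit (the third from the left).
upper = f"{ord('A'):08b}"
lower = f"{ord('a'):08b}"
print(upper)   # 01000001
print(lower)   # 01100001
print(2 ** 8)  # 256 possible values in one byte
```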

It can get confusing to keep units of storage straight, but if you have trouble remembering which is which, note that the smaller word (bit) is the smaller unit of storage. Once the difference between a bit and a byte is understood, it becomes easier to remember the difference between larger units such as the kilobit and kilobyte.

A kilobit is 1000 bits in decimal (SI) usage, although in binary contexts the multiplier 1024 (2 to the power of 10) is sometimes used instead, because storage and operating systems are organized around powers of two. For simplicity, however, you can think of kilo as referring to 1000 to more easily remember what a kilobit is. A kilobyte, then, would be 1000 bytes.

Knowing the difference between a bit and a byte helps in understanding megabits, megabytes, gigabits and gigabytes. For example, 1000 kilobits is 1 megabit, and 1000 kilobytes is 1 megabyte. Since a byte is 8x larger than a bit, one megabyte is 8x larger than one megabit. Following this pattern, 1000 megabits is 1 gigabit, and 1000 megabytes is 1 gigabyte.
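The unit ladder above can be written out as simple arithmetic; this sketch uses the decimal (SI) multipliers the text describes.

```python
# Decimal (SI) unit ladder: each step up multiplies by 1000,
# and a byte-based unit is always 8x the matching bit-based unit.
kilobit = 1_000              # bits
megabit = 1_000 * kilobit    # 1,000,000 bits
gigabit = 1_000 * megabit    # 1,000,000,000 bits
megabyte = 8 * megabit       # bits in one megabyte
print(megabyte // megabit)   # 8
```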

Internet connection speeds are expressed in terms of data transfer rates in both directions (uploading and downloading), as bits or bytes per second. Abbreviations are unfortunately not standardized, making it easy for customers or potential clients to confuse a bit and a byte when trying to determine bang for the buck. For example, a speed of “750 kbps” might be misinterpreted by a customer as meaning 750 kilobytes per second – or 8x faster than what the provider means.

In our earlier example, we used the binary number 10101110. This number has 8 binary digits, or 8 bits. This is not a coincidence, because if you take a group of 8 bits, you have a byte. The reason we have to group bits is that if you are being fed a continuous stream of bits, you have no idea where one piece of information ends and the next one starts. But if you receive groups of defined length, e.g. a byte containing 8 bits, it's easy to interpret them. A good example is how ASCII code (American Standard Code for Information Interchange) works. Any character you type on your keyboard is interpreted by your computer as a byte, an 8-digit binary number. For example, the letter "A" is expressed as the ASCII code 65. But 65 is a decimal number, so if you convert it to a binary number, you get 01000001. These 8 digits, or one byte, are known to your computer as the letter "A".

It is very important to know the difference between a bit and a byte because these two can get easily confused. One good example is monitoring data transfer speed. When you download a file from the Internet, you probably have noticed that your browser indicates the transfer rate in KBps. Please note that the letter "B" is capitalized. This means the transfer rate is shown in Kilo Bytes per second. For example, your download might arrive at a rate of 3.5 KBps. Now, if you are using a 56K modem, why is that rate so low, shouldn't you see something closer to 56? No, because 56K is short for 56 Kbps. Note the lower case "b", it means Kilo Bits per second. And since we know that 8 bits equal one byte, we divide 56 by 8 and get a theoretical maximum of 7 Kilo Bytes per second.
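The divide-by-8 arithmetic from the two paragraphs above can be captured in a tiny helper; the function name here is just an illustration, not an established API.

```python
# Lowercase "b" means bits: divide a bit rate by 8 to get bytes per second.
def kilobits_to_kilobytes_per_second(kilobits_per_second):
    return kilobits_per_second / 8

print(kilobits_to_kilobytes_per_second(56))   # 7.0 KBps for a "56K" modem
print(kilobits_to_kilobytes_per_second(750))  # 93.75 KBps, not 750
```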

A generally reliable rule of thumb is that lowercase abbreviations typically refer to bits, while a capital “B” typically refers to bytes. Hence kilobits per second would be “kbps” and kilobytes per second would be “KBps” or “kBps.” The same holds for megabits per second (Mbps) and megabytes per second (MBps). Bits might also be expressed as “Kbit,” “Mbit,” or “Gbit.”

A GigaByte does not necessarily equal a GigaByte

Here's another situation where you can look really smart if you paid attention and know the difference between the binary and the decimal system. This question is asked over and over again:

"I bought an 8.4 GB hard drive, but when I formatted it, Windows tells me that I have only 7.82 GB available. Did they sell me the wrong drive?"

No, they sold you the correct drive, but it was not labeled quite correctly. The marketing department of the drive manufacturer doesn't pay much attention to bits, bytes and the binary system. To make the numbers easier, they assume that 1 KB is 1000 Bytes, 1 MB is 1000 KB, etc., which does not match how the operating system counts. So when they have a drive that can hold 8,400,000,000 Bytes, they just call it 8.4 GB and say that's close enough for government work. Not so. The multiplication factor is not 1000, since we're not using the decimal system; it is 1024 instead (2 to the power of 10).

To figure out the correct size of that drive, divide 8,400,000,000 Bytes by 1024 and you'll get 8,203,125 KB. Divide that by 1024, and you get about 8,011 MB. Divide that by 1024 and you get 7.82 GB, which is the size of your hard drive as reported by your operating system.
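The same chain of divisions, written out as code to make the arithmetic easy to check:

```python
# A drive marketed as "8.4 GB" counted in binary (1024-based) units.
raw_bytes = 8_400_000_000
kb = raw_bytes / 1024   # 8,203,125 KB
mb = kb / 1024          # ~8,010.86 MB
gb = mb / 1024          # ~7.82 GB
print(round(gb, 2))     # 7.82
```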

Of Pits and Grooves

A CD stores data using the binary system in the following manner: when writing data to a CD, the laser simply follows a spiral-shaped "guide groove" while burning the data as a long string of pits into a reflective dye layer on the CD. When the CD is read later, a laser follows the same spiral and reads either a pit, where the light does not reflect very well, or a smooth area called land, where the light reflects very well. This is our good old binary system again, using only two possible states, pit or land, to record and read data in binary format, 0 or 1.

Magnetic storage

Storing data on a magnetic medium, such as a hard drive, also uses the binary system. In a very simplified manner of speaking, each data bit is stored on the drive as a tiny magnetic field. Each magnetic field has two poles, North and South. When the disk spins and data is read by the read head, the magnetic field has either the North or the South pole aligned first. Again, only two possible states: North or South, 0 or 1.

baud

Definition: A baud is a unit of measure for analog signaling: one baud is one symbol, or signal change, per second. With simple modulation, one symbol carries one bit, so one baud corresponds to one bit per second (bps); at higher signaling speeds, multiple bits can be transferred in a single symbol.
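The baud-versus-bps relationship is just multiplication; the figures below are a hypothetical example (2400 baud with 4 bits per symbol), not taken from the article.

```python
# Bit rate = symbol rate x bits carried per symbol.
baud = 2400          # symbols (signal changes) per second
bits_per_symbol = 4  # e.g. a modulation scheme with 16 signal states
bits_per_second = baud * bits_per_symbol
print(bits_per_second)  # 9600
```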

The term baud was popular in the early days of modems, but it is no longer relevant in mainstream computer networking terminology.

Definition: The data rate of a computer network connection is normally measured in units of bits per second (bps). Network equipment makers typically rate their products using related, larger units of Kbps, Mbps and Gbps.
  • one kilobit per second (Kbps) equals 1000 bits per second (bps). (Note: Kbps is sometimes also written as “kbps” - both carry the same meaning.)
  • one megabit per second (Mbps) equals 1000 Kbps or one million bps.
  • one gigabit per second (Gbps) equals 1000 Mbps or one million Kbps or one billion bps.

Data rates for non-network equipment are sometimes shown in bytes per second (Bps) rather than bits per second. In those cases,

  • one KBps equals one kilobyte per second
  • one MBps equals one megabyte per second, and
  • one GBps equals one gigabyte per second

Finally, one kilobyte per second equals 8 kilobits per second.

Also Known As: Kb/sec, Kb/s, Mb/sec, Mb/s, Gb/sec, Gb/s
Examples:
Old V.90 modems supported data rates up to 56 Kbps. Traditional Ethernet supported data rates up to 10 Mbps while Fast Ethernet provides 100 Mbps and Gigabit Ethernet supports 1000 Mbps (1 Gbps).


What is the difference between bps and Bps?

Q. "Is there any difference between bps (small 'b') and Bps (capital 'b')?"

A. The term "bps" specifies network bandwidth in bits per second. The term "Bps" specifies network bandwidth in bytes per second.

bandwidth

Definition: Bandwidth in computer networking refers to the data rate supported by a network connection or interface. One most commonly expresses bandwidth in terms of bits per second (bps). The term comes from the field of electrical engineering, where bandwidth represents the range between the highest and lowest frequencies on the communication channel (band).

Bandwidth represents the capacity of the connection. The greater the capacity, the more likely that greater performance will follow, though overall performance also depends on other factors, such as latency.

Also Known As: throughput
Examples:
A V.90 modem supports a maximum theoretical bandwidth of 56 Kbps. Fast Ethernet supports a theoretical maximum bandwidth of 100 Mbps.
