Which of the following determines the number of bits that the computer can transmit?

Wireless Networks

Jean Walrand, Pravin Varaiya, in High-Performance Communication Networks (Second Edition), 2000

Channel coding adds redundant bits to the transmitted bit stream, which are used by the receiver to correct errors introduced by the channel. This allows a reduction in transmit power to achieve a given target BER and also prevents packet retransmissions if all the bit errors in a packet can be corrected. Conventional forward error-correction (FEC) codes use block or convolutional code designs to produce the redundant bits; the error-correction capability of these codes is obtained at the expense of increased signal bandwidth or a lower data rate. Trellis codes, invented in the early 1980s, use a joint design of the channel code and the modulation to provide FEC without bandwidth expansion or rate reduction.
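
As a concrete illustration of redundant bits being used to correct channel errors, the sketch below (not taken from the chapter) encodes four data bits with a Hamming(7,4) block code and shows the receiver correcting a single flipped bit. The function names and bit ordering are illustrative choices.

```python
# Minimal sketch: a Hamming(7,4) block code as an example of FEC adding
# redundant bits that let the receiver correct one bit error per codeword.

def hamming74_encode(d):            # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4               # parity over codeword positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4               # parity over codeword positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_decode(c):            # c: list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = error position (1..7); 0 means no error
    if pos:
        c[pos - 1] ^= 1             # correct the single-bit error
    return [c[2], c[4], c[5], c[6]] # recover the 4 data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                    # the channel flips one bit
print(hamming74_decode(codeword))   # -> [1, 0, 1, 1], corrected without retransmission
```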

The latest advance in coding technology is the family of Turbo codes, invented in 1993. Turbo codes, which achieve within a fraction of a dB of the Shannon capacity on certain channels, are complex code designs that combine an encoded version of the original bit stream with an encoded version of one or more permutations of this original bit stream. The optimal decoder for the resulting code is very complex, but Turbo codes use an iterative technique to approximate the optimal decoder with reasonable complexity. While Turbo codes exhibit dramatically improved performance over previous coding techniques and can be used with either binary or multilevel modulation, they generally have high complexity and large delays, which makes them unsuitable for many wireless applications.

Another way to compensate for the channel errors prevalent in wireless systems is to implement link layer retransmission using the Automatic Repeat Request (ARQ) protocol. In ARQ, data is collected into packets, and each packet is encoded with a checksum. The receiver uses the checksum to determine whether one or more bits in the packet were received in error. If an error is detected, the receiver requests a retransmission of the entire packet. Link layer retransmission is wasteful, since each retransmission requires additional power and bandwidth and also interferes with other users. In addition, packet retransmissions can cause data to be delivered to the receiver out of order, as well as triggering duplicate acknowledgments or end-to-end retransmissions at the transport layer, further burdening the network. Despite these disadvantages, in applications that require error-free packet delivery at the link layer, FEC alone is not sufficient, so ARQ is the only alternative.
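
A minimal sketch of the ARQ idea described above, assuming CRC-32 as the checksum (the chapter does not specify one); the packet format and function names are illustrative.

```python
# Minimal sketch: link-layer ARQ with a checksum. The receiver verifies the
# checksum and asks for a retransmission (NAK) when any bit arrived in error.

import zlib

def make_packet(payload: bytes) -> bytes:
    checksum = zlib.crc32(payload).to_bytes(4, "big")   # CRC-32 used here as the checksum
    return payload + checksum

def receive(packet: bytes) -> str:
    payload, checksum = packet[:-4], packet[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") == checksum:
        return "ACK"        # packet accepted
    return "NAK"            # error detected: request retransmission of the entire packet

pkt = make_packet(b"hello link layer")
print(receive(pkt))                           # ACK
corrupted = bytes([pkt[0] ^ 0x01]) + pkt[1:]  # the channel flips one bit
print(receive(corrupted))                     # NAK -> the sender retransmits the packet
```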


URL: https://www.sciencedirect.com/science/article/pii/B9780080508030500125

Modems

William Buchanan BSc (Hons), CEng, PhD, in Computer Busses, 2000

15.11.1 Modified Huffman coding

Group III compression uses a modified Huffman code to compress the transmitted bit stream. It uses a table of codes in which the most frequent run lengths are coded with a short code. Typically, documents contain long runs of white or black. A compression ratio of over 10:1 is easily achievable (thus a single-page document can be sent in under 20 s at a 9600 bps transmission rate). Table 15.5 shows some codes for runs of white and Table 15.6 shows some codes for runs of black. The transmitted code always starts with a white code. Run lengths from 0 to 63 are coded with a single code; run lengths from 64 to 2560 use two codes, the first giving the multiple of 64 and the second the normally coded remainder.

Table 15.5. White run-length coding

Run length   Coding      Run length   Coding      Run length   Coding
0            00110101    1            000111      2            0111
3            1000        4            1011        5            1100
6            1110        7            1111        8            10011
9            10100       10           00111       11           01000
12           001000      13           000011      14           110100
15           110101      16           101010      17           101011
18           0100111     19           0001100     61           00110010
62           00110011    63           00110100    EOL          00000000001

Table 15.6. Black run-length coding

Run length   Coding        Run length   Coding        Run length   Coding
0            0000110111    1            010           2            11
3            10            4            011           5            0011
6            0010          7            00011         8            000101
9            000100        10           0000100       11           0000101
12           0000111       13           00000100      14           00000111
15           000011000     16           0000010111    17           0000011000
18           0000001000    19           00001100111   61           000001011010
62           000001100110  63           000001100111  EOL          00000000001

For example, if the data to be encoded is:

16 white, 4 black, 16 white, 2 black, 63 white, 10 black, 63 white

it would be coded as:

101010 011 101010 11 00110100 0000100 00110100

This would take 40 bits to transmit, whereas it would take 174 bits without coding (i.e. 16 + 4 + 16 + 2 + 63 + 10 + 63). This results in a compression ratio of about 4.4:1.
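
The encoding step amounts to a table lookup per run. The sketch below (illustrative only) uses just the code-table entries needed for this example and reproduces the coded bit stream above.

```python
# Minimal sketch of modified Huffman run-length encoding, using only the
# table entries quoted above (the full tables cover all runs from 0 to 63).

WHITE = {16: "101010", 63: "00110100"}
BLACK = {2: "11", 4: "011", 10: "0000100"}

def encode(runs):
    """runs: list of (colour, length) pairs, always starting with a white run."""
    out = []
    for colour, length in runs:
        table = WHITE if colour == "white" else BLACK
        out.append(table[length])
    return " ".join(out)

runs = [("white", 16), ("black", 4), ("white", 16), ("black", 2),
        ("white", 63), ("black", 10), ("white", 63)]
bits = encode(runs)
print(bits)   # 101010 011 101010 11 00110100 0000100 00110100
print(len(bits.replace(" ", "")), "coded bits versus",
      sum(length for _, length in runs), "uncoded pixels")   # 40 versus 174
```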


URL: https://www.sciencedirect.com/science/article/pii/B9780340740767500158

Networking in CCTV

Vlado Damjanovski, in CCTV (Third Edition), 2014

The Data link layer describes the logical organization of data bits transmitted on a particular medium. This layer defines the framing, addressing, and check-summing of Ethernet packets. The main task of the Data link layer is to transform a raw transmission facility into a line that appears free of transmission errors to the Network layer. It accomplishes this task by having the sender break the input data up into data frames (typically a few hundred bytes), transmit the frames sequentially, and process the acknowledgment frames sent back by the receiver. Since the Physical layer merely accepts and transmits a stream of bits without any regard to meaning or structure, it is up to the Data link layer to create and recognize frame boundaries. This can be accomplished by attaching special bit patterns to the beginning and end of the frame. If there is a chance that these bit patterns might occur in the data, special care must be taken to avoid confusion. The Data link layer should provide error control between adjacent nodes.
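
As one concrete example of such special bit patterns, the sketch below (not from this chapter) shows HDLC-style framing, where the flag 01111110 marks the frame boundaries and bit stuffing keeps the flag from ever appearing inside the data.

```python
# Minimal sketch of flag-based framing with bit stuffing.

FLAG = "01111110"

def frame(bits: str) -> str:
    stuffed, ones = [], 0
    for b in bits:
        stuffed.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:                # after five consecutive 1s ...
            stuffed.append("0")      # ... insert a 0 so the flag cannot occur in the payload
            ones = 0
    return FLAG + "".join(stuffed) + FLAG

print(frame("0111111101111100"))
# -> 01111110 011111011011111000 01111110 (spaces added here only for readability)
```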


Another issue that arises in the Data link layer (and most of the higher layers as well) is how to keep a fast transmitter from “drowning” a slow receiver in data. Some traffic regulation mechanism must be employed in order to let the transmitter know how much buffer space the receiver has at the moment. Frequently, flow regulation and error handling are integrated for convenience.

If the line can be used to transmit data in both directions, this introduces a new complication for the Data link layer software. The problem is that the acknowledgment frames for A to B traffic compete for use of the line with data frames for the B to A traffic. A clever solution in the form of piggybacking has been devised.


URL: https://www.sciencedirect.com/science/article/pii/B9780124045576500112

Ethernet

William Buchanan BSc (Hons), CEng, PhD, in Computer Busses, 2000

26.12.2 Manchester decoder

Manchester coding has the advantage of embedding timing (clock) information within the transmitted bits. A positively edged pulse (low to high) represents a 1 and a negatively edged pulse (high to low) a 0, as shown in Figure 26.17. Another advantage of this coding method is that the average voltage is always zero when used with equal positive and negative voltage levels.


Figure 26.17. Manchester encoding

Figure 26.18 is an example of transmitted bits using Manchester encoding. The receiver passes the received Manchester-encoded bits through a low-pass filter. This extracts the lowest frequency in the received bit stream, i.e. the clock frequency. With this clock the receiver can then determine the transmitted bit pattern.


Figure 26.18. Example of Manchester coding
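
A minimal sketch of the encoding rule described above (illustrative, not Ethernet hardware): each bit becomes two half-bit levels, a 1 is sent as a low-to-high transition and a 0 as a high-to-low transition, so the clock is embedded in the waveform and the average level is zero.

```python
# Minimal sketch of Manchester encoding/decoding with the convention used here:
# 1 = low-to-high transition, 0 = high-to-low transition.

def manchester_encode(bits):
    levels = []
    for b in bits:
        levels += [-1, +1] if b == 1 else [+1, -1]   # two half-bit levels per bit
    return levels

def manchester_decode(levels):
    # look at the two halves of each bit cell; the mid-cell transition gives the bit
    return [1 if levels[i] < levels[i + 1] else 0 for i in range(0, len(levels), 2)]

tx = [1, 0, 1, 1, 0]
line = manchester_encode(tx)
print(line)                      # [-1, 1, 1, -1, -1, 1, -1, 1, 1, -1]
print(manchester_decode(line))   # [1, 0, 1, 1, 0]
print(sum(line))                 # 0 -> the average voltage is zero
```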

For Manchester decoding, the Manchester-encoded signal is first synchronised to the receiver (called bit synchronisation). A transition in the middle of each bit cell is used by a clock recovery circuit to produce a clock pulse in the centre of the second half of the bit cell. In Ethernet the bit synchronisation is achieved by deriving the clock from the preamble field of the frame using a clock and data recovery circuit. Many Ethernet decoders use the SEEQ 8020 Manchester code converter, which uses a phase-locked loop (PLL) to recover the clock. The PLL is designed to lock onto the preamble of the incoming signal within 12 bit cells. Figure 26.19 shows a circuit schematic of bit synchronisation using Manchester decoding and a PLL.


Figure 26.19. Manchester decoding with bit synchronization

The PLL is a feedback circuit commonly used for the synchronisation of digital signals. It consists of a phase detector (such as an EXOR gate) and a voltage-controlled oscillator (VCO) which uses a crystal oscillator as a clock source. The frequency of the crystal is twice the frequency of the received signal, and it is stable enough that only small, occasional adjustments are needed to keep it synchronised to the received signal. The function of the phase detector is to detect phase differences between the two signals and adjust the VCO to minimise the error. This is accomplished by comparing the received signal with the output from the VCO. When the two signals have the same frequency and phase, the PLL is locked. Figure 26.20 shows the PLL components and the function of the EXOR gate.


Figure 26.20. PLL and example waveform for the phase detector
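
The EXOR phase detector idea can be sketched numerically (illustrative only, with arbitrary sample counts): XOR-ing the received square wave with the VCO output and averaging the result gives a value that grows with the phase error, which is what the loop uses to steer the VCO.

```python
# Minimal sketch of an EXOR phase detector: the averaged XOR of two square
# waves rises from 0 (in phase) toward 1 (half a period out of phase).

def square_wave(n_samples, period, phase=0):
    return [1 if ((i + phase) % period) < period // 2 else 0
            for i in range(n_samples)]

def phase_error(received, vco):
    xor = [a ^ b for a, b in zip(received, vco)]   # the EXOR gate
    return sum(xor) / len(xor)                     # averaged (low-pass filtered) output

vco = square_wave(1000, period=20)
for offset in (0, 2, 5, 10):                       # 0%, 10%, 25%, 50% of a period
    rx = square_wave(1000, period=20, phase=offset)
    print(offset, phase_error(rx, vco))            # 0.0, 0.2, 0.5, 1.0
```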


URL: https://www.sciencedirect.com/science/article/pii/B9780340740767500262

Guidelines and criteria for selecting the optimal low-power wide-area network technology

Guillermo del Campo, ... Asuncion Santamaria, in LPWAN Technologies for IoT and M2M Applications, 2020

13.2.1.2 Modulation method

Achieving long range requires lowering the modulation rate to put more energy into each transmitted bit (or symbol), which eases the decoding task of the receiver. Accordingly, LPWAN modulation techniques can be divided into two families: narrowband (NB) and spread spectrum (SS) techniques.

Narrowband techniques concentrate the signal energy within a very narrow band (below 25 kHz), minimizing the noise level, reducing transceiver complexity, and allowing the spectrum to be shared between multiple links. Some LPWAN technologies reduce the width of the carriers further, to ultra-narrowband (UNB, down to 100 Hz), lowering the noise and increasing the number of supported devices. At the same time, UNB reduces the data rate and requires longer transmission times.

On the other hand, spread spectrum techniques spread the same signal energy over a wider frequency band. Received signals are usually below the noise floor, which makes interference and eavesdropping more difficult but requires more complex transceivers.

The majority of the LPWAN technologies use narrowband with different modulation techniques: NB-IoT (QPSK—quadrature phase-shift keying), LTE-CatM (16QAM—quadrature amplitude modulation), Weightless (GMSK—Gaussian minimum-shift keying), SNOW (BPSK—binary phase-shift keying), GSM-IoT (GMSK, 8PSK—phase-shift keying), IQRF (GFSK—Gaussian frequency-shift keying), DASH7 (GFSK). SigFox (DBPSK—differential BPSK, GFSK), MIOTY (GMSK), and Telensa (2-FSK) adopt UNB modulation techniques. Different spread spectrum techniques are used by LoRa (CSS—chirp spread spectrum) and Ingenu-RPMA (DSSS—direct sequence spread spectrum), while Wi-SUN uses both (DSSS, FSK).


URL: https://www.sciencedirect.com/science/article/pii/B9780128188804000144

Advanced PIC18 Projects

Dogan Ibrahim, in PIC Microcontroller Projects in C (Second Edition), 2014

Nominal Bit Timing

The CAN bus nominal bit rate (NBR) is defined as the number of bits transmitted every second without resynchronization. The inverse of the NBR is the nominal bit time. All devices on the CAN bus must use the same bit rate, even though each device can have its own different clock frequency. One message bit consists of four nonoverlapping time segments:

Synchronization segment (Sync_Seg),

Propagation time segment (Prop_Seg),

Phase buffer segment 1 (Phase_Seg1),

Phase buffer segment 2 (Phase_Seg2).

The Sync_Seg segment is used to synchronize various nodes on the bus, and an edge is expected to lie within this segment. The Prop_Seg segment compensates for the physical delay times within the network. The Phase_Seg1 and Phase_Seg2 segments are used to compensate for edge phase errors. These segments can be lengthened or shortened by synchronization. The sample point is the point in time where the actual bit value is located. The sample point is at the end of Phase_Seg1. A CAN controller can be configured to sample three times and use a majority function to determine the actual bit value.

Each segment is divided into units known as Time Quanta (TQ). A desired bit timing can be set by adjusting the number of TQ that make up one message bit and the number of TQ in each of its segments. The TQ is a fixed unit of time derived from the oscillator period, and the number of TQ in each segment can vary from 1 to 8. The length of each time segment is as follows:

Sync_Seg is one time quantum long.

Prop_Seg is programmable to be one to eight time quanta long.

Phase_Seg1 is programmable to be one to eight time quanta long.

Phase_Seg2 is programmable to be two to eight time quanta long.

By setting the bit timing, it is possible to set a sampling point so that multiple units on the bus can sample messages with the same timing.

The nominal bit time is programmable from a minimum of eight time quanta to a maximum of 25 time quanta. By definition, the minimum nominal bit time is 1 μs corresponding to a maximum 1 Mb/s rate. The nominal bit time (TBIT) is given by

(7.1) TBIT = TQ × (Sync_Seg + Prop_Seg + Phase_Seg1 + Phase_Seg2)

and the NBR is

NBR = 1/TBIT.

The time quantum is derived from the oscillator frequency and the programmable baud rate prescaler, with integer values from 1 to 64. The time quantum can be expressed as

TQ = 2 × (BRP + 1)/FOSC,

where TQ is in microseconds, FOSC is in megahertz, and BRP is the baud rate prescaler (0–63).

We can also write

TQ = 2 × (BRP + 1) × TOSC,

where TOSC is the oscillator period in microseconds.

An example is given below for the calculation of the NBR.

Example 7.1

Assuming a clock frequency of 20 MHz, a baud rate prescaler value of 1, and a nominal bit time of TBIT = 8 × TQ, determine the NBR.

Solution 7.1

From the above equations,

TQ = 2 × (1 + 1)/20 = 0.2 μs.

Also,

TBIT = 8 × TQ = 8 × 0.2 = 1.6 μs

and

NBR = 1/TBIT = 1/1.6 μs = 625,000 bits/s or 625 kb/s.
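
The same calculation can be written as a short script (a sketch, not vendor code). The split of the eight time quanta among the segments is an assumed example, since only the total TBIT = 8 × TQ is specified above.

```python
# Worked check of Example 7.1: time quantum, nominal bit time, and nominal
# bit rate from the equations above.

def can_bit_rate(fosc_mhz, brp, sync=1, prop=3, ps1=2, ps2=2):
    tq = 2 * (brp + 1) / fosc_mhz            # time quantum in microseconds
    tbit = tq * (sync + prop + ps1 + ps2)    # nominal bit time in microseconds
    nbr = 1e6 / tbit                         # nominal bit rate in bits per second
    return tq, tbit, nbr

# FOSC = 20 MHz, BRP = 1, segments chosen (as an example) so that TBIT = 8 * TQ
tq, tbit, nbr = can_bit_rate(fosc_mhz=20, brp=1)
print(tq, tbit, nbr)                         # 0.2 us, 1.6 us, 625000 bits/s (625 kb/s)
```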

To compensate for phase shifts between the oscillator frequencies of the nodes on the bus, each CAN controller must synchronize to the relevant signal edge of the received signal. Two types of synchronization are defined: hard synchronization and resynchronization. Hard synchronization is done only at the beginning of a message frame, when each CAN node aligns the Sync_Seg of its current bit time to the recessive-to-dominant edge of the transmitted start-of-frame (SOF) bit. According to the rules of synchronization, if a hard synchronization occurs, there will not be a resynchronization within that bit time.

With resynchronization, the Phase_Seg1 may be lengthened or Phase_Seg2 may be shortened. The amount of change of the phase buffer segments has an upper bound given by the Synchronization Jump Width (SJW). The SJW is programmable between 1 and 4, and its value is added to Phase_Seg1, or subtracted from Phase_Seg2.


URL: https://www.sciencedirect.com/science/article/pii/B9780080999241000071

Applications

Ali N. Akansu, Richard A. Haddad, in Multiresolution Signal Decomposition (Second Edition), 2001

Subband Image Codec

The multiresolution or scaleability feature for visual signals is a desirable one that generated significant research and development on subband image coding. Scaleability allows the transmitted bit stream to be decoded in different spatial resolutions for different transmission channel properties or application requirements. Digital image/video libraries, on-demand image/video retrieval over the Internet, and real-time video conferencing are three examples that naturally benefit from a scaleable bit stream.

Separable 2D subband decomposition basically employs 1D filter bank operations back-to-back, in both horizontal and vertical dimensions. Figure 7.4 displays the images produced by a single-stage (four-band) subband image encoder that first decomposes an input image into four subband images: xLL(n), xLH(n), xHL(n), and xHH(n). The available bit budget for quantization (entropy reduction) purposes is distributed among these subband images based on their energies. The quantized subband images go through an entropy encoder, such as a Huffman coder, and four bit streams, namely {bLL}, {bLH}, {bHL}, and {bHH}, are obtained. Note that all these bit streams are required at the decoder in order to reconstruct the compressed version of the original image. On the other hand, only the {bLL} bit stream is needed if one desires to reconstruct only a quarter-size version of the compressed image at the receiver. Hence, the compressed bit stream is scaleable.


Figure 7.4. (a) Original input LENA image; (b) L and H subbands (Horizontal); (c) LL, LH, HL and HH decomposed subband images.
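
A minimal sketch (not from the chapter) of the single-stage, four-band separable decomposition described above, using simple Haar averaging/differencing filters as the 1D filter bank; the LL/LH/HL/HH labelling here simply follows the row-then-column order of the splits, and NumPy is assumed.

```python
# Minimal sketch: one stage of separable 2D subband decomposition with Haar
# filters, applied first along rows and then along columns, producing four
# quarter-size subband images.

import numpy as np

def haar_split(x, axis):
    """Low-pass (average) and high-pass (difference) halves along one axis."""
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (even + odd) / 2.0, (even - odd) / 2.0

def subband_decompose(image):
    low, high = haar_split(image, axis=1)   # horizontal filtering + decimation
    xLL, xLH = haar_split(low, axis=0)      # vertical split of the L band
    xHL, xHH = haar_split(high, axis=0)     # vertical split of the H band
    return xLL, xLH, xHL, xHH

image = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for an input image
bands = subband_decompose(image)
print([b.shape for b in bands])                   # four quarter-size subbands, (4, 4) each
```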

A practical subband image encoder repeatedly uses a four-band (single-stage) analysis filter bank cell for further spatial resolutions and improved compression efficiency. In most cases, the low-pass band goes through additional decompositions, since significant image energy resides in that region of the spectrum.

The purpose of this section is to connect subband theory with subband coding applications. We have provided only a broad discussion of the fundamental issues in this application area; a rigorous treatment is beyond the scope of this book.

The literature is full of excellent books, book chapters, and technical papers on subband image and video coding. The reader is referred to Nosratinia et al. (1999); Girod, Hartung, and Horn (1996); Clarke (1995); and Woods (1991) for further studies.


URL: https://www.sciencedirect.com/science/article/pii/B9780120471416500070

Digital Modulation

Walter Ciciora, ... Michael Adams, in Modern Cable Television Technology (Second Edition), 2004

4.2.4 Introduction to Bit Error Rate

An important figure of merit for any data communications system is the bit error rate (BER) of the system. This represents the proportion of the bits transmitted that are received incorrectly. If, on the average, two bits are in error for every million (10⁶) bits transmitted, then the bit error rate is

(4.1) BER = (number of bits received in error)/(total number of bits transmitted) = 2/10⁶ = 2 × 10⁻⁶

The BER is a function of the type of modulation, the characteristics of the modulator and demodulator, and the quality of the transmission channel. A common way to compare different modulation systems is to plot the BER against the signal-to-noise ratio (data communications practitioners tend not to differentiate between “signal” and “carrier” to noise). This is a convenient way to compare the capabilities of different modulation techniques and to predict performance in the presence of noise. This topic is covered in Section 4.2.12. Sometimes BER is plotted against signal-to-noise ratio in the presence of certain other impairments, such as group delay distortion, interference, and phase noise.
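
Equation (4.1) amounts to counting errored bits over a run, as in this small sketch (the bit pattern and error positions are illustrative values only):

```python
# Minimal sketch of equation (4.1): BER is the fraction of transmitted bits
# that arrive in error.

transmitted = [0, 1] * 500_000                  # 10^6 bits
received = transmitted.copy()
received[3] ^= 1                                # the channel flips two bits
received[777_777] ^= 1

errors = sum(t != r for t, r in zip(transmitted, received))
ber = errors / len(transmitted)
print(errors, ber)                              # 2 errors -> BER = 2e-06
```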

Obtaining an equation relating carrier-to-noise ratio to BER for an FSK system is difficult due to the nonlinear nature of the frequency modulation process. The literature contains only a few attempts, and contradictions exist. Within the parameters commonly used in cable television work (very wide deviation compared with the bit rate), the BER can be quite good at very low C/N. The penalty is that a significant amount of bandwidth is consumed, compared with that required for other modulation methods operating at the same bit rate.


URL: https://www.sciencedirect.com/science/article/pii/B9781558608283500060

Expanding the Centronic, RS232 and game ports

Pei An BSc PhD, in PC Interfacing, 1998

4.7.2 Operation of the I2C bus

Before any data is transmitted on the bus, the device which should respond is addressed first. This is carried out by transmitting, after a start condition, an address byte consisting of the 7-bit device address plus the R/-W bit. A typical address byte has the following format:

Fixed address bits + programmable address bits + R/-W bit (8 bits in total)

The fixed address depends on the IC and cannot be changed. The programmable address bits can be set using the address pins on the chip. The last bit is the read/write (R/-W) bit, which indicates the direction of data flow. The byte following the address byte is the control byte, which depends on the IC used. Following the control byte are the data bytes. The serial data has the format shown in Figure 4.21.


Figure 4.21. Serial data format on the bus
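
The address-byte format described above can be sketched as follows (illustrative only; the 4-bit fixed pattern 1010 and the 4+3 bit split are example values, not taken from the text, since the split varies from IC to IC):

```python
# Minimal sketch of assembling an I2C address byte: fixed address bits,
# programmable address bits set by the chip's address pins, and the R/-W bit.

def i2c_address_byte(fixed: int, programmable: int, read: bool) -> int:
    """fixed: upper 4 address bits, programmable: lower 3 bits (A2..A0)."""
    address7 = ((fixed & 0x0F) << 3) | (programmable & 0x07)   # 7-bit device address
    return (address7 << 1) | (1 if read else 0)                # R/-W is the last bit sent

byte = i2c_address_byte(fixed=0b1010, programmable=0b011, read=False)
print(f"{byte:08b}")    # 10100110 -> address 0b1010011, write
```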


URL: https://www.sciencedirect.com/science/article/pii/B9780750636377500067

Modulation and Demodulation

Rajiv Ramaswami, ... Galen H. Sasaki, in Optical Networks (Third Edition), 2010

4.4.2 A Practical Direct Detection Receiver

As we have seen in Section 3.6 (see Figure 3.61), the optical signal at the receiver is first photodetected to convert it into an electrical current. The main complication in recovering the transmitted bit is that, in addition to the photocurrent due to the signal, there are usually three other noise currents. The first is the thermal noise current due to the random motion of electrons that is always present at any finite temperature. The second is the shot noise current due to the random distribution of the electrons generated by the photodetection process even when the input light intensity is constant. The shot noise current, unlike the thermal noise current, is not added to the generated photocurrent but is merely a convenient representation of the variability in the generated photocurrent as a separate component. The third source of noise is the spontaneous emission due to optical amplifiers that may be used between the source and the photodetector. The amplifier noise currents are treated in Section 4.4.5 and Appendix I. In this section, we will consider only the thermal noise and shot noise currents.

The thermal noise current in a resistor R at temperature T can be modeled as a Gaussian random process with zero mean and autocorrelation function (4kBT/R)δ(τ). Here kB is Boltzmann's constant, with the value 1.38 × 10⁻²³ J/K, and δ(τ) is the Dirac delta function, defined by δ(τ) = 0 for τ ≠ 0 and ∫δ(τ)dτ = 1, where the integral is taken over all τ. Thus the noise is white, and in a bandwidth or frequency range Be, the thermal noise current has the variance

σ²thermal = (4kBT/R)Be.

This value can be expressed as I_T²Be, where I_T is the parameter used to specify the current standard deviation in units of pA/√Hz. Typical values are of the order of 1 pA/√Hz.

The electrical bandwidth of the receiver, Be, is chosen based on the bit rate of the signal. In practice, Be varies from 1/2T to 1/T, where T is the bit period. We will also be using the parameter B0 to denote the optical bandwidth seen by the receiver. The optical bandwidth of the receiver itself is very large, but the value of B0 is usually determined by filters placed in the optical path between the transmitter and receiver. By convention, we will measure Be in baseband units and B0 in passband units. Therefore, the minimum possible value of B0 = 2Be, to prevent signal distortion.

As we saw in the previous section, the photon arrivals are accurately modeled by a Poisson random process. The photocurrent can thus be modeled as a stream of electronic charge impulses, each generated whenever a photon arrives at the photodetector. For signal powers that are usually encountered in optical communication systems, the photocurrent can be modeled as

I = Ī + i_s,

where Ī is a constant current, and i_s is a Gaussian random process with mean zero and autocorrelation σ²shot δ(τ). For pin diodes, σ²shot = 2eĪ. This is derived in Appendix I. The constant current Ī = RP, where R is the responsivity of the photodetector, which was discussed in Section 3.6. Here, we are assuming that the dark current, which is the photocurrent that is present in the absence of an input optical signal, is negligible. Thus the shot noise current is also white and in a bandwidth Be has the variance

(4.2) σ²shot = 2eĪBe.

If we denote the load resistor of the photodetector by RL, the total current in this resistor can be written as

I = Ī + i_s + i_t,

where i_t has the variance σ²thermal = (4kBT/RL)Be. The shot noise and thermal noise currents are assumed to be independent so that, if Be is the bandwidth of the receiver, this current can be modeled as a Gaussian random process with mean Ī and variance

σ² = σ²shot + σ²thermal.

Note that both the shot noise and thermal noise variances are proportional to the bandwidth Be of the receiver. Thus there is a trade-off between the bandwidth of a receiver and its noise performance. A receiver is usually designed so as to have just sufficient bandwidth to accommodate the desired bit rate so that its noise performance is optimized. In most practical direct detection receivers, the variance of the thermal noise component is much larger than the variance of the shot noise and determines the performance of the receiver.
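
A small sketch of the two variances above, with assumed example values for temperature, load resistance, bandwidth, and photocurrent (none of these numbers come from the text), showing the thermal term dominating:

```python
# Minimal sketch: thermal and shot noise variances of a direct detection
# receiver, using the expressions given above and assumed example values.

K_B = 1.38e-23      # Boltzmann's constant, J/K
E = 1.6e-19         # electron charge, C

def noise_variances(T, R_L, B_e, I_bar):
    sigma2_thermal = (4 * K_B * T / R_L) * B_e   # thermal noise variance, A^2
    sigma2_shot = 2 * E * I_bar * B_e            # shot noise variance, A^2
    return sigma2_thermal, sigma2_shot

# assumed values: T = 300 K, RL = 100 ohm, Be = 2 GHz, photocurrent = 1 uA
th, sh = noise_variances(T=300, R_L=100, B_e=2e9, I_bar=1e-6)
print(th, sh, th + sh)   # thermal term dominates; total is sigma^2 = sigma^2_shot + sigma^2_thermal
```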


URL: https://www.sciencedirect.com/science/article/pii/B9780123740922500126

What is the number of bits the processor can interpret?

Word size is the number of bits the processor can interpret and execute at a given time.

What is the number of bits transferred or received per unit of time?

Bandwidth is measured in terms of bit rate (or data rate), the number of bits transferred or received per unit of time.

What refers to the number of bits per second that can be transmitted over a communications medium?

Bandwidth is the number of bits per second that can be transmitted over the communications medium.

Which of the following allows the processor to communicate with peripheral devices?

Peripheral devices communicate with the processor using a bus such as PCI (Peripheral Component Interconnect), which connects the devices to the processor.