Chapter 3 Yearwise Marking

Explain the different types of Data link layer framing mechanisms.

[5,5,6,8]

Frames are the units of digital transmission in computer networks and telecommunications, comparable to photons as the packets of energy in light. Frames are also used continuously in the Time Division Multiplexing process.

A point-to-point connection between two computers or devices consists of a wire over which data is transmitted as a stream of bits. However, these bits must be framed into discernible blocks of information. Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their own frame structures. Frames have headers that contain information such as error-checking codes.
A frame has the following parts:
 Frame Header: It contains the source and destination addresses of the frame.
 Payload Field: It contains the message to be delivered.
 Trailer: It contains the error detection and error correction bits.
 Flag: It marks the beginning and end of the frame.

There are two types of framing techniques:


1. Fixed Size:
In fixed-size framing, the length of the frame itself acts as the delimiter, so there is no need to define frame boundaries explicitly. An example of this type of framing is the ATM wide-area network, which uses fixed-size frames called cells.
2. Variable Size:
In variable-size framing, we need a way to define the end of one frame and the beginning of the next. Historically, two approaches were used for this purpose: a character-oriented approach and a bit-oriented approach. The end and beginning of a frame can be marked in two ways:
1. Length Field: A length field can be included in the frame to indicate the length of the frame; this is used in Ethernet. The problem is that the length field itself may get corrupted.
2. End Delimiter (ED): A special pattern (ED) can be used to indicate the end of the frame; this is used in Token Ring. The problem is that the ED pattern can also occur in the data. This is solved by character/byte stuffing or bit stuffing, as sketched below.
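As a brief illustration of the byte stuffing mentioned above, here is a minimal sketch (not part of the original notes) that assumes an HDLC-style FLAG byte 0x7E and an ESC byte 0x7D; the sender escapes any FLAG or ESC that happens to appear in the payload so the receiver can find the frame boundaries unambiguously:

FLAG = 0x7E  # frame delimiter (assumed value, HDLC-style)
ESC = 0x7D   # escape byte inserted before any accidental FLAG/ESC in the data

def byte_stuff(payload: bytes) -> bytes:
    """Return a full frame: FLAG + stuffed payload + FLAG."""
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)  # escape the conflicting byte
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and remove the escape bytes."""
    body = frame[1:-1]
    out, escaped = bytearray(), False
    for b in body:
        if not escaped and b == ESC:
            escaped = True       # next byte is literal data
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
frame = byte_stuff(data)
assert byte_unstuff(frame) == data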
What are the major functions of data link layer? [4,3,3]
The data link layer performs many tasks on behalf of the upper layers. Its major functions are:
Framing
The data link layer takes packets from the network layer and encapsulates them into frames. Then, it sends each frame bit-by-bit on the hardware. At the receiver's end, the data link layer picks up signals from the hardware and assembles them into frames.
Addressing
The data link layer provides a layer-2 hardware addressing mechanism. The hardware address is assumed to be unique on the link. It is encoded into the hardware at the time of manufacturing.
Synchronization
When data frames are sent on the link, both machines must be synchronized in order for the transfer to take place.
Error Control
Sometimes signals encounter problems in transit and bits get flipped. The data link layer detects such errors and attempts to recover the actual data bits. It also provides an error-reporting mechanism to the sender.
Flow Control
Stations on the same link may have different speeds or capacities. The data link layer provides flow control so that both machines can exchange data at a matched speed.
Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of collision. The data link layer provides mechanisms such as CSMA/CD to give multiple systems the capability of accessing a shared medium.
Reliable delivery
When a link-layer protocol provides reliable delivery service, it guarantees to move each network-
layer datagram across the link without error.

What are the functions of the LLC and MAC sub-layers? [2+2]

The functions of the LLC sub-layer are as follows:


i. To communicate with the upper layers of the OSI model.
ii. To pass the packet to the lower layers for delivery.
iii. To get the network protocol data, which is usually an IPv4 packet.
iv. To add control information that helps deliver the packet to the destination.

The functions of the MAC sub-layer are as follows:


i. It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
ii. It is responsible for encapsulating frames so that they are suitable for transmission via the
physical medium.
iii. It resolves the addressing of source station as well as the destination station, or groups of
destination stations.
iv. It performs multiple access resolutions when more than one data frame is to be transmitted.
It determines the channel access methods for transmission.

Discuss different framing approaches used in data link layer.[6]

The different framing approaches used in the data link layer are as follows:

1. Character Count
It uses a field in the header to give the number of characters in the frame. At the receiving end, the data link layer reads the character count and knows how many characters form the frame. The count can be corrupted by a transmission error, which leaves the receiver out of synchronization.

2. Flag bytes with byte stuffing


It places a special byte called FLAG at the beginning and the end of each frame. Two consecutive flag bytes indicate the end of one frame and the start of the next. If an error is detected, the receiver looks for a flag byte to determine the end of the current frame. When binary data, object programs, or numbers are transmitted, the flag byte pattern may occur in the data. This is resolved by having the sender add a special escape byte (ESC) before every accidental flag byte in the data.

3.Flag with bit stuffing


Each frame begins and ends with a special bit pattern called the flag byte, e.g. 01111110. If the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing stream. If the receiver sees a 0 bit following five consecutive 1 bits, it automatically removes the 0 bit. This process is called bit stuffing.
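A minimal sketch of bit stuffing on a string of '0'/'1' characters, assuming the standard 01111110 flag and the five-ones rule described above (a 0 is inserted after every run of five consecutive 1s and removed again by the receiver):

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip, run = False, 0
            continue            # drop the stuffed 0
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "0111101111101111110"
assert bit_unstuff(bit_stuff(data)) == data
print(bit_stuff(data))  # -> 011110111110011111010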

How does data transfer occur in an Ethernet network? Explain. [6]

Modern Ethernet is based on twisted-pair wiring, commonly the CAT5 or CAT6 cabling standards. Twisted pair is attractive because it is largely immune to noise and supports speeds up to a gigabit without much concern. Alternatively, for higher speeds up to 100 Gb/s, there is fiber (or multi-fiber cable).
Modern Ethernet is point-to-point, with hosts typically connected to switches in a star topology. Each link is used full-duplex, so that each end can transmit at full rate without collisions. Switches may be interconnected in a mesh, with loops eliminated by the Spanning Tree Protocol or something more clever.

Each host on an Ethernet has an address, known as a MAC Address. This is globally unique, based
on manufacturing. Any two hosts on an Ethernet can talk directly to one another just by using the
right MAC address. Switches implement multicast and broadcast by flooding the packets across
the Ethernet in a loop free manner.

At the link layer, each packet on an Ethernet is prefaced by a preamble. This is a fixed electrical
pattern that is used to help the receiver learn what the transmitter’s clock looks like. This is
followed by an Ethernet frame: a destination MAC address, a source MAC address, an Ethertype,
and then the L3 packet body. This is then followed by a CRC-32 checksum of the frame for error
detection.

The Ethertype is a 2-byte code that indicates what type of L3 packet is being carried.
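The frame layout just described can be sketched in code as follows (illustrative only: the MAC addresses and payload are made up, 0x0800 is the Ethertype for IPv4, the minimum-payload padding is simplified, and the CRC-32 byte order is simplified for illustration; the preamble and FCS are normally handled by the NIC hardware):

import struct
import zlib

def build_ethernet_frame(dst_mac: str, src_mac: str, ethertype: int, payload: bytes) -> bytes:
    """Destination MAC + source MAC + 2-byte Ethertype + payload + CRC-32."""
    def mac_to_bytes(mac: str) -> bytes:
        return bytes(int(part, 16) for part in mac.split(":"))

    header = mac_to_bytes(dst_mac) + mac_to_bytes(src_mac) + struct.pack("!H", ethertype)
    body = header + payload.ljust(46, b"\x00")   # pad payload to the 46-byte minimum
    fcs = struct.pack("<I", zlib.crc32(body))     # frame check sequence (byte order simplified)
    return body + fcs

# Hypothetical addresses, Ethertype 0x0800 = IPv4
frame = build_ethernet_frame("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66", 0x0800, b"hello")
print(len(frame), frame.hex())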

Discuss how CSMA works. Differentiate it from CSMA/CD. [2+2]

CSMA works on the principle that only one device can transmit signals on the network at a time; otherwise a collision will occur, resulting in the loss of data packets or frames. CSMA comes into play when a device needs to initiate a transfer of data over the network. Before transmitting, each device must check, or listen to, the network for any other transmissions that may be in progress. If it senses a transmission, the device waits for it to end. Once the transmission is complete, the waiting device can transmit its data/signals. However, if multiple devices access the medium simultaneously and a collision occurs, they both have to wait for a specific time before re-initiating the transmission process.

In CSMA/CD, the station first monitors the transmission medium. As long as the medium is occupied, the monitoring continues. Only when the medium has been free for a certain time (the interframe spacing) will the station send a data packet. Meanwhile, the transmitter continues to monitor the transmission medium to see whether it detects any data collisions. If no other participant tries to send its data via the medium by the end of the transmission, and no collision occurs, the transmission has been a success.

ALOHA [4,4(short notes)]

ALOHA is a system for coordinating and arbitrating access to a shared communication channel. It was developed in the 1970s by Norman Abramson and his colleagues at the University of Hawaii. The original system was used for ground-based radio broadcasting, but the system has since been implemented in satellite communication systems. A shared communication system like ALOHA requires a method of handling the collisions that occur when two or more systems attempt to transmit on the channel at the same time. In the ALOHA system, a node transmits whenever data is available to send. If another node transmits at the same time, a collision occurs, and the frames that were transmitted are lost. However, a node can listen to broadcasts on the medium, even its own, and determine whether its frames were transmitted successfully.
There are two different types of ALOHA:
i. Pure ALOHA
In pure ALOHA, the stations transmit frames whenever they have data to send. When
two or more stations transmit simultaneously, there is a collision and the frames are
destroyed. In pure ALOHA, whenever any station transmits a frame, it expects an
acknowledgement from the receiver. If the acknowledgement is not received within the
specified time, the station assumes that the frame (or acknowledgement) has been
destroyed. If the frame is destroyed because of a collision, the station waits for a random
amount of time and sends it again. This waiting time must be random, otherwise the same
frames will collide again and again. Therefore, pure ALOHA dictates that when the time-
out period passes, each station must wait for a random amount of time before resending
its frame. This randomness helps avoid further collisions.

ii. Slotted ALOHA


Slotted ALOHA was invented to improve the efficiency of pure ALOHA, as the chance of
collision in pure ALOHA is very high. In slotted ALOHA, the time of the shared
channel is divided into discrete intervals called slots. A station can send a frame
only at the beginning of a slot, and only one frame is sent in each slot. In slotted
ALOHA, if a station is not able to place its frame onto the channel at the beginning
of a slot, i.e. it misses the time slot, it has to wait until the beginning of
the next time slot. There is still a chance of collision if two stations try to send at the
beginning of the same time slot. Slotted ALOHA still has an
edge over pure ALOHA, as the chance of collision is reduced to one-half.
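As a short worked example of the efficiency difference (using the standard textbook throughput formulas S = G*e^(-2G) for pure ALOHA and S = G*e^(-G) for slotted ALOHA, where G is the offered load per frame time; these formulas are not stated in the notes above but are the usual way to quantify the "one-half" claim):

import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): the vulnerable period is one slot."""
    return G * math.exp(-G)

# Maximum throughput occurs at G = 0.5 (pure) and G = 1.0 (slotted)
print(round(pure_aloha_throughput(0.5), 4))    # ~0.1839 (about 18.4%)
print(round(slotted_aloha_throughput(1.0), 4)) # ~0.3679 (about 36.8%)

The maxima, about 18.4% and 36.8%, show the factor-of-two advantage of slotted ALOHA.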
What are the differences between the error correction and error detection processes? A bit
string 01111011111011111110 needs to be transmitted at the data link layer; what is the string
actually transmitted after bit stuffing, if the flag pattern is 01111110?
Ans:
Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy Check
(CRC). In both cases, a few extra bits are sent along with the actual data to confirm that the bits received at
the other end are the same as those that were sent. If the counter-check at the receiver's end fails, the bits are
considered corrupted.
Methods of error detection are:
Parity Check
Cyclic Redundancy Check (CRC)
Error Correction
In correction, we need to know the exact number of bits that are corrupted and their location in the
message.
Methods of error correction are:
 Backward Error Correction
 Forward Error Correction
For the given bit string 01111011111011111110, stuffing a 0 after every run of five consecutive 1s gives 0111101111100111110110 as the transmitted string.
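A brief sketch of the two detection methods named above, using a simple even-parity bit and Python's built-in CRC-32 (illustrative only; a real link uses the specific CRC polynomial defined by its protocol, e.g. CRC-32 in Ethernet as mentioned elsewhere in these notes):

import zlib

def even_parity_bit(data: bytes) -> int:
    """Return 1 if the number of 1 bits in the data is odd, so the total becomes even."""
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

def append_crc32(data: bytes) -> bytes:
    """Append a 4-byte CRC so the receiver can verify the data."""
    return data + zlib.crc32(data).to_bytes(4, "big")

def crc32_ok(frame: bytes) -> bool:
    data, received = frame[:-4], frame[-4:]
    return zlib.crc32(data).to_bytes(4, "big") == received

msg = b"hello"
print(even_parity_bit(msg))   # parity bit sent alongside the data
frame = append_crc32(msg)
print(crc32_ok(frame))        # True: no corruption
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(crc32_ok(corrupted))    # False: single-bit error detected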

What are multiple access protocols? [2+6]


Multiple access protocols are data link layer (MAC sub-layer) protocols that coordinate how multiple stations share a common transmission channel. They are of three types:
1. Random Access Protocol: All stations have the same priority; no station is superior to another, and any station can send data depending on the state of the medium.
2. Controlled Access Protocol: Here, data is sent by the station that is approved by all the other stations.
3. Channelization Protocol: The available bandwidth of the link is shared in time, frequency, or code among multiple stations so that they can access the channel simultaneously.

Explain how multiple access is achieved in IEEE 802.5.[6]


The foundation of a token ring is the IEEE 802.5 network. It uses a special three-byte frame
called a "token" that travels around a logical "ring" of workstations or servers, and whichever
node grabs that token has the right to transmit data.

Whenever a station wants to transmit a frame, it inverts a single bit of the 3-byte token, which
instantaneously changes it into a normal data packet. Because there is only one token, there can
be at most one transmission at a time. Since the token rotates around the ring, it is guaranteed that
every node gets the token within some specified time. So there is an upper bound on the time spent
waiting to grab the token, and starvation is avoided. There is also an upper limit of 250 on the
number of nodes in the network. To distinguish normal data packets from the token (a control
packet), a special sequence is assigned to the token packet. When any node gets the token, it first
sends the data it wants to send and then re-circulates the token.

Figure: Token ring

If a node transmits the token and nobody wants to send data, the token comes back to the
sender. If the first bit of the token reaches the sender before it has finished transmitting the last bit,
an error situation arises. To avoid this, the ring must satisfy:
propagation delay + n one-bit station delays (1-bit delay at each node) > token transmission time
A station may hold the token for the token-holding time, which is 10 ms unless the installation
sets a different value. If there is enough time left after the first frame has been transmitted, more
frames may be sent as well. After all pending frames have been transmitted, or when transmitting
another frame would exceed the token-holding time, the station regenerates the 3-byte token frame
and puts it back on the ring.
There are three modes of operation:
 Listen Mode: In this mode the node listens to the data and passes it on to the next node.
There is a one-bit delay associated with the transmission in this mode.
 Transmit Mode: In this mode the node discards any incoming data and puts its own data onto the
network.
 By-pass Mode: This mode is reached when the node is down. Any data is simply bypassed, and
there is no one-bit delay in this mode.

Explain the channel allocation problem with example.[5]

Channel Allocation Problem


Channel allocation is a process in which a single channel is divided and allotted to multiple
users in order to carry out user-specific tasks. The number of users may vary each time the process
takes place. If there are N users and the channel is divided into N equal-sized sub-channels, each
user is assigned one portion. If the number of users is small and does not vary over time, then
Frequency Division Multiplexing can be used, as it is a simple and efficient channel bandwidth
allocation technique.

Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs
and MANs, and Dynamic Channel Allocation.

These are explained below.


Static Channel Allocation in LANs and MANs:
This is the classical or traditional approach of allocating a single channel among multiple
competing users: Frequency Division Multiplexing (FDM). If there are N users, the bandwidth is
divided into N equal-sized portions, with each user assigned one portion. Since each user has a
private frequency band, there is no interference between users.
However, it is not efficient to divide the channel into a fixed number of chunks.
T = 1/(U*C - L)

T(FDM) = 1/(U*(C/N) - L/N) = N/(U*C - L) = N*T
Where,
T = mean time delay,
C = capacity of the channel,
L = arrival rate of frames,
1/U = bits/frame,
N = number of sub-channels,
T(FDM) = mean time delay with Frequency Division Multiplexing
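A short worked example of these formulas with illustrative numbers (the classic textbook figures: C = 100 Mbps, mean frame length 1/U = 10,000 bits, arrival rate L = 5,000 frames/sec, and N = 10 sub-channels; these values are an assumption, not part of the question):

C = 100e6        # channel capacity in bits per second
frame_bits = 1e4 # mean frame length (1/U) in bits
L = 5000.0       # arrival rate in frames per second
N = 10           # number of equal-sized FDM sub-channels

U = 1 / frame_bits                 # frames per bit
T = 1 / (U * C - L)                # mean delay on the single shared channel
T_fdm = 1 / (U * (C / N) - L / N)  # mean delay on each FDM sub-channel

print(f"T     = {T * 1e6:.0f} microseconds")      # 200 microseconds
print(f"T_FDM = {T_fdm * 1e6:.0f} microseconds")  # 2000 microseconds = N * T

With these numbers the mean delay grows from 200 microseconds on the single shared channel to 2 ms under FDM, i.e. N times worse, which is why static FDM is inefficient for bursty traffic.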
Dynamic Channel Allocation:
Dynamic channel allocation is based on the following assumptions:
Station Model:
Each of the N stations independently produces frames. The probability of a frame being generated
in an interval of length Δt is λ·Δt, where λ is the constant arrival rate of new frames.
Single Channel Assumption:
A single channel is available for all communication; all stations are equivalent and can send and receive on that channel.
Collision Assumption:
If two frames overlap in time, a collision occurs. Any collision is an error, and both
frames must be retransmitted. Collisions are the only possible errors.
1. Time can be divided into slots or be continuous.
2. Stations may or may not be able to sense whether the channel is busy before trying to use it.
Protocol Assumption:
 N independent stations.
 A station is blocked until its generated frame is transmitted.
 Probability of a frame being generated in a period of length Δt is λ·Δt, where λ is the arrival
rate of frames.
 Only a single channel is available.
 Time can be either continuous or slotted.
 Carrier Sense: A station can sense whether the channel is already busy before transmission.
 No Carrier Sense: A timeout is used to detect lost data.

Why do you think that static channel assignment is not efficient? [2]

Static channel assignment isn’t efficient as in most real-life network situations due to following
reasons:

i) There are variable number of users, usually large in number with busty traffic. If
the value of N is very large, the bandwidth available for each user will be very less.
This will reduce the throughput if the user needs to send a large volume of data
once in a while.
ii) Since all of the user are allocated fixed bandwidths, the bandwidth allocated to non-
communicating users lies wasted.
iii) If the number of users is more than N, then some of them will be denied service,
even if there are unused frequencies

CSMA/CD achieves an efficiency gain compared to other random access techniques because
collisions are detected immediately and the current transmission is interrupted. The transmitting couplers
recognize a collision by comparing the transmitted signal with the signal passing on the line. Collisions
are no longer recognized by the absence of an acknowledgement but by detecting interference.
This conflict-detection method is relatively simple, but it requires coding techniques that make a
superimposed signal easy to recognize; a differential coding technique, such as differential
Manchester, is generally used for this.
What is meant by the byte stuffing technique? What is piggybacking? Suppose a bit string,
0111101111101111110, needs to be transmitted at the data link layer. What string is actually
transmitted after bit stuffing?

In framing, a special byte is stuffed into the message to differentiate the data from the delimiter; this is called the
byte stuffing technique. The special character is the ESC character, which is added just in
front of any conflicting character in the data stream.
In two-way communication, whenever a frame is received, the receiver waits and does not send the
control frame (acknowledgement or ACK) back to the sender immediately. The receiver waits until
its network layer passes it the next data packet. The delayed acknowledgement is then attached to
this outgoing data frame. This technique of temporarily delaying the acknowledgement so that it
can be hooked onto the next outgoing data frame is known as piggybacking.
For the input 0111101111101111110, the actual bit string transmitted is 011110111110011111010.
What do you mean by Media Access Control? What is its significance in the data link layer?
Explain why the token bus is also called a token ring.

A media access control is a network data transfer policy that determines how data is transmitted
between two computer terminals through a network cable.
Framing: The data link layer takes packets from the network layer and encapsulates them into frames.
Then, it sends each frame bit-by-bit on the hardware. At the receiver's end, the data link layer picks up
signals from the hardware and assembles them into frames.
Addressing: The data link layer provides a layer-2 hardware addressing mechanism. The hardware address
is assumed to be unique on the link. It is encoded into the hardware at the time of manufacturing.
Synchronization: When data frames are sent on the link, both machines must be synchronized in
order for the transfer to take place.
Error Control: Sometimes signals encounter problems in transit and bits get flipped. The data link
layer detects such errors and attempts to recover the actual data bits. It also provides an error-
reporting mechanism to the sender.
Flow Control: Stations on the same link may have different speeds or capacities. The data link layer provides
flow control so that both machines can exchange data at a matched speed.
Multi-Access: When hosts on a shared link try to transfer data, there is a high probability of
collision. The data link layer provides mechanisms such as CSMA/CD to give multiple systems the
capability of accessing a shared medium.
Reliable delivery: When a link-layer protocol provides reliable delivery service, it guarantees to
move each network-layer datagram across the link without error.

Token Bus is described in the IEEE 802.4 specification, and is a Local Area Network (LAN) in
which the stations on the bus or tree form a logical ring. Each station is assigned a place in an
ordered sequence, with the last station in the sequence being followed by the first, as shown below.
Each station knows the address of the station to its "left" and "right" in the sequence. This type of
network, like a Token Ring network, employs a small data frame only a few bytes in size, known
as a token, to grant individual stations exclusive access to the network transmission medium.
Token-passing networks are deterministic in the way that they control access to the network, with
each node playing an active role in the process. When a station acquires control of the token, it is
allowed to transmit one or more data frames, depending on the time limit imposed by the network.
When the station has finished using the token to transmit data, or the time limit has expired, it
relinquishes control of the token, which is then available to the next station in the logical sequence.
When the ring is initialized, the station with the highest number in the sequence has control of the
token.

HDLC
High-level Data Link Control (HDLC) is a bit-oriented, synchronous data link layer protocol.
HDLC ensures the error-free transmission of data to the proper destinations and controls the
data transmission speed. HDLC can provide both connection-oriented and connectionless
services.
HDLC defines rules for transmitting data between network points. Data
in HDLC is organized into units called frames and is sent across networks to specified
destinations. HDLC also manages the pace at which data is transmitted. HDLC is commonly
used at layer 2 of the open systems interconnection (OSI) model. HDLC frames can be transmitted
over synchronous or asynchronous links, which do not themselves mark the start and end of frames.
This is done using a frame delimiter, or flag, which contains a unique sequence of bits that does
not appear inside a frame. There are three types of HDLC frames:
• Information frames/User data (I-frames)
• Supervisory frames/Control data (S-frames)
• Unnumbered frames (U-frames)
The common fields within an HDLC frame are:
• Flag
• Address
• Control information
• Frame check sequence
Flag field: It is an 8-bit sequence with the bit pattern 01111110 that identifies both the beginning
and the end of a frame and serves as a synchronization pattern for the receiver.
Address field: It contains the address of the secondary station. If a primary station created the
frame, it contains a ‘to’ address. If a secondary creates the frame, it contains a ‘from’ address.
An address field can be 1 byte or several bytes long, depending on the needs of the network.
If the address field is only 1 byte, the last bit is always a 1. If the address is more than 1 byte,
all bytes but the last one will end with 0; only the last will end with 1. Ending each intermediate
byte with 0 indicates to the receiver that there are more address bytes to come.
Control field: The control field is a 1- or 2-byte segment of the frame used for flow and error
control.
Information field: The information field contains the user's data from the network layer or
management information
FCS field: The frame check sequence (FCS) is the HDLC error detection field.
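A minimal sketch of how the basic HDLC frame fields listed above could be assembled in software (illustrative only: bit stuffing on the wire is omitted, the address and control values are hypothetical, and Python's binascii.crc_hqx is used as a stand-in 16-bit CRC rather than the exact FCS calculation HDLC specifies):

import binascii

FLAG = 0x7E  # 01111110

def build_hdlc_frame(address: int, control: int, information: bytes) -> bytes:
    """Flag | Address | Control | Information | FCS | Flag (bit stuffing omitted)."""
    body = bytes([address, control]) + information
    fcs = binascii.crc_hqx(body, 0xFFFF)           # stand-in 16-bit CRC (assumption)
    return bytes([FLAG]) + body + fcs.to_bytes(2, "big") + bytes([FLAG])

# Hypothetical I-frame: address 0x03, control 0x00, some user data
print(build_hdlc_frame(0x03, 0x00, b"user data").hex())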

FDDI [4,3(features of FDDI)]

Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for
transmission of data in a local area network (LAN) over fiber optic cables. It is
applicable in large LANs that can extend up to 200 kilometers in diameter.

Features
● FDDI uses optical fiber as its physical medium.
● It operates in the physical and medium access control (MAC layer) of the Open
Systems Interconnection (OSI) network model.
● It provides high data rate of 100 Mbps and can support thousands of users.
● It is used in LANs up to 200 kilometers for long distance voice and multimedia
communication.
● It uses ring based token passing mechanism and is derived from IEEE 802.4 token
bus standard.
● It contains two token rings: a primary ring for data and token transmission, and a secondary ring that provides backup if the primary ring fails.
● FDDI technology can also be used as a backbone for a wide area network (WAN).

Figure: FDDI

Frame Format
The frame format of FDDI is similar to that of the token bus.
The fields of an FDDI frame are −

● Preamble: 1 byte for synchronization.

● Start Delimiter: 1 byte that marks the beginning of the frame.

● Frame Control: 1 byte that specifies whether this is a data frame or control frame.

● Destination Address: 2-6 bytes specifying the address of the destination station.

● Source Address: 2-6 bytes specifying the address of the source station.

● Payload: A variable-length field that carries the data from the network layer.

● Checksum: A 4-byte frame check sequence for error detection.

● End Delimiter: 1 byte that marks the end of the frame.

State the various design issues for the data link layer. What is piggybacking? A bit string
01111011111101111110 needs to be transmitted at the data link layer. What is the string
actually transmitted after bit stuffing?

Ans: The design issues of data link layer are

 Providing services to the network layer


 Framing
 Error Control
 Flow Control
Services to the Network Layer
In the OSI model, each layer uses the services of the layer below it and provides services to the
layer above it. The data link layer uses the services offered by the physical layer. The primary
function of this layer is to provide a well-defined service interface to the network layer above it.

The types of services provided can be of three types −

 Unacknowledged connectionless service


 Acknowledged connectionless service
 Acknowledged connection - oriented service
Framing
The data link layer encapsulates each data packet from the network layer into frames that are then
transmitted.
A frame has three parts, namely −

 Frame Header
 Payload field that contains the data packet from network layer
 Trailer
Error Control
The data link layer ensures an error-free link for data transmission. The issues it caters to with respect
to error control are −

 Dealing with transmission errors


 Sending acknowledgement frames in reliable connections
 Retransmitting lost frames
 Identifying duplicate frames and deleting them
 Controlling access to shared channels in case of broadcasting
Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow receiver.
When the sender sends frames at very high speeds, a slow receiver may not be able to handle it.
There will be frame losses even if the transmission is error-free. The two common approaches for
flow control are −

 Feedback based flow control


 Rate based flow control

A technique called piggybacking is used to improve the efficiency of the bidirectional protocols.
When a frame is carrying data from A to B, it can also carry control information about arrived (or
lost) frames from B; when a frame is carrying data from B to A, it can also carry control
information about the arrived (or lost) frames from A.

 The major advantage of piggybacking is better use of available channel bandwidth.


 The major disadvantages of piggybacking are the additional complexity and the fact that if the data link
layer waits too long before transmitting the acknowledgement, retransmission of the frame
will take place.
Here, the given string is as below:
01111011111101111110
After stuffing a 0 after every run of five consecutive 1s, the transmitted string is:
0111101111101011111010

Flow control in DLL [4]


The mechanisms for flow control in DLL are:
a. Stop and Wait
This flow control mechanism forces the sender, after transmitting a data frame, to stop
and wait until the acknowledgement of the data frame sent is received.

b. Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of data frames
after which the acknowledgement should be sent. As the stop-and-wait flow control
mechanism wastes resources, this protocol tries to make use of the underlying resources as much as
possible.
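A minimal sketch of the sender side of a sliding-window scheme (purely illustrative: a window size of 4 is assumed, sequence numbers simply increase, and timers/retransmission are omitted):

from collections import deque

WINDOW_SIZE = 4  # assumed window size for illustration

class SlidingWindowSender:
    """Tracks which frames may be sent and slides the window on ACKs."""

    def __init__(self):
        self.base = 0             # oldest unacknowledged sequence number
        self.next_seq = 0         # next sequence number to use
        self.in_flight = deque()  # unacknowledged frames

    def can_send(self) -> bool:
        return self.next_seq < self.base + WINDOW_SIZE

    def send(self, payload: str) -> int:
        assert self.can_send(), "window full: must wait for an ACK"
        seq = self.next_seq
        self.in_flight.append((seq, payload))  # frame would be transmitted here
        self.next_seq += 1
        return seq

    def receive_ack(self, ack: int) -> None:
        # Cumulative ACK: everything up to and including `ack` is confirmed
        while self.in_flight and self.in_flight[0][0] <= ack:
            self.in_flight.popleft()
        self.base = max(self.base, ack + 1)

s = SlidingWindowSender()
for i in range(4):
    s.send(f"frame-{i}")
print(s.can_send())   # False: window of 4 is full
s.receive_ack(1)      # frames 0 and 1 acknowledged
print(s.can_send())   # True: window slid forward by two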

Why do you think that static channel assignment is not efficient?[2]

Ans: Static channel allocation is a traditional method of channel allocation in which a fixed portion
of the frequency channel is allotted to each user, who may be a base station, access point or piece of
terminal equipment. If more capacity is allocated than is needed, capacity is wasted. If less is
allocated than is needed, it is impossible to expand this fixed allocation at run time. Hence, static
channel assignment is not efficient.

What is meant by the byte stuffing technique? What is piggybacking? [3+3]


Ans: Byte stuffing is a process that transforms a sequence of data bytes that may
contain 'illegal' or 'reserved' values (such as the packet delimiter) into a potentially longer
sequence that contains no occurrences of those values. In this method, the start and end of a frame
are recognized with the help of flag bytes. Each frame starts and ends with a flag byte. Two
consecutive flag bytes indicate the end of one frame and the start of the next one. This framing method
is only applicable to 8-bit character codes.

Piggybacking is a bit different from the plain sliding window protocol used in the OSI model. In the data frame
itself, we incorporate one additional field for the acknowledgment (called ACK). Whenever party A
wants to send data to party B, it can carry the ACK information in that data frame as well. Three
rules govern the piggybacking data transfer:

If station A wants to send both data and an acknowledgement, it includes both fields in the same frame.
If station A wants to send just the acknowledgement, then a separate ACK frame is sent.
If station A wants to send just the data, then the previous acknowledgement field is sent along with
the data. Station B simply ignores this duplicate ACK upon receiving it.

Suppose a bit string, 0111101111101111110, needs to be transmitted at the data link layer.
What is the string actually transmitted after bit stuffing?

Bit string before transmission: 0111101111101111110

Bit string after bit stuffing: 011110111110011111010 (transmitted string)

Stuffed bits: a 0 is stuffed after every run of five consecutive 1s.

Why did the need for classless IP addressing arise even though class-based IP addressing was
in use? Show classless IP addressing with an example.
Classful addressing was introduced in 1981; with classful routing, IPv4 addresses were
divided into 5 classes (A to E).

Disadvantage of Classful Addressing:

 Class A with a mask of 255.0.0.0 can support 16,777,214 addresses
 Class B with a mask of 255.255.0.0 can support 65,534 addresses
 Class C with a mask of 255.255.255.0 can support 254 addresses

But what if someone requires 2000 addresses?


One way to address this situation would be to provide the person with a class B network. But that
would result in a waste of many addresses. Another possible way is to provide multiple class
C networks, but that too can cause a problem, as there would be too many networks to handle.

To resolve problems like the one mentioned above Classless Inter-Domain Routing (CIDR)
was introduced. It allows the user to use Variable Length Subnet Masks.
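A short sketch of the idea using Python's ipaddress module: a hypothetical /21 block provides 2,046 usable addresses, which covers a 2,000-address requirement without wasting a whole class B:

import ipaddress

# A hypothetical CIDR allocation for an organization that needs about 2000 addresses
block = ipaddress.ip_network("192.168.8.0/21")

print(block.netmask)            # 255.255.248.0 (variable length subnet mask)
print(block.num_addresses - 2)  # 2046 usable host addresses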

Framing techniques in the data link layer framing mechanism [5,5,6,8]

Various framing techniques at data link layer are:


1. Encoding Violations
2. Bit Stuffing
3. Byte Stuffing
4. Starting and Ending Characters with Character Stuffing
*Encoding Violations:

This framing method is used only in networks in which the encoding on the physical
medium contains some redundancy. Some LANs encode each bit of data using two
physical bits, i.e. Manchester coding is used. Here, bit 1 is encoded as a high-low (10)
pair and bit 0 as a low-high (01) pair. The scheme means that every data bit has
a transition in the middle, making it easy for the receiver to locate the bit boundaries. The
combinations high-high and low-low are not used for data but are used for delimiting
frames in some protocols, as the sketch below illustrates.
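A tiny sketch of the Manchester encoding just described (1 -> high-low, 0 -> low-high):

def manchester_encode(bits: str) -> str:
    """Encode each data bit as a pair of physical bits: 1 -> 10, 0 -> 01."""
    return "".join("10" if b == "1" else "01" for b in bits)

print(manchester_encode("1011"))  # 10011010
# The aligned pairs 11 and 00 are never used to encode a data bit,
# so they can be reserved for delimiting frames.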
*Bit Stuffing:

1. This method allows frames to contain an arbitrary number of bits and an arbitrary character size. The frames
are separated by a separating flag.
2. Each frame begins and ends with a special bit pattern, 01111110, called a flag byte. When
five consecutive 1's are encountered in the data, a '0' bit is automatically stuffed into the outgoing
bit stream.
 In this method, frames contain an arbitrary number of bits and allow character codes with
an arbitrary number of bits per character. In this case, each frame starts and ends with a
special bit pattern, 01111110.
 A 0 bit is automatically stuffed into the outgoing bit stream whenever the
sender's data link layer finds five consecutive 1s in the data.
 This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the
outgoing character stream before a flag byte in the data.
 When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it
automatically de-stuffs (i.e., deletes) the 0 bit. Bit stuffing is completely transparent to the
network layer, just as byte stuffing is.
 This method of framing finds its application in networks in which the encoding of data on
the physical medium contains some redundancy. For example, some
LANs encode each bit of data by using 2 physical bits.
*Byte Stuffing:

1. In this method, the start and end of a frame are recognized with the help of flag bytes. Each
frame starts and ends with a flag byte. Two consecutive flag bytes indicate the end
of one frame and the start of the next one. An escape byte, "ESC", is stuffed before any
accidental flag byte that occurs in the data.
2. A frame is thus delimited by flag bytes. This framing method is only applicable to 8-bit character
codes, which is a major disadvantage, as not all character codes use 8-bit
characters, e.g. Unicode.
*Starting and Ending Characters with Character Stuffing:

Each frame starts with the ASCII character sequence DLE STX and ends with the sequence DLE
ETX (where DLE is Data Link Escape, STX is Start of Text and ETX is End of Text). This method
overcomes the drawbacks of the character count method. If the destination ever loses
synchronization, it only has to look for the DLE STX and DLE ETX characters. However, if binary
data is being transmitted, there is a possibility of the characters DLE STX and DLE ETX
occurring in the data. Since this can interfere with the framing, a technique called character stuffing
is used: the sender's data link layer inserts an ASCII DLE character just before any DLE character
in the data, and the receiver's data link layer removes this DLE before the data is given to the network
layer. However, character stuffing is closely associated with 8-bit characters, and this is a major
hurdle in transmitting arbitrary-sized characters.

What do you understand by Media Access Control? Explain why the token bus is also called
a token ring.

Media access control is a network data transfer policy that determines how data is transmitted
between two computer terminals through a network cable. The media access control policy
involves the MAC sub-layer of the data link layer (layer 2) in the OSI reference model. MAC describes the process
that is employed to control the basis on which devices can access the shared network. Some level
of control is required to ensure that all devices can access the network within a reasonable
period of time, thereby resulting in acceptable access and response times.

There are several ways of controlling access to the network channel so that data can be transmitted
between terminal nodes without collision. They include:

 Carrier sense multiple access with collision avoidance (CSMA/CA)


 Carrier sense multiple access with collision detection (CSMA/CD)
 Demand priority
 Token passing

The essence of the MAC protocol is to avoid collisions and ease the transfer of data packets
between two computer terminals. The basic function of MAC is to provide an addressing
mechanism and channel access so that each node available on a network can communicate with
other nodes available on the same or other networks. Sometimes people refer to this as the MAC
layer.

Token Bus (IEEE 802.4) is a standard for implementing token ring over a virtual ring in LANs. The
physical medium has a bus or tree topology and uses coaxial cables. A virtual ring is created from
the nodes/stations, and the token is passed from one node to the next in a sequence along this virtual
ring. Each node knows the address of its preceding station and its succeeding station. A station can
only transmit data when it has the token. The working principle of the token bus is thus the same as
that of Token Ring, which is why it is also considered a token ring.
Token Passing Mechanism in Token Bus
A token is a small message that circulates among the stations of a computer network, giving the
stations permission to transmit. If a station has data to transmit when it receives the token,
it sends the data and then passes the token to the next station; otherwise, it simply passes the token
to the next station, as the sketch below illustrates.
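A toy sketch of this token-passing idea on a virtual ring (the station names, ring order and queued frames are made up for illustration):

from itertools import cycle

stations = {"A": ["frame-1"], "B": [], "C": ["frame-2", "frame-3"]}  # hypothetical ring order and queues

ring = cycle(stations)              # the token circulates A -> B -> C -> A -> ...
for _ in range(6):                  # six token passes
    holder = next(ring)
    if stations[holder]:
        frame = stations[holder].pop(0)
        print(f"{holder} holds the token and transmits {frame}")
    else:
        print(f"{holder} has nothing to send, passes the token")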
Explain the working principle of CSMA/CD with appropriate figure. [4,6]

Ans: CSMA/CD stands for Carrier Sense Multiple Access/Collision Detection, with collision
detection being an extension of the CSMA protocol. It defines a procedure that regulates how
communication must take place in a network with a shared transmission medium. The extension
also regulates how to proceed if collisions occur, i.e. when two or more nodes try to send data
packets via the transmission medium (bus) simultaneously and they interfere with one another.

Working principle of CSMA/CD:


Step 1: Check whether the sender is ready to transmit data packets.
Step 2: Check whether the transmission link is idle.
The sender has to keep checking whether the transmission link/medium is idle. For this, it continuously
senses transmissions from other nodes. The sender sends dummy data on the link; if it does not receive
any collision signal, this means the link is currently idle. If it senses that the carrier is free
and there are no collisions, it sends the data. Otherwise, it refrains from sending data.
Step 3: Transmit the data and check for collisions.
The sender transmits its data on the link. CSMA/CD does not use an acknowledgement system; it checks
for successful and unsuccessful transmissions through collision signals. During transmission,
if a collision signal is received by the node, transmission is stopped. The station then transmits a jam
signal onto the link and waits for a random time interval before it resends the frame. After that
random time, it again attempts to transfer the data and repeats the above process.
Step 4: If no collision was detected during propagation, the sender completes its frame transmission
and resets the counters. A simplified sketch follows.
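A highly simplified sketch of this decision loop (illustrative pseudologic only; real Ethernet ties backoff slot times to the medium and transmits an actual jam signal):

import random
import time

def csma_cd_send(frame, channel_busy, collision_detected, max_attempts=16):
    """Simplified CSMA/CD: sense the carrier, transmit, back off on collision."""
    for attempt in range(max_attempts):
        while channel_busy():          # 1. carrier sense: wait until the medium is idle
            time.sleep(0.001)
        # 2. transmit and watch for a collision at the same time
        if not collision_detected(frame):
            return True                # transmission completed successfully
        # 3. collision: send jam signal (omitted), then random backoff
        backoff_slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(backoff_slots * 0.001)
    return False                       # give up after too many attempts

# Toy usage: an always-idle channel with a 30% chance of collision per attempt
ok = csma_cd_send(b"frame", channel_busy=lambda: False,
                  collision_detected=lambda f: random.random() < 0.3)
print("delivered" if ok else "aborted")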
Institute of Engineering has six departments having 16, 32, 61, 8, 6 and 24 computers. Use
192.168.1.0/24 to distribute the network. Find the network address, broadcast address,
usable IP range and subnet mask in each department.
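The question above is left unanswered in these notes; the following sketch computes one possible VLSM allocation with Python's ipaddress module (the department names are placeholders, and subnets are assigned largest-first from 192.168.1.0/24):

import ipaddress
import math

base = ipaddress.ip_network("192.168.1.0/24")
# Placeholder department names with the required host counts from the question
demands = {"Dept-1": 61, "Dept-2": 32, "Dept-3": 24, "Dept-4": 16, "Dept-5": 8, "Dept-6": 6}

next_addr = int(base.network_address)
for dept, hosts in sorted(demands.items(), key=lambda kv: -kv[1]):
    host_bits = max(2, math.ceil(math.log2(hosts + 2)))   # +2 for network and broadcast
    net = ipaddress.ip_network((next_addr, 32 - host_bits))
    usable = list(net.hosts())
    print(f"{dept} ({hosts} hosts): {net}, mask {net.netmask}, "
          f"broadcast {net.broadcast_address}, usable {usable[0]}-{usable[-1]}")
    next_addr = int(net.broadcast_address) + 1

Under these assumptions the six departments (largest demand first) receive /26, /26, /27, /27, /28 and /29 blocks respectively, all of which fit inside the original /24.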
