Chapter 3 Yearwise Marking
[5,5,6,8]
Frames are the units of digital transmission, particularly in computer networks and telecommunications. A frame is comparable to a packet of energy, much as a photon is a packet of light energy. Frames are used continuously in the Time Division Multiplexing process.
The different types of framing approaches used in the data link layer are as follows:
1. Character Count
It uses a field in the header to record the number of characters in the frame. At the receiving end, the data link layer reads the character count and knows how many characters make up the frame. The count can be corrupted by a transmission error, which leaves the receiver out of synchronization.
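The following is a minimal sketch of character-count framing and deframing, assuming frames are byte strings and a one-byte count field that covers itself; the helper names are made up for illustration.

```python
def frame_with_count(payloads):
    """Character-count framing: prefix each payload with a count byte (payload length + 1)."""
    stream = bytearray()
    for p in payloads:
        stream.append(len(p) + 1)        # the count field covers itself plus the payload
        stream.extend(p)
    return bytes(stream)

def deframe_with_count(stream):
    """Recover the payloads; a corrupted count byte desynchronizes everything that follows."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

payloads = [b"hello", b"dl", b"layer"]
assert deframe_with_count(frame_with_count(payloads)) == payloads
```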
Modern Ethernet is based on twisted-pair wiring, commonly the CAT5 or CAT6 cabling standards. Twisted pair is attractive because it is largely immune to noise and allows us to run up to gigabit speeds without too much concern. Alternatively, and for higher speeds up to 100 Gb/s, there is fiber (or multi-fiber cables).
Modern Ethernet is point-to-point, with hosts typically connected to switches in a star topology. Each link is used full-duplex, so that each end can transmit at full rate without collisions. Switches may be interconnected in a mesh, with loops eliminated by the Spanning Tree Protocol or something more clever.
Each host on an Ethernet has an address, known as a MAC address. This address is globally unique, assigned at manufacturing time. Any two hosts on an Ethernet can talk directly to one another just by using the right MAC address. Switches implement multicast and broadcast by flooding the packets across the Ethernet in a loop-free manner.
At the link layer, each packet on an Ethernet is prefaced by a preamble. This is a fixed electrical
pattern that is used to help the receiver learn what the transmitter’s clock looks like. This is
followed by an Ethernet frame: a destination MAC address, a source MAC address, an Ethertype,
and then the L3 packet body. This is then followed by a CRC-32 checksum of the frame for error
detection.
The Ethertype is a 2-byte code that indicates what type of L3 packet is being carried.
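As a rough sketch of the frame layout just described, the snippet below packs destination MAC, source MAC, Ethertype and payload, then appends a CRC-32 of the frame. It is illustrative only: the preamble, minimum-length padding and the exact bit ordering of the FCS handled by real hardware are omitted, and the addresses and Ethertype value are made up.

```python
import struct
import zlib

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Destination MAC | Source MAC | 2-byte Ethertype | L3 payload | CRC-32 (simplified)."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    body = header + payload
    fcs = struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF)   # error-detection checksum
    return body + fcs

# Illustrative addresses; Ethertype 0x0800 is IPv4.
frame = build_ethernet_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"dummy L3 packet")
print(len(frame), frame.hex())
```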
CSMA works on the principle that only one device can transmit on the shared medium at a time; otherwise a collision will occur, resulting in the loss of data packets or frames. CSMA comes into play when a device needs to initiate a transfer over the network. Before transmitting, each device must check, or listen to, the network for any other transmission that may be in progress. If it senses a transmission, the device waits for it to end. Once that transmission is completed, the waiting device can transmit its own data. However, if multiple devices access the medium simultaneously and a collision occurs, they all have to wait for some time before reinitiating the transmission process.
In CSMA/CD, the station first monitors the transmission medium. As long as the medium is occupied, the monitoring continues. Only when the medium has been free for a certain time (the interframe spacing) does the station send a data packet. Meanwhile, the transmitter continues to monitor the transmission medium to see whether it detects any data collision. If no other participant tries to send its data via the medium by the end of the transmission, and no collision occurs, the transmission has been a success.
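A sketch of this send procedure is given below, with carrier sense, an interframe gap, collision monitoring and a random backoff on collision. The `medium` object, the timing constants and the binary-exponential backoff limit are assumptions for illustration, not part of the text above.

```python
import random
import time

def csma_cd_send(frame, medium, slot_time=51.2e-6, interframe_gap=9.6e-6, max_attempts=16):
    """Sketch of CSMA/CD: sense the medium, wait the interframe gap, transmit while
    watching for collisions, and back off randomly before retrying after a collision.
    `medium` is a hypothetical object with is_busy() and transmit(frame) -> collision flag."""
    for attempt in range(1, max_attempts + 1):
        while medium.is_busy():                 # carrier sense: keep monitoring while occupied
            pass
        time.sleep(interframe_gap)              # medium free for the interframe spacing
        collided = medium.transmit(frame)       # transmit and monitor the medium at the same time
        if not collided:
            return True                         # no collision by end of transmission: success
        k = random.randint(0, 2 ** min(attempt, 10) - 1)
        time.sleep(k * slot_time)               # wait a random number of slot times, then retry
    return False                                # give up after too many attempts
```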
ALOHA is a system for coordinating and arbitrating access to a shared communication channel. It was developed in the 1970s by Norman Abramson and his colleagues at the University of Hawaii. The original system was used for ground-based radio broadcasting, but the system has since been implemented in satellite communication systems. A shared communication system like ALOHA requires a method of handling the collisions that occur when two or more systems attempt to transmit on the channel at the same time. In the ALOHA system, a node transmits whenever data is available to send. If another node transmits at the same time, a collision occurs, and the frames that were transmitted are lost. However, a node can listen to broadcasts on the medium, even its own, and so determine whether its frames were transmitted successfully.
There are two different types of ALOHA:
i. Pure ALOHA
In pure ALOHA, the stations transmit frames whenever they have data to send. When two or more stations transmit simultaneously, there is a collision and the frames are destroyed. In pure ALOHA, whenever any station transmits a frame, it expects an acknowledgement from the receiver. If the acknowledgement is not received within the specified time, the station assumes that the frame (or the acknowledgement) has been destroyed. If the frame was destroyed because of a collision, the station waits for a random amount of time and sends it again. This waiting time must be random, otherwise the same frames will collide again and again. Therefore, pure ALOHA dictates that when the time-out period passes, each station must wait for a random amount of time before resending its frame. This randomness helps avoid further collisions, as the sketch below illustrates.
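A minimal sketch of this pure ALOHA sender behaviour, assuming a hypothetical `channel` object with send() and wait_for_ack(); the timeout and backoff bounds are illustrative.

```python
import random
import time

def pure_aloha_send(frame, channel, ack_timeout=1.0, max_backoff=5.0):
    """Transmit immediately, wait for an ACK, and on timeout (assumed collision)
    wait a *random* time before resending, so the same frames do not keep colliding."""
    while True:
        channel.send(frame)                        # transmit as soon as data is available
        if channel.wait_for_ack(ack_timeout):      # acknowledgement arrived in time: done
            return
        time.sleep(random.uniform(0, max_backoff)) # random backoff before retransmission
```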
In a token ring network, whenever a station wants to transmit a frame, it inverts a single bit of the 3-byte token, which instantaneously changes it into the start of a normal data frame. Because there is only one token, there can be at most one transmission at a time. Since the token rotates around the ring, it is guaranteed that every node gets the token within some specified time. So there is an upper bound on the time spent waiting to grab the token, and starvation is avoided. There is also an upper limit of 250 on the number of nodes in the network. To distinguish normal data packets from the token (a control packet), a special sequence is assigned to the token packet. When any node gets the token, it first sends the data it wants to send and then re-circulates the token.
If a node transmits the token and nobody wants to send data, the token comes back to the sender. If the first bit of the token reaches the sender before it has transmitted the last bit, an error situation arises. To avoid this, the ring must satisfy:

propagation delay + n × (1-bit delay in each node) > token transmission time
A station may hold the token for the token-holding time, which is 10 ms unless the installation sets a different value. If there is enough time left after the first frame has been transmitted to send more frames, then these frames may be sent as well. After all pending frames have been transmitted, or when transmitting another frame would exceed the token-holding time, the station regenerates the 3-byte token and puts it back on the ring.
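The token-holding rule can be sketched as below; the `ring` object, its methods and the frame queue are hypothetical, and the 10 ms default mirrors the value stated above.

```python
import time

TOKEN_HOLDING_TIME = 0.010   # 10 ms unless the installation sets a different value

def on_token_received(ring, pending_frames):
    """Transmit queued frames only while the token-holding time allows, then regenerate the token."""
    start = time.monotonic()
    while pending_frames:
        next_frame = pending_frames[0]
        # Stop if sending this frame would exceed the token-holding time.
        if time.monotonic() - start + ring.transmission_time(next_frame) > TOKEN_HOLDING_TIME:
            break
        ring.send(pending_frames.pop(0))
    ring.send_token()        # put the 3-byte token back on the ring
```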
There are three modes of operation:
Listen Mode: In this mode the node listens to the data and passes it on to the next node. There is a one-bit delay associated with this mode.
Transmit Mode: In this mode the node discards any incoming data and puts its own data onto the network.
Bypass Mode: This mode is reached when the node is down. Any data is simply bypassed; there is no one-bit delay in this mode.
The channel allocation problem can be solved by two schemes: static channel allocation in LANs and MANs, and dynamic channel allocation.
For static FDM, the mean delay is

T = 1/(UC − L)
T(FDM) = 1/(U(C/N) − L/N) = N/(UC − L) = N·T

Where,
T = mean time delay on the undivided channel,
C = capacity of channel (bits/s),
L = arrival rate of frames (frames/s),
1/U = bits/frame,
N = number of sub-channels,
T(FDM) = mean time delay with Frequency Division Multiplexing
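A quick numeric check of these formulas follows; the channel capacity, frame length, load and number of sub-channels are illustrative values, not taken from the text.

```python
def mean_delay(C, L, bits_per_frame):
    """Mean delay T = 1/(U*C - L) on a single shared channel, with U = 1/bits_per_frame."""
    U = 1.0 / bits_per_frame
    return 1.0 / (U * C - L)

def fdm_delay(C, L, bits_per_frame, N):
    """Static FDM gives each of the N sub-channels capacity C/N and load L/N,
    which works out to exactly N times the delay of the undivided channel."""
    U = 1.0 / bits_per_frame
    return 1.0 / (U * (C / N) - L / N)

# Illustrative numbers: 100 Mbit/s channel, 10,000-bit frames, 5,000 frames/s load, 10 users.
C, L, bits, N = 100e6, 5000, 10000, 10
print(mean_delay(C, L, bits))   # 0.0002 s on the undivided channel
print(fdm_delay(C, L, bits, N)) # 0.002 s with static FDM, i.e. N times worse
```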
Dynamic Channel Allocation:
Possible assumptions include:
Station Model:
Assumes that each of the N stations independently produces frames. The probability of a frame being generated in an interval of length Δt is λΔt, where λ is the constant arrival rate of new frames.
Single Channel Assumption:
A single channel is available for all communication; all stations are equivalent and can send and receive on that channel.
Collision Assumption:
If two frames overlap in time, they collide. Any collision is an error, and both frames must be retransmitted. Collisions are the only possible errors.
1. Time can be divided into slotted or continuous.
2. Stations can sense whether a channel is busy before they try to use it.
Protocol Assumptions:
N independent stations.
A station is blocked until its generated frame is transmitted.
The probability of a frame being generated in a period of length Δt is λΔt, where λ is the arrival rate of frames.
Only a single channel is available.
Time can be either continuous or slotted.
Carrier Sense: a station can sense whether the channel is already busy before transmission.
No Carrier Sense: a timeout is used to detect lost data.
Why do you think that static channel assignment is not efficient? [2]
Static channel assignment isn't efficient in most real-life network situations for the following reasons:
i) There is a variable number of users, usually large, with bursty traffic. If the value of N is very large, the bandwidth available to each user becomes very small. This reduces throughput when a user needs to send a large volume of data once in a while.
ii) Since all of the users are allocated fixed bandwidths, the bandwidth allocated to non-communicating users lies wasted.
iii) If the number of users is more than N, some of them will be denied service even if there are unused frequencies.
CSMA/CD gives an efficiency gain compared to other random access techniques because collisions are detected immediately and the current transmission is interrupted. The stations' couplers recognize a collision by comparing the transmitted signal with the signal passing on the line. Collisions are no longer recognized by the absence of an acknowledgement but by detecting interference. This conflict detection method is relatively simple, but it requires a coding technique in which a superimposed signal is easy to recognize; differential coding techniques, such as differential Manchester encoding, are generally used for this.
What is meant by byte stuffing techniques? What is piggy backing? Suppose a bit string,
0111101111101111110 needs to be transmitted at data link layer. What string actually
transmitted after the bit stuffing?
In framing, a special byte is stuffed into the message to differentiate the data from the delimiter. This is called the byte stuffing technique. That special character is the ESC character, which is added just in front of any conflicting character in the data stream.
In two-way communication, whenever a frame is received, the receiver waits and does not send the control frame (acknowledgement or ACK) back to the sender immediately. The receiver waits until its network layer passes it the next data packet. The delayed acknowledgement is then attached to this outgoing data frame. This technique of temporarily delaying the acknowledgement so that it can be hooked onto the next outgoing data frame is known as piggybacking.
For the input 0111101111101111110, the actual bit string transmitted is 011110111110011111010.
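A small helper that applies the bit-stuffing rule (insert a 0 after every run of five consecutive 1s) reproduces this answer:

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

assert bit_stuff("0111101111101111110") == "011110111110011111010"
```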
What do you mean by Media Access Control? What is its significance in the data link layer? Explain why token bus is also called the token ring.
A media access control is a network data transfer policy that determines how data is transmitted
between two computer terminals through a network cable.
Framing: The data link layer takes packets from the network layer and encapsulates them into frames. Then it sends each frame bit-by-bit on the hardware. At the receiver's end, the data link layer picks up signals from the hardware and assembles them into frames.
Addressing: The data link layer provides a layer-2 hardware addressing mechanism. The hardware address is assumed to be unique on the link. It is encoded into the hardware at the time of manufacturing.
Synchronization: When data frames are sent on the link, both machines must be synchronized in order for the transfer to take place.
Error Control: Sometimes signals encounter problems in transit and bits are flipped. These errors are detected, and an attempt is made to recover the actual data bits. The layer also provides an error reporting mechanism to the sender.
Flow Control: Stations on the same link may have different speeds or capacities. The data link layer ensures flow control so that both machines can exchange data at the same speed.
Multi-Access: When hosts on a shared link try to transfer data, there is a high probability of collision. The data link layer provides mechanisms such as CSMA/CD to let multiple systems access the shared medium.
Reliable delivery: When a link-layer protocol provides reliable delivery service, it guarantees to
move each network-layer datagram across the link without error.
Token Bus is described in the IEEE 802.4 specification, and is a Local Area Network (LAN) in
which the stations on the bus or tree form a logical ring. Each station is assigned a place in an
ordered sequence, with the last station in the sequence being followed by the first.
Each station knows the address of the station to its "left" and "right" in the sequence. This type of
network, like a Token Ring network, employs a small data frame only a few bytes in size, known
as a token, to grant individual stations exclusive access to the network transmission medium.
Token-passing networks are deterministic in the way that they control access to the network, with
each node playing an active role in the process. When a station acquires control of the token, it is
allowed to transmit one or more data frames, depending on the time limit imposed by the network.
When the station has finished using the token to transmit data, or the time limit has expired, it
relinquishes control of the token, which is then available to the next station in the logical sequence.
When the ring is initialized, the station with the highest number in the sequence has control of the
token.
HDLC
High-level Data Link Control (HDLC) is a bit-oriented synchronous data link layer protocol. HDLC ensures the error-free transmission of data to the proper destinations and controls the data transmission speed. HDLC can provide both connection-oriented and connectionless services.
HDLC defines rules for transmitting data between network points. Data in HDLC is organized into units called frames and is sent across the network to specified destinations. HDLC also manages the pace at which data is transmitted. HDLC is commonly used at layer 2, the data link layer, of the Open Systems Interconnection (OSI) model. HDLC frames can be transmitted over synchronous or asynchronous links, which do not themselves mark the start and end of frames. This is done using a frame delimiter, or flag, which contains a unique sequence of bits that is guaranteed not to appear inside a frame. There are three types of HDLC frames:
• Information frames/User data (I-frames)
• Supervisory frames/Control data (S-frames)
• Unnumbered frames (U-frames)
The common fields within an HDLC frame are:
• Flag
• Address
• Control information
• Frame check sequence
Flag field: It is an 8-bit sequence with the bit pattern 01111110 that identifies both the beginning
and the end of a frame and serves as a synchronization pattern for the receiver.
Address field: It contains the address of the secondary station. If a primary station created the
frame, it contains a ‘to’ address. If a secondary creates the frame, it contains a ‘from’ address.
An address field can be 1 byte or several bytes long, depending on the needs of the network.
If the address field is only 1 byte, the last bit is always a 1. If the address is more than 1 byte,
all bytes but the last one will end with 0; only the last will end with 1. Ending each intermediate
byte with 0 indicates to the receiver that there are more address bytes to come.
Control field: The control field is a 1- or 2-byte segment of the frame used for flow and error
control.
Information field: The information field contains the user's data from the network layer or
management information
FCS field: The frame check sequence (FCS) is the HDLC error detection field.
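As a rough sketch of how these fields fit together, the snippet below assembles Flag | Address | Control | Information | FCS | Flag. The CRC shown is a generic CRC-16 (polynomial 0x1021) used purely for illustration; real HDLC uses a reflected FCS variant with a final complement, and the bit stuffing applied between the flags is omitted.

```python
FLAG = 0x7E   # 01111110

def crc16(data: bytes, crc: int = 0xFFFF) -> int:
    """Illustrative CRC-16 (poly 0x1021); the exact parameters of the real HDLC FCS differ."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_hdlc_frame(address: int, control: int, information: bytes) -> bytes:
    """Flag | Address | Control | Information | FCS | Flag (bit stuffing not shown)."""
    body = bytes([address, control]) + information
    return bytes([FLAG]) + body + crc16(body).to_bytes(2, "big") + bytes([FLAG])

frame = build_hdlc_frame(address=0x03, control=0x00, information=b"user data")
print(frame.hex())
```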
Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for
transmission of data in a local area network (LAN) over fiber optic cables. It is
applicable in large LANs that can extend up to 200 kilometers in diameter.
Features
● FDDI uses optical fiber as its physical medium.
● It operates in the physical and medium access control (MAC layer) of the Open
Systems Interconnection (OSI) network model.
● It provides high data rate of 100 Mbps and can support thousands of users.
● It is used in LANs up to 200 kilometers for long distance voice and multimedia
communication.
● It uses a ring-based token passing mechanism derived from the IEEE 802.5 token ring standard.
● It contains two token rings, a primary ring for data and token transmission and a secondary ring that provides backup if the primary ring fails.
● FDDI technology can also be used as a backbone for a wide area network (WAN).
Frame Format
The frame format of FDDI is similar to that of token bus.
The fields of an FDDI frame are −
● Frame Control: 1 byte that specifies whether this is a data frame or control frame.
● Payload: A variable length field that carries the data from the network layer.
State the various design issues for the data link layer. What is piggybacking? A bit string
01111011111101111110 needs to be transmitted at the data link layer. What is the string
actually transmitted after bit stuffing?
The main design issues of the data link layer include framing, error control and flow control.
Framing: Each frame consists of a frame header, a payload field that contains the data packet from the network layer, and a trailer.
Error Control: The data link layer ensures an error-free link for data transmission by detecting damaged or lost frames and arranging for their retransmission.
A technique called piggybacking is used to improve the efficiency of bidirectional protocols. When a frame is carrying data from A to B, it can also carry control information about arrived (or lost) frames from B; when a frame is carrying data from B to A, it can also carry control information about the arrived (or lost) frames from A.
For the bit string 01111011111101111110 given in the question, a 0 is stuffed after every run of five consecutive 1s, so the string actually transmitted is 0111101111101011111010.
Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of data frames after which an acknowledgement should be sent. As stop-and-wait flow control wastes resources, this protocol tries to make use of the underlying resources as much as possible.
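A minimal sketch of a sliding-window sender in the Go-Back-N style follows; the `link` object, its methods and the window size are assumptions made for illustration.

```python
def sliding_window_send(frames, link, window_size=4):
    """Keep at most `window_size` unacknowledged frames in flight, slide the window on
    cumulative ACKs, and retransmit from the oldest unacknowledged frame on timeout.
    `link` is hypothetical: send(seq, frame), recv_ack() -> int or None, timed_out() -> bool."""
    base = 0        # oldest unacknowledged frame
    next_seq = 0    # next frame to transmit
    while base < len(frames):
        while next_seq < len(frames) and next_seq < base + window_size:
            link.send(next_seq, frames[next_seq])    # fill the window
            next_seq += 1
        ack = link.recv_ack()                        # cumulative ACK covers everything up to `ack`
        if ack is not None:
            base = max(base, ack + 1)                # slide the window forward
        elif link.timed_out():
            next_seq = base                          # go back N: resend the whole window
```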
Ans: Static channel allocation is a traditional method of channel allocation in which a fixed portion of the frequency channel is allotted to each user, who may be a base station, access point or terminal equipment. If more capacity is allocated to a user than is needed, the excess is wasted; if less is allocated than is needed, the fixed share cannot be expanded at run time. Hence, static channel assignment is not efficient.
Piggybacking is a bit different from the sliding window protocol used in the OSI model. In the data frame itself, we incorporate one additional field for the acknowledgement (called ACK). Whenever party A wants to send data to party B, it carries the additional ACK information in the same frame as well. Three rules govern the piggybacked data transfer:
If station A wants to send both data and an acknowledgement, it keeps both fields in the frame.
If station A wants to send just the acknowledgement, then a separate ACK frame is sent.
If station A wants to send just the data, then the previous acknowledgement field is sent along with the data. Station B simply ignores this duplicate ACK upon receiving it.
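These three rules can be sketched as a small decision routine; the queue, the frame dictionary and the parameter names are all hypothetical.

```python
def next_frame_to_send(data_queue, fresh_ack, last_ack_sent):
    """Decide what station A puts on the wire, following the three piggybacking rules above."""
    if data_queue and fresh_ack is not None:
        # Rule 1: data and a fresh acknowledgement travel together in one frame.
        return {"data": data_queue.pop(0), "ack": fresh_ack}
    if fresh_ack is not None:
        # Rule 2: nothing to send, so a separate ACK-only frame goes out.
        return {"data": None, "ack": fresh_ack}
    if data_queue:
        # Rule 3: data only; repeat the previous ACK field (the receiver ignores the duplicate).
        return {"data": data_queue.pop(0), "ack": last_ack_sent}
    return None   # nothing to transmit
```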
Suppose a bit string, 0111101111101111110, needs to be transmitted at the data link layer.
What is the string actually transmitted after bit stuffing?
Stuffed Bit
After every five consecutive 1's, a 0 is stuffed, so the string actually transmitted is 011110111110011111010.
Why did the need for classless IP addressing arise although class-based IP addressing was in use? Show a classless IP address with an example.
Classful addressing, introduced in 1981 along with classful routing, divided IPv4 addresses into 5 classes (A to E).
Class A, with a mask of 255.0.0.0, can support 16,777,214 addresses.
Class B, with a mask of 255.255.0.0, can support 65,534 addresses.
Class C, with a mask of 255.255.255.0, can support 254 addresses.
These fixed block sizes waste addresses: a network that needs, say, a few hundred hosts finds class C too small and class B far too large. To resolve problems like this, Classless Inter-Domain Routing (CIDR) was introduced. It allows the user to use variable-length subnet masks, so a prefix of any length can be assigned. For example, the classless address 192.168.1.0/26 has the mask 255.255.255.192 and provides 62 usable host addresses.
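The classless example can be checked with Python's standard `ipaddress` module; the prefix 192.168.1.0/26 is an arbitrary illustration.

```python
import ipaddress

# A /26 prefix sits between the rigid class C (/24) and anything a classful scheme could allocate.
net = ipaddress.ip_network("192.168.1.0/26")
print(net.netmask)              # 255.255.255.192
print(net.num_addresses)        # 64 addresses in the block
print(net.num_addresses - 2)    # 62 usable hosts (network and broadcast excluded)
print(list(net.hosts())[:3])    # first few usable host addresses
```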
This framing method is used only in those networks in which the encoding on the physical medium contains some redundancy. Some LANs encode each bit of data by using two physical bits, i.e. Manchester coding is used. Here, bit 1 is encoded into a high-low (10) pair and bit 0 into a low-high (01) pair. The scheme means that every data bit has a transition in the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not used for data but are used for delimiting frames in some protocols.
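The sketch below models this scheme, with the symbols written as two-character strings for readability and a high-high pair used as the (hypothetical) frame delimiter:

```python
ENCODE = {"1": "HL", "0": "LH"}   # Manchester: every data bit has a mid-bit transition
DELIMITER = "HH"                  # high-high (or low-low) never occurs in data,
                                  # so such a coding violation can delimit frames

def manchester_encode(bits: str) -> str:
    return DELIMITER + "".join(ENCODE[b] for b in bits) + DELIMITER

def manchester_decode(symbols: str) -> str:
    body = symbols[len(DELIMITER):-len(DELIMITER)]          # drop the delimiters
    decode = {v: k for k, v in ENCODE.items()}
    return "".join(decode[body[i:i + 2]] for i in range(0, len(body), 2))

assert manchester_decode(manchester_encode("10110")) == "10110"
```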
*Bit Stuffing:
3. It allows a frame to contain an arbitrary number of bits and an arbitrary character size. The frames are separated by a separating flag.
4. Each frame begins and ends with a special bit pattern, 01111110, called the flag byte. When five consecutive 1's are encountered in the data, a '0' bit is automatically stuffed into the outgoing bit stream.
In this method, frames may contain an arbitrary number of bits, and character codes with an arbitrary number of bits per character are allowed. In this case, each frame starts and ends with the special bit pattern 01111110.
A 0 bit is automatically stuffed into the outgoing bit stream whenever the sender's data link layer finds five consecutive 1s in the data.
This bit stuffing is similar to byte stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.
When the receiver sees five consecutive incoming 1 bits followed by a 0 bit, it automatically destuffs (i.e., deletes) the 0 bit. Bit stuffing, like byte stuffing, is completely transparent to the network layer.
This method of framing also finds application in networks in which the encoding of data on the physical medium contains some redundancy; for example, some LANs encode each bit of data by using two physical bits.
*Byte Stuffing:
3. In this method, the start and end of a frame are recognized with the help of flag bytes. Each frame starts and ends with a flag byte. Two consecutive flag bytes indicate the end of one frame and the start of the next one. When a flag byte (or the escape byte itself) accidentally occurs inside the data, an escape byte named 'ESC' is stuffed just before it.
4. A frame is thus delimited by flag bytes. This framing method is only applicable to 8-bit character codes, which is a major disadvantage of this method, as not all character codes use 8-bit characters, e.g. Unicode.
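A minimal sketch of byte stuffing and unstuffing follows; the FLAG and ESC byte values are illustrative choices, not mandated by the text above.

```python
FLAG = b"\x7e"   # frame delimiter
ESC = b"\x7d"    # escape byte stuffed before any conflicting byte in the data

def byte_stuff(payload: bytes) -> bytes:
    """Wrap the payload in FLAG bytes, escaping any FLAG or ESC that occurs in the data."""
    stuffed = bytearray(FLAG)
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            stuffed += ESC            # stuff ESC just in front of the conflicting byte
        stuffed.append(b)
    stuffed += FLAG
    return bytes(stuffed)

def byte_unstuff(frame: bytes) -> bytes:
    """Drop the delimiting FLAG bytes and remove the stuffed ESC bytes."""
    out, skip = bytearray(), False
    for b in frame[1:-1]:
        if not skip and bytes([b]) == ESC:
            skip = True               # the next byte is literal data
            continue
        out.append(b)
        skip = False
    return bytes(out)

data = b"ab\x7ecd\x7def"
assert byte_unstuff(byte_stuff(data)) == data
```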
*Character Stuffing:
Each frame starts with the ASCII character sequence DLE STX and ends with the sequence DLE
ETX. (Where DLE is Data Link Escape, STX is Start of Text and ETX is End of Text.) This method
overcomes the drawbacks of the character count method. If the destination ever loses
synchronization, it only has to look for DLE STX and DLE ETX characters. If however, binary
data is being transmitted then there exists a possibility of the characters DLE STX and DLE ETX
occurring in the data. Since this can interfere with the framing, a technique called character stuffing
is used. The sender's data link layer inserts an ASCII DLE character just before the DLE character
in the data. The receiver's data link layer removes this DLE before this data is given to the network
layer. However, character stuffing is closely associated with 8-bit characters, and this is a major hurdle in transmitting arbitrarily sized characters.
What do you understand by Media Access Control? Explain why token bus is also called the token ring.
A media access control is a network data transfer policy that determines how data is transmitted
between two computer terminals through a network cable. The media access control policy
involves sub-layers of the data link layer 2 in the OSI reference model. MAC describes the process
that is employed to control the basis on which devices can access the shared network. Some level
of control is required to ensure the ability of all devices to access the network within a reasonable
period of time, thereby resulting in acceptable access and response times.
The network channel through which data is transmitted between terminal nodes without collision can be arbitrated in several ways, such as CSMA, CSMA/CD and token passing.
The essence of the MAC protocol is to ensure non-collision and eases the transfer of data packets
between two computer terminals. The basic function of MAC is to provide an addressing
mechanism and channel access so that each node available on a network can communicate with
other nodes available on the same or other networks. Sometimes people refer to this as the MAC
layer.
Token Bus (IEEE 802.4) is a standard for implementing the token ring protocol over a virtual ring in LANs. The
physical media has a bus or a tree topology and uses coaxial cables. A virtual ring is created with
the nodes/stations and the token is passed from one node to the next in a sequence along this virtual
ring. Each node knows the address of its preceding station and its succeeding station. A station can
only transmit data when it has the token. The working principle of token bus is similar to Token
Ring.
Token Passing Mechanism in Token Bus
A token is a small message that circulates among the stations of a computer network providing
permission to the stations for transmission. If a station has data to transmit when it receives a token,
it sends the data and then passes the token to the next station; otherwise, it simply passes the token
to the next station.
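A toy sketch of this token passing over a logical ring is given below; the station addresses, queue contents and number of rounds are made up for illustration.

```python
class Station:
    def __init__(self, address):
        self.address = address
        self.queue = []               # frames waiting to be sent

    def on_token(self):
        """A station may transmit only while it holds the token, then must release it."""
        if self.queue:
            print(f"station {self.address} sends: {self.queue.pop(0)}")

def run_token_bus(stations, rounds=2):
    """Pass the token around the logical ring: the last station is followed by the first."""
    for _ in range(rounds):
        for station in stations:      # the token visits each station in the ordered sequence
            station.on_token()

s1, s2, s3 = Station(10), Station(20), Station(30)
s2.queue.append("frame A")
s3.queue.append("frame B")
run_token_bus([s1, s2, s3])
```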
Explain the working principle of CSMA/CD with appropriate figure. [4,6]
Ans: CSMA/CD stands for Carrier Sense Multiple Access/Collision Detection, with collision
detection being an extension of the CSMA protocol. This creates a procedure that regulates how
communication must take place in a network with a shared transmission medium. The extension
also regulates how to proceed if collisions occur, i.e. when two or more nodes try to send data packets via the transmission medium (bus) simultaneously and they interfere with one another.