Telecom And Networking – blog posts and YouTube videos

Passive Optical Network (PON) – Brief Introduction (Sun, 25 Feb 2018)
PPP Reliable Transmission (Sun, 25 Feb 2018)
(RFC 1663)

The Point-to-Point Protocol (PPP) [1] provides a standard method for transporting multi-protocol datagrams over point-to-point links. This document defines a method for negotiating and using Numbered-Mode, as defined by ISO 7776 [2], to provide a reliable serial link.

  1. Introduction

By default, PPP packets over HDLC framed links consist of “connectionless” datagrams.  If reliable transmission over the HDLC link is desired, the implementation MUST specify the Numbered-Mode Configuration Option during Link Establishment phase.

Generally, serial link reliability is not a major issue.  The architecture of protocols used in datagram networking presumes best-effort non-sequential delivery.  When errors are detected, datagrams are discarded.

However, in certain circumstances, it is advisable to provide a reliable link, at least for a subset of the messages.  The most obvious case is when the link is compressed.  Since the dictionary is recovered from the compressed data stream, and a lost datagram corrupts the dictionary, datagrams must not be lost.  Not all compression types will require a reliable data stream since the cost to detect and reset a corrupt dictionary is small.

The ISO 7776 LAPB can be used to guarantee delivery.  This is referred to in this document as “Numbered Mode” to distinguish it from the use of “Unnumbered Information”, which is standard PPP framing practice.

Where multiple parallel links are used to emulate a single link of higher speed, Bridged traffic, Source-Routed traffic, and traffic subjected to Van Jacobson TCP/IP header compression must be delivered to the higher layer in a certain sequence.  However, because the links run asynchronously with respect to one another, the ordering of traffic across them is uncertain.

The ISO 7776 Multi-Link Procedure MAY be used to restore order. Implementation of the ISO Multi-Link Procedure is deprecated.  It is recommended that the PPP multilink procedure [4] be used instead.

  2. Physical Layer Requirements

PPP Reliable Transmission imposes the same requirements that are described in “PPP in HDLC Framing” [3], with the following exceptions.

Control Signals

While PPP does not normally require the use of control signals, implementation of Numbered-Mode LAPB or LAPD requires the provision of control signals, which indicate when the link has become connected or disconnected.  These, in turn, provide the Up and Down events to the LCP state machine.

  3. The Data Link Layer

Numbered-Mode affects only the Address and Control fields.  The remainder of the frame conforms to the framing in use for PPP.

The Address Field of the frame MUST take the value announced in the Numbered-Mode Configuration Option, and the Control Field MAY take any value valid in ISO 7776.

Once the link enters Numbered-Mode, Numbered-Mode MUST be used on all frames, as some implementations do not support the use of the Unnumbered-Information control field or the use of the All-Stations address intermixed with Numbered-Mode frames.

  4. Frame Format

The following frame format is valid under Numbered-Mode.  The fields are transmitted from left to right.

Frame format – Numbered Mode

The Protocol, Information, and Padding fields are described in the Point-to-Point Protocol Encapsulation [1].  The FCS and Flag Sequence fields are described in “PPP in HDLC Framing” [3].

  5. Configuration Option Format


The LCP Numbered-Mode Configuration Option negotiates the use of Numbered-Mode on the link.  By default, or upon ultimate disagreement, Unnumbered-Mode is used.

A summary of the Numbered-Mode Configuration Option format is shown below.  The fields are transmitted from left to right.

Configuration Option Format

Length

>= 4

Window

A value between 1 and 127.  This indicates the number of frames the receiver will buffer, which is the maximum number that the sender should send without receiving an acknowledgment.  If window < 8, then modulo 8 sequencing is used on the link.  Otherwise, modulo 128 sequencing is used.

It is conceivable and legal that differing window values might be announced.  However, it is not permitted for one system to use modulo 8 sequencing and the other to use modulo 128.  Therefore, the rule is: a Configure-Nak may reduce the window but may not increase it.
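The two rules above — the window size selecting the sequencing modulo, and a Configure-Nak being allowed only to reduce the window — can be sketched as a small helper.  This is an illustrative sketch; the function names are not from the RFC:

```python
def sequencing_modulo(window):
    """Sequencing modulo implied by the advertised window (the rule above)."""
    if not 1 <= window <= 127:
        raise ValueError("window must be between 1 and 127")
    return 8 if window < 8 else 128

def nak_window(requested, supported):
    """Window to place in a Configure-Nak, or None to Configure-Ack.

    A Configure-Nak may reduce the window but may not increase it, so the
    peers converge rather than fighting over the sequencing modulus."""
    if requested <= supported:
        return None       # acceptable as offered: Configure-Ack
    return supported      # reduce, never raise

assert sequencing_modulo(7) == 8
assert sequencing_modulo(8) == 128
assert nak_window(100, 7) == 7    # pulls the peer down into modulo-8
assert nak_window(4, 100) is None
```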


Address

An HDLC Address as specified in ISO 3309.  ISO 7776 specifies four of the possible values: 1 and 3 for single-link operation, 7 and 15 for the Multi-Link Procedure.  Other values consistent with ISO 3309 are considered legal.

Implementation of the Multi-Link Procedure is optional; a Configure-Nak may, therefore, force a change from MLP to single-link mode, but not the reverse.

Should the address be zero upon receipt, the receiver MUST Configure-Nak with an appropriate address.  If both peers send address zero, the system advertising the numerically smaller window will select the smaller address.  If both windows are the same size, a random choice MUST be made; when good sources of randomness are used, the link will converge in a reasonable time.

If magic numbers have been negotiated on the link, the system with the numerically smaller magic number SHOULD specify the smaller address.
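The zero-address tie-breaking rules above (window first, then magic number, then a random choice) can be combined into one decision function.  This is a hypothetical helper for illustration only:

```python
import random

def local_takes_smaller_address(my_window, peer_window,
                                my_magic=None, peer_magic=None):
    """Decide whether the local system should Configure-Nak with the
    smaller HDLC address when both peers advertised address zero.

    Hypothetical helper combining the window, magic-number, and random
    tie-break rules described in the text."""
    if my_window != peer_window:
        return my_window < peer_window   # smaller window -> smaller address
    if my_magic is not None and peer_magic is not None:
        return my_magic < peer_magic     # smaller magic -> smaller address
    return random.random() < 0.5         # otherwise a random choice MUST be made

assert local_takes_smaller_address(4, 8) is True
assert local_takes_smaller_address(8, 4) is False
assert local_takes_smaller_address(7, 7, my_magic=10, peer_magic=20) is True
```

With good randomness in the final branch, repeated attempts converge in a reasonable time, as the text notes.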

  6. Numbered-Mode Operation

When using the Numbered-Mode, each link is established in the usual manner for the type of link.  The Numbered-Mode Configuration Option is negotiated, the Magic-Number Configuration Option MUST also be negotiated, and the Address-and-Control-Field-Compression Configuration Option MUST NOT be negotiated.

Following the successful negotiation of the Numbered-Mode Configuration Option during LCP Link Establishment phase, the system with the numerically smaller Magic-Number will send a SABM or SABM(E), and the other will respond with a UA.  In the event that either the SABM or UA is lost, this exchange may be repeated according to the same parameters as the configuration exchange itself, using the Restart Timer and counter values.  Authentication, Link Quality Determination, and NCP Configuration follow this step.

Once the link has been established with Numbered-Mode, when re-negotiation of link configuration occurs, the entire re-negotiation MUST be conducted in Numbered-Mode.  If the Numbered-Mode Configuration Option is not successfully re-negotiated, the link reverts to Unnumbered-Information operation prior to Authentication, Link Quality Determination, and NCP Configuration.

When an implementation which is capable of Numbered-Mode, and is not currently configured for Numbered-Mode operation, detects a frame which has a correct FCS but does not have a UI Control octet, the implementation MUST send a DM message, immediately followed by an LCP Configure-Request.

When an implementation which is currently configured for Numbered-Mode operation receives a DM message, it MUST revert to Unnumbered-Information operation, and immediately send an LCP Configure-Request.

Single Link

When Network-Layer packets are sent over a single link, the packets are encapsulated in the following order:

Numbered-Mode Operation – Single Link

Inverse Multiplexing

Since sending several connections over a single link is often called “multiplexing”, sending packets from a single connection over multiple parallel links is sometimes called “inverse-multiplexing”. By default, PPP performs no special processing for such links.  Each link is established and terminated independently, negotiates its own configuration options, and may have different combinations of such options as ACCM, Protocol Field Compression, and IP-Address.  This facilitates using the links simultaneously over dissimilar media, such as 56K sync with async backup.

Every link in a single machine MUST have different Magic Numbers, and each end of every link between two peers SHOULD have Magic Numbers which are unique to those peers.  This protects against patch-panel errors in addition to looped-back links.

The distribution of each link is controlled by higher-level routing mechanisms.  When Network-Layer specific compression techniques (such as Van Jacobson compression) rely on sequential delivery, without Multi-Link Procedure support such compression MUST be applied on a link-by-link basis.

Inverse Multiplexing

Using Multi-Link Procedure

This document does not offer a standard for ISO Multi-Link but does offer a method for agreeing on the addressing scheme usable with Multi-Link.  A sample implementation is shown below.  Implementation of Multi-Link is not required.

When using the ISO 7776 Multi-Link Procedure, each link is established as described above.  In addition, the Numbered-Mode Configuration Option is negotiated with appropriate addresses for the Multi-Link Procedure.  The distribution of each link is controlled by the Multi-Link Procedure, as is the recovery sequence in the receiving system.

Using Multi-Link Procedure

LAPB Parameter Defaults

The following guidelines specify the default values of LAPB configurable parameters.

Timer T1

Timer T1 is the maximum time permitted before a retransmission is started, as a result of no response to a transmitted I frame.  This value must be greater than the time required for a maximum sized frame to be received by the other side of the link, and for a response to be generated for the frame.  This SHOULD be determined dynamically, based on the measured round trip time delay of the link at the LAPB level.  In the event that the system cannot determine the round trip time of the link, this value SHOULD be set to twice the bit rate of the link, divided by the maximum number of bits per frame, plus 100 milliseconds processing time.  For example, on a 14,400 bps link, with a maximum frame size of 8000 bits (1000 octets), the T1 value would be set to 3.7 seconds.
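The fallback computation can be written out directly.  The sketch below reproduces the formula exactly as stated in the text (twice the bit rate divided by the maximum bits per frame, plus 100 ms), which matches the worked 14,400 bps example:

```python
def default_t1_seconds(link_bps, max_frame_bits, processing_s=0.1):
    """Fallback T1 when the round-trip time cannot be measured, computed
    per the rule stated in the text: twice the bit rate of the link,
    divided by the maximum number of bits per frame, plus 100 ms of
    processing time."""
    return 2 * link_bps / max_frame_bits + processing_s

# The worked example from the text: 14,400 bps link, 8000-bit (1000-octet)
# maximum frame size -> 2 * 14400 / 8000 + 0.1 = 3.7 seconds.
assert round(default_t1_seconds(14_400, 8_000), 1) == 3.7
```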

Timer T3

Timer T3 gives an indication of the idle state of the link.  Its value must be greater than the T1 value.

Maximum Number of Attempts to Complete a Transmission, N2

Parameter N2 gives the maximum number of retransmission attempts for a given frame.  If this value is exceeded, the link SHOULD be terminated.  The default value for parameter N2 SHOULD be 3.


  1. Simpson, W., Editor, “The Point-to-Point Protocol (PPP)”, STD 51, RFC 1661, Daydreamer, July 1994.
  2. ISO 7776, Information Processing Systems – Data Communication – High-Level Data Link Control Procedures – Description of the X.25 LAPB-Compatible DTE Data Link Procedures.
  3. Simpson, W., Editor, “PPP in HDLC Framing”, STD 51, RFC 1662, Daydreamer, July 1994.
  4. Sklower, K., “PPP MultiLink Procedure”, Work in Progress.

PPP Multilink Protocol (MP)
(RFC 1717) – Sun, 18 Feb 2018

  1. Introduction


1.1.  Motivation

Basic Rate and Primary Rate ISDN both offer the possibility of opening multiple simultaneous channels between systems, giving users additional bandwidth on demand (for additional cost).  Previous proposals for the transmission of internet protocols over ISDN have stated as a goal the ability to make use of this capability.

There are proposals being advanced for providing synchronization between multiple streams at the bit level (the BONDING proposals); such features are not as yet widely deployed and may require additional hardware for end systems.  Thus, it may be useful to have a purely software solution, or at least an interim measure.

There are other instances where bandwidth on demand can be exploited, such as using a dial-up async line at 28,800 baud to back up a leased synchronous line, or opening additional X.25 SVCs where the window size is limited to two by international agreement.

The simplest possible algorithms of alternating packets between channels on a space available basis (which might be called the Bank Teller’s algorithm) may have undesirable side effects due to reordering of packets.

By means of a four-byte sequencing header, and simple synchronization rules, one can split packets among parallel virtual circuits between systems in such a way that packets do not become reordered, or at least the likelihood of this is greatly reduced.

1.2.  Functional Description

The method discussed here is similar to the multilink protocol described in ISO 7776 [4] but offers the additional ability to split and recombine packets, thereby reducing latency, and potentially increase the effective maximum receive unit (MRU).  Furthermore, there is no requirement here for acknowledged-mode operation on the link layer, although that is optionally permitted.

Multilink is based on an LCP option negotiation that permits a system to indicate to its peer that it is capable of combining multiple physical links into a “bundle”.  Only under exceptional conditions would a given pair of systems require the operation of more than one bundle connecting them.

Multilink is negotiated during the initial LCP option negotiation.  A system indicates to its peer that it is willing to do multilink by sending the multilink option as part of the initial LCP option negotiation.  This negotiation indicates three things:

  1. The system offering the option is capable of combining multiple physical links into one logical link;
  2. The system is capable of receiving upper layer protocol data units (PDU) fragmented using the multilink header (described later) and reassembling the fragments back into the original PDU for processing;
  3. The system is capable of receiving PDUs of size N octets where N is specified as part of the option even if N is larger than the maximum receive unit (MRU) for a single physical link.

Once multilink has been successfully negotiated, the sending system is free to send PDUs encapsulated and/or fragmented with the multilink header.

  2. General Overview

In order to establish communications over a point-to-point link, each end of the PPP link must first send LCP packets to configure the data link during Link Establishment phase.  After the link has been established, PPP provides for an Authentication phase in which the authentication protocols can be used to determine identifiers associated with each system connected by the link.

The goal of the multilink operation is to coordinate multiple independent links between a fixed pair of systems, providing a virtual link with greater bandwidth than any of the constituent members.  The aggregate link, or bundle, is named by the pair of identifiers for two systems connected by the multiple links.  A system identifier may include information provided by PPP Authentication [3] and information provided by LCP negotiation.  The bundled links can be different physical links, as in multiple async lines, but may also be instances of multiplexed links, such as ISDN, X.25 or Frame Relay.  The links may also be of different kinds, such as pairing dialup async links with leased synchronous links.

We suggest that multilink operation can be modeled as a virtual PPP link-layer entity wherein packets received over different physical link-layer entities are identified as belonging to a separate PPP network protocol (the Multilink Protocol, or MP) and recombined and sequenced according to information present in a multilink fragmentation header.  All packets received over links identified as belonging to the multilink arrangement are presented to the same network-layer protocol processing machine, whether they have multilink headers or not.

The packets to be transmitted using the multilink procedure are encapsulated according to the rules for PPP where the following options would have been manually configured:

  • No async control character Map
  • No Magic Number
  • No Link Quality Monitoring
  • Address and Control Field Compression
  • Protocol Field Compression
  • No Compound Frames
  • No Self-Describing-Padding

Of course, individual links are permitted to have different settings for these options.  As described below, member links SHOULD negotiate Self-Describing-Padding, even though pre-fragmented packets MUST NOT be padded.

LCP negotiations are not permitted on the bundle itself.  An implementation MUST NOT transmit LCP Configure-Request, -Reject, -Ack, -Nak, Terminate-Request or -Ack packets via the multilink procedure, and an implementation receiving them MUST silently discard them.  (By “silently discard” we mean to not generate any PPP packets in response; an implementation is free to generate a log entry registering the reception of the unexpected packet.)  By contrast, other LCP packets having control functions not associated with changing the defaults for the bundle itself are permitted.  An implementation MAY transmit LCP Code-Reject, Protocol-Reject, Echo-Request, Echo-Reply and Discard-Request packets.

The effective MRU for the logical-link entity is negotiated via an LCP option.  It is irrelevant whether Network Control Protocol packets are encapsulated in multilink headers or not, or even over which link they are sent, once that link identifies itself as belonging to a multilink arrangement.

Note that network protocols that are not sent using multilink headers cannot be sequenced.  (And consequently will be delivered in any convenient way).

For example, consider the case in Figure 1.  Link 1 has negotiated network layers NL 1, NL 2, and MP between two systems.  The two systems then negotiate MP over Link 2.

Frames received on link 1 are demultiplexed at the data link layer according to the PPP network protocol identifier and can be sent to NL 1, NL 2, or MP.  Link 2 will accept frames with all network protocol identifiers that Link 1 does.

Frames received by MP are further demultiplexed at the network layer according to the PPP network protocol identifier and sent to NL 1 or NL 2.  Any frames received by MP for any other network layer protocols are rejected using the normal protocol reject mechanism.

Figure 1: Multilink Overview

  3. Packet Formats

In this section, we describe the layout of individual fragments, which are the “packets” in the Multilink Protocol.  Network Protocol packets are first encapsulated (but not framed) according to normal PPP procedures, and large packets are broken up into multiple segments sized appropriately for the multiple physical links.  A new PPP header consisting of the Multilink Protocol Identifier, and the Multilink header is inserted before each section.  (Thus the first fragment of a multilink packet in PPP will have two headers, one for the fragment, followed by the header for the packet itself).

Systems implementing the multilink procedure are not required to fragment small packets.  There is also no requirement that the segments be of equal sizes, or that packets must be broken up at all.  A possible strategy for contending with member links of differing transmission rates would be to divide the packets into segments in proportion to the transmission rates.  Another strategy might be to divide them into many equal fragments and distribute multiple fragments per link, the numbers being proportional to the relative speeds of the links.
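The first strategy — sizing fragments in proportion to each link's rate — can be sketched as follows.  This is an illustration only; a real sender would also cap each fragment at the member link's MRU and might choose a different strategy entirely:

```python
def split_proportional(packet, link_rates_bps):
    """Split an encapsulated PPP packet into one fragment per member link,
    sized in proportion to each link's transmission rate.

    Illustrative sketch; the last link absorbs the rounding remainder so
    no bytes are lost."""
    total = sum(link_rates_bps)
    fragments, offset = [], 0
    for i, rate in enumerate(link_rates_bps):
        if i == len(link_rates_bps) - 1:
            size = len(packet) - offset          # remainder to the last link
        else:
            size = len(packet) * rate // total   # proportional share
        fragments.append(packet[offset:offset + size])
        offset += size
    return fragments

# A 56 kbit/s link paired with a 28.8 kbit/s link: the faster link
# carries the larger share of a 1000-byte packet.
frags = split_proportional(b"x" * 1000, [56_000, 28_800])
assert b"".join(frags) == b"x" * 1000
assert len(frags[0]) > len(frags[1])
```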

PPP multilink fragments are encapsulated using the protocol identifier 0x00-0x3d.  Following the protocol identifier is a four-byte header containing a sequence number and two one-bit fields indicating whether the fragment begins a packet or terminates a packet.  After negotiation of an additional PPP LCP option, the four-byte header may optionally be replaced by a two-byte header with only a 12-bit sequence space.  Address & Control and Protocol ID compression are assumed to be in effect.  Individual fragments will, therefore, have the following format:

Long Sequence Number Fragment Format.

Short Sequence Number Fragment Format.

The (B)eginning fragment bit is a one-bit field set to 1 on the first fragment derived from a PPP packet and set to 0 for all other fragments from the same PPP packet.

The (E)nding fragment bit is a one-bit field set to 1 on the last fragment and set to 0 for all other fragments.  A fragment may have both the (B)eginning and (E)nding fragment bits set to 1.

The sequence field is a 24-bit or 12-bit number that is incremented for every fragment transmitted.  By default, the sequence field is 24 bits long, but it can be negotiated down to 12 bits with an LCP configuration option described below.

Between the (E)nding fragment bit and the sequence number is a reserved field, whose use is not currently defined, which MUST be set to zero.  It is 2 bits long when the use of short sequence numbers has been negotiated, 6 bits otherwise.
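The header layouts described above (B bit, E bit, reserved bits, then the sequence number, following the multilink protocol identifier) can be packed and unpacked with a few lines of bit manipulation.  A minimal sketch, covering only the multilink header itself and not the surrounding PPP framing:

```python
import struct

def pack_long(b, e, seq):
    """Long header: B bit, E bit, 6 reserved (zero) bits, 24-bit sequence."""
    assert 0 <= seq < 1 << 24
    return struct.pack(">I", (b << 31) | (e << 30) | seq)

def pack_short(b, e, seq):
    """Short header: B bit, E bit, 2 reserved (zero) bits, 12-bit sequence."""
    assert 0 <= seq < 1 << 12
    return struct.pack(">H", (b << 15) | (e << 14) | seq)

def unpack_long(hdr):
    """Return (beginning, ending, sequence) from a 4-byte long header."""
    (word,) = struct.unpack(">I", hdr)
    return bool(word >> 31), bool(word >> 30 & 1), word & 0xFFFFFF

# A null fragment carries both the (B) and (E) bits and no payload.
assert pack_long(True, True, 5) == b"\xc0\x00\x00\x05"
assert unpack_long(pack_long(True, False, 0x123456)) == (True, False, 0x123456)
assert pack_short(False, True, 0xABC) == b"\x4a\xbc"
```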

In this multilink protocol, a single reassembly structure is associated with the bundle.  The multilink headers are interpreted in the context of this structure.

The FCS field shown in the diagram is inherited from the normal framing mechanism from the member link on which the packet is transmitted.  There is no separate FCS applied to the reconstituted packet as a whole if transmitted in more than one fragment.

3.1.  Padding Considerations

Systems that support the multilink protocol SHOULD implement Self-Describing-Padding.  A system that implements self-describing-padding by definition will either include the padding option in its initial LCP Configure-Requests, or (to avoid the delay of a Configure-Reject) include the padding option after receiving a NAK containing the option.

A system that must pad its own transmissions but does not use Self-Describing-Padding when not using multilink, MAY continue to not use Self-Describing-Padding if it ensures by careful choice of fragment lengths that only (E)nding fragments of packets are padded.  A system MUST NOT add padding to any packet that cannot be recognized as padded by the peer.  Non-terminal fragments MUST NOT be padded with trailing material by any other method than Self-Describing-Padding.

A system MUST ensure that Self-Describing-Padding, as described in RFC 1570 [11], is negotiated on the individual link before transmitting any multilink data packets if it might pad non-terminal fragments or if it would use network or compression protocols that are vulnerable to padding, as described in RFC 1570.  If necessary, the system that adds padding MUST use LCP Configure-NAK’s to elicit a Configure-Request for Self-Describing-Padding from the peer.

Note that LCP Configure-Requests can be sent at any time on any link and that the peer will always respond with a Configure-Request of its own.  A system that pads its transmissions but uses no protocols other than multilink that are vulnerable to padding MAY delay ensuring that the peer has Configure-Requested Self-Describing-Padding until it seems desirable to negotiate the use of Multilink itself.  This permits the interoperability of a system that pads with older peers that support neither Multilink nor Self-Describing-Padding.

  4. Trading Buffer Space Against Fragment Loss

In a multilink procedure, one channel may be delayed with respect to the other channels in the bundle.  This can lead to fragments being received out of order, thus increasing the difficulty in detecting the loss of a fragment.  The task of estimating the amount of space required for buffering on the receiver becomes more complex because of this.  In this section, we discuss a technique for declaring that a fragment is lost, with the intent of minimizing the buffer space required, yet minimizing the number of avoidable packet losses.

4.1.  Detecting Fragment Loss

On each member link in a bundle, the sender MUST transmit fragments with strictly increasing sequence numbers (modulo the size of the sequence space).  This requirement supports a strategy for the receiver to detect lost fragments based on comparing sequence numbers.  The sequence number is not reset upon each new PPP packet, and a sequence number is consumed even for those fragments which contain an entire PPP packet, i.e., one in which both the (B)eginning and (E)nding bits are set.

An implementation MUST set the sequence number of the first fragment transmitted on a newly-constructed bundle to zero.  (Joining a secondary link to an existing bundle is invisible to the protocol, and an implementation MUST NOT reset the sequence number space in this situation).

The receiver keeps track of the incoming sequence numbers on each link in a bundle and maintains the current minimum of the most recently received sequence number over all the member links in the bundle (call this M).  The receiver detects the end of a packet when it receives a fragment bearing the (E)nding bit.  Reassembly of the packet is complete if all sequence numbers up to that fragment have been received.

A lost fragment is detected when M advances past the sequence number of a fragment bearing an (E)nding bit of a packet which has not been completely reassembled (i.e., not all the sequence numbers between the fragment bearing the (B)eginning bit and the fragment bearing the (E)nding bit have been received).  This is because of the increasing sequence number rule over the bundle.

An implementation MUST assume that if a fragment bears a (B)eginning bit, that the previously numbered fragment bore an (E)nding bit. Thus if a packet is lost bearing the (E)nding bit, and the packet whose fragment number is M contains a (B)eginning bit, the implementation MUST discard fragments for all unassembled packets through M-1, but SHOULD NOT discard the fragment bearing the new (B)eginning bit on this basis alone.

The detection of a lost fragment causes the receiver to discard all fragments up to M.  If the fragment with sequence number M has the (B)eginning bit set then the receiver starts reassembling the new packet, otherwise the receiver resynchronizes on the next fragment bearing the (B)eginning bit.  All fragments received while the receiver is attempting to resynchronize not bearing the (B)eginning bit SHOULD be discarded.
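The detection rule above — a loss is declared when M, the minimum over all member links of the most recently received sequence number, advances past the (E)nding fragment of a packet that is not fully assembled — can be sketched as follows.  This is a simplified illustration that ignores sequence-number wraparound:

```python
def min_received(last_seq_per_link):
    """M: minimum, over all member links, of the most recently received
    sequence number (simplified: no modulo wraparound handling)."""
    return min(last_seq_per_link.values())

def fragment_lost(begin_seq, end_seq, received, m):
    """A loss is declared when M has advanced past the (E)nding fragment
    of a packet whose fragments are not all present, which the strictly
    increasing per-link sequence rule makes safe to conclude."""
    complete = all(s in received for s in range(begin_seq, end_seq + 1))
    return m > end_seq and not complete

# Three member links; link "c" lags, so M = 7.
m = min_received({"a": 9, "b": 8, "c": 7})
# A packet spanned fragments 3..5, but fragment 4 never arrived: since
# M has advanced past 5, fragment 4 can no longer appear -> declared lost.
assert fragment_lost(begin_seq=3, end_seq=5, received={3, 5}, m=m)
assert not fragment_lost(begin_seq=3, end_seq=5, received={3, 4, 5}, m=m)
```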

Fragments may be lost due to a corruption of individual packets or catastrophic loss of the link (which may occur only in one direction).  This version of the multilink protocol mandates no specific procedures for the detection of failed links.  The PPP link quality management facility or the periodic issuance of LCP echo-requests could be used to achieve this.

Senders SHOULD avoid keeping any member links idle to maximize early detection of lost fragments by the receiver since the value of M is not incremented on idle links.  Senders SHOULD rotate traffic among the member links if there isn’t sufficient traffic to overflow the capacity of one link to avoid idle links.

Loss of the final fragment of a transmission can cause the receiver to stall until new packets arrive.  The likelihood of this may be decreased by sending a null fragment on each member link in a bundle that would otherwise become idle immediately after having transmitted a fragment bearing the (E)nding bit, where a null fragment is one consisting only of a multilink header bearing both the (B)egin and (E)nding bits (i.e., having no payload).  Implementations concerned about either wasting bandwidth or per packet costs are not required to send null fragments and may elect to defer sending them until a timer expires, with the marginally increased possibility of lengthier stalls in the receiver.  The receiver SHOULD implement some type of link idle timer to guard against indefinite stalls.

The increasing sequence per link rule prohibits the reallocation of fragments queued up behind a failing link to a working one, a practice which is not unusual for implementations of ISO multilink over LAPB [4].

4.2.  Buffer Space Requirements

There is no amount of buffering that will guarantee correct detection of fragment loss, since an adversarial peer may withhold a fragment on one channel and send arbitrary amounts on the others.  For the usual case where all channels are transmitting, one can show that there is a minimum amount below which packet loss could not be correctly detected.  The amount depends on the relative delay between the channels (D[channel-i,channel-j]), the data rate of each channel, R[c], the maximum fragment size permitted on each channel, F[c], and the total amount of buffering the transmitter has allocated amongst the channels.

When using PPP, the delay between channels could be estimated by using LCP echo request and echo reply packets.  (In the case of links of different transmission rates, the round trip times should be adjusted to take this into account.)  The slippage for each channel is defined as the bandwidth times the delay for that channel relative to the channel with the longest delay, S[c] = R[c] * D[c,c-worst]. (S[c-worst] will be zero, of course!)

A situation which would exacerbate sequence number skew would be one in which traffic is extremely bursty (almost allowing all channels to drain), and in which the transmitter would first queue up as many consecutively numbered packets on one link as it could, then queue up the next batch on a second link, and so on.  Since transmitters must be able to buffer at least a maximum-sized fragment for each link (and will usually buffer at least two), a receiver that allocates any less than S[1] + S[2] + … + S[N] + F[1] + … + F[N] will be at risk of incorrectly assuming packet loss, and therefore SHOULD allocate at least twice that.
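The bound can be computed directly from the quantities defined above: S[c] = R[c] × D[c, c-worst] (zero for the channel with the longest delay) plus each channel's maximum fragment size.  A sketch with assumed example numbers:

```python
def min_receive_buffer_bytes(rates_bps, delays_s, max_frag_bytes):
    """Lower bound on receiver buffering for a bundle, per the text:
    sum of per-link slippage S[c] = R[c] * D[c, c-worst] plus the maximum
    fragment size F[c] of every link.  The text recommends allocating at
    least twice this value."""
    worst = max(delays_s)  # channel with the longest delay has S = 0
    slippage_bits = sum(r * (worst - d) for r, d in zip(rates_bps, delays_s))
    return slippage_bits / 8 + sum(max_frag_bytes)

# Assumed example: a 56 kbit/s link arriving 40 ms ahead of a
# 28.8 kbit/s link, 1500-byte maximum fragments on each.
need = min_receive_buffer_bytes([56_000, 28_800], [0.010, 0.050], [1500, 1500])
assert round(need) == 3280   # 280 bytes of slippage + 3000 bytes of fragments
```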

  5. PPP Link Control Protocol Extensions

If a reliable multilink operation is desired, PPP Reliable Transmission [6] (essentially the use of ISO LAPB) MUST be negotiated prior to the use of the Multilink Protocol on each member link.

Whether or not reliable delivery is employed over member links, an implementation MUST present a signal to the NCP’s running over the multilink arrangement that a loss has occurred.

Compression may be used separately on each member link, or run over the bundle (as a logical group link).  The use of multiple compression streams under the bundle (i.e., on each link separately) is indicated by running the Compression Control Protocol [5] but with an alternative PPP protocol ID.

5.1.  Configuration Option Types

The Multilink Protocol introduces the use of additional LCP Configuration Options:

  • Multilink Maximum Received Reconstructed Unit
  • Multilink Short Sequence Number Header Format
  • Endpoint Discriminator

5.1.1.  Multilink MRRU LCP option



The presence of this option indicates that the system sending it implements the PPP Multilink Protocol, and unless rejected, will construe all packets received on this link as being able to be processed by a common protocol machine with any other packets received from the same peer on any other link on which this option has been accepted.  A system MUST NOT accept the Multilink MRRU LCP Option if it is not willing to symmetrically have the packets it sends interpreted in the same fashion.

This option also advises the peer that the implementation will be able to reconstruct a PPP packet whose payload contains up to Max-Receive-Reconstructed-Unit bytes.

A system MAY indicate the desire to conduct multilink operation solely by use of the Multilink Short Sequence Number Header Format LCP option (discussed next); the default value for MRRU option is 1600 bytes if not otherwise explicitly negotiated.

Note: this option corresponds to what would have been the MRU of the bundle when conceptualized as a PPP-like entity.

5.1.2.  Short Sequence Number Header Format Option


This option advises the peer that the implementation wishes to receive fragments with short, 12-bit sequence numbers.  By default, sequence numbers are 24 bits long.  When this option is received, an implementation MUST either transmit all subsequent multilink packets on all links of the bundle with 12-bit sequence numbers or Configure-Nak or Configure-Reject the option.

An implementation wishing to transmit multilink fragments with short sequence numbers MAY include the Multilink Short Sequence Number Header Format option in a Configure-Nak, to ask that the peer respond with a request to receive short sequence numbers.  The peer is not compelled to respond with the option.
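To make the two header formats concrete, here is a hedged Python sketch of RFC 1990 fragment header packing (B = beginning-of-packet bit, E = end-of-packet bit; the helper name is mine, not from the specification):

```python
def mp_header(seq, first, last, short=False):
    """Build a PPP Multilink fragment header (sketch).

    Long format (4 octets): B,E flag bits + 6 zero bits, then a 24-bit
    sequence number.  Short format (2 octets): B,E flag bits + 2 zero
    bits, then a 12-bit sequence number.
    """
    flags = (0x80 if first else 0) | (0x40 if last else 0)
    if short:
        seq &= 0xFFF  # 12-bit sequence number
        return bytes([flags | (seq >> 8), seq & 0xFF])
    seq &= 0xFFFFFF   # 24-bit sequence number
    return bytes([flags, (seq >> 16) & 0xFF, (seq >> 8) & 0xFF, seq & 0xFF])
```

The short format halves the per-fragment sequencing overhead at the cost of a much smaller window before wraparound.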

5.1.3.  Endpoint Discriminator Option


The Endpoint Discriminator Option represents the identification of the system transmitting the packet.  This option advises a system that the peer on this link could be the same as the peer on another existing link.  If the option distinguishes this peer from all others, a new bundle MUST be established from the link being negotiated.  If this option matches the class and address of some other peer of an existing link, the new link MUST be joined to the bundle containing the link to the matching peer or MUST establish a new bundle, depending on the decision tree shown in (1) through (4) below.

To securely join an existing bundle, a PPP authentication protocol [3] must be used to obtain authenticated information from the peer to prevent a hostile peer from joining an existing bundle by presenting a falsified discriminator option.

This option is not required for multilink operation.  If a system does not receive either of the Multilink MRRU or Short Sequence options but does receive the Endpoint Discriminator Option, and there is no manual configuration providing outside information, the implementation MUST NOT assume that multilink operation is being requested on this basis alone.

As there is also no requirement for authentication, there are four sets of scenarios:

  1. No authentication, no discriminator: All new links MUST be joined to one bundle.
  2. Discriminator, no authentication: Discriminator match -> MUST join the matching bundle, discriminator mismatch -> MUST establish new bundle.
  3. No discriminator, authentication: Authenticated match -> MUST join the matching bundle, authenticated mismatch -> MUST establish new bundle.
  4. Discriminator, authentication: Discriminator match and authenticated match -> MUST join the bundle, discriminator mismatch -> MUST establish new bundle, authenticated mismatch -> MUST establish new bundle.
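The four scenarios can be folded into a single lookup keyed on whatever identification is available. The following Python sketch uses an in-memory dict as a stand-in for an implementation's bundle table; all names here are hypothetical:

```python
def select_bundle(disc, auth, bundles):
    """Sketch of the bundle-join decision tree above.

    disc    : Endpoint Discriminator value, or None if not supplied
    auth    : authenticated peer name, or None if no authentication
    bundles : dict mapping (disc, auth) -> bundle name (hypothetical store)
    """
    key = (disc, auth)
    if key in bundles:                      # match on all available identifiers
        return bundles[key]                 # -> MUST join the matching bundle
    new_bundle = "bundle-%d" % len(bundles)
    bundles[key] = new_bundle               # any mismatch -> MUST establish a new bundle
    return new_bundle
```

With neither discriminator nor authentication, every link maps to the key (None, None), so all links join one bundle, matching scenario 1.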

The option contains a Class which selects an identifier address space and an Address which selects a unique identifier within the class address space.

This identifier is expected to refer to the mechanical equipment associated with the transmitting system.  For some classes, the uniqueness of the identifier is global and is not bounded by the scope of a particular administrative domain.  Within each class, the uniqueness of address values is controlled by a class dependent policy for assigning values.

Each endpoint may choose an identifier class without restriction. Since the objective is to detect mismatches between endpoints erroneously assumed to be alike, mismatch on class alone is sufficient.  Although no one class is recommended, classes which have universally unique values are preferred.

This option is not required to be supported either by the system or the peer.  If the option is not present in a Configure-Request, the system MUST NOT generate a Configure-Nak of this option; instead, it SHOULD behave as if it had received the option with Class = 0, Address = 0.  If a system receives a Configure-Nak or Configure-Reject of this option, it MUST remove it from any additional Configure-Request.

The size is determined from the Length field of the element.  For some classes, the length is fixed, for others the length is variable. The option is invalid if the Length field indicates a size below the minimum for the class.

An implementation MAY use the Endpoint Discriminator to locate administration or authentication records in a local database.  Such use of this option is incidental to its purpose and is deprecated when a PPP Authentication protocol [3] can be used instead.  Since some classes permit the peer to generate random or locally assigned address values, use of this option as a database key requires prior agreement between peer administrators.

The specification of the subfields is:

Type

     19 (Endpoint Discriminator)

Length

     3 + length of Address

Class

     The Class field is one octet and indicates the identifier address space.  The most up-to-date values of the LCP Endpoint Discriminator Class field are specified in the most recent “Assigned Numbers” RFC [7].  Current values are assigned as follows:

        0    Null Class

        1    Locally Assigned Address

        2    Internet Protocol (IP) Address

        3    IEEE 802.1 Globally Assigned MAC Address

        4    PPP Magic-Number Block

        5    Public Switched Network Directory Number


The Address field is zero or more octets and indicates the identifier address within the selected class. The length and content depend on the value of the Class as follows:

Class 0 – Null Class

Maximum Length: 0


This class is the default if the option is not present in a received Configure-Request.

Class 1 – Locally Assigned Address

Maximum Length: 20


This class is defined to permit a local assignment in the case where the use of one of the globally unique classes is not possible.  Use of a device serial number is suggested.  The use of this class is deprecated since uniqueness is not guaranteed.

Class 2 – Internet Protocol (IP) Address

Fixed Length: 4


An address in this class contains an IP host address as defined in [8].

Class 3 – IEEE 802.1 Globally Assigned MAC Address

Fixed Length: 6


An address in this class contains an IEEE 802.1 MAC address in canonical (802.3) format [9].  The address MUST have the global/local assignment bit clear and MUST have the multicast/specific bit clear.  Locally assigned MAC addresses should be represented using Class 1.

Class 4 – PPP Magic-Number Block

Maximum Length: 20


This is not an address but a block of 1 to 5 concatenated 32 bit PPP Magic-Numbers as defined in [2].  This class provides for automatic generation of a value likely but not guaranteed to be unique.  The same block MUST be used by an endpoint continuously during any period in which at least one link is in the LCP Open state.  The use of this class is deprecated.

Note that PPP Magic-Numbers are used in [2] to detect unexpected loopbacks of a link from an endpoint to itself. There is a small probability that two distinct endpoints will generate matching magic-numbers.  This probability is geometrically reduced when the LCP negotiation is repeated in search of the desired mismatch if a peer can generate uncorrelated magic-numbers.

As used here, magic-numbers are used to determine if two links are in fact from the same peer endpoint or from two distinct endpoints.  The numbers always match when there is one endpoint.  There is a small probability that the numbers will match even if there are two endpoints.  To achieve the same confidence that there is not a false match as for LCP loopback detection, several uncorrelated magic-numbers can be combined in one block.

Class 5 – Public Switched Network Directory Number

Maximum Length: 15


An address in this class contains an octet sequence as defined by I.331 (E.164) representing an international telephone directory number suitable for use to access the endpoint via the public switched telephone network [10].
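Putting the subfields together, a hedged encoder for the option might look like the following; the per-class length table restates the limits given above, and the function name is mine:

```python
# (min, max) Address lengths per Class, from the definitions above.
CLASS_LENGTH = {
    0: (0, 0),    # Null Class
    1: (0, 20),   # Locally Assigned Address
    2: (4, 4),    # Internet Protocol (IP) Address
    3: (6, 6),    # IEEE 802.1 Globally Assigned MAC Address
    4: (4, 20),   # PPP Magic-Number Block (1 to 5 four-byte numbers)
    5: (0, 15),   # Public Switched Network Directory Number
}

def encode_endpoint_discriminator(cls, address=b""):
    """Encode the option as Type (19), Length (3 + len(Address)), Class, Address."""
    lo, hi = CLASS_LENGTH[cls]
    if not lo <= len(address) <= hi:
        raise ValueError("Address length %d invalid for Class %d" % (len(address), cls))
    return bytes([19, 3 + len(address), cls]) + bytes(address)
```

For instance, a Class 2 discriminator carries a 4-byte IP address, giving a 7-octet option.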

6.  Closing Member links

Member links may be terminated according to normal PPP LCP procedures using LCP terminate-request and terminate-ack packets on that member link.  Since it is assumed that member links usually do not reorder packets, receipt of a terminate-ack is sufficient to assume that any multilink protocol packets ahead of it are at no special risk of loss.

Receipt of an LCP terminate-request on one link does not conclude the procedure on the remaining links.

So long as any member links in the bundle are active, the PPP state for the bundle persists as a separate entity.

If the multilink procedure is used in conjunction with PPP reliable transmission, and a member link is not closed gracefully, the implementation should expect to receive packets which violate the increasing sequence number rule.

7.  Interaction with Other Protocols

In the common case, LCP and the Authentication Control Protocol would be negotiated over each member link.  The Network Protocols themselves and associated control exchanges would normally have been conducted once, on the bundle.

In some instances, it may be desirable for some Network Protocols to be exempted from sequencing requirements, and if the MRU sizes of the link did not cause fragmentation, those protocols could be sent directly over the member links.

Although explicitly discouraged above, if there were several member links connecting two implementations, and independent sequencing of two protocol sets was desired, but the blocking of one by the other was not, one could describe two multilink procedures by assigning multiple endpoint identifiers to a given system.  Each member link, however, would only belong to one bundle.  One could think of a physical router as housing two logically separate implementations, each of which is independently configured.

A simpler solution would be to have one link refuse to join the bundle, by sending a Configure-Reject in response to the Multilink LCP option.

8.  References
  1. Leifer, D., Sheldon, S., and B. Gorsline, “A Subnetwork Control Protocol for ISDN Circuit-Switching”, University of Michigan (unpublished), March 1991.
  2. Simpson, W., Editor, “The Point-to-Point Protocol (PPP)”, STD 51, RFC 1661, Daydreamer, July 1994.
  3. Lloyd, B., and W. Simpson, “PPP Authentication Protocols”, RFC 1334, Lloyd Internetworking, Daydreamer, October 1992.
  4. International Organisation for Standardization, “HDLC – Description of the X.25 LAPB-Compatible DTE Data Link Procedures”, International Standard 7776, 1988.
  5. Rand, D., “The PPP Compression Control Protocol (CCP)”, PPP Extensions Working Group, Work in Progress.
  6. Rand, D., “PPP Reliable Transmission”, PPP Extensions Working Group, Work in Progress.
  7. Reynolds, J., and J. Postel, “Assigned Numbers”, STD 2, RFC 1700, USC/Information Sciences Institute, October 1994.
  8. Postel, J., Editor, “Internet Protocol – DARPA Internet Program Protocol Specification”, STD 5, RFC 791, USC/Information Sciences Institute, September 1981.
  9. Institute of Electrical and Electronics Engineers, Inc., “IEEE Local and Metropolitan Area Networks: Overview and Architecture”, IEEE Std. 802-1990, 1990.
  10. The International Telegraph and Telephone Consultative Committee (CCITT), “Numbering Plan for the ISDN Area”, Recommendation I.331 (E.164), 1988.
  11. Simpson, W., Editor, “PPP LCP Extensions”, RFC 1570, Daydreamer, January 1994.


The post PPP Multilink Protocol appeared first on Telecom And Networking.

VLAN Basics - Fri, 01 Sep 2017 09:48:00 +0000



VLAN Basics

A VLAN is an administratively configured LAN or broadcast domain. Instead of going to the wiring closet to move a cable to a different LAN, network administrators can accomplish this task remotely by configuring a port on an 802.1Q compliant switch to belong to a different VLAN. The ability to move end stations to different broadcast domains by setting membership profiles for each port on centrally managed switches is one of the main advantages of 802.1Q VLANs. The IEEE’s 802.1Q standard was developed to address the problem of how to break large networks into smaller parts so broadcast and multicast traffic wouldn’t grab more bandwidth than necessary. The standard also helps provide a higher level of security between segments of internal networks.


A VLAN acts like an ordinary LAN, but connected devices don’t have to be attached to the same physical segment.  While clients and servers may be located anywhere on a network, they are grouped together by VLAN technology, and broadcasts are sent only to devices within the VLAN.

The switch acts as an intelligent traffic forwarder and a simple network security device. Frames get sent only to the ports where the destination device is attached. Broadcast and multicast frames are constrained by VLAN boundaries so only stations whose ports are members of the same VLAN see those frames. This way, bandwidth is optimized and network security is enhanced.


802.1Q VLANs aren’t limited to one switch.  VLANs can span many switches, even across WAN links.  Sharing VLANs between switches is achieved by inserting a tag carrying a VLAN identifier (VID) between 1 and 4,094 into each frame.  A VID must be assigned to each VLAN.  By assigning the same VID to VLANs on many switches, one or more VLANs (broadcast domains) can be extended across a large network.


The secret to performing this magic is in the tags.  802.1Q-compliant switch ports can be configured to transmit tagged or untagged frames.  A tag field containing VLAN (and/or 802.1p priority) information can be inserted into an Ethernet frame.  If a port has an 802.1Q-compliant device attached (such as another switch), these tagged frames can carry VLAN membership information between switches, thus letting a VLAN span multiple switches.


There is one important caveat: network administrators must ensure that ports with non-802.1Q-compliant devices attached are configured to transmit untagged frames.  Many network interface cards for PCs and printers are not 802.1Q compliant.  If they receive a tagged frame, they will not understand the VLAN tag and will drop the frame.  Also, the maximum legal Ethernet frame size for tagged frames was increased in 802.1Q (and its companion, 802.3ac) from 1,518 to 1,522 bytes.  This could cause network interface cards and older switches to drop tagged frames as “oversized.”
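As a sketch of the tagging mechanics (not any particular switch's implementation), inserting an 802.1Q tag means adding four bytes, the TPID 0x8100 plus priority and VID, after the source MAC address; the helper name is hypothetical:

```python
import struct

def add_dot1q_tag(frame, vid, pcp=0):
    """Insert an 802.1Q tag after the 12 bytes of destination + source MAC.

    TPID 0x8100 identifies the tag; the TCI packs the 3-bit priority (PCP),
    a drop-eligible (DEI) bit (left 0 here), and the 12-bit VID.
    """
    if not 1 <= vid <= 4094:
        raise ValueError("VID must be between 1 and 4094")
    tci = (pcp << 13) | vid
    return frame[:12] + struct.pack("!HH", 0x8100, tci) + frame[12:]
```

This is also why the maximum frame size grows from 1,518 to 1,522 bytes: the tag adds exactly four octets.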


In the case of a network with an ATM WAN, Ethernet switches with ATM uplinks can have a VLAN to emulated LAN (ELAN) mapping feature that matches 802.1Q VIDs to ATM ELAN names. This lets the benefits of VLAN bandwidth optimization and security be extended between campus buildings or even between remote sites.


Advantages of VLAN

VLANs provide the following advantages:

  • Solve broadcast problem
  • Reduce the size of broadcast domains
  • Allow us to add an additional layer of security
  • Make device management easier
  • Allow us to implement logical grouping of devices by function instead of location

Solve broadcast problem

When we connect devices to switch ports, the switch creates a separate collision domain for each port and a single broadcast domain for all ports, forwarding a broadcast frame out of every other port.  In a large network with hundreds of computers, this can create a performance problem.  We could use routers to contain broadcasts, but that would be a costly solution, since each broadcast domain would require its own router port.  The switch's own solution to the broadcast problem is the VLAN, and in practice we use VLANs rather than routers for this purpose.

Each VLAN is a separate broadcast domain, and logically each VLAN is usually also its own subnet.  Each VLAN requires a unique number known as the VLAN ID.  Devices with the same VLAN ID are members of the same broadcast domain and receive all of its broadcasts, while those broadcasts are filtered from every switch port that isn't a member of the same VLAN.

Reduce the size of broadcast domains

VLANs increase the number of broadcast domains while reducing their size.  For example, consider a network of 100 devices.  Without any VLANs, we have a single broadcast domain containing 100 devices.  If we create 2 VLANs and assign 50 devices to each, we get two broadcast domains of fifty devices each.  Thus more VLANs mean more broadcast domains with fewer devices in each.

Allow us to add an additional layer of security

VLANs enhance network security.  In a typical layer 2 network, all users can see all devices by default.  Any user can see network broadcasts and respond to them, and can access any resource located on that network.  Users could join a workgroup just by attaching their system to an existing switch, which is a real security problem.  Properly configured VLANs give us control over each port and user.  With VLANs, you can prevent users from gaining unwanted access to resources: we can put a group of users that needs high-level security into its own VLAN, so that users outside the VLAN can’t communicate with them.

Make device management easier

Device management is easier with VLANs.  Since VLANs are a logical construct, a device can be located anywhere in the switched network and still belong to the same broadcast domain.  We can move a user from one switch to another in the same network while keeping the user's original VLAN.

For example, suppose our company has a five-story building and a single layer 2 network.  VLANs allow us to move users from one floor to another while keeping their original VLAN ID.  The only limitation is that a moved device must still be connected to the same layer 2 network.

Allow us to implement logical grouping of devices by function instead of location

VLANs allow us to group users by their function instead of their geographic location.  Switches maintain the integrity of your VLANs: users see only what they are supposed to see, regardless of their physical location.


The post VLAN Basics appeared first on Telecom And Networking.

IP Version 6 over PPP - Fri, 01 Sep 2017 03:53:00 +0000



IP Version 6 over PPP

The Point-to-Point Protocol (PPP) provides a standard method of encapsulating Network Layer protocol information over point-to-point links.  PPP also defines an extensible Link Control Protocol and proposes a family of Network Control Protocols (NCPs) for establishing and configuring different network-layer protocols.
This document defines the method for transmitting IP Version 6 packets over PPP links, as well as the Network Control Protocol (NCP) for establishing and configuring IPv6 over PPP.  It also specifies the method of forming IPv6 link-local addresses on PPP links.

1.  Introduction

PPP has three main components:

  1. A method for encapsulating datagrams over serial links.
  2. Link Control Protocol (LCP) for establishing, configuring, and testing the data-link connection.
  3. A family of Network Control Protocols (NCPs) for establishing and configuring different network-layer protocols.

In order to establish communications over a point-to-point link, each end of the PPP link must first send LCP packets to configure and test the data link.  After the link has been established and optional facilities have been negotiated as needed by the LCP, PPP must send NCP packets to choose and configure one or more network-layer protocols.  Once each of the chosen network-layer protocols has been configured, datagrams from each network-layer protocol can be sent over the link.

In this document, the NCP for establishing and configuring IPv6 over PPP is referred to as the IPv6 Control Protocol (IPV6CP).

The link will remain configured for communications until explicit LCP or NCP packets close the link down, or until some external event occurs (power failure at the other end, carrier drop, etc.).

2.  Sending IPv6 Datagrams

Before any IPv6 packets may be communicated, PPP MUST reach the Network-Layer Protocol phase, and the IPv6 Control Protocol MUST reach the Opened state.

Exactly one IPv6 packet is encapsulated in the Information field of PPP Data Link Layer frames where the Protocol field indicates type hex 0057 (Internet Protocol Version 6).

The maximum length of an IPv6 packet transmitted over a PPP link is the same as the maximum length of the Information field of a PPP data link layer frame.  PPP links supporting IPv6 MUST allow an Information field at least as large as the minimum link MTU required for IPv6 (1280 octets).
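A minimal sketch of the encapsulation, showing only the protocol field and payload (the HDLC address/control bytes and the FCS are deliberately omitted; the function name is mine):

```python
import struct

def ppp_encapsulate_ipv6(ipv6_packet):
    """Prefix one IPv6 packet with the PPP Protocol field 0x0057."""
    return struct.pack("!H", 0x0057) + ipv6_packet
```

Exactly one IPv6 packet goes in each frame, so the receiver can hand the Information field directly to the IPv6 module once it sees protocol 0x0057.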

3.  A PPP Network Control Protocol for IPv6

The IPv6 Control Protocol (IPV6CP) is responsible for configuring, enabling, and disabling the IPv6 protocol modules on both ends of the point-to-point link.  IPV6CP uses the same packet exchange mechanism as the Link Control Protocol (LCP).  IPV6CP packets may not be exchanged until PPP has reached the Network-Layer Protocol phase. IPV6CP packets received before this phase is reached should be silently discarded.

The IPv6 Control Protocol is exactly the same as the Link Control Protocol with the following exceptions:

Data Link Layer Protocol Field

Exactly one IPV6CP packet is encapsulated in the Information field of PPP Data Link Layer frames where the Protocol field indicates type hex 8057 (IPv6 Control Protocol).

Code field

Only Codes 1 through 7 (Configure-Request, Configure-Ack, Configure-Nak, Configure-Reject, Terminate-Request, Terminate-Ack, and Code-Reject) are used.  Other Codes should be treated as unrecognized and should result in Code-Rejects.


IPV6CP packets may not be exchanged until PPP has reached the Network-Layer Protocol phase.  An implementation should be prepared to wait for Authentication and Link Quality Determination to finish before timing out waiting for a Configure-Ack or other response.  It is suggested that an implementation give up only after user intervention or a configurable amount of time.

Configuration Option Types

IPV6CP has a distinct set of Configuration Options.

4.  IPV6CP Configuration Options

IPV6CP Configuration Options allow negotiation of desirable IPv6 parameters.  IPV6CP uses the same Configuration Option format defined for LCP, with a separate set of Options.  If a Configuration Option is not included in a Configure-Request packet, the default value for that Configuration Option is assumed.

Up-to-date values of the IPV6CP Option Type field are specified in the most recent “Assigned Numbers” RFC.  Current values are assigned as follows:

  1. Interface-Identifier
  2. IPv6-Compression-Protocol

The only IPV6CP options defined in this document are Interface-Identifier and IPv6-Compression-Protocol.  Any other IPV6CP configuration options that can be defined over time are to be defined in separate documents.

4.1.  Interface-Identifier


This Configuration Option provides a way to negotiate a unique 64-bit interface identifier to be used for address autoconfiguration at the local end of the link. A Configure-Request MUST contain exactly one instance of the Interface-Identifier option.  The interface identifier MUST be unique within the PPP link; i.e., upon completion of the negotiation, different Interface-Identifier values are to be selected for the ends of the PPP link.  The interface identifier MAY also be unique over a broader scope.

Before this Configuration Option is requested, an implementation chooses its tentative Interface-Identifier. The non-zero value of the tentative Interface-Identifier SHOULD be chosen such that the value is both unique to the link and, if possible, consistently reproducible across initializations of the IPV6CP finite state machine (administrative Close and reOpen, reboots, etc).  The rationale for preferring a consistently reproducible unique interface identifier to a completely random interface identifier is to provide stability to global scope addresses that can be formed from the interface identifier.

Assuming that interface identifier bits are numbered from 0 to 63 in canonical bit order, where the most significant bit is bit number 0, bit number 6 is the “u” bit (universal/local bit in IEEE EUI-64 terminology), which indicates whether or not the interface identifier is based on a globally unique IEEE identifier (EUI-48 or EUI-64) (see case 1 below).  It is set to one (1) if a globally unique IEEE identifier is used to derive the interface identifier, and it is set to zero (0) otherwise.

The following are methods for choosing the tentative Interface Identifier in the preference order:

1) If an IEEE global identifier (EUI-48 or EUI-64) is available anywhere on the node, it should be used to construct the tentative Interface-Identifier due to its uniqueness properties.  When extracting an IEEE global identifier from another device on the node, care should be taken that the extracted identifier is presented in canonical ordering.

The only transformation from an EUI-64 identifier is to invert the “u” bit (universal/local bit in IEEE EUI-64 terminology).

In the case of an EUI-48 identifier, it is first converted to the EUI-64 format by inserting two bytes, with hexadecimal values of 0xFF and 0xFE, in the middle of the 48 bit MAC (between the company_id and extension-identifier portions of the EUI-48 value).

2) If an IEEE global identifier is not available a different source of uniqueness should be used.  Suggested sources of uniqueness include link-layer addresses, machine serial numbers, et cetera.

In this case, the “u” bit of the interface identifier MUST be set to zero (0).

3) If a good source of uniqueness cannot be found, it is recommended that a random number be generated.  In this case, the “u” bit of the interface identifier MUST be set to zero (0).
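For the EUI-48 case, methods 1 through 3 come down to inserting 0xFFFE and inverting the “u” bit; a hedged Python sketch (the function name is mine):

```python
def iid_from_eui48(mac):
    """Derive a 64-bit tentative Interface-Identifier from an EUI-48 (MAC).

    The EUI-48 is first expanded to EUI-64 by inserting 0xFF 0xFE between
    the company_id and the extension identifier, then the "u"
    (universal/local) bit is inverted.
    """
    if len(mac) != 6:
        raise ValueError("EUI-48 must be 6 bytes")
    eui64 = bytearray(mac[:3] + b"\xff\xfe" + mac[3:])
    eui64[0] ^= 0x02  # invert the universal/local bit
    return bytes(eui64)
```

For a globally unique MAC, the inversion leaves the “u” bit set to one, matching the rule stated above.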

Good sources of uniqueness or randomness are required for the Interface-Identifier negotiation to succeed.  If neither a unique number nor a random number can be generated, it is recommended that a zero value be used for the Interface-Identifier transmitted in the Configure-Request.  In this case, the PPP peer may provide a valid non-zero Interface-Identifier in its response as described below.

Note that if at least one of the PPP peers is able to generate separate non-zero numbers for itself and its peer, the identifier negotiation will succeed.

When a Configure-Request is received with the Interface-Identifier Configuration Option and the receiving peer implements this option, the received Interface-Identifier is compared with the Interface-Identifier of the last Configure-Request sent to the peer. Depending on the result of the comparison an implementation MUST respond in one of the following ways:

If the two Interface-Identifiers are different but the received Interface-Identifier is zero, a Configure-Nak is sent with a non-zero Interface-Identifier value suggested for use by the remote peer.  Such a suggested Interface-Identifier MUST be different from the Interface-Identifier of the last Configure-Request sent to the peer.  It is recommended that the value suggested be consistently reproducible across initializations of the IPV6CP finite state machine (administrative Close and reOpen, reboots, etc). The “u” (universal/local) bit of the suggested identifier MUST be set to zero (0) regardless of its source unless the globally unique EUI-48/EUI-64 derived identifier is provided for the exclusive use by the remote peer.

If the two Interface-Identifiers are different and the received Interface-Identifier is not zero, the Interface-Identifier MUST be acknowledged, i.e.  a Configure-Ack is sent with the requested Interface-Identifier, meaning that the responding peer agrees with the Interface-Identifier requested.

If the two Interface-Identifiers are equal and are not zero, a Configure-Nak MUST be sent specifying a different non-zero Interface-Identifier value suggested for use by the remote peer. It is recommended that the value suggested be consistently reproducible across initializations of the IPV6CP finite state machine (administrative Close and reOpen, reboots, etc).  The “u” (universal/local) bit of the suggested identifier MUST be set to zero (0) regardless of its source unless the globally unique EUI-48/EUI-64 derived identifier is provided for the exclusive use by the remote peer.

If the two Interface-Identifiers are equal to zero, the Interface-Identifiers negotiation MUST be terminated by transmitting the Configure-Reject with the Interface-Identifier value set to zero. In this case, a unique Interface-Identifier can not be negotiated.
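The comparison rules above reduce to a small decision function. This sketch returns only the response type; choosing the actual value to suggest in a Configure-Nak is left out, and the function name is hypothetical:

```python
def iid_response(received, last_sent):
    """Decide the IPV6CP reply to a received Interface-Identifier option.

    received  : identifier from the peer's Configure-Request (integer, 0 = none)
    last_sent : identifier from our last Configure-Request
    """
    if received != last_sent:
        if received == 0:
            return "Configure-Nak"     # suggest a non-zero value for the peer
        return "Configure-Ack"         # different and non-zero: accept it
    if received == 0:
        return "Configure-Reject"      # both zero: negotiation cannot succeed
    return "Configure-Nak"             # equal and non-zero: suggest a different value
```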

If a Configure-Request is received with the Interface-Identifier Configuration Option and the receiving peer does not implement this option, a Configure-Reject is sent. A new Configure-Request SHOULD NOT be sent to the peer until normal processing would cause it to be sent (that is, until a Configure-Nak is received or the Restart timer runs out). A new Configure-Request MUST NOT contain the Interface-Identifier option if a valid Interface-Identifier Configure-Reject is received.

Reception of a Configure-Nak with a suggested Interface-Identifier different from that of the last Configure-Nak sent to the peer indicates a unique Interface-Identifier.  In this case, a new Configure-Request MUST be sent with the identifier value suggested in the last Configure-Nak from the peer.  But if the received Interface-Identifier is equal to the one sent in the last Configure-Nak, a new Interface-Identifier MUST be chosen.  In this case, a new Configure-Request SHOULD be sent with the new tentative Interface-Identifier.  This sequence (transmit Configure-Request, receive Configure-Request, transmit Configure-Nak, receive Configure-Nak) might occur a few times, but it is extremely unlikely to occur repeatedly.  More likely, the Interface-Identifiers chosen at either end will quickly diverge, terminating the sequence.

If negotiation of the Interface-Identifier is required, and the peer did not provide the option in its Configure-Request, the option SHOULD be appended to a Configure-Nak.  The tentative value of the Interface-Identifier given must be acceptable as the remote Interface-Identifier; i.e.  it should be different from the identifier value selected for the local end of the PPP link.  The next Configure-Request from the peer may include this option.  If the next Configure-Request does not include this option the peer MUST NOT send another Configure-Nak with this option included.  It should assume that the peer’s implementation does not support this option.

By default, an implementation SHOULD attempt to negotiate the Interface-Identifier for its end of the PPP connection. A summary of the Interface-Identifier Configuration Option format is shown below; the fields are transmitted from left to right. Type is 1, Length is 10, and the Interface-Identifier field carries the 64-bit (8-octet) tentative identifier.
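A minimal sketch of encoding this option, assuming the RFC 2472 layout (Type = 1, Length = 10, followed by the 8-octet identifier). The helper name is invented for illustration:

```python
import struct

IPV6CP_OPT_INTERFACE_IDENTIFIER = 1  # RFC 2472, section 4.1

def interface_identifier_option(iid: bytes) -> bytes:
    """Encode the Interface-Identifier Configuration Option.

    Type = 1, Length = 10 (2 header octets + 8 identifier octets)."""
    if len(iid) != 8:
        raise ValueError("Interface-Identifier must be 8 octets")
    return struct.pack("!BB", IPV6CP_OPT_INTERFACE_IDENTIFIER, 10) + iid

opt = interface_identifier_option(bytes.fromhex("0123456789abcdef"))
assert opt == bytes.fromhex("010a0123456789abcdef")
assert len(opt) == 10
```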




4.2.  IPv6-Compression-Protocol


This Configuration Option provides a way to negotiate the use of a specific IPv6 packet compression protocol.  The IPv6-Compression-Protocol Configuration Option is used to indicate the ability to receive compressed packets.  Each end of the link must separately request this option if bi-directional compression is desired.  By default, compression is not enabled.

IPv6 compression negotiated with this option is specific to IPv6 datagrams and is not to be confused with compression resulting from negotiations via Compression Control Protocol (CCP), which potentially affect all datagrams.

A summary of the IPv6-Compression-Protocol Configuration Option format is shown below; the fields are transmitted from left to right. Type is 2, and Length is 4 plus the length of the Data field.


The IPv6-Compression-Protocol field is two octets and indicates the compression protocol desired.  Values for this field are always the same as the PPP Data Link Layer Protocol field values for that same compression protocol. No IPv6-Compression-Protocol field values are currently assigned. Specific assignments will be made in documents that define specific compression algorithms.


The Data field is zero or more octets and contains additional data as determined by the particular compression protocol.


Default: no IPv6 compression protocol is enabled.
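A minimal sketch of encoding this option, assuming the layout above (Type = 2, Length = 4 + len(Data), then the 2-octet protocol field). The helper name is invented, and since RFC 2472 assigns no protocol values, the 0x004F value below (later used as the PPP protocol number for IPv6 header compression) is only an illustrative assumption:

```python
import struct

IPV6CP_OPT_COMPRESSION = 2  # RFC 2472, section 4.2

def ipv6_compression_option(protocol: int, data: bytes = b"") -> bytes:
    """Encode the IPv6-Compression-Protocol Configuration Option.

    Type = 2, Length = 4 + len(data); the 2-octet protocol field uses
    the PPP Data Link Layer Protocol number of the compression protocol."""
    return struct.pack("!BBH", IPV6CP_OPT_COMPRESSION,
                       4 + len(data), protocol) + data

# Hypothetical protocol number, for illustration only:
opt = ipv6_compression_option(0x004F)
assert opt == b"\x02\x04\x00\x4f"
```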

Reference: RFC 2472





The post IP Version 6 over PPP appeared first on Telecom And Networking.

Wireless Local Area Network (WLAN) Fri, 01 Sep 2017 02:32:00 +0000 Wireless Local Area Network (WLAN) A wireless local area network (WLAN) is a wireless distribution...

The post Wireless Local Area Network (WLAN) appeared first on Telecom And Networking.


Wireless Local Area Network (WLAN)

A wireless local area network (WLAN) is a wireless distribution method for two or more devices that use high-frequency radio waves and often include an access point to the Internet. A WLAN allows users to move around the coverage area, often a home or small office, while maintaining a network connection.

A WLAN is sometimes called a local area wireless network (LAWN).

In the early 1990s, WLANs were very expensive and were only used when wired connections were strategically impossible. By the late 1990s, most WLAN solutions and proprietary protocols were replaced by IEEE 802.11 standards in various versions (versions “a” through “n”). WLAN prices also began to decrease significantly.

Every component that connects to a WLAN is considered a station and falls into one of two categories: access points (APs) and clients. APs transmit and receive radio-frequency signals to and from wireless-capable devices, and they normally also function as routers. Clients may include a variety of devices such as desktop computers, workstations, laptop computers, IP phones, and other cell phones and smartphones. Stations that can communicate with each other are grouped into basic service sets (BSSs), of which there are two types: independent and infrastructure.

An independent BSS (IBSS) exists when two clients communicate directly without an AP; it cannot connect to any other BSS. Such WLANs are called peer-to-peer or ad hoc WLANs.

The second type is the infrastructure BSS. Its stations may communicate with stations in other BSSs, but only through APs.

Ad Hoc Mode (Peer-to-Peer Workgroup)

In an ad hoc network, computers are brought together as needed; thus, the network has no structure or fixed points—each node can be set up to communicate with any other node. No access point is involved in this configuration. This mode enables you to quickly set up a small wireless workgroup and allows workgroup members to exchange data or share printers as supported by Microsoft® networking in the various Windows® operating systems. Some vendors also refer to ad-hoc networking as peer-to-peer group networking.

In this configuration, network packets are directly sent and received by the intended transmitting and receiving stations. As long as the stations are within range of one another, this is the easiest and least expensive way to set up a wireless network.

Infrastructure Mode

With a wireless access point, the wireless LAN can operate in the infrastructure mode. This mode lets you connect wirelessly to wireless network devices within a fixed range or area of coverage. The access point has one or more antennas that allow you to interact with wireless nodes.

In infrastructure mode, the wireless access point converts airwave data into wired Ethernet data, acting as a bridge between the wired LAN and wireless clients. Connecting multiple access points via a wired Ethernet backbone can further extend the wireless network coverage.





Wireless Local Area Networks – Challenges Fri, 01 Sep 2017 07:57:00 +0000 Wireless Local Area Networks – Challenges Wireless computing is a rapidly emerging technology providing users...

The post Wireless Local Area Networks – Challenges appeared first on Telecom And Networking.


Wireless Local Area Networks – Challenges

Wireless computing is a rapidly emerging technology providing users with network connectivity without the tether of a wired network. Wireless local area networks (WLANs), like their wired counterparts, are being developed to provide high bandwidth to users in a limited geographical area.
WLANs are being studied as an alternative to the high installation and maintenance costs incurred by traditional additions, deletions, and changes experienced in wired LAN infrastructures. Physical and environmental necessity is another driving factor in favor of WLANs. Typically, new building architectures are planned with network connectivity factored into the building requirements.

However, users inhabiting existing buildings may find it infeasible to retrofit existing structures for wired network access. Examples of structures that are very difficult to wire include concrete buildings, trading floors, manufacturing facilities, warehouses, and historical buildings.
Lastly, the operational environment may not accommodate a wired network, or the network may be temporary and operational for a very short time, making the installation of a wired network impractical. Examples where this is true include ad hoc networking needs such as conference registration centers, campus classrooms, emergency relief centers, and tactical military environments.
Ideally, users of wireless networks will want the same services and capabilities that they have commonly come to expect with wired networks. However, to meet these objectives, the wireless community faces certain challenges and constraints that are not imposed on their wired counterparts.




Frequency Allocation

Operation of a wireless network requires that all users operate on a common frequency band. Frequency bands for particular uses must typically be approved and licensed in each country, which is a time-consuming process due to the high demand for available radio spectrum.

Interference and Reliability

Interference in wireless communications can be caused by simultaneous transmissions (i.e., collisions) by two or more sources sharing the same frequency band. Collisions are typically the result of multiple stations waiting for the channel to become idle and then beginning transmission at the same time. Collisions are also caused by the “hidden terminal” problem, where a station, believing the channel is idle, begins transmission without successfully detecting the presence of a transmission already in progress.
Interference is also caused by multipath fading, which is characterized by random amplitude and phase fluctuations at the receiver. The reliability of the communications channel is typically measured by the average bit error rate (BER). For packetized voice, packet loss rates on the order of 10^-2 are generally acceptable; for uncoded data, a BER of 10^-5 is regarded as acceptable. Automatic repeat request (ARQ) and forward error correction (FEC) are used to increase reliability.
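The link between BER and packet loss can be made concrete with a quick calculation, assuming independent bit errors (an idealization; real channels burst):

```python
def packet_error_rate(ber: float, bits: int) -> float:
    """Probability that a packet of `bits` bits contains at least one
    bit error, assuming each bit errs independently with probability `ber`."""
    return 1.0 - (1.0 - ber) ** bits

# A 1500-byte frame at the BER regarded as acceptable for uncoded data:
per = packet_error_rate(1e-5, 1500 * 8)
assert 0.11 < per < 0.12   # roughly 11% of frames would need an ARQ retransmission
```

Even an "acceptable" BER of 10^-5 corrupts about one large frame in nine, which is why ARQ and FEC are essential.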


Security

In a wired network, the transmission medium can be physically secured, and access to the network is easily controlled. A wireless network is more difficult to secure, since the transmission medium is open to anyone within the geographical range of a transmitter. Over a radio medium, data privacy is usually accomplished through encryption. While encryption of wireless traffic can be achieved, it usually comes at the expense of increased cost and decreased performance.

Power Consumption

Typically, devices connected to a wired network are powered by the local 110 or 230 V commercial power provided in a building depending upon the country. Wireless devices, however, are meant to be portable and/or mobile and are typically battery powered. Therefore, devices must be designed to be very energy-efficient, resulting in “sleep” modes and low-power displays, causing users to make cost versus performance and cost versus capability trade-offs.

Human Safety

Research is ongoing to determine whether radio frequency (RF) transmissions from radio and cellular phones are linked to human illness. Networks should be designed to minimize the power transmitted by network devices. For infrared (IR) WLAN systems, optical transmitters must be designed to prevent vision impairment.


Mobility

Unlike wired terminals, which are static when operating on the network, one of the primary advantages of wireless terminals is freedom of mobility. Therefore, system designs must accommodate handoff between transmission boundaries and route traffic to mobile users.


Throughput

The capacity of WLANs should ideally approach that of their wired counterparts. However, due to physical limitations and limited available bandwidth, WLANs are currently targeted to operate at data rates between 1 and 20 Mb/s. To support multiple simultaneous transmissions, spread spectrum techniques are frequently employed.




IEEE 802.11 WLAN Security attacks Fri, 01 Sep 2017 07:49:00 +0000 IEEE 802.11 WLAN Security attacks Wireless Local Area Networks (WLANs) are cost-effective and desirable gateways...

The post IEEE 802.11 WLAN Security attacks appeared first on Telecom And Networking.


IEEE 802.11 WLAN Security attacks

Wireless Local Area Networks (WLANs) are cost-effective and desirable gateways to mobile computing. They allow computers to be mobile and cable-free while communicating at speeds close to those of wired LANs. These features, however, come at a price in the area of network security.

In this document, we identify and summarize these security concerns. Broadly, security concerns in the WLAN world are classified into physical and logical. This video overviews both physical and logical WLAN security problems. It addresses logical attacks such as the man-in-the-middle attack and denial-of-service attacks, as well as physical attacks such as rogue APs.

Wired Equivalent Privacy (WEP) was the first logical solution for securing WLANs. However, WEP suffered many problems, which were partially solved by the IEEE 802.1X protocol. Moving toward better WLAN security, IEEE 802.11i emerged as a new MAC-layer standard that permanently fixes most of the security problems found in WEP and other interim WLAN security solutions.

WLAN Security attacks

There are many security threats and attacks that can damage the security of WLANs. Those attacks can be classified into logical attacks and physical attacks.

1 Logical attacks 

1.1 Attacks on WEP

Wired Equivalent Privacy (WEP) is a security protocol, based on the encryption algorithm RC4, that aims to provide the WLAN with security similar to that of a wired LAN. WEP has many drawbacks, such as the use of a small Initialization Vector (IV) and a short RC4 encryption key, as well as generating the ciphertext by simply XORing the RC4 keystream with the plaintext.

Sending the MAC addresses and the IV in the clear, the frequent reuse of a single IV, and the fact that secret keys are shared between communicating parties are WEP's major security problems. WEP-encrypted messages can be easily recovered using publicly available tools like WEPCrack and AirSnort.
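The keystream-reuse weakness underlying these attacks can be demonstrated with a short sketch. This is a toy model, not an attack tool: the IV, key, and messages are invented, and the point is only that reusing an IV makes the XOR of two ciphertexts equal the XOR of the two plaintexts.

```python
def rc4(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, secret: bytes, plaintext: bytes) -> bytes:
    ks = rc4(iv + secret, len(plaintext))      # WEP seeds RC4 with IV || key
    return bytes(p ^ k for p, k in zip(plaintext, ks))

iv, secret = b"\x01\x02\x03", b"secretkey"     # hypothetical 3-byte IV, reused
p1, p2 = b"attack at dawn", b"attack at dusk"
c1, c2 = wep_encrypt(iv, secret, p1), wep_encrypt(iv, secret, p2)
xor_c = bytes(a ^ b for a, b in zip(c1, c2))
xor_p = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_c == xor_p   # keystream cancels: ciphertext XOR leaks plaintext XOR
```

With only 2^24 possible IVs, reuse is inevitable on a busy WEP network, which is exactly what tools like AirSnort exploit.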

1.2 MAC Address Spoofing

A WLAN security solution should provide access control to the APs and hence to the network, confidentiality — denying unauthorized users the ability to listen to the communication — and integrity — preserving the accuracy and correctness of the information transmitted between STAs and the AP. Any security solution should achieve these three goals together.

The security and management problem grows as more APs are installed in the network, so there is a need to centralize and manage security in small WLANs as well as large ones, and to develop techniques to counter security threats. As WLAN applications like wireless Internet access and wireless e-commerce spread very fast, the security of such applications must be assured. Many documents have been written to address WLAN security problems; this video reviews the problem in both its physical and logical aspects and discusses the currently available solutions.

One common countermeasure is to filter access based on the MAC addresses of the STAs attempting to access the network. However, MAC addresses are sent in the clear whenever communication between STAs and the AP takes place, so an attacker can obtain the MAC address of an authorized station by sniffing the airwaves with tools like Ethereal or Kismet and build a database of legitimate wireless stations and their MAC addresses. The attacker can then easily spoof the MAC address of a legitimate station and use it to gain access to the WLAN.

Stealing STAs whose MAC addresses are authorized by the AP is also possible, and this can cause a major security violation. The network security administrator has to be notified of any stolen or lost STA so it can be removed from the list of STAs allowed to access the AP, and hence the WLAN.

1.3 Denial of Service attack

Denial-of-service (DoS) attacks are a serious threat on both wired and wireless networks. This attack aims to disable the availability of the network and the services it provides. In WLANs, DoS can be conducted in several ways, such as jamming the frequency spectrum with external RF sources, thereby denying access to the WLAN or, at best, granting access only at lower data rates. Another way is to flood the AP with failed association messages until it collapses under the load of connections, which denies other STAs the chance to associate with the AP.

Researchers have attempted to counter such attacks by introducing new network elements like an Admission Controller (AC) and a Global Monitor (GM). The AC and GM allocate specific bandwidth to be utilized by STAs and, in the case of heavy traffic on an AP, can re-route some packets to a neighboring AP to deter DoS attacks on APs.

Attackers also try to exploit the authentication scheme used by APs, forcing the AP to refuse all legitimate connections initiated by valid STAs. Little has been done so far to counter DoS attacks; the seriousness of these attacks and the scarcity of tools to counter them have attracted attackers to vandalize WLANs this way.

1.4 Man-in-the-middle attack

The figure below shows how this attack is conducted.

[Figure: Man-in-the-middle attack]

This is a well-known attack in both wired and wireless networks. An illicit STA intercepts the communication between legitimate STAs and the AP. The illegal STA fools the AP by pretending to be a legitimate STA and, on the other side, fools the end STA by pretending to be a trusted AP. Using techniques like IEEE 802.1X to achieve mutual authentication between APs and STAs, as well as adopting an intelligent wireless intrusion detection system, can help prevent such attacks.

1.5 Bad network design

A WLAN functions as an extension of the wired LAN, so the security of the LAN depends highly on the security of the WLAN; a vulnerable WLAN puts the wired LAN directly at risk. A proper design separates the WLAN from the wired LAN by placing the WLAN in a demilitarized zone (DMZ), with firewalls, switches, and any additional access-control technology used to limit access to the WLAN. Dedicating separate subnets to the WLAN, rather than reusing those of the wired LAN, can also help limit security breaches. Careful wired and wireless network design plays an important role in securing access to the WLAN.

1.6 Default AP configurations

Most APs are shipped with minimal or no security configuration by default, because shipping them with all security features enabled would make usage and operation difficult for ordinary users. The aim of AP suppliers is to deliver high-data-rate, out-of-the-box installation APs without a sincere commitment to security. Network security administrators should configure these APs according to the organization's security policy. Among the insecure default settings in APs shipped today are default passwords that happen to be weak or blank.

The Service Set Identifier (SSID) is the name given to a certain WLAN and is announced by the AP; knowledge of the SSID acts as the first line of defense. Unfortunately, some APs by default disable the SSID requirement, meaning users can access the WLAN without proving knowledge of the SSID. Other APs do enforce the SSID requirement but broadcast the SSID name itself over the air; this is another security problem because it advertises the existence of the WLAN. The SSID requirement should be enabled and SSID names should not be broadcast, so that users have to prove knowledge of the WLAN's SSID prior to establishing communication.

Another default configuration in APs is that the Dynamic Host Configuration Protocol (DHCP) is on, so users obtain IP addresses automatically and hence access the WLAN easily. Simple Network Management Protocol (SNMP) parameters are also set to insecure values. Network security administrators have the responsibility to change these configurations to maximize AP security.




2 Physical attacks

2.1 Rogue Access Points

In normal situations, the AP authenticates STAs to grant access to the WLAN; the AP itself is never asked to authenticate. This raises a security concern: what if an AP is installed without the IT center's awareness? Such APs are called "rogue APs", and they form a security hole in the network. An attacker can install a rogue AP with security features disabled, causing a massive security threat. There is a need for mutual authentication between STAs and APs to ensure that both parties are legitimate.

Technologies like IEEE 802.1X can be used to overcome this problem. Network security administrators can discover rogue APs by using wireless analysis tools to search and audit the network.

2.2 Physical placement of APs

The installation location of APs is another security issue, because placing APs inappropriately exposes them to physical attacks. An attacker who finds an AP can easily reset it, causing it to revert to its default settings, which are totally insecure. It is very important for network security administrators to carefully choose appropriate places to mount APs.

2.3 AP’s coverage

The main difference between WLANs and wired LANs is that WLANs rely on radio frequency (RF) signals as the communication medium. The signals broadcast by an AP can propagate beyond the perimeter of the room or building where the AP is placed, allowing users who are not physically in the building to gain access to the network.

Attackers use special equipment and sniffing tools to find available WLANs and eavesdrop on live communications while driving a car or roaming around CBD areas. Because RF signals obey no boundaries, attackers outside a building can receive such signals and launch attacks on the WLAN. This kind of attack is called "war driving". Publicly available tools such as NetStumbler are used for war driving.

Hobbyists also chalk buildings to indicate that signals are broadcast from the building and that the WLAN inside can be easily accessed; this marking is called "war chalking". In war chalking, information about the speed of the connection and whether the authentication scheme uses open or shared keys is conveyed in special codes agreed upon between war-chalkers. There is much doubt and debate in the wireless networking community regarding the legality of war-chalking and war-driving activities.

Network security administrators can test the propagation of APs using special tools to verify how far the signals reach. Accordingly, they can control propagation by lowering the signal strength, by using smart antennas to control the direction of the signal, or by moving the AP to a place where the signal is guaranteed not to travel beyond the building premises. Some work has been done on smart antennas in APs to direct the propagation of traffic; directing traffic propagation and managing the power of signals originating from the APs can help restrict the coverage of APs to specified regions.
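As a rough illustration of why signal power matters, the free-space path-loss formula (an idealization that ignores walls, antennas, and multipath) estimates how a signal weakens with distance:

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB:
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 2.4 GHz AP's signal weakens by roughly 20 dB for every 10x distance:
loss_10m  = fspl_db(0.01, 2400.0)
loss_100m = fspl_db(0.10, 2400.0)
assert 59 < loss_10m < 61      # ~60 dB at 10 m
assert 79 < loss_100m < 81     # ~80 dB at 100 m
```

The modest 20 dB drop per decade of distance is why an AP meant to cover an office often remains receivable in the parking lot, and why lowering transmit power is a coarse but useful containment knob.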

Sometimes public and open access to the WLAN is desirable; such public WLANs are called "hot spots". Implementing hot spots is subject to many of the security problems mentioned above. It is important to understand that breaking the security of a hot spot can also break the security of the wired network connected to it.

Control and monitoring of such APs are minimal because they are installed in public areas like hotel lobbies, coffee shops, and airport lounges, so preventing physical access to the AP is harder: the site would have to be monitored all the time. There is thus a trade-off between giving users the mobility and flexibility to log in to the network in public areas and the security of the network infrastructure. The network backbone can be highly secured, but a breach in the security of a network access node (i.e., an AP) can always lead to a breach in the security of the backbone behind it.


IEEE 802.11 WLAN Security attacks – Part :02 Fri, 01 Sep 2017 07:16:00 +0000 IEEE 802.11 WLAN Security attacks – Part :02 IEEE 802.11 WLAN Security attacks – Part...

The post IEEE 802.11 WLAN Security attacks – Part :02 appeared first on Telecom And Networking.

IEEE 802.11 WLAN Security attacks – Part :02




IEEE 802.11 WLAN Security attacks Thu, 31 Aug 2017 07:41:00 +0000 IEEE 802.11 WLAN Security attacks IEEE 802.11 WLAN Security attacks   Books you may interested...

The post IEEE 802.11 WLAN Security attacks appeared first on Telecom And Networking.

IEEE 802.11 WLAN Security attacks




