
TCP – TRANSMISSION CONTROL PROTOCOL


Originally, Vinton Cerf and Robert Kahn designed TCP to provide reliable data transmission between remote hosts communicating over a packet-switched network. Before the invention of TCP, data transmission over packet-switched network infrastructures proved somewhat unreliable. The unreliable quality of delivery, the variety of media types, and the potential for congestion to impede data delivery made a connection-oriented protocol necessary to provide end-to-end reliable service to processes and applications communicating between remote hosts. The DoD (Department of Defense) adopted TCP as its primary protocol for reliable delivery of information over the ARPANET. Since its invention, TCP has become a standard protocol for the Internet, providing guaranteed delivery of data between hosts.
 
TCP maps to the Host-to-Host layer of the DoD model and the Transport layer of the OSI model. Only TCP and UDP function at these layers. Vendors can implement TCP when they need guaranteed delivery of data, or UDP when they require speed more than guaranteed delivery.
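As a simple illustration, the Python sketch below shows how an application chooses between these two transports through the sockets API: SOCK_STREAM selects TCP and SOCK_DGRAM selects UDP.

    import socket

    # TCP: reliable, connection-oriented stream transport
    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # UDP: faster, connectionless datagram transport
    udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    tcp_sock.close()
    udp_sock.close()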
 

Fundamentals of TCP Operation

TCP provides a bidirectional communication pipe between remote host processes, identified by ports. As described in RFC 793, TCP controls the communications between these processes by providing the following:
  • Connection setup and teardown
  • Multiplexing
  • Data transfer
  • Flow control
  • Reliability
  • Precedence and security

TCP can be thought of as the FedEx of protocols, which boasts “When it absolutely, positively has to be there overnight!” In other words, it guarantees delivery of packets. TCP can, of course, deliver packets far more quickly than FedEx; however, it still remains slower than UDP.

Achieving such a high standard of delivery involves overhead in the form of establishing, maintaining, and terminating sessions between hosts. Unlike connectionless protocols, TCP does not rely on lower layers to track data, nor does it limit itself to identifying the sending and receiving host processes, placing data on the wire, and hoping it arrives at its destination without any follow-up. TCP uses sequencing and acknowledgments to guarantee the delivery of packets.

Unlike its counterpart UDP, when TCP receives a stream of data (messages), it breaks the stream into segments and assigns a sequence number to each byte before delivery by IP within a datagram. These sequence numbers require corresponding acknowledgments from the destination, confirming that it has received each segment the sender transmitted. TCP maintains a copy of the sent segments in a buffer at the host, associated with the connection's TCB (transmission control block). If it does not receive an acknowledgment, it assumes the datagram has been lost and retransmits it. We discuss this in more detail later in this chapter.
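A toy model of this byte-level sequencing and cumulative acknowledgment is sketched below in Python; the segment size (MSS) and initial sequence number are only illustrative values, not taken from any real connection.

    # Toy model of byte-level sequencing and cumulative acknowledgments.

    def segment_stream(data: bytes, mss: int, isn: int):
        """Split an application data stream into (sequence number, payload) segments."""
        segments = []
        seq = isn
        for offset in range(0, len(data), mss):
            payload = data[offset:offset + mss]
            segments.append((seq, payload))
            seq += len(payload)
        return segments

    def cumulative_ack(received: dict, isn: int) -> int:
        """Return the next byte expected, i.e. the cumulative ACK number."""
        ack = isn
        while ack in received:
            ack += len(received[ack])
        return ack

    stream = b"hello, reliable world!" * 10
    segs = segment_stream(stream, mss=64, isn=1000)

    # The receiver gets every segment except the second one.
    received = {seq: payload for i, (seq, payload) in enumerate(segs) if i != 1}
    print(cumulative_ack(received, isn=1000))   # the ACK stops at the missing segment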

 

 

Connection Setup and Teardown

To provide reliable data delivery between processes, TCP must make a connection before the upper-layer applications can exchange any meaningful data. To accomplish this, TCP establishes a connection known as a logical circuit between the remote host ports first. This connection links ports or processes running within each host. TCP maintains this connection throughout the entire conversation and tears down the connection when it is no longer needed.

Once IP learns the logical address of the destination host, TCP sets up a session that provides the reliable foundation for the upper-layer protocols to deliver data. When the user or one of the hosts requests to close a session, TCP tears the session down. We discuss the session setup and teardown procedures and exchanges in more detail later in this chapter.
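The Python sketch below shows what setup and teardown look like from an application's point of view: connect() causes the kernel to perform the handshake, and close() starts the teardown. The loopback address and port 5050 are arbitrary example values.

    import socket, threading

    HOST, PORT = "127.0.0.1", 5050          # illustrative address and port

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)                           # ready to receive the client's connection request

    def accept_one():
        conn, peer = srv.accept()           # the handshake completes here
        print("session established with", peer)
        conn.recv(1024)
        conn.close()                        # server side of the session teardown

    t = threading.Thread(target=accept_one)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((HOST, PORT))               # kernel sets up the logical circuit
    cli.sendall(b"hello")
    cli.close()                             # client side of the teardown
    t.join()
    srv.close()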

Multiplexing

Multiplexing capability enables TCP to establish and maintain multiple communication paths between two hosts simultaneously. Multiplexing also allows a single host to distinguish and maintain sessions with many hosts simultaneously. Hosts need this capability because usually, they run multiple applications or services such as Telnet, FTP (File Transfer Protocol), or other services. TCP has to distinguish one process from another and manage and maintain the communications for these processes.

To accomplish this, TCP utilizes ports to differentiate communications and manage them. There are two main port types: server ports and client ports. Server ports identify major applications or services; for example, Telnet (port 23), SMTP (port 25), and FTP (ports 20 and 21). Client ports vary; they are chosen on the fly and assigned dynamically, ranging from 1024 to 65535.
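This division of labor is easy to observe from Python's sockets API, as the sketch below suggests: the server binds to a fixed port (2323 here, purely an example), while the operating system picks an ephemeral port for the client.

    import socket

    # The server binds to a fixed port; the client lets the OS pick an
    # ephemeral port from the dynamic range. Port 2323 is illustrative.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 2323))
    srv.listen(1)

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 2323))

    print("client side of the connection:", cli.getsockname())  # ('127.0.0.1', <ephemeral port>)
    print("server side of the connection:", cli.getpeername())  # ('127.0.0.1', 2323)

    cli.close()
    srv.close()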

Data Transfer

TCP receives streams of data (messages) from upper-layer processes or applications, organizes them into segments, and passes them down to IP (Network layer) to be formatted as datagrams for addressing, packing, and delivery. When IP receives datagrams from a remote host, it inspects the Protocol field within the IP header to determine whether to send the information to TCP or UDP for processing.

TCP runs on top of the Internet Protocol (IP), which provides Network layer addressing and connectionless delivery of datagrams between hosts. The protocol type value 6 identifies TCP within the IP header; the protocol type value 17 identifies UDP.
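These same protocol numbers are exposed as constants in most socket libraries; the short Python check below prints them.

    import socket

    # The IP header's Protocol field tells the receiver which transport module
    # should process the payload: 6 for TCP, 17 for UDP.
    print(socket.IPPROTO_TCP)   # 6
    print(socket.IPPROTO_UDP)   # 17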

When TCP receives segments within datagrams from IP, it reassembles them into organized data streams (messages), identifies the receiving client or server port, and passes them on to the appropriate (upper-layer) application for processing. The upper (application) and lower (Internet Protocol) layers have a bidirectional relationship, depending on the direction of the data flow. TCP provides the same fundamental services to all upper-layer protocols. This is a simplistic view of how TCP operates; we will discuss TCP operation in more detail later in this chapter.

Flow Control

TCP needs a method of controlling the inbound flow of data. Flow control guarantees that incoming traffic does not overwhelm a host’s receive buffer and that the receiving host can adequately process and respond to the sending host’s requests. The window mechanism identified within the TCP header provides this function. We will take a detailed look at flow control and the TCP header later in this chapter.

Each end host maintains its own window and advertises this window to the other side. When congestion occurs, a host reduces its window size and advertises it to the other side. In effect, the host asks the other side to slow down its transmissions.

When congestion no longer exists, a host can increase the window size, alerting the other side that it can send more data. The capability to dynamically increase or decrease the window as needed is referred to as a sliding window. An administrator can configure the initial window size at the host; this configuration varies depending on the operating system used.
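The sketch below models the sender's side of this behavior in a few lines of Python; the window sizes and the 100-byte stream are made-up values chosen only to show the window shrinking and then reopening.

    # Toy sender that honours the window advertised by the receiver.

    def send_with_window(data: bytes, advertised_windows):
        """Yield one burst of bytes per advertised window."""
        offset = 0
        for window in advertised_windows:
            if offset >= len(data):
                break
            burst = data[offset:offset + window]
            offset += len(burst)
            yield window, burst

    stream = bytes(100)
    # The receiver shrinks its window under load, then opens it again.
    for window, burst in send_with_window(stream, [40, 10, 10, 40, 40]):
        print(f"advertised window {window:>2} bytes -> sent {len(burst)} bytes")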

Reliability

Reliability comes from TCP’s guaranteed delivery of packets. TCP requires the sequencing of each byte sent and a corresponding acknowledgment of each byte from the other side. This enables a host to detect whether information has been lost or sent out of order.

The receiving host does not send an ACK for datagrams lost in transit; the transmitting (sending) host has the task of detecting lost or missing segments and retransmitting them if necessary. To detect loss, the sender uses a timer based on a round-trip delay calculation. If the timer expires before an acknowledgment of the sent data arrives, the sending host assumes the datagram is lost, retrieves the copy of the previously sent information from its TCB buffer, and retransmits the lost data.

TCP deals with damaged segments through the Checksum field contained within the TCP header. The sending host computes the checksum before transmitting. The receiving host recomputes the checksum upon receipt to determine whether a segment has been damaged while in transit.

If the receiving host detects a damaged segment, it simply discards it without notifying the source host. Eventually, the source host realizes something has happened to this segment because it has not received a corresponding acknowledgment from the receiving host. At this point the TCP timer expires, causing the host to retransmit the data.
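For reference, the checksum in question is the standard 16-bit one's-complement Internet checksum, sketched below in Python; the real TCP calculation also covers a pseudo-header built from the IP addresses, which is omitted here for brevity.

    # Sketch of the 16-bit one's-complement Internet checksum.

    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\x00"                            # pad to whole 16-bit words
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    segment = b"example TCP segment bytes"
    print(hex(internet_checksum(segment)))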

Precedence and Security

The DoD mandates that all protocols implemented within its networks support a multilevel security model and precedence levels. TCP can utilize the service and security options within IP to provide this level of service to upper-layer applications. TCP offers these types of services to upper-layer applications:

  • Precedence
  • Delay
  • Throughput
  • Reliability

The options within the Type of Service field of the IP header indicate how a datagram should be handled. When implemented, the Type of Service options (precedence, delay, throughput, and reliability) influence route selection when delivering datagrams. For example, if an application requires a datagram to be sent by a fast path when there is more than one path to a destination, it can request that routers send the datagram over the path offering the lowest delay.

The first option, known as precedence, indicates whether the datagram carries routine or high-priority (precedence) information; the higher the precedence level, the higher the security level. A host with a mismatched or lower precedence (security level) cannot establish a connection to a process on another host having a higher security level. Thus a host can reject a connection request on a multilevel security basis.

Although the IP header contains the Type of Service options, TCP can make use of these functions on a per-connection basis through bi-directional communication. TCP can store these options in its TCP memory buffer to provide increased security and more efficient delivery of application data.
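An application can ask for this kind of handling through the sockets API. The Python sketch below sets the IP Type of Service byte on a socket; IP_TOS is available on Unix-like systems, and the value 0x10 (traditionally "low delay") is only an example.

    import socket

    # Sketch of requesting low-delay handling via the IP Type of Service byte.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)
    print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))   # 16 (0x10)
    sock.close()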

 


Connection-oriented Characteristics

As we know, protocols fall into one of two categories: connection-oriented or connectionless. As a connection-oriented protocol, TCP implements all six of these basic characteristics:

  • Session setup
  • Teardown
  • Sequencing
  • Acknowledgements
  • Keepalives
  • Flow control
The following sections detail how each of these basic characteristics works.

Session Setup

Before any data transmission can occur, a reliable, connection-oriented logical circuit needs to be established between the communicating processes or applications on the remote hosts. Session establishment remains the same regardless of the process. TCP identifies all upper-layer processes and applications using a port or socket address. TCP uses these port addresses to distinguish one process from another within the same host, which ensures proper delivery and processing.

Socket Pairing

Once the address resolution process has been completed, TCP has enough information to begin the session setup process. The source Network layer address and client port/socket, together with the destination Network layer address and server port/socket, make up what is called a socket pairing.
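One way to picture a socket pairing is as the key a host uses to look up connection state, as in the hypothetical Python sketch below; the addresses and port numbers are invented for the example.

    from typing import NamedTuple, Dict

    # Illustrative model of keying connection state on the socket pairing.
    class SocketPair(NamedTuple):
        src_ip: str
        src_port: int
        dst_ip: str
        dst_port: int

    connections: Dict[SocketPair, str] = {}

    # Two Telnet sessions from the same client to the same server remain
    # distinct because the client-side ephemeral ports differ.
    connections[SocketPair("10.0.0.5", 1045, "10.0.0.9", 23)] = "session A state"
    connections[SocketPair("10.0.0.5", 1046, "10.0.0.9", 23)] = "session B state"

    def demultiplex(src_ip, src_port, dst_ip, dst_port):
        """Find the connection an arriving segment belongs to."""
        return connections.get(SocketPair(src_ip, src_port, dst_ip, dst_port))

    print(demultiplex("10.0.0.5", 1046, "10.0.0.9", 23))   # session B state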

Session Teardown

The user or TCP can request a session teardown. If a user no longer wishes or needs the services of the remote application, he or she can exit the local application, causing a session teardown request to be sent. The TCP session teardown process is similar to the session setup. It utilizes a three-frame exchange to close the session. Either side can request a teardown.

Sequencing and Acknowledgements

The cornerstone of TCP's services is providing the most reliable transmission of data. To guarantee the delivery of datagrams, TCP sequences each byte of data it sends. This might seem like overkill, because many protocols sequence each datagram regardless of the byte size; however, it guarantees that all sent data is tracked. Tracking all the sent data allows TCP to detect and correct lost data, duplicated data, and data delivered out of order.

Retransmission

The sending host determines whether a datagram needs to be retransmitted. Because datagrams can take separate paths and might get lost, the sending device keeps a copy in case it needs to retransmit a datagram to a destination host. When the source host sends data it places a copy of the data into a retransmission queue in case this information needs to be resent and starts a timer. If the sending host receives a corresponding acknowledgment in response to the previously sent information, it deletes the copy from its queue. If the sending host does not receive a response, or if the timer expires, it assumes the datagram is lost in transit and retransmits the datagram from the queue.
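A toy version of such a retransmission queue is sketched below in Python; the one-second timeout and the resend callback are placeholders rather than values from any real implementation.

    import time

    # Toy retransmission queue: keep a copy of each sent segment until it is
    # acknowledged; resend anything whose timer has expired.
    RTO = 1.0                   # illustrative timeout, in seconds

    retransmission_queue = {}   # sequence number -> (payload, time sent)

    def on_send(seq: int, payload: bytes):
        retransmission_queue[seq] = (payload, time.monotonic())

    def on_ack(ack: int):
        # A cumulative ACK acknowledges every byte below it; drop covered copies.
        for seq, (payload, _) in list(retransmission_queue.items()):
            if seq + len(payload) <= ack:
                del retransmission_queue[seq]

    def check_timers(resend):
        now = time.monotonic()
        for seq, (payload, sent_at) in list(retransmission_queue.items()):
            if now - sent_at > RTO:
                resend(seq, payload)                    # retransmit from the queue
                retransmission_queue[seq] = (payload, now)

    on_send(1000, b"abcd")
    on_ack(1004)                    # the copy of bytes 1000-1003 is released
    print(retransmission_queue)     # {}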

Timers

An algorithm calculates the value of the retransmission timer from the time that elapses between a transmitted datagram and receipt of its corresponding acknowledgment. TCP uses these measurements to compute an SRTT (smoothed round-trip time), then combines the SRTT with a base value configured on the host to calculate the RTO (retransmission timeout) the host uses. An administrator can change the base value to affect the RTO of a TCP host. The configuration of this parameter varies depending on the operating system in use.
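As a concrete example, the classic calculation described in RFC 793 smooths each measured round-trip time into the SRTT and then bounds the result, roughly as in the Python sketch below; the sample round-trip times are invented, and the constants are the values the RFC suggests.

    # Classic RFC 793 retransmission-timeout calculation, as a sketch.
    ALPHA, BETA = 0.9, 2.0
    LBOUND, UBOUND = 1.0, 60.0      # seconds

    def update_rto(srtt: float, measured_rtt: float):
        srtt = ALPHA * srtt + (1 - ALPHA) * measured_rtt   # smooth the round-trip time
        rto = min(UBOUND, max(LBOUND, BETA * srtt))        # bound the retransmit timeout
        return srtt, rto

    srtt, rto = 0.5, 1.0
    for rtt in [0.4, 0.6, 1.2, 0.5]:                       # sample round-trip measurements
        srtt, rto = update_rto(srtt, rtt)
        print(f"measured {rtt:.1f}s  srtt {srtt:.2f}s  rto {rto:.2f}s")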

Keepalives

Every connection-oriented protocol needs some way to maintain the logical circuit between communicating processes even when no data is being exchanged; TCP is no exception. To maintain the logical circuit, TCP sends a datagram called a keepalive, in which no upper-layer data is present. It uses these types of datagrams to keep the session alive; hence the name. Because the frame contains no data, the length field value is zero, which means the subsequent ACK number does not advance.
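Most operating systems let an application request keepalives on a per-socket basis, as in the Python sketch below; the finer-grained timing options (such as TCP_KEEPIDLE on Linux) vary by platform.

    import socket

    # Turning on keepalives for a connection.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))   # 1
    sock.close()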

Congestion

TCP uses keepalives as a normal part of session maintenance; however, this adds overhead to the network. Depending on how many open TCP connections you have on your network, keepalives can cause a large amount of unnecessary overhead to an already congested link. Note that connections to remote hosts, if unused, should be released (closed) and opened only when needed.

Flow Control

Flow control is a function of TCP’s window feature. TCP uses flow control to manage the flow of data into a receiver’s buffer to avoid overrunning the receiver. If a sending host transmits data faster than a receiver can process it, the receiver asks the sender to throttle back and send less data until the receiver can accept more.
 

TCP Ports

TCP uses well-known ports to establish a client-server relationship. TCP uses these ports at the Transport layer to identify which upper-layer processes have sent or should receive data streams.

 
Decimal   Keyword    Protocol(s)   Description
20        FTP-Data   TCP           File Transfer Protocol (Data)
21        FTP        TCP           File Transfer Protocol (Control)
23        Telnet     TCP           Telnet
25        SMTP       TCP           Simple Mail Transfer Protocol
49        LOGIN      TCP           Login Host Protocol
53        DNS        TCP/UDP       Domain Name Service
63        VIA-FTP    TCP           VIA Systems FTP
70        Gopher     TCP           Gopher File Service
80        WWW        TCP           World Wide Web Services
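Many of these well-known assignments can be looked up from the host's services database; the short Python sketch below does so, assuming the named services appear in that database.

    import socket

    # Look up well-known TCP port numbers by service name.
    for name in ("ftp", "telnet", "smtp", "domain", "http"):
        print(name, socket.getservbyname(name, "tcp"))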

 
