Transmission Control Protocol
TCP is a Layer 4 protocol - the Transport Layer
The 4 main features of TCP are:
1. It guarantees the delivery of data in sequence
2. It controls the integrity of data
3. It organizes the error recovery process
4. It provides flow control, managed end to end
The first Request for Comments (RFC) for modern TCP, RFC 793, was published in 1981.
The basic TCP mechanisms
• Concept of transport connection
• Concept of transport connection multiplexing
• Establishment of the transport connection
• Mechanism of exchange management
A transport connection is a virtual link between two applications over the network
TCP identifies a transport connection using
• Source & Destination Port Number (ID of the applications)
• Source & Destination IP addresses (ID of the systems/devices)
• Transport Protocol used (TCP in our case)
Multiplexing means that one system/device is able to handle multiple applications, and therefore multiple transport connections, at the same time. Applications are identified using port numbers.
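As an illustration of multiplexing, here is a minimal sketch of a local server accepting several clients on a single listening port; each transport connection is identified by the client's (IP address, port) pair. The address 127.0.0.1:5000 is a hypothetical choice for the example.

```python
# Minimal multiplexing sketch: one listening port, several connections,
# each identified by the client's (IP, port) pair. 127.0.0.1:5000 is hypothetical.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))
server.listen()

connections = {}                  # (client_ip, client_port) -> connected socket
for _ in range(2):                # accept two clients for demonstration
    conn, addr = server.accept()
    connections[addr] = conn      # each connection has its own identifying tuple
    print("New transport connection from", addr)
```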
Well-known TCP ports along with their functions are:
Port 20 (FTP Data Transfer): Port 20 is used in conjunction with port 21 for FTP (File Transfer Protocol) connections. While port 21 handles the control information (commands and responses), port 20 is dedicated to the actual data transfer between the FTP client and server. When a file is being transferred using FTP, the data connection is established over port 20. It allows high-speed downloads and uploads.
Port 21 (FTP - File Transfer Protocol): FTP is used for transferring files between a client and a server on a network. Port 21 is the default port for the FTP control connection, which handles commands and responses between the client and server.
Port 22 (SSH - Secure Shell): SSH is a cryptographic network protocol used for secure remote access to systems and for secure data communication between two devices
Port 22 is the default port for SSH connections.
Port 23 (Telnet): Telnet is a protocol used for remote terminal access to devices over a network. Port 23 is the default port for Telnet connections. It allows users to log in to a remote system and execute commands as if they were directly connected to that system’s console. However, Telnet is considered insecure because it transmits data, including passwords, in plaintext, making it vulnerable to eavesdropping and interception.
Port 25 (SMTP - Simple Mail Transfer Protocol): SMTP is a protocol used for sending email messages between servers. Port 25 is the default port for SMTP communication, used primarily for sending outgoing mail from an email client to a mail server. SMTP handles outgoing mail and works in collaboration with other email protocols such as Post Office Protocol (POP) and Internet Message Access Protocol (IMAP), which handle incoming mail.
Port 80 (HTTP - Hypertext Transfer Protocol): HTTP is the foundation of data communication for the World Wide Web. It is used for transferring web pages from web servers to web browsers. Port 80 is the default port for unencrypted HTTP connections.
Port 110 (POP3 - Post Office Protocol version 3): POP3 is a protocol used by email clients to retrieve email messages from a mail server. Port 110 is the default port for POP3 communication.
Port 123 (NTP - Network Time Protocol): NTP is a protocol used to synchronize the clocks of computer systems over a network. Port 123 is the default port for NTP communication. NTP servers provide accurate time information to client devices, ensuring that they maintain synchronized time, which is crucial for various network operations, such as logging, authentication, and coordination of distributed systems. NTP helps maintain consistency and accuracy across interconnected systems by compensating for network delays and inaccuracies in local system clocks.
Port 143 (IMAP - Internet Message Access Protocol): IMAP is another protocol used by email clients to retrieve email messages from a mail server. Unlike POP3, IMAP allows users to access their email messages on the server while still keeping them synchronized with their email client. Port 143 is the default port for IMAP communication.
Port 443 (HTTPS - Hypertext Transfer Protocol Secure): HTTPS is the secure version of HTTP, using encryption to secure the data transmitted between the client and the server. Port 443 is the default port for HTTPS connections, commonly used for secure web browsing, online banking, and other secure internet transactions.
Port 3389 (RDP - Remote Desktop Protocol): RDP is a protocol developed by Microsoft that allows a user to remotely connect to a computer over a network connection. Port 3389 is the default port for RDP communication, commonly used for remote administration and desktop sharing.
These ports facilitate various essential network services, allowing communication between devices and enabling different types of network-based activities.
Registered ports are used because the 1,024 well-known ports were fully assigned.
They are assigned by IANA for specific services/applications.
Registered port numbers range from 1,024 to 49,151 and are assigned to specific services on a first-come, first-served basis.
Dynamic ports:
Ports allocated dynamically to the source of the TCP connection to identify the client side of the connection.
Dynamic port numbers range from 49,152 to 65,535.
After the session ends, the dynamic port is released and becomes available for others.
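A minimal sketch of the dynamic source port in action, assuming outbound connectivity to a public web server (example.com on the well-known port 80 is an assumption for the example):

```python
# The OS picks the client-side (dynamic) port; getsockname() reveals it.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))        # destination: well-known port 80 (HTTP)
src_ip, src_port = s.getsockname()    # source port chosen from the dynamic range
print(f"Client side of the connection: {src_ip}:{src_port}")
s.close()                             # the dynamic port is later released for reuse
```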
The three-way handshake is a process used in TCP (Transmission Control Protocol) to establish a connection between a client and a server. It consists of three steps:
1. SYN (Synchronize): The client sends a TCP segment with the SYN (synchronize) flag set to the server, indicating its intention to initiate a connection. This segment contains the client’s initial sequence number (ISN) for data communication. The client also starts a timer to wait for a response from the server.
2. SYN-ACK (Synchronize-Acknowledge): Upon receiving the SYN segment from the client, the server responds with a TCP segment that has both the SYN and ACK (acknowledge) flags set. This segment acknowledges the client’s SYN request and also indicates the server’s own initial sequence number (ISN) for data communication. The server also includes an acknowledgment number, which is the client’s ISN incremented by one. This step confirms the receipt of the client’s SYN and notifies the client that the server is ready to establish a connection.
3. ACK (Acknowledge): Finally, the client sends a TCP segment with the ACK flag set. This segment acknowledges the server’s SYN-ACK segment. The acknowledgment number in this segment is the server’s ISN incremented by one, confirming that the client received the server’s initial sequence number. At this point, the TCP connection is established, and both the client and server can begin sending data packets to each other.
The three-way handshake ensures that both the client and server agree on initial sequence numbers, confirm the readiness to establish a connection, and acknowledge each other’s intentions before exchanging data. This process helps in establishing reliable and ordered communication between TCP endpoints.
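The sequence/acknowledgment arithmetic of the handshake can be sketched as follows; the initial sequence numbers here are made up for illustration.

```python
# Sketch of the three-way handshake arithmetic with made-up ISNs.
import random

client_isn = random.randint(0, 2**32 - 1)   # chosen by the client
server_isn = random.randint(0, 2**32 - 1)   # chosen by the server

# 1. SYN: client -> server, carries the client's ISN
syn = {"flags": "SYN", "seq": client_isn}

# 2. SYN-ACK: server -> client, carries the server's ISN and acknowledges client_isn + 1
syn_ack = {"flags": "SYN+ACK", "seq": server_isn, "ack": syn["seq"] + 1}

# 3. ACK: client -> server, acknowledges server_isn + 1; the connection is established
ack = {"flags": "ACK", "seq": client_isn + 1, "ack": syn_ack["seq"] + 1}

print(syn, syn_ack, ack, sep="\n")
```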
TCP termination involves a series of steps to gracefully close a connection between a client and a server. Here's an explanation of the steps in TCP termination:
1. **Initiation of Termination:** Either the client or the server initiates the termination process by sending a TCP segment with the FIN (finish) flag set to the other party. This segment indicates that the sender has finished sending data and wants to close the connection.
2. **Acknowledgment of FIN Segment:** Upon receiving the FIN segment, the receiving party acknowledges it by sending a TCP segment with the ACK (acknowledge) flag set. This ACK confirms the receipt of the FIN segment and acknowledges the sequence number of the next expected segment.
3. **Half-Close:** After acknowledging the FIN segment, the receiving party can continue sending data if it has any pending data to transmit. This allows for a "half-close" state where one side of the connection can still send data while the other side is in the process of closing.
4. **Second FIN Segment:** Once the receiving party has finished sending data, it also initiates the termination process by sending its own FIN segment to the other party.
5. **Acknowledgment of Second FIN Segment:** Upon receiving the second FIN segment, the other party acknowledges it with an ACK segment. This ACK confirms the receipt of the FIN segment and acknowledges the sequence number of the next expected segment.
6. **Connection Closure:** Once both parties have sent and acknowledged FIN segments, the TCP connection is fully terminated. At this point, both sides have closed their respective sending and receiving ends of the connection.
7. **Time-Wait State:** After the connection is closed, each side enters a time-wait state to ensure that any delayed segments related to the terminated connection are not mistaken for new connections. During this time, the socket pair (source IP, source port, destination IP, destination port) cannot be reused. The duration of the time-wait state is typically twice the maximum segment lifetime (2MSL).
By following these steps, TCP ensures that the connection termination process is orderly and reliable, allowing both the client and server to release the network resources associated with the connection.
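The "half-close" described in step 3 is visible in the sockets API: shutdown(SHUT_WR) sends a FIN but leaves the receiving side open, so the peer's remaining data can still be read. This is a hedged sketch; the host and request are assumptions for the example and need network access.

```python
# Half-close sketch: send our FIN with shutdown(SHUT_WR), keep reading the reply.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
s.shutdown(socket.SHUT_WR)            # we are done sending: our FIN goes out
response = b""
while chunk := s.recv(4096):          # the other side may still send data
    response += chunk
s.close()                             # full close; TIME_WAIT handling follows
print(response.decode(errors="replace")[:200])
```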
Initial Sequence Number: identifies each TCP segment. Each byte of data is assigned a sequence number to help maintain order.
Acknowledgement Management
• Possibility of sending an ACK along with data in the same TCP segment. The ACK field in the TCP header indicates the sequence number of the next expected byte.
It confirms the successful receipt of data up to that point.
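A one-line sketch of how the acknowledgment number names the next expected byte: it is the sequence number of the received segment plus the length of its payload.

```python
# ack = sequence number of the received segment + length of its payload
def next_expected_byte(seq: int, payload: bytes) -> int:
    return seq + len(payload)

# Example: a segment starting at byte 1000 carrying 500 bytes of data
print(next_expected_byte(1000, b"x" * 500))   # -> 1500, the ACK sent back
```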
Controlled Delay Management
This introduces delay in the system or network to achieve specific objectives related to performance, stability, and fluency.
• Dynamic adjustment of the RTO
The main aspects of control delay management in the context of networking are:
1. **Rate Limiting:** Rate limiting is a technique used to control the flow of packets within a network by enforcing a maximum rate at which packets are allowed to enter or exit a network interface. In the context of control delay management, rate limiting can be applied to control the rate of incoming control packets, such as routing updates, signaling messages, or management traffic. By imposing a limit on the rate of control packets, network administrators can prevent congestion and ensure that critical control traffic is not overwhelmed by non-essential data traffic.
2. **Buffering:** Buffering involves temporarily storing packets in memory buffers before they are forwarded to their destination. In control delay management, buffering plays a crucial role in managing delays by absorbing bursts of incoming control traffic and smoothing out fluctuations in packet arrival rates. By properly sizing and managing buffers, network devices can accommodate temporary surges in control traffic without dropping packets or introducing significant delays. Buffer management policies may include techniques such as dynamic buffer allocation, priority queuing, and congestion avoidance to optimize buffer utilization and minimize delay.
3. **Query:** In the context of control delay management, query mechanisms are used to retrieve status or configuration information from network devices in order to monitor and manage network performance. Queries can be initiated by network management systems or control protocols to gather real-time data on network conditions, device states, or traffic patterns. By querying network devices, administrators can identify potential bottlenecks, detect anomalies, and make informed decisions to optimize control traffic and mitigate delays.
4. **Congestion Control:** Congestion control mechanisms are designed to prevent or alleviate congestion within a network by regulating the flow of traffic and ensuring that network resources are efficiently utilized. In the context of control delay management, congestion control techniques can be applied to control the rate of control packets, prioritize critical traffic, and avoid network overloads. Examples of congestion control mechanisms include traffic shaping, admission control, and congestion avoidance algorithms such as TCP's congestion control mechanisms. By proactively managing congestion, network operators can minimize delays and maintain optimal performance for control traffic and other critical applications.
Overall, these aspects of control delay management play complementary roles in ensuring that control traffic is effectively managed and delivered within a network, thereby enhancing network performance, reliability, and responsiveness.
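As an illustration of the rate-limiting idea above, here is a minimal token-bucket sketch (the class and parameters are assumptions, not part of TCP itself): packets are admitted only while tokens are available, capping the rate of control traffic.

```python
# Minimal token-bucket rate limiter sketch.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # refill rate in tokens per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1              # admit the packet
            return True
        return False                      # over the limit: drop or delay it
```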
The Retransmission Time Out (RTO) is used by the protocol to evaluate when to ask for retransmission.
• TCP uses a dynamic RTO in order to adapt to any type of underlying network used
• RTO is calculated using a specific algorithm taking into consideration the Round Trip Time (RTT)
• TCP uses a dynamic RTO to make the protocol as efficient as possible regardless of which underlying protocol is used
TCP Segment
Source Port
• The source port number (16 bits)
Destination Port
• The destination port number (16 bits)
Sequence Number
• Initial and random number. The Initial Sequence Number (ISN) is calculated for each system at the establishment of the connection. The sequence number of the first data octet in this segment. (32 bits)
Acknowledgment Number
• Indicates the number of the next expected sequence number (32 bits)
Header Length
• The number of 32-bit words in the TCP header.
• Indicates where the data begins
Unused
• 6 bits reserved for future use (must be 0)
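A minimal sketch of how these fields sit in the fixed 20-byte TCP header, parsed with Python's struct module (field names and the helper function are illustrative):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Parse the fixed 20-byte TCP header from a raw segment (sketch only)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4   # data offset: number of 32-bit words * 4 bytes
    flags = offset_flags & 0x01FF           # lower bits carry the control flags (SYN, ACK, FIN, ...)
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": header_len, "flags": flags,
        "window": window, "checksum": checksum,
    }
```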
Controlled Delay Management uses the following:
The Retransmission Time Out (RTO) is used by the protocol to evaluate when to ask for retransmission. It uses a specific algorithm which takes into consideration the Round Trip Time (RTT).
• TCP uses a dynamic RTO in order to adapt to any type of underlying network used
• TCP uses a dynamic RTO to make the protocol as efficient as possible regardless of which underlying protocol is used
It uses a mechanism to adaptively set and update the retransmission timeout based on varying network conditions.
It uses an adaptive algorithm to dynamically adjust the RTO, considering factors such as the Round Trip Time (RTT), variations in RTT, and prevailing network conditions.
Dynamic RTO relies on the Smoothed Round Trip Time (SRTT) as a central metric: a smoothed average of the round-trip time that takes into account both historical and recent RTT measurements.
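A minimal sketch of dynamic RTO estimation from RTT samples, assuming the standard smoothing constants from RFC 6298 (alpha = 1/8, beta = 1/4) and a 1-second lower bound:

```python
class RtoEstimator:
    def __init__(self):
        self.srtt = None      # smoothed round-trip time (SRTT)
        self.rttvar = None    # round-trip time variation
        self.rto = 1.0        # initial RTO in seconds

    def update(self, rtt_sample: float) -> float:
        if self.srtt is None:
            # First measurement: initialize SRTT and RTTVAR directly.
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            # Subsequent measurements: exponential smoothing of history and recent RTT.
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        # RTO = SRTT + 4 * RTTVAR, with a conservative 1-second lower bound.
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)
        return self.rto
```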
TCP Advanced Mechanisms
TCP – Error Control
Goals
• Make sure that data has not been altered while being transferred (Maintain Integrity)
• Make sure that the data is sent to the right destination
How does it work
• TCP adds a pseudo-header containing IP information (IP destination address, IP source address...)
• The checksum is calculated over this information before the TCP segment is sent
• The receiver does the same calculation using its IP address
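A minimal sketch of the checksum over a pseudo-header, assuming IPv4 addresses are given as 4-byte values and the checksum field inside the segment is zeroed before computing:

```python
import struct

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                           # pad to an even number of bytes
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def tcp_checksum(src_ip: bytes, dst_ip: bytes, tcp_segment: bytes) -> int:
    # Pseudo-header: source IP, destination IP, zero byte, protocol (6 = TCP), TCP length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(tcp_segment))
    return internet_checksum(pseudo + tcp_segment)
```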
TCP – Data Retention
To optimize the transmission, TCP waits for the buffer to be full before transmitting.
Segments are transmitted once:
• The buffer becomes full
• The PUSH field is set to 1
This reduces overhead and makes the protocol more efficient.
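The sockets API exposes a related knob, TCP_NODELAY, which disables Nagle's algorithm so small writes are sent immediately instead of being buffered. This is an API-level analogue of flushing data promptly, not the PUSH flag itself:

```python
# Disable Nagle's algorithm so small writes go out without waiting for a full buffer.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small segments without delay
```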
TCP - Flow Control
Definition
• Control the sender emission rate according to the receiver’s receiving capacity
Usage of dynamic windows
• The size of the window (amount of data to send) is being adjusted dynamically all along the transmission
TCP - Congestion Control
Definition
• Control the sender emission rate according to the lower layer physical network capacity
• Every time a segment is lost, TCP reduces the amount of data sent (by reducing its window)
• Communication starts with a window of 1 and goes up with every new ACK received
The goal is to reach the largest window the lower layer physical network can possibly handle.
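A simplified model of this window behaviour (the event list and halving-on-loss policy are assumptions for illustration): the window starts at 1, grows with each ACK, and shrinks when a segment is lost.

```python
# Simplified congestion-window model: grow on ACK, shrink on loss.
def simulate_window(events, max_window):
    cwnd = 1                               # communication starts with a window of 1
    history = [cwnd]
    for event in events:                   # each event is "ack" or "loss"
        if event == "ack":
            cwnd = min(cwnd + 1, max_window)   # grow on every new ACK received
        else:
            cwnd = max(1, cwnd // 2)           # reduce the window when a segment is lost
        history.append(cwnd)
    return history

print(simulate_window(["ack", "ack", "ack", "loss", "ack"], max_window=10))
```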
TCP SYN and Window Size Numbers
• Each connection starts with a Sequence Number (exchanged in the SYN)
• The sequence number is incremented by one in the “SYN – SYN-ACK – ACK” exchange at the startup of the connection (seen in slide 7)
• Window sizes are set large to start to keep the Maximum Transmission Units (MTUs) full (full 1500-byte MTUs are more efficient)
• Window size will get larger or smaller depending on the network conditions between the systems involved
The window size determines the number of bytes sent before an acknowledgment is expected.
The acknowledgement number is the number of the next expected byte.
Transmission Control Protocol (TCP)
• Layer 4 Protocol
• Connection Oriented
• Usage of Acknowledgments
• Error Control
• Flow Control
• Congestion Control
Internet Protocol (IP)
• Layer 3 Protocol
• Connectionless
• No Acknowledgment
• Best Effort