%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% writeLaTeX Example: A quick guide to LaTeX
%
% Source: Dave Richeson (divisbyzero.com), Dickinson College
%
% A one-size-fits-all LaTeX cheat sheet. Kept to two pages, so it
% can be printed (double-sided) on one piece of paper
%
% Feel free to distribute this example, but please keep the referral
% to divisbyzero.com
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% How to use writeLaTeX:
%
% You edit the source code here on the left, and the preview on the
% right shows you the result within a few seconds.
%
% Bookmark this page and share the URL with your co-authors. They can
% edit at the same time!
%
% You can upload figures, bibliographies, custom classes and
% styles using the files menu.
%
% If you're new to LaTeX, the wikibook is a great place to start:
% http://en.wikibooks.org/wiki/LaTeX
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[8pt,landscape]{article}
\usepackage{amssymb,amsmath,amsthm,amsfonts}
\usepackage{multicol,multirow}
\usepackage{calc}
\usepackage{ifthen}
\usepackage{listings}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\usepackage{helvet}
\renewcommand{\familydefault}{\sfdefault}
\usepackage[fontsize=6pt]{fontsize}
\usepackage[landscape]{geometry}
\geometry{a4paper, landscape, margin=0.25in}
\usepackage[colorlinks=true,citecolor=blue,linkcolor=blue]{hyperref}
\usepackage[
protrusion=true,
activate={true,nocompatibility},
final,
tracking=true,
kerning=true,
spacing=true,
factor=1100]{microtype}
\SetTracking{encoding={*}, shape=sc}{40}
%%Packages added by Sebastian Lenzlinger:
\usepackage{enumerate} %% Used to change the style of enumerations (see below).
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{axiom}{Axiom}
\newtheorem{lem}{Lemma}
\newtheorem{corr}{Corollary}
\usepackage{tikz} %% Package to create graphics (graphs, automata, etc.)
\usetikzlibrary{automata} %% Tikz library to draw automata
\usetikzlibrary{arrows} %% Tikz library for nicer arrow heads
%%End
\microtypecontext{spacing=nonfrench}
\ifthenelse{\lengthtest { \paperwidth = 11in}}
{ \geometry{top=.5in,left=.5in,right=.5in,bottom=.5in} }
{\ifthenelse{ \lengthtest{ \paperwidth = 297mm}}
{\geometry{top=0.5cm,left=0.5cm,right=0.5cm,bottom=0.5cm} }
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
}
\pagestyle{empty}
\makeatletter
\renewcommand{\section}{\@startsection{section}{1}{0mm}%
{0.1mm}%
{0.0001mm}%
{\normalfont\normalsize\bfseries}}
\renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}%
{0mm}%
{0mm}%
{\normalfont\small\bfseries}}
\renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}%
{-1ex plus -.5ex minus -.2ex}%
{1ex plus .2ex}%
{\normalfont\small\bfseries}}
\makeatother
\setcounter{secnumdepth}{0}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0pt plus 0.5ex}
% -----------------------------------------------------------------------
\title{Internet and Security FS23}
\begin{document}
\tiny
\raggedright
\footnotesize
\begin{multicols*}{4}
\setlength{\premulticols}{1pt}
\setlength{\postmulticols}{1pt}
\setlength{\multicolsep}{1pt}
\setlength{\columnsep}{1pt}
\section{Intro}
\textbf{Packet Switching}: Messages are divided into packets for transmission. Packets travel through communication links and packet switches (routers and link-layer switches). Store-and-forward transmission is used. End-to-end delay over $N$ links of rate $R$ for a packet of $L$ bits: $d_{\text{end-to-end}} = N\frac{L}{R}$.
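\emph{Worked example (illustrative numbers, not from the lecture):} with $N=3$ links, $L=8000$ bits and $R=2$ Mbps, store-and-forward gives $d_{\text{end-to-end}} = 3\cdot\frac{8000}{2\cdot 10^{6}} = 12$ ms (queuing and propagation delays ignored).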
\textbf{Circuit Switching}: Resources reserved for session duration. Constant transmission rate. Used in traditional telephone networks. Transmission time calculated based on reserved capacity. \textbf{Comparison}: Packet switching offers better sharing of capacity. Circuit switching reserves capacity regardless of demand. Packet switching is more efficient and cost-effective. \textbf{Queuing Delays and Packet Loss}: Output buffers used to store packets. Queuing delays occur during congestion. Full buffer leads to packet loss.
\subsection{IP}
\textbf{Network Service Model.} Possible network-layer services: guaranteed delivery, guaranteed delivery with bounded delay, in-order packet delivery, guaranteed minimal bandwidth, security (encryption). The Internet provides best-effort service with no guarantees on delivery, order, delay, or minimal bandwidth.
\textbf{Router components}: input ports, switching fabric, output ports, routing processor. Input ports: terminate incoming links, perform link-layer functions and the lookup for forwarding. Switching fabric: connects input ports to output ports for packet transfer. Output ports: store and transmit packets, perform link-layer and physical-layer functions. Routing processor: performs control-plane functions, executes routing protocols, maintains routing tables, and computes the forwarding table. Analogy: packet forwarding is like cars entering and leaving a roundabout, with entry stations determining the roundabout exit based on destination.
\textbf{Subnet}: hosts that can reach each other without going through a router.
\subsection{DHCP}
Provides permanent or temporary IP addresses. It also offers subnet masks, default gateway (router) addresses, and DNS server addresses. DHCP is plug-and-play, suitable for various networks. It uses a client-server model, with clients discovering DHCP servers. If no server is on the subnet, a relay agent forwards DHCP messages. The DHCP process involves server discovery, offer, client request, and server acknowledgment. Clients can renew leases to extend IP address usage. TCP connections cannot be maintained as a node moves between subnets, since its address changes. Protocol: Discover [optional], Offer [optional], Request, ACK.
\subsection{DNS}
\textbf{Services:} host aliasing, mail server aliasing, load distribution. \textbf{Resource Records:} (name, value, type, ttl). \textbf{Classes of DNS servers:} root servers provide the IPs of the top-level domain (TLD) servers, which provide the IPs of the authoritative DNS servers. Also local DNS servers (not part of the hierarchy but still important: they talk to the other DNS servers on the host's behalf). \textbf{Caching}: so that not every query has to go through the whole DNS hierarchy.
\section{Transport Layer}
Logical IPC between processes on different hosts. \emph{Services:} reliable, in-order delivery incl.\ congestion control, flow control and connections $\Rightarrow$ TCP. Unreliable, unordered delivery, a bare-bones extension of IP $\Rightarrow$ UDP. \emph{Unavailable services:} delay guarantees, bandwidth guarantees.
\subsection{Socket Prog}
\emph{A socket is:} a bi-directional IPC abstraction, a communication endpoint, an API for IPC. \emph{Types:} \verb|SOCK_STREAM| connection oriented, guaranteed delivery (e.g.\ TCP), \verb|SOCK_DGRAM| datagram based (e.g.\ UDP), \verb|SOCK_RAW| direct access to the network layer, \verb|SOCK_PACKET| direct access to the link layer, etc.
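A minimal sketch of the two main socket types in Python (host, ports and payload are invented demo values; error handling omitted):
\begin{lstlisting}
import socket

# SOCK_STREAM: connection-oriented byte stream (TCP)
def tcp_echo_client(host="127.0.0.1", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))   # TCP handshake happens here
        s.sendall(b"hello")       # reliable, in-order byte stream
        return s.recv(1024)

# SOCK_DGRAM: connectionless datagrams (UDP)
def udp_send(host="127.0.0.1", port=9001):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"hello", (host, port))  # no handshake, no delivery guarantee
\end{lstlisting}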
\subsection{Reliable Transfer}
\emph{Reliable channel:} no extra mechanisms needed, just send and receive. \emph{Channel with bit errors:} sender waits for receiver feedback (ACK/NAK) before sending the next packet (stop-and-wait). But what if the ACK/NAK is corrupted? Simple solution: sequence numbers. Then no NAK is needed: on a duplicate ACK, the sender knows the following packet was not received. \emph{Lossy channel with bit errors:} add a timer on the sender side, resend after timeout. \emph{Performance of stop-and-wait:} $d_{trans}=L/R$ ($L$ packet size, $R$ transmission rate); sender utilization (fraction of time actually busy sending bits) $U_{sender}=\frac{L/R}{RTT + L/R}$, very bad. Idea $\Rightarrow$ pipelining: keep sending within a send window.
\emph{Go-Back-N:} after timeout, resend everything from the oldest unACKed packet. ACKs are cumulative (everything up to that sequence number was received). Receiver discards out-of-order packets (no receiver buffering). The Go-Back-N (GBN) protocol allows the sender to transmit multiple packets without waiting for acknowledgments, limited to N unacknowledged packets. Sender's view: base is the oldest unacknowledged packet, nextseqnum the smallest unused sequence number. Four intervals of sequence numbers: [0, base-1] acknowledged, [base, nextseqnum-1] sent but unacknowledged, [nextseqnum, base+N-1] usable for immediate sending, and $\geq$ base+N not usable yet. N is the window size, defining the sliding-window protocol. Packet sequence numbers are carried in a fixed-length field in the packet header. The receiver discards out-of-order packets and sends ACKs for in-order packets. The GBN sender responds to events: invocation from above, receipt of ACK, timeout. GBN incorporates sequence numbers, cumulative acknowledgments, checksums, and timeout/retransmission.
\emph{Selective Repeat:} resend only the specific lost packet. The receiver individually acknowledges each correctly received packet. Out-of-order packets are buffered at the receiver until the missing packets arrive. The sender retransmits only those packets suspected to be lost or corrupted. SR uses a window size to limit the number of outstanding, unacknowledged packets. The sender and receiver windows may not always coincide, which can cause ambiguity; hence the window size must be at most half the sequence number space. Duplicate packets can occur due to packet reordering in network channels. Sequence number reuse is guarded against by ensuring a maximum packet lifetime. Additional techniques and extensions exist to address packet reordering challenges.
\subsection{TCP}
\emph{Overview:} point-to-point (1 sender, 1 receiver), reliable, in-order BYTE stream, pipelined, full duplex, MSS: maximum segment size, connection oriented (handshaking), flow controlled (sender won't overwhelm the receiver).
\textbf{RTT and Timeout:} \verb|EstRTT|$_{new}=(1-\alpha)\cdot$\verb|EstRTT|$_{old}+ \alpha \cdot$\verb|SampleRTT|. \verb|DevRTT|$_{new} = (1-\beta)\cdot$\verb|DevRTT|$_{old} + \beta\cdot|$\verb|SampleRTT| $-$ \verb|EstRTT|$|$. \verb|TimeoutInterval| $=$ \verb|EstRTT| $+ 4 \cdot$\verb|DevRTT|.
\textbf{Fast Retransmit:} sender gets 3 duplicate ACKs for the same data $\Rightarrow$ resend the unACKed segment with the smallest seq.\ no.; since that segment was likely lost, don't wait for the timeout.
\textbf{Flow Control:} receiver: \verb|rwnd| $=$ \verb|RcvBuff| $-$ (\verb|LstByteRcv| $-$ \verb|LstByteRead|); sender: \verb|LstByteSent| $-$ \verb|LstByteAcked| $\leq$ \verb|rwnd|.
\subsection{TCP Congestion Control}
\emph{Approach:} additive increase, multiplicative decrease (AIMD sawtooth: bandwidth probing). Sender: \verb|LstByteSent| $-$ \verb|LstByteAcked| $\leq$ min(\verb|rwnd|, \verb|cwnd|). Send rate $\approx$ \verb|cwnd|$/RTT$ bytes/sec (see FSM). \emph{Fairness:} TCP converges to an equal bandwidth share.
\includegraphics[width=\linewidth]{images/tcp_cc.png}
\section{Network Layer}
\emph{Forwarding and Routing:} the routing algorithm determines what goes into the forwarding (fw) table. Forwarding is local: within a router, based on the destination host address. Routing is global. Longest prefix matching is used for the fw table lookup.
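A minimal sketch of longest-prefix matching over a tiny fw table (Python; the prefixes, lengths and ports are made up for illustration):
\begin{lstlisting}
# (prefix bits, prefix length, output port) -- invented entries
TABLE = [("11001000000101110001",   20, 0),
         ("1100100000010111000101", 22, 1),
         ("110010000001011100011",  21, 2)]

def lookup(dest_bits):
    """Output port of the longest matching prefix (None if no match)."""
    best = None
    for prefix, plen, port in TABLE:
        if dest_bits.startswith(prefix) and (best is None or plen > best[0]):
            best = (plen, port)
    return None if best is None else best[1]

# 32-bit destination; the /20 and /22 both match, the /22 wins -> port 1
print(lookup("11001000000101110001011010100001"))
\end{lstlisting}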
The routing algorithm determines the end-to-end path through the network.
\subsection{Routing I}
Global: all routers have the complete topology $\Rightarrow$ link-state algo. Decentralized: a router knows its physically connected neighbors and the link costs to them; iterative computation, exchange of info with neighbors $\Rightarrow$ distance-vector algo. Static: routes change slowly; dynamic: routes change more quickly, periodic updates in response to link cost changes.
\textbf{Link State Algo (Dijkstra):} least-cost paths from one node to all others; network topology and link costs known to all nodes (same info); iterative: after k iterations, the least-cost paths to k destinations are known. Algo:
\begin{lstlisting}
Init: N' = {u}
  for all nodes v
    if v neighbor of u: D(v) = c(u,v)
    else D(v) = infinity
Loop:
  find w not in N' s.t. D(w) is a minimum
  add w to N'
  update D(v) for all v adj to w and not in N':
    D(v) = min(D(v), D(w) + c(w,v))
  /* new cost to v is either old cost, or known
     shortest path to w plus cost from w to v */
until all nodes in N'
\end{lstlisting}
Complexity: $O(n^2)$.
\textbf{Dist Vect (Bellman-Ford):} $d_x(y):=$ cost of the least-cost path from x to y; then $d_x(y)=\min_v\{c(x,v) + d_v(y)\}$. $D_x(y)=$ \emph{estimate} of the least cost from x to y; x maintains the distance vector $\mathbf{D}_x=[D_x(y):y\in N]$. Node x additionally knows the cost to each neighbor v, c(x,v), and holds the distance vector of each neighbor. \emph{Key idea:} from time to time, each node sends its own DV to its neighbors; when x receives a new DV, it updates its own DV using the BF equation $D_x(y)\leftarrow \min_v\{c(x,v)+D_v(y)\}$ for each node $y\in N$.
\begin{lstlisting}
Initialization:
  for all destinations y in N:
    Dx(y) = c(x,y)  /* if y not a neighbor: c(x,y) = infty */
  for each neighbor w
    Dw(y) = ? for all destinations y in N
  for each neighbor w
    send distance vector Dx = [Dx(y): y in N] to w
loop
  wait (until link cost change to some neighbor w
        or until distance vector received from some neighbor w)
  for each y in N:
    Dx(y) = min_v {c(x,v) + Dv(y)}
  if Dx(y) changed for any destination y
    send distance vector Dx = [Dx(y): y in N] to all neighbors
forever
\end{lstlisting}
Good news travels fast, bad news travels slowly (count-to-infinity problem). Solution: poisoned reverse. If Z routes via Y to reach X, Z tells Y that its distance to X is infinite ($D_Z(X)=\infty$), so Y won't route to X via Z.
\subsection{Routing 2}
Intra-AS routing: protocol used within the same AS. Gateway router: at the edge of its own AS, has a link to a router in another AS. Routing across ASs: inter-AS routing protocol. Fw table configured by both inter- and intra-AS protocols: intra for internal dests, intra and inter for external dests. \emph{Inter-AS tasks:} 1.\ learn from the inter-AS protocol that subnet x is reachable via multiple gateways $\rightarrow$ use info from the intra-AS protocol to determine the costs of the least-cost paths to each gateway $\rightarrow$ hot potato routing: choose the gateway with the smallest least cost $\rightarrow$ from the fw table get the interface I that leads to the least-cost gateway; enter (x,I) in the fw table.
\textbf{Intra-AS Routing:} RIP: Routing Information Protocol. DV algo with the number of hops as distance metric; DVs use (subnet, hops) pairs. RIP routing tables are managed by an application-level daemon; advertisements are sent in UDP periodically.
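A runnable sketch of the single DV update step that RIP-style protocols repeat (the same Bellman-Ford equation as in the pseudocode above); the topology and costs are made up:
\begin{lstlisting}
INF = float("inf")

# made-up link costs from node x to its neighbors
c = {"v": 2, "w": 7}
# distance vectors most recently received from those neighbors
D = {"v": {"v": 0, "w": 4, "x": 2, "y": 3},
     "w": {"v": 4, "w": 0, "x": 7, "y": 1}}

def dv_update(dest):
    """Dx(dest) <- min over neighbors v of c(x,v) + Dv(dest)."""
    return min((c[v] + D[v].get(dest, INF) for v in c), default=INF)

print(dv_update("y"))   # min(2+3, 7+1) = 5
\end{lstlisting}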
\textbf{BGP} provides each AS with: eBGP: obtain subnet reachability info from neighboring ASs; iBGP: propagate reachability info to all AS-internal routers; determine ``good'' routes to other networks $\Rightarrow$ allows subnets to advertise their existence to the rest of the Internet. \emph{BGP session:} two BGP routers (peers) exchange BGP msgs, advertising paths to different destination network prefixes, over a semi-permanent TCP connection. When a router learns about a new prefix, it creates a new entry for the prefix in its fw table. Advertised prefix attributes: prefix + attributes = ``route''. \emph{Important attributes:} AS-PATH: the ASs through which the prefix advertisement has passed; NEXT-HOP: the IP addr of the router interface that begins the AS-PATH, aka where to leave the current AS. \textbf{Summary, how info gets into the fw table:} 1.\ router becomes aware of a prefix via BGP advertisements from others. 2.\ Determine the router output port for the prefix: 2.1 use BGP route selection to find the best inter-AS route, 2.2 use OSPF to find the best intra-AS route leading to the best inter-AS route, 2.3 the router identifies the router port for that best route. 3.\ Enter the prefix-port entry in the fw table. \emph{Why different intra-/inter-AS routing?} \emph{Policy:} inter: admin wants control over how its traffic is routed and who routes it; intra: single admin, no policy decisions needed. \emph{Scale:} hierarchical routing saves table size and reduces update traffic. \emph{Performance:} intra: can focus on performance; inter: policy may dominate over performance.
\section{Data Link Layer}
\textbf{Terminology:} \emph{nodes:} hosts and routers. \emph{links:} communication channels connecting adjacent nodes along the communication path. \emph{frame:} layer-2 packet encapsulating a datagram. \emph{Purpose:} the data-link layer transfers datagrams from a node to a physically adjacent node over a link.
\subsection{MAC: TDMA, random access}
\emph{Link types:} point-to-point: like Ethernet switch to host. Broadcast: like old Ethernet, 802.11 wireless LAN. \emph{Problem:} collisions $\Rightarrow$ Medium Access Control (MAC) protocol: a distributed algo that determines how nodes share the channel, i.e.\ when who may transmit; no out-of-band channel for coordination, the same channel carries the protocol and the normal data. \emph{MAC protocol types:} Channel partitioning: TDMA (divide time on the channel into rounds, every station gets a fixed slot per round, unused slots go idle), FDMA (divide the frequency band, stations get a fixed frequency band, unused transmission time in frequency bands goes idle). Random access: if a node has a packet, transmit at the full channel rate R; no a priori coordination among nodes; this can lead to collisions, so the MAC protocol specifies how to detect and/or recover from collisions. All of the following are random access. \emph{Slotted ALOHA:} assumptions: all frames the same size; time divided into equal-size slots; nodes start transmitting only at slot beginnings; nodes are synchronized; if 2 or more nodes transmit in a slot, all nodes detect the collision. Operation: when a node gets a fresh frame, it transmits in the next slot; if no collision, the node can send a new frame in the next slot; if collision, the node retransmits the frame in each subsequent slot with prob.\ p until success. Pros: a single active node can continuously transmit at the full rate of the channel; highly decentralized (only the slot times need to be synchronized); simple. Cons: collisions wasting slots, idle slots, clock synchronization. Efficiency (long-run fraction of successful slots): N nodes transmit with prob.\ p, then the prob.\ that a given node succeeds $=p(1-p)^{N-1}$, and the prob.\ that \emph{any} node succeeds $=Np(1-p)^{N-1}$. As $N\to\infty$ the max efficiency is $1/e\approx 0.37$.
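A quick numeric check of the slotted-ALOHA efficiency formula above (Python; the values of N are arbitrary):
\begin{lstlisting}
def best_efficiency(N):
    """Maximum of N*p*(1-p)^(N-1); the maximizing p is 1/N."""
    p = 1.0 / N
    return N * p * (1 - p) ** (N - 1)

for N in (2, 10, 100, 10000):
    print(N, round(best_efficiency(N), 4))  # tends to 1/e ~ 0.368
\end{lstlisting}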
\emph{CSMA (carrier sense multiple access):} listen before transmitting; if the channel is idle, send; else defer the transmission (i.e.\ don't interrupt others). Collisions are still possible because of propagation delay; if a collision happens, the entire packet transmission time is wasted. \emph{CSMA/CD (collision detection):} CD is easy if wired, hard if wireless. Abort if a collision is detected. \emph{Ethernet CSMA/CD algo:} 1.\ NIC receives a datagram from the network layer, makes a frame. 2.\ If the NIC senses the channel idle, it starts the frame transmission. If it senses the channel busy, it waits until idle, then transmits. 3.\ If the NIC transmits the entire frame without detecting another transmission, the NIC is finished with the frame. 4.\ If it detects another transmission while transmitting, it aborts and sends a jam signal. 5.\ After aborting, the NIC enters binary (exponential) backoff: after the m-th collision, the NIC chooses K at random from $\{0,1,2,...,2^m-1\}$, waits $K\cdot 512$ bit times, and returns to step 2. Longer backoff intervals after more collisions. \emph{CSMA/CD efficiency:} $t_{prop}=$ max propagation delay between 2 nodes in the LAN, $t_{trans}=$ time to transmit a max-size frame, $\mathit{efficiency}=1/(1+5t_{prop}/t_{trans})$; goes to 1 as $t_{prop}\to 0$ or $t_{trans}\to\infty$. Better than ALOHA; simple, cheap, decentralized.
\textbf{Taking-turns protocols} aim for the best of channel partitioning and random access: polling (master node invites slave nodes to transmit in turn); problems: polling overhead, latency, single point of failure (master node). Bluetooth uses this. Token passing: a control token is passed from node to node sequentially (token ring). Problems: same as polling.
\subsection{Ethernet}
Connectionless, unreliable (the receiving NIC sends no ACKs or NAKs; dropped frames are only recovered if a higher layer uses RDT, like TCP). MAC protocol: unslotted CSMA/CD with binary backoff. Ethernet consists of a link-layer protocol and frame format plus physical-layer implementations over different media like fiber and cable. \emph{Physical topology:} bus: all nodes in the same collision domain. Star (std today): active switch in the center, each spoke runs its own Ethernet protocol, no collisions between nodes. \emph{Frame structure:} the sending adapter encapsulates the IP datagram in an Ethernet frame. Preamble: 7 bytes to synchronize receiver and sender clock rates. Addresses: 6-byte source and dest MAC addr. Type: indicates the higher-level protocol. CRC: cyclic redundancy check at the receiver; if an error is detected, the frame is dropped.
\subsection{ARP}
The 32-bit IP addr is the network-layer addr of an interface, used for layer-3 forwarding. MAC address: used `locally' to get a frame from one interface to another physically-connected interface (same network, in the IP-addr sense). Each adapter on a LAN has a unique LAN address (MAC addr). \emph{Question:} how to get an interface's MAC addr given its IP addr? Solution: ARP table; each IP node on the LAN has a table with IP/MAC addr mappings for some LAN nodes (incl.\ TTL). \emph{ARP protocol, same LAN:} A to B: if B's MAC is not in A's table, A broadcasts an ARP query with B's IP. All nodes receive it; B replies to A with B's MAC (frame sent to A's MAC, unicast). A caches the IP/MAC pair in its ARP table until it times out. ARP is plug-and-play. \emph{ARP routing to another LAN:} A sends a datagram to B via router R. A makes an IP datagram with source IP A, dest IP B. A makes a link-layer frame with R's MAC as dest around the datagram. When R gets A's frame, it removes the datagram and passes it to IP. R forwards the datagram with IP src A, dest B. R makes a link-layer frame with R's MAC as src, B's MAC as dest, and the A-to-B IP datagram inside.
\subsection{Hubs and Switches}
\emph{Hubs:} physical-layer repeaters. Incoming bits go out on all other links at the same rate. All nodes connected to a hub can collide; no frame buffering; no CSMA/CD at the hub: the host NICs detect collisions. Backbone hubs can interconnect LAN segments, but this makes the collision domain bigger and bigger, and 10BaseT and 100BaseT cannot be interconnected this way. \emph{Ethernet switch:} link-layer device; stores and forwards Ethernet frames; selective forwarding based on the destination MAC addr; uses CSMA/CD to access a segment. Transparent, i.e.\ hosts are unaware of switches. Plug-and-play and self-learning: no configuration! Hosts connect directly to switch interfaces; the switch buffers packets; the Ethernet protocol runs on each incoming link, but there are no collisions; full duplex; each link is its own collision domain. Hosts can transmit simultaneously as long as no two frames are destined to the same output interface. \emph{Self-learning:} the switch builds its switch table as hosts send frames: it notes the src MAC and the interface the frame came in on. If it cannot find the dest MAC in the table: flood, i.e.\ forward on all other interfaces.
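A minimal sketch of the self-learning/flooding logic (Python; the MAC strings, interface numbers and aging time are invented):
\begin{lstlisting}
table = {}   # MAC address -> (interface, time learned)

def handle_frame(src, dst, in_iface, now, all_ifaces, ttl=300):
    table[src] = (in_iface, now)          # learn where the sender lives
    entry = table.get(dst)
    if entry and now - entry[1] < ttl:    # known and not aged out
        out = entry[0]
        # filter: never send a frame back out the interface it arrived on
        return [] if out == in_iface else [out]
    return [i for i in all_ifaces if i != in_iface]   # unknown dest: flood

# frame from AA to BB arrives on port 1 of a 4-port switch -> flooded
print(handle_frame("AA", "BB", 1, now=0, all_ifaces=[1, 2, 3, 4]))  # [2, 3, 4]
\end{lstlisting}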
\subsection{NAT and Firewall}
$\bullet$ NAT operates at the network layer (Layer 3) of the OSI model.\ $\bullet$ It typically resides in a router or firewall device.\ $\bullet$ NAT translates private IP addresses used within a local network to a public IP address for communication with external networks.\ $\bullet$ It maintains a translation table that maps private IP addresses to public IP addresses.\ $\bullet$ NAT can be configured in different modes, such as static NAT, dynamic NAT, and port address translation (PAT).\ $\bullet$ Static NAT maps a specific private IP address to a specific public IP address.\ $\bullet$ Dynamic NAT assigns public IP addresses from a pool of available addresses on a first-come, first-served basis.\ $\bullet$ IP pooling is a technique used in dynamic NAT where a range of public IP addresses is allocated for translation.\ $\bullet$ NAT supports the migration of services and devices between networks by updating the translation table accordingly.\ $\bullet$ IP masquerading, also known as network address hiding, is a form of NAT where the source IP address of outgoing packets is rewritten so that they appear to originate from the NAT device itself (not to be confused with IP spoofing).\ $\bullet$ NAT provides additional security by hiding internal IP addresses from external networks.\ $\bullet$ It can also help in load balancing and traffic management by distributing incoming traffic across multiple internal devices.\ $\bullet$ Load balancing refers to the distribution of network traffic across multiple servers or devices to optimize resource utilization and improve performance.\ $\bullet$ NAT is widely used in home networks, small office networks, and large-scale enterprise networks.\ $\bullet$ It plays a crucial role in prolonging the use of IPv4 in the face of limited address space.\ $\bullet$ However, NAT can introduce certain limitations, such as difficulties in hosting services that require inbound connections.\ $\bullet$ The introduction of IPv6, with its larger address space, reduces the need for NAT in future network deployments.
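A minimal sketch of a PAT-style NAT translation table (Python; all addresses and port numbers are invented):
\begin{lstlisting}
PUBLIC_IP = "203.0.113.7"   # invented public address of the NAT device
nat = {}                    # (private ip, private port) -> public port
back = {}                   # public port -> (private ip, private port)
next_port = 5001

def outbound(priv_ip, priv_port):
    """Rewrite the source of an outgoing packet, allocating a public port."""
    global next_port
    key = (priv_ip, priv_port)
    if key not in nat:
        nat[key] = next_port
        back[next_port] = key
        next_port += 1
    return PUBLIC_IP, nat[key]

def inbound(pub_port):
    """Map a reply at the public port back to the internal host (None: drop)."""
    return back.get(pub_port)

print(outbound("10.0.0.4", 3345))  # ('203.0.113.7', 5001)
print(inbound(5001))               # ('10.0.0.4', 3345)
\end{lstlisting}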
$\bullet$ A firewall is a network security device that monitors and controls incoming and outgoing network traffic.\ $\bullet$ It acts as a barrier between internal networks (e.g., a private LAN) and external networks (e.g., the Internet) to enforce security policies.\ $\bullet$ Firewalls can be implemented in both hardware and software forms.\ $\bullet$ The primary purpose of a firewall is to protect the network from unauthorized access and malicious activities.\ $\bullet$ Packet filtering is a basic firewall technique that examines individual packets of network traffic based on pre-defined rules.\ $\bullet$ It allows or blocks packets based on criteria such as source/destination IP addresses, ports, and protocols.\ $\bullet$ Packet filtering firewalls operate at the network layer (Layer 3) or transport layer (Layer 4) of the OSI model.\ $\bullet$ They can be configured to permit or deny specific types of traffic, effectively creating a security perimeter.\ $\bullet$ Application gateways, also known as proxy firewalls, operate at the application layer (Layer 7) of the OSI model.\ $\bullet$ They act as intermediaries between clients and servers, inspecting and filtering network traffic at the application level.\ $\bullet$ Application gateways provide more advanced security features and analyze the content of packets, making intelligent decisions based on the application protocol.\ $\bullet$ They can prevent unauthorized access, perform deep packet inspection, and provide additional security measures like encryption and authentication.\ $\bullet$ Firewalls can be configured to support various network security policies, including allowing or blocking specific protocols (e.g., ICMP, TCP, UDP), defining access control lists (ACLs), and setting up virtual private networks (VPNs).\ $\bullet$ Firewalls can also implement Network Address Translation (NAT) to hide internal IP addresses and provide an extra layer of security.\ $\bullet$ Specialized Next-Generation Firewalls (NGFW) combine traditional packet filtering with advanced features like intrusion detection and prevention, deep packet inspection, and application awareness.\ Stateless filtering, aka packet filtering, treats each packet in isolation: it makes decisions based solely on a packet's individual properties, without considering its context within a network session or connection. Criteria may include source and destination IP addresses, ports, or protocols. Stateful filtering keeps track of active sessions and checks each packet's state and context as part of a larger conversation. It is more flexible and secure than stateless filtering because it understands the state of network connections, but it is more resource-intensive.
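A minimal sketch of stateless (ACL-based) packet filtering with first-match semantics (Python; the rules and addresses are invented examples):
\begin{lstlisting}
# rule: (action, src prefix, dst prefix, protocol, dst port); None = wildcard
ACL = [
    ("allow", "222.22.", None,      "TCP", 80),   # web traffic leaving our net
    ("allow", None,      "222.22.", "TCP", None), # TCP replies coming back in
    ("deny",  None,      None,      None,  None), # default: drop everything
]

def filter_packet(src, dst, proto, dport):
    for action, s, d, p, port in ACL:     # first matching rule wins
        if ((s is None or src.startswith(s)) and
            (d is None or dst.startswith(d)) and
            (p is None or proto == p) and
            (port is None or dport == port)):
            return action
    return "deny"

print(filter_packet("222.22.0.5", "198.51.100.9", "TCP", 80))  # allow
print(filter_packet("198.51.100.9", "222.22.0.5", "UDP", 53))  # deny
\end{lstlisting}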
\subsection{Switches in Data Center Networks}
$\bullet$ Role of Switches: In data center networks, switches play a critical role in directing and controlling data traffic between servers and systems. They can operate at several layers of the OSI model, primarily Layer 2 (data link) and Layer 3 (network). $\bullet$ Types of Switches: In data centers, both Ethernet switches (Layer 2) and multilayer switches (Layer 3) are commonly used. Layer 2 switches forward packets based on MAC addresses, while Layer 3 switches also incorporate routing functionality, forwarding packets based on IP addresses. $\bullet$ Challenges: Data center networks face several challenges, including scalability, redundancy, load balancing, fault tolerance, and energy efficiency. There's a need to handle a high volume of traffic and simultaneously support a wide range of applications with varying performance needs. $\bullet$ Load Balancing: Load balancing at the application layer (Layer 7) involves distributing network traffic across multiple servers based on the content of the client's request. This approach allows for more intelligent and flexible distribution of traffic. Load balancers can take into account factors such as server load, application type, and session information to make routing decisions. For instance, load balancers can direct web traffic to servers optimized for web hosting, and direct database queries to servers optimized for database operations. $\bullet$ Rich Switch Interconnection: In large-scale data centers, it's critical to have a high degree of interconnection between switches to ensure low latency, high bandwidth, and fault tolerance. Two common approaches are hierarchical (tiered) switch topologies and highly interconnected fat-tree/Clos fabrics.
\subsection{Wireless: Concepts, CDMA}
\emph{Wireless Link Characteristics:} $\bullet$ Signal Strength Attenuation and Path Loss: The strength of a wireless signal decreases (attenuates) as it travels further from the source. This can be affected by the transmission power, frequency of the signal, distance traveled, and environmental factors. $\bullet$ Multipath Propagation: In wireless communication, the signal from the transmitter can reach the receiver via multiple paths due to reflection, refraction, and scattering. This can cause interference at the receiver and may result in signal fading. $\bullet$ Interference: Wireless links are susceptible to interference from other devices using the same frequency band. This interference can degrade the performance of the wireless network. \emph{Wireless host:} laptop, smartphone etc.; runs apps, may be stationary or mobile. \emph{Base station:} usually connected to the wired network; relay, responsible for sending packets between the wired network and the wireless hosts in its area. \emph{Wireless link:} usually used to connect a mobile to a base station, or as a backbone link; a multiple access protocol coordinates link access; various data rates and transmission distances. \emph{Infrastructure mode:} base station connects mobiles into the wired network; handoff: mobiles change base station. \emph{Ad hoc mode:} no base station. Nodes only transmit to other nodes within link coverage; nodes self-organize into a network.
\subsection{Wireless: 802.11}
$\bullet$ 802.11 LAN Architecture: An 802.11 wireless LAN typically consists of one or more Access Points (APs) and multiple wireless clients. The APs form the basis of the network and connect the wireless clients to the wired network infrastructure. The wireless clients can operate in two modes: infrastructure mode (connected to an AP) and ad-hoc mode (directly connected to other wireless clients, forming a peer-to-peer network). $\bullet$ Channels: The 802.11 standard uses specific frequency bands that are divided into channels. In the 2.4GHz band (used by 802.11b/g/n), there are typically 11 channels in North America, and 13 in most of Europe. The 5GHz band (used by 802.11a/n/ac/ax) has more available channels. Each channel has a certain width, measured in MHz. The choice of channel can influence the wireless network's performance due to potential overlap with nearby networks and interference. $\bullet$ Association: Before a wireless client can send or receive data, it needs to associate with an AP.
This process involves the client scanning for available networks, selecting one, authenticating, and then associating with the selected AP. Once association is complete, the client can start transmitting and receiving data. $\bullet$ Passive and Active Scanning: Passive scanning involves the client listening for Beacon Frames from nearby APs. These beacons contain all the information a client needs to understand the capabilities of the AP. Active scanning, on the other hand, involves the client sending a Probe Request and then waiting for a Probe Response from an AP. This can speed up the process of finding an AP but consumes more power and bandwidth. $\bullet$ Multiple Access and Collision Avoidance: To avoid collisions (i.e., two clients trying to send data at the same time), 802.11 uses a method called Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). When a device wants to transmit, it first listens to the wireless medium to see if other devices are transmitting. If the medium is busy, the device waits for it to become free. It then waits a random amount of additional time before starting its transmission, reducing the chance of a collision. If the medium is free, it can begin transmission immediately. Additionally, 802.11 devices can use RTS (Request to Send) and CTS (Clear to Send) control frames to reserve the medium for a certain amount of time, further reducing the chances of a collision. \subsection{Mobile Internet (cellular, 5G etc.)} Cellular Network Architecture: $\bullet$ A cellular network consists of Mobile Stations (MSs), Base Transceiver Stations (BTSs), Base Station Controllers (BSCs), Mobile Switching Centers (MSCs), and a network backbone. $\bullet$ The MS is the user's device, such as a cellphone. $\bullet$ The BTS is the closest network entity to the MS, often referred to as a cell tower. $\bullet$ The BSC controls multiple BTSs and manages their radio resources. $\bullet$ The MSC manages multiple BSCs and serves as a bridge between the cellular network and the PSTN (Public Switched Telephone Network). $\bullet$ The network backbone includes the servers and high-speed data links that manage internet connectivity. The First Hop: $\bullet$ The first hop in cellular internet access is between the MS and the BTS. Data is transferred using radio waves. Mobility Handling and GSM Indirect Routing to Mobile: $\bullet$ In GSM (Global System for Mobile Communications), when a call is made to a mobile user, the call is first routed to the user's home network. The home network then queries the current location of the mobile device in a database (the HLR - Home Location Register) and routes the call to that location. $\bullet$ If the mobile user moves during the call, the network performs a handover to transfer the call to a new cell tower without interrupting the call. Handoff with Common MSC and Between MSCs: $\bullet$ Handoff or handover is the process of transferring an ongoing call or data session from one cell network to another. $\bullet$ Intra-MSC handover happens when the MS moves from one cell to another within the same MSC. The MSC updates the necessary data and controls the handover. $\bullet$ Inter-MSC handover happens when the MS moves from a cell controlled by one MSC to a cell controlled by another MSC. In this case, the original MSC requests the new MSC to allocate the necessary resources and then transfers control to the new MSC. The original MSC also updates the HLR with the new location of the MS. 
\section{Multimedia Networking} $\bullet$ Requirements: Bandwidth: Multimedia files, especially video, can be large and thus require significant bandwidth for smooth streaming. Low Latency: Real-time or interactive multimedia (like video calls or live streaming) requires low latency for synchronization between users and maintaining the quality of the service. Jitter Control: The variation in packet arrival times, or jitter, should be minimized. High jitter can lead to a choppy audio or video playback experience. Data Loss: Given that the Internet Protocol allows for packet loss, multimedia content needs to be transferred in such a way that loss doesn't significantly degrade the quality. $\bullet$ Solutions: Compression: Codecs reduce file sizes, making them easier to transmit over networks. For example, H.264 and VP9 for video, and AAC and MP3 for audio. Adaptive Bitrate Streaming: Techniques like DASH or HLS dynamically adjust the quality of a video stream in real time based on network conditions and CPU utilization. Buffering: To compensate for network variability, players can buffer, or temporarily download, a certain amount of video or audio before starting playback. This helps ensure smooth playback even when network conditions fluctuate. Error Correction Techniques: Forward Error Correction (FEC) and Automatic Repeat reQuest (ARQ) are used to detect and correct errors that occur during the transmission of data. Quality of Service (QoS): Networks can implement QoS mechanisms to prioritize multimedia traffic and ensure it receives the necessary bandwidth and low latency. CDNs (Content Delivery Networks): CDNs distribute multimedia content closer to the user, reducing the distance that data has to travel and hence improving speed and reducing latency. Protocols: Use of specialized protocols like RTP for transport, RTCP for quality feedback, and control protocols like RTSP or SIP for setup and management of multimedia sessions. Network Performance Requirements: Delay, packet loss, bandwidth, and jitter are the primary factors for transmitting audio or video. While multimedia applications are delay-sensitive and require a certain bandwidth, they can tolerate infrequent losses. Challenges: All packets currently get the same best-effort service, apart from the loss=0 guarantee provided by TCP. No other performance guarantees exist. Scalability also poses a challenge, especially for one-to-many or many-to-many transmissions. Multimedia Networking Solutions: $\bullet$ Performance Guarantees: Add Quality of Service (QoS) to the internet stack for performance guarantees. Examples are IntServ and DiffServ. $\bullet$ IP Multicast: There have been several attempts at this but with limited success. $\bullet$ Adaptive Applications: They make the most out of the best-effort service. These include streaming and (semi-)real-time support over UDP and TCP. $\bullet$ Application-Layer Solutions: These include Content Delivery Networks (CDNs), Peer-to-Peer (P2P) networks, application-layer multicast, etc. They have been increasingly successful. $\bullet$ Multimedia Networking: Multimedia networking involves the transmission of different types of data like text, graphics, video, voice, and audio over networks. The key aspect of multimedia networking is that the data is typically synchronized and continuous. 
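Tying into the adaptive applications and the DASH/HLS idea above, a minimal sketch of throughput-based quality selection (Python; the bitrate ladder and safety factor are invented):
\begin{lstlisting}
BITRATES = [250, 750, 1500, 3000, 6000]   # available encodings, kbit/s

def choose_bitrate(throughput_kbps, safety=0.8):
    """Highest encoding that fits under a fraction of measured throughput."""
    budget = throughput_kbps * safety
    fitting = [b for b in BITRATES if b <= budget]
    return fitting[-1] if fitting else BITRATES[0]

print(choose_bitrate(2000))  # 1500: highest rate below 0.8 * 2000 = 1600
print(choose_bitrate(400))   # 250
print(choose_bitrate(100))   # 250: nothing fits, fall back to lowest quality
\end{lstlisting}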
$\bullet$ Performance Requirements: These include high bandwidth for large amounts of data, low latency for synchronization and real-time applications, low jitter (variance in delay) for stable and consistent stream quality, and minimal packet loss to prevent quality degradation. $\bullet$ Streaming Media: Streaming involves transmitting media over the network in a continuous stream. It can be either stored streaming (for pre-recorded content like Netflix videos) or live/interactive streaming (for real-time communication like Skype or Zoom calls). UDP for Streaming: $\bullet$ The server sends data at a rate suitable for the client, often ignoring network congestion. Often, the send rate equals the encoding rate, which is constant. $\bullet$ The fill rate equals the constant rate minus packet loss. $\bullet$ A short playout delay (a few seconds) compensates for jitter. $\bullet$ Error recovery is applied if time permits. TCP for Streaming: $\bullet$ The server sends data at the maximum rate possible under TCP. $\bullet$ The buffer fill rate fluctuates due to TCP's congestion control. $\bullet$ A larger playout delay smooths the TCP delivery rate. $\bullet$ TCP is popular for streaming over HTTP because it passes more easily through firewalls. $\bullet$ Play-Out Buffering: This mechanism is used at the receiver end to handle network jitter and compensate for the variable packet arrival time. Data packets are temporarily stored and then played out at a consistent rate, maintaining smooth playback. $\bullet$ Client Buffering: This involves storing a portion of the received media before it begins playing. This buffer can help handle network delays and fluctuation in delivery rates. Larger buffers can handle larger network delays but increase latency. $\bullet$ RTSP (Real Time Streaming Protocol): This network control protocol is designed to control the delivery of streaming media servers. It supports operations like pause, rewind, and fast forward. $\bullet$ Real-time Interactive Media: This involves live interaction between users, such as video calls. A codec (coder-decoder) is used for compressing and decompressing the media for transmission. $\bullet$ RTP/RTCP (Real-time Transport Protocol/Real-time Transport Control Protocol): RTP is a protocol used to transport real-time data, like audio and video, over networks. RTCP works alongside RTP, providing out-of-band control information and periodic transmission statistics for quality of service (QoS) monitoring. $\bullet$ SIP (Session Initiation Protocol): This is a signaling protocol used to establish, modify, and terminate multimedia sessions, like VoIP calls. $\bullet$ SDP (Session Description Protocol): SDP describes multimedia sessions, providing necessary information for participants to join a session. It is commonly used with RTSP and SIP. $\bullet$ H.323: This is an ITU-T standard for audio, video, and data communications over IP networks. It encompasses various protocols for call setup, control, and media transport. $\bullet$ Mitigating Delay and Loss: Several techniques can be used to manage delay and loss in multimedia networking, such as buffering, Forward Error Correction (FEC), interleaving, and error concealment techniques. These can help ensure media is received and played back correctly, even if some data is lost or delayed. 
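A small sketch of the fixed playout-delay idea behind the play-out/client buffering above: each packet is played at its send time plus a constant offset, which absorbs jitter (Python; times in ms, the arrival pattern is made up):
\begin{lstlisting}
PLAYOUT_DELAY = 150   # ms, invented fixed playout offset

# (send time, arrival time) per packet; arrival jitter is made up
packets = [(0, 40), (20, 90), (40, 130), (60, 230)]

for send, arrive in packets:
    playout = send + PLAYOUT_DELAY            # scheduled playout time
    if arrive <= playout:
        print(f"sent {send:3d} ms: play at {playout} ms")
    else:
        print(f"sent {send:3d} ms: arrived {arrive} > {playout}, too late (lost)")
\end{lstlisting}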
$\bullet$ Multimedia QoS (Quality of Service): QoS mechanisms can be used to ensure satisfactory performance for multimedia traffic by prioritizing certain types of traffic, limiting delay, jitter, and packet loss, and guaranteeing a certain level of service. $\bullet$ DASH (Dynamic Adaptive Streaming over HTTP): This is an adaptive bitrate streaming technique that enables high-quality streaming of media content over the internet. DASH works by adjusting the quality of a media stream in real time, based on the viewer's network and playback conditions. $\bullet$ CDNs (Content Delivery Networks): CDNs are a system of distributed servers that deliver content to a user based on their geographic location, the origin of the webpage, and the content delivery server. This helps improve performance and scalability. $\bullet$ P2P (Peer-to-Peer) Networks: In peer-to-peer networks, direct sharing of content between peers eliminates the need for central servers, which can efficiently distribute high-demand content and reduce server load. Examples include BitTorrent and certain live streaming platforms. \begin{enumerate} \item \textbf{Application Layer (HTTP or HTTPS):} The user enters a URL in the web browser, or clicks on a link. The browser formulates an HTTP (or HTTPS) GET request. \item \textbf{DNS (Domain Name System):} The browser needs to resolve the domain name to an IP address. It sends a DNS query to a DNS server to obtain the IP address associated with the domain. \item \textbf{DHCP (Dynamic Host Configuration Protocol):} If the client doesn't have an IP address, it uses DHCP to obtain network configuration details from a DHCP server. The DHCP server assigns an IP address to the client and provides other network settings. \item \textbf{ARP (Address Resolution Protocol):} Before sending packets to other devices on the local network, the client needs to resolve the MAC address of the gateway router. It sends an ARP request to obtain the MAC address of the gateway. \item \textbf{Transport Layer (TCP or potentially QUIC):} The HTTP request is wrapped in a TCP packet for HTTP or a QUIC packet for HTTPS (if supported by the server). TCP provides reliable, connection-oriented communication between the client and server. \item \textbf{Network Layer (IP):} The transport layer packet is encapsulated in an IP packet. IP handles routing of packets across the network based on the destination IP address. \item \textbf{Data Link Layer (Ethernet or Wi-Fi):} The IP packet is encapsulated in a frame (Ethernet or Wi-Fi) for transmission over the physical network. \item \textbf{Physical transmission:} The frame is transmitted over the physical medium (e.g., Ethernet cable or Wi-Fi signal). \item \textbf{Internet routers:} Routers along the path receive the packet, examine the destination IP address, and forward it towards the server. \item \textbf{Server's Link, Network, and Transport Layers:} The server's network stack processes the frame, extracts the IP packet, and passes it up to the transport layer. The transport layer extracts the HTTP request and passes it up to the application layer. \item \textbf{Server's Application Layer (HTTP/HTTPS):} The server's web server software processes the HTTP request and generates an appropriate HTTP response. \item \textbf{Back to Client:} The HTTP response is passed down the layers at the server, transmitted back over the internet, and received by the client. The response travels up the layers at the client's machine and is rendered by the web browser. 
\item \textbf{Intra-Domain Routing:} Intra-domain routing protocols, such as RIP, OSPF, or IS-IS, are used by routers within a network to determine the best path for forwarding packets between subnets or domains. \end{enumerate} \begin{enumerate} \item Application Layer (HTTP or HTTPS): The web browser formulates an HTTP (or HTTPS) GET request. \item Transport Layer (TCP or potentially QUIC): The HTTP request is wrapped in a TCP packet (or QUIC packet) for reliable transport. \item Network Layer (IP): The transport layer packet is encapsulated in an IP packet for routing. \item Data Link Layer (Ethernet or Wi-Fi): The IP packet is encapsulated in a frame (Ethernet or Wi-Fi) for transmission over the physical network. \item Physical transmission: The frame is transmitted over the physical medium (e.g., Ethernet cable or Wi-Fi signal). \item Internet routers: The packet travels through a series of routers, each determining the next hop based on the destination IP address. \item Server's Link, Network, and Transport Layers: The packet arrives at the server's network interface, where the frame is processed and the IP packet is extracted. The transport layer demultiplexes the packet and passes the HTTP request up to the application layer. \item Server's Application Layer (HTTP/HTTPS): The server's web server software processes the HTTP request and formulates an HTTP response. \item Back to Client: The HTTP response is passed down the layers at the server, transmitted over the internet, and travels up the layers at the client's machine. \item Data Link Layer (Ethernet or Wi-Fi): The IP packet is encapsulated in a frame (Ethernet or Wi-Fi) for transmission back to the client. \item Physical transmission: The frame is transmitted over the physical medium (e.g., Ethernet cable or Wi-Fi signal). \item Internet routers: The packet travels back through the series of routers, each determining the next hop towards the client. \item Client's Link, Network, and Transport Layers: The packet arrives at the client's network interface, where the frame is processed and the IP packet is extracted. The transport layer demultiplexes the packet and passes the HTTP response up to the application layer. \item Client's Application Layer (HTTP/HTTPS): The web browser receives the HTTP response and processes the content (e.g., HTML, images) to render the web page for the user. \end{enumerate} \end{multicols*} \end{document}