4.1 Overview of Network Layer
4.2 What’s Inside a Router?
A high-level view of a generic router architecture is shown in Figure 4.4. Four
router components can be identified:
- Input ports. An input port performs several key functions. It performs the physical layer function of terminating an incoming physical link at a router; this is shown in the leftmost box of an input port and the rightmost box of an output port in Figure 4.4. An input port also performs link-layer functions needed to interoperate with the link layer at the other side of the incoming link; this is represented by the middle boxes in the input and output ports. (Leaving a marker here: I don't fully understand this function yet.) Perhaps most crucially, a lookup function is also performed at the input port; this will occur in the rightmost box of the input port.
Note that the term “port” here—referring to the physical input and output router
interfaces—is distinctly different from the software ports associated with network applications and sockets discussed in Chapters 2 and 3. In practice, the number of ports supported by a router can range from a relatively small number in enterprise routers, to hundreds of 10 Gbps ports in a router at an ISP’s edge.
- Switching fabric. The switching fabric connects the router’s input ports to its output ports. This switching fabric is completely contained within the router—a network inside of a network router.
- Output ports. An output port stores packets received from the switching fabric and transmits these packets on the outgoing link by performing the necessary link-layer and physical-layer functions. When a link is bidirectional(that is, carries traffic in both directions), an output port will typically be paired with the input port for that link on the same line card.
- Routing processor. The routing processor performs control-plane functions. In traditional routers, it executes the routing protocols, maintains routing tables and attached link state information, and computes the forwarding table for the router. In SDN routers, the routing processor is responsible for communicating with the remote controller in order to receive forwarding table entries computed by the remote controller and install these entries in the router’s input ports. The routing processor also performs the network management functions.
4.2.1 Input Port Processing and Destination-Based Forwarding
A more detailed view of input processing is shown in Figure 4.5. As just discussed,
the input port’s line-termination function and link-layer processing implement the
physical and link layers for that individual input link. The lookup performed in the
input port is central to the router’s operation—it is here that the router uses the forwarding table to look up the output port to which an arriving packet will be forwarded via the switching fabric.
The forwarding table is copied from the routing processor to the line cards over a separate bus (e.g., a PCI bus), indicated by the dashed line from the routing processor to the input line cards in Figure 4.4.
With such a shadow copy at each line card, forwarding decisions can be made locally, at each input port, without invoking the centralized routing processor on a per-packet basis and thus avoiding a centralized processing bottleneck.
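To make the lookup concrete, here is a minimal sketch of destination-based forwarding with longest prefix matching against a line card's shadow copy of the forwarding table. The prefixes and port numbers below are invented for illustration, and a real line card would use TCAM or trie hardware rather than a Python loop:

```python
import ipaddress

# Hypothetical shadow copy of a forwarding table on one line card:
# destination prefix -> output port (values are purely illustrative).
FORWARDING_TABLE = {
    ipaddress.ip_network("223.1.1.0/24"): 0,
    ipaddress.ip_network("223.1.0.0/16"): 1,
    ipaddress.ip_network("0.0.0.0/0"):    2,   # default route
}

def lookup_output_port(dst: str) -> int:
    """Longest-prefix match: among all prefixes that contain dst,
    choose the most specific one and return its output port."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, port)
               for net, port in FORWARDING_TABLE.items() if addr in net]
    return max(matches)[1]          # longest prefix wins

print(lookup_output_port("223.1.1.7"))   # /24 entry  -> port 0
print(lookup_output_port("223.1.9.9"))   # /16 entry  -> port 1
print(lookup_output_port("8.8.8.8"))     # default    -> port 2
```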
We’ll take a closer look at the blocking, queuing, and scheduling of packets (at both input ports and output ports) shortly. Although “lookup” is arguably the most important action in input port processing, many other actions must be taken:
(1) physical- and link-layer processing must occur, as discussed previously; (2) the packet’s version number, checksum and time-to-live field—all of which we’ll study in Section 4.3—must be checked and the latter two fields rewritten; and (3) counters used for network management (such as
the number of IP datagrams received) must be updated.
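Step (2) is easy to make concrete. Below is a small sketch of the RFC 1071 Internet checksum over an IPv4 header (the same routine that the Header checksum field in Section 4.3.1 relies on); the sample header bytes are fabricated, and in practice a router can update the checksum incrementally after decrementing the TTL rather than recomputing it from scratch:

```python
def internet_checksum(header: bytes) -> int:
    """RFC 1071-style checksum: sum the header as 16-bit words using
    ones'-complement arithmetic, then take the ones' complement of the sum."""
    if len(header) % 2:
        header += b"\x00"                         # pad odd-length input
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# A fabricated 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed:
hdr = bytearray.fromhex("450000283abc40004006" + "0000" + "c0a80001c0a80002")
csum = internet_checksum(bytes(hdr))
hdr[10:12] = csum.to_bytes(2, "big")
# Verification property: the checksum over the full header (field included) is 0.
print(hex(csum), internet_checksum(bytes(hdr)) == 0)
```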
4.2.2 Switching
The switching fabric is at the very heart of a router, as it is through this fabric that
the packets are actually switched (that is, forwarded) from an input port to an output
port.
Three ways of switching:
Switching via memory.
The simplest, earliest routers were traditional computers, with switching between input and output ports being done under direct control of the CPU (routing processor). Input and output ports functioned as traditional I/O devices in a traditional operating system. The process is just like peripheral I/O in a computer-organization course: a packet arrives, an interrupt notifies the processor, and the memory operations proceed as follows: the packet is copied from the input port into the routing processor's memory (one write into memory), the routing processor extracts the destination address from the header, looks it up in the forwarding table, and copies the packet into the buffer of the appropriate output port (one read out of memory). So every packet crosses memory twice; if the memory bandwidth is B packets per second (written or read), the overall forwarding throughput (the total rate at which packets are transferred from input ports to output ports) must be less than B/2. That describes the original designs built from conventional computers; modern routers that switch via memory differ in one main way: the destination-address lookup and the storing (switching) of the packet into the appropriate memory location are performed by the input line cards (each input line card/port has its own processor). This makes the router look much like a shared-memory multiprocessor, with a processor on a line card switching (writing) the packet into the memory of the appropriate output port.
Switching via a bus.
This approach is rather inefficient, because every packet occupies the entire bus and is delivered to every output port: the input port prepends a switch-internal label (header) indicating the local output port to which the packet is being transferred and transmits the packet onto the bus; all output ports receive it, but only the port matching the label keeps the packet, and the label is removed there. ("Having the input port pre-pend a switch-internal label (header) to the packet indicating the local output port to which this packet is being transferred and transmitting the packet onto the bus.") Because only one packet can cross the bus at a time, the switching speed is limited by the bus speed.
Switching via an interconnection network.
A crossbar switch is an interconnection network consisting of 2N buses that connect N input ports to N output ports, as shown in Figure 4.6. Each vertical bus intersects each horizontal bus at a crosspoint, which can be opened or closed at any time by the switch fabric controller (whose logic is part of the switching fabric itself). A crossbar switch is non-blocking—a packet being forwarded to an output port will not be blocked from reaching that output port as long as no other packet is currently being forwarded to that output port. However, if two packets from two different input ports are destined to that same output port, then one will have to wait at the input, since only one packet can be sent over any given bus at a time. It is worth studying the figure carefully here.
Don't be misled by the intersections: as the text says, for a connection from A to Y only one crosspoint is closed, the one at the "turn". Every line (A, B, C, X, Y, Z) is always continuous; by default the crosspoints between them are open, and the fabric controller simply chooses which crosspoints to close. So although the A-Y and B-Z paths appear to cross, the B-Y crosspoint stays open and there is no interference. As long as the output ports are distinct, packets can always be transferred in parallel without blocking.
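A tiny model of the crosspoint logic may help. It only enforces the condition described above, namely one closed crosspoint per output bus; the port names A/B/C and X/Y/Z follow the figure and everything else is made up:

```python
def close_crosspoints(requests):
    """Given desired (input, output) transfers, close one crosspoint per
    transfer as long as no two transfers share an output bus.
    Returns the closed crosspoints and the transfers that must wait."""
    closed, busy_outputs, blocked = set(), set(), []
    for inp, out in requests:
        if out in busy_outputs:          # output bus already in use this slot
            blocked.append((inp, out))   # this packet waits at its input port
        else:
            closed.add((inp, out))       # close the crosspoint at the "turn"
            busy_outputs.add(out)
    return closed, blocked

# A->Y and B->Z use different output buses, so both proceed in parallel;
# the B-Y crosspoint simply stays open and the paths do not interfere.
print(close_crosspoints([("A", "Y"), ("B", "Z")]))
# A->Y and C->Y contend for the same output bus, so one of them waits.
print(close_crosspoints([("A", "Y"), ("C", "Y")]))
```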
4.2.3 Output Port Processing
4.2.4 Where Does Queuing Occur?
Suppose that the input and output line speeds (transmission rates) all have an identical transmission rate of R_line packets per second, and that there are N input ports and N output ports. To further simplify the discussion, let's assume that all packets have the same fixed length, and that packets arrive at input ports in a synchronous manner. That is, the time to send a packet on any link is equal to the time to receive a packet on any link, and during such an interval of time, either zero or one packets can arrive on an input link. Define the switching fabric transfer rate R_switch as the rate at which packets can be moved from input port to output port. If R_switch is N times faster than R_line, then only negligible queuing will occur at the input ports. This is because even in the worst case, where all N input lines are receiving packets, and all packets are to be forwarded to the same output port, each batch of N packets (one packet per input port) can be cleared through the switch fabric before the next batch arrives.
Input Queuing
Input queuing arises when the switch fabric is not fast enough to transfer all arriving packets through the fabric without delay.
two packets (darkly shaded) at the front of their input queues are destined for the same upper-right output port.
Suppose that the switch fabric chooses to transfer the packet from the front of the upper-left queue.
In this case, the darkly shaded packet in the lower-left queue must wait. But not only must this darkly shaded packet wait, so too must the lightly shaded packet that is queued behind that packet in the lower-left queue, even though there is no contention for the middle-right output port (the destination for the lightly shaded packet).
This phenomenon is known as head-of-the-line (HOL) blocking in an input-queued
switch—a queued packet in an input queue must wait for transfer through the fabric
(even though its output port is free) because it is blocked by another packet at the head of the line.
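A quick saturation experiment makes the cost of HOL blocking visible. Every input is assumed to be always backlogged with uniformly random destinations, and each slot every output serves one of the head-of-line packets aimed at it; under these idealized assumptions the per-port throughput sinks toward the well-known limit of about 0.586 (2 - sqrt(2)) as N grows:

```python
import random

def hol_saturation_throughput(n_ports: int, slots: int = 20000, seed: int = 1) -> float:
    """Saturated input-queued switch: every input always has packets, and the
    head-of-line (HOL) packet keeps its randomly chosen output until served.
    Each slot, every output accepts at most one of the HOL packets aimed at it.
    Returns packets delivered per port per slot."""
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]   # HOL destination per input
    delivered = 0
    for _ in range(slots):
        contenders = {}
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inputs in contenders.items():
            winner = rng.choice(inputs)             # one contender crosses the fabric
            hol[winner] = rng.randrange(n_ports)    # the next queued packet moves up
            delivered += 1
        # Losing inputs stay blocked: their HOL packet (and everything queued
        # behind it) waits, even if other output ports were idle this slot.
    return delivered / (n_ports * slots)

for n in (2, 8, 32):
    print(f"N = {n:2d}  throughput ~ {hol_saturation_throughput(n):.3f}")
```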
Output Queuing
When there is not enough memory to buffer an incoming packet, a decision must be made to either drop the arriving packet (a policy known as drop-tail) or remove one or more already-queued packets to make room for the newly arrived packet. In some cases, it may be advantageous to drop (or mark the header of) a packet before the buffer is full in order to provide a congestion signal to the sender. This marking could be done using the Explicit Congestion Notification bits that we studied in Section 3.7.2.
A number of proactive packet-dropping and -marking policies (which collectively have become known as active queue management (AQM) algorithms) have been proposed and analyzed.
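As one concrete example of the AQM idea, here is a much-simplified RED-style marking rule: below a low threshold every arrival is enqueued, above a high threshold every arrival is dropped or ECN-marked, and in between the marking probability grows linearly with the (ideally smoothed) queue length. The thresholds and maximum probability below are arbitrary, and real RED/PIE/CoDel implementations are considerably more careful:

```python
import random

def red_decision(avg_queue_len: float,
                 min_th: float = 5.0,
                 max_th: float = 15.0,
                 max_p: float = 0.1,
                 rng=random.random) -> str:
    """RED-style decision for an arriving packet.
    Below min_th: always enqueue. At or above max_th: always drop/mark.
    In between: mark with probability growing linearly up to max_p."""
    if avg_queue_len < min_th:
        return "enqueue"
    if avg_queue_len >= max_th:
        return "mark"          # or drop, if ECN is not in use
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return "mark" if rng() < p else "enqueue"

for q in (2, 8, 12, 20):
    print(q, red_decision(q))
```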
A consequence of such queuing is that a packet scheduler at the output port must choose one packet, among those queued, for transmission—a topic we’ll cover in the following section.
How Much Buffering Is “Enough?”
Our study above has shown how a packet queue forms when bursts of packets arrive
at a router’s input or (more likely) output port, and the packet arrival rate temporarily
exceeds the rate at which packets can be forwarded.
The natural question, then, is how much buffering should be provisioned at a port.
bufferbloat
I find the book's example for this bufferbloat scenario particularly hard to follow, so here is my best reading of it. The definition of the concept itself is simple:
long delay due to persistent buffering is known as bufferbloat
suppose that it takes 20 ms to transmit a packet (containing a gamer’s TCP segment), that there are negligible queuing delays elsewhere on the path to the game server, and that the RTT is 200 ms.
As shown in Figure 4.10(b), suppose that at time t = 0, a burst of 25 packets arrives to
the queue.
One of these queued packets is then transmitted once every 20 ms, so that at t = 200 msec, the first ACK arrives, just as the 21st packet is being transmitted. (I have an issue here and suspect a slip in the book: with one transmission every 20 ms, ten packets have been sent by t = 200 ms and the 11th is being transmitted, not the 21st, so the standing queue that follows would be about 15 packets rather than five. The book's "21st packet" and "five packets" are consistent with each other only if 20 transmissions complete before the first ACK arrives, e.g., with a 10 ms transmission time. Either way, the qualitative point is unchanged: ACK clocking keeps the backlog at a constant, persistent level.)
This ACK arrival causes the TCP sender to send another packet, which is queued at the outgoing link of the home router.
At t = 220, the next ACK arrives, and another TCP segment is released by the gamer and is queued, as the 22nd packet is being transmitted, and so on.
You should convince yourself that in this scenario, ACK clocking results in a new packet arriving at the queue every time a queued packet is sent, resulting in a queue size at the home router’s outgoing link that is always five packets (again, by the arithmetic above I get a standing queue of about 15 packets rather than five)!
That is, the end-end-pipe is full (delivering packets to the destination at the
path bottleneck rate of one packet every 20 ms), but the amount of queuing delay is
constant and persistent. In other words, the pipe looks fully utilized (one packet leaves every 20 ms), yet the buffer imposes a large, constant queuing delay that makes the connection useless for the gamer. Even a packet capture would only show traffic flowing normally, since data really is being sent the whole time; the problem is that every segment the TCP sender injects has to wait behind the long standing queue left over from the earlier burst.
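To settle the arithmetic, here is a tiny discrete-time sketch of the scenario as I read it: 20 ms per transmission, an RTT of 200 ms measured from the start of a packet's transmission (my assumption), an initial burst of 25 packets, and one new packet enqueued per arriving ACK. Whatever the exact constant, the output shows the point of the example: after the burst the backlog never drains, it just sits at a fixed standing queue while the link stays 100% busy:

```python
STEP, RTT, BURST = 20, 200, 25     # ms per transmission, RTT (assumed), burst size

backlog = BURST                    # packets in the home router buffer at t = 0
acks = []                          # future times at which ACKs reach the TCP sender

for t in range(0, 601, STEP):
    backlog += acks.count(t)       # ACK clocking: each ACK releases one new packet
    if t % 100 == 0:
        print(f"t = {t:3d} ms, buffered packets (incl. the one in service): {backlog}")
    if backlog:
        acks.append(t + RTT)       # this slot's packet will be ACKed one RTT later
        backlog -= 1               # and it leaves the buffer at the end of the slot
```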
4.2.5 Packet Scheduling
First-in-First-Out (FIFO): omitted here.
Priority Queuing: note that the variant discussed in the book is non-preemptive.
Round Robin
a round robin scheduler alternates service among the classes. In the simplest form of round robin scheduling, a class 1 packet is transmitted, followed by a class 2 packet, followed by a class 1 packet, followed by a class 2 packet, and so on. A so-called work-conserving queuing discipline will never allow the link to remain idle whenever there are packets (of any class) queued for transmission.
A question here: the book stresses that a work-conserving discipline will never allow the link to remain idle whenever there are packets queued for transmission.
But FIFO and priority queuing also satisfy this, so it is hardly an advantage unique to round robin. I think the real advantage, compared with priority queuing, is that round robin limits how badly the lower classes pile up when high-class traffic floods in.
The figure below shows a round-robin queue with only two classes.
A generalized form of round robin queuing that has been widely implemented
in routers is the so-called weighted fair queuing (WFQ) discipline
weighted fair queuing (WFQ)
WFQ differs from round robin in that each class may receive a differential amount of service in any interval of time. Specifically, each class, i, is assigned a weight, w_i.
Under WFQ, during any interval of time during which there are class i packets to send, class i will then be guaranteed to receive a fraction of service equal to w_i / Σ_j w_j, where the sum in the denominator is taken over all classes j that also have packets queued for transmission.
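A minimal sketch of the idea, using a weighted round-robin approximation of WFQ that serves up to w_i fixed-size packets from class i per round (class names and weights invented). With equal weights it reduces to the plain round robin above, and it is work-conserving because empty classes are simply skipped:

```python
from collections import deque

def weighted_round_robin(queues: dict, weights: dict):
    """Serve packets from per-class queues in proportion to their weights.
    With fixed-size packets each backlogged class gets roughly the
    w_i / sum(w_j) share of the link promised by WFQ. Work-conserving:
    empty classes are skipped, so the link never idles while packets wait."""
    order = list(queues)
    while any(queues[c] for c in order):
        for c in order:
            for _ in range(weights[c]):          # up to w_c packets per round
                if queues[c]:
                    yield c, queues[c].popleft()

# Illustrative classes and weights (not from the book).
queues = {"voice": deque(f"v{i}" for i in range(4)),
          "data":  deque(f"d{i}" for i in range(8))}
weights = {"voice": 2, "data": 1}

print(list(weighted_round_robin(queues, weights)))
```

While both classes are backlogged the output alternates two voice packets for every data packet; once voice empties, data gets the whole link.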
4.3 The Internet Protocol (IP): IPv4, Addressing, IPv6, and More
4.3.1 IPv4 Datagram Format
We begin our study of IP with an overview of the syntax and semantics of the IPv4 datagram.
- Version number. These 4 bits specify the IP protocol version of the datagram. By looking at the version number, the router can determine how to interpret the remainder of the IP datagram. Different versions of IP use different datagram formats. The datagram format for IPv4 is shown in Figure 4.17. The datagram format for the new version of IP (IPv6) is discussed in Section 4.3.4.
- Header length. Because an IPv4 datagram can contain a variable number of options (which are included in the IPv4 datagram header), these 4 bits are needed to determine where in the IP datagram the payload (for example, the transport-layer segment being encapsulated in this datagram) actually begins. Most IP datagrams do not contain options, so the typical IP datagram has a 20-byte header.
- Type of service. (8bits) The type of service (TOS) bits were included in the IPv4 header to allow different types of IP datagrams to be distinguished from each other. For example, it might be useful to distinguish real-time datagrams (such as those used by an IP telephony application) from non-real-time traffic (e.g., FTP). The specific level of service to be provided is a policy issue determined and configured by the network administrator for that router. We also learned in Section 3.7.2 that two of the TOS bits are used for Explicit Congestion Notification.
- Datagram length. This is the total length of the IP datagram (header plus data), measured in bytes. Since this field is 16 bits long, the theoretical maximum size of the IP datagram is 65,535 bytes. However, datagrams are rarely larger than 1,500 bytes, which allows an IP datagram to fit in the payload field of a maximally sized Ethernet frame.
- Identifier, flags, fragmentation offset. These three fields have to do with so-called IP fragmentation, when a large IP datagram is broken into several smaller IP datagrams which are then forwarded independently to the destination, where they are reassembled before their payload data (see below) is passed up to the transport layer at the destination host. Interestingly, the new version of IP, IPv6, does not allow for fragmentation. We’ll not cover fragmentation here; but readers can find a detailed discussion online, among the “retired” material from earlier versions of this book.
- Time-to-live. The time-to-live (TTL) field is included to ensure that datagrams do not circulate forever (due to, for example, a long-lived routing loop) in the network. This field is decremented by one each time the datagram is processed by a router. If the TTL field reaches 0, a router must drop that datagram.
- Protocol. This field is typically used only when an IP datagram reaches its final destination. The value of this field indicates the specific transport-layer protocol to which the data portion of this IP datagram should be passed. For example, a value of 6 indicates that the data portion is passed to TCP, while a value of 17 indicates that the data is passed to UDP. For a list of all possible values, see [IANA Protocol Numbers 2016]. Note that the protocol number in the IP datagram has a role that is analogous to the role of the port number field in the transport-layer segment. The protocol number is the glue that binds the network and transport layers together, whereas the port number is the glue that binds the transport and application layers together. We’ll see in Chapter 6 that the link-layer frame also has a special field that binds the link layer to the network layer.
- Header checksum. The header checksum aids a router in detecting bit errors in a received IP datagram. The header checksum is computed by treating each 2 bytes in the header as a number and summing these numbers using 1s complement arithmetic. As discussed in Section 3.3, the 1s complement of this sum, known as the Internet checksum, is stored in the checksum field. A router computes the header checksum for each received IP datagram and detects an error condition if the checksum carried in the datagram header does not equal the computed checksum. Routers typically discard datagrams for which an error has been detected. Note that the checksum must be recomputed and stored again at each router, since the TTL field, and possibly the options field as well, will change. An interesting discussion of fast algorithms for computing the Internet checksum is [RFC 1071]. A question often asked at this point is, why does TCP/IP perform error checking at both the transport and network layers? There are several reasons for this repetition. First, note that only the IP header is checksummed at the IP layer, while the TCP/UDP checksum is computed over the entire TCP/UDP segment. Second, TCP/UDP and IP do not necessarily both have to belong to the same protocol stack. TCP can, in principle, run over a different network-layer protocol (for example, ATM) [Black 1995]) and IP can carry data that will not be passed to TCP/UDP.
- Source and destination IP addresses. When a source creates a datagram, it inserts its IP address into the source IP address field and inserts the address of the ultimate destination into the destination IP address field. Often the source host determines the destination address via a DNS lookup, as discussed in Chapter 2. We’ll discuss IP addressing in detail in Section 4.3.2.
- Options. The options fields allow an IP header to be extended. Header options were meant to be used rarely—hence the decision to save overhead by not including the information in options fields in every datagram header (that is, options are not mandatory in every header; only datagrams that need them carry them). However, the mere existence of options does complicate matters—since datagram headers can be of variable length, one cannot determine a priori where the data field will start. Also, since some datagrams may require options processing and others may not, the amount of time needed to process an IP datagram at a router can vary greatly. These considerations become particularly important for IP processing in high-performance routers and hosts. For these reasons and others, IP options were not included in the IPv6 header, as discussed in Section 4.3.4.
- Data (payload). Finally, we come to the last and most important field—the raison d’être (the most important reason for something’s existence) for the datagram in the first place! In most circumstances, the data field of the IP datagram contains the transport-layer segment (TCP or UDP) to be delivered to the destination. However, the data field can carry other types of data, such as ICMP messages (discussed in Section 5.6).
Note that an IP datagram has a total of 20 bytes of header (assuming no options). If the datagram carries a TCP segment, then each datagram carries a total of 40 bytes of header (20 bytes of IP header plus 20 bytes of TCP header) along with the application-layer message.
4.3.2 IPv4 Addressing
Internet addressing is not only a juicy, subtle, and interesting topic but also one that is of central importance to the Internet. An excellent treatment of IPv4 addressing can be found in the first chapter in [Stewart 1999].
First, a word on how hosts and routers attach to the network:
A host typically has only a single link into the network; when IP in the host wants to send a datagram, it does so over this link. The boundary between the host and the physical link is called an interface.
Now consider a router and its interfaces. Because a router’s job is to receive a datagram on one link and forward the datagram on some other link, a router necessarily has two or more links to which it is connected. The boundary between the router and any one of its links is also called an interface.
an IP address is technically associated with an interface, rather than with the host or router containing that interface.
An example is shown in Figure 4.18, below:
In IP terms, this network interconnecting three host interfaces and one router interface forms a subnet [RFC 950]. (A subnet is also called an IP network or simply a network in the Internet literature.)
IP addressing assigns an address to this subnet:
223.1.1.0/24, where the /24 (“slash-24”) notation, sometimes known as a subnet mask, indicates that the leftmost 24 bits of the 32-bit quantity define the subnet address. The 223.1.1.0/24 subnet thus consists of the three host interfaces (223.1.1.1, 223.1.1.2, and 223.1.1.3) and one router interface (223.1.1.4). Any additional hosts attached to the 223.1.1.0/24 subnet would be required to have an address of the form 223.1.1.xxx. There are two additional subnets shown in Figure 4.18: the 223.1.2.0/24 network and the 223.1.3.0/24 subnet. Figure 4.19 illustrates the three IP subnets present in Figure 4.18.
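Python's standard ipaddress module is a handy way to play with the /24 notation; the addresses below are the ones from the figure:

```python
import ipaddress

subnet = ipaddress.ip_network("223.1.1.0/24")
print(subnet.netmask, subnet.num_addresses)          # 255.255.255.0, 256 addresses

for host in ("223.1.1.1", "223.1.1.2", "223.1.1.3", "223.1.1.4", "223.1.2.1"):
    print(host, ipaddress.ip_address(host) in subnet)
# Only addresses of the form 223.1.1.xxx fall inside 223.1.1.0/24;
# 223.1.2.1 belongs to the neighboring 223.1.2.0/24 subnet.
```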
For a general interconnected system of routers and hosts, we can use the following recipe to define the subnets in the system:
To determine the subnets, detach each interface from its host or router, creating islands of isolated networks, with interfaces terminating the end points of the isolated networks. Each of these isolated networks is called a subnet.
I didn't get this at first; how exactly do you split them?!
In practice, though, identifying the subnets yourself is simple: just look at the figure, go router interface by router interface, and merge the duplicates (interfaces that end up on the same isolated network count as one subnet).
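One way to make the recipe mechanical: treat every interface as a node, keep only the links/LAN segments between interfaces (the hosts and routers themselves are "detached"), and count connected components. The toy topology below is invented (three routers connected pairwise, each also serving one host LAN), but the same code works for any figure:

```python
# Each link or LAN segment directly joins two (or more) interfaces.
# Hypothetical topology: routers R1, R2, R3 connected pairwise by
# point-to-point links, and each router also attached to one host LAN.
segments = [
    ("R1-a", "R2-a"), ("R2-b", "R3-a"), ("R1-b", "R3-b"),   # router-router links
    ("R1-c", "H1"), ("R2-c", "H2"), ("R3-c", "H3"),          # one host LAN each
]

def count_subnets(segments):
    """Detach every interface from its host/router and count the islands
    (connected components) formed by the remaining links: each is a subnet."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for a, b in segments:
        union(a, b)
    return len({find(x) for pair in segments for x in pair})

print(count_subnets(segments))   # 6 subnets in this toy topology
```

This also matches the interface-counting shortcut: the three routers have nine interfaces in total, but each of the three router-to-router links merges two of them into one subnet, leaving six.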
The Internet’s address assignment strategy is known as Classless Interdomain Routing(CIDR—pronounced cider) [RFC 4632].
CIDR generalizes the notion of subnet addressing. As with subnet addressing, the 32-bit IP address is divided into two parts and again has the dotted-decimal form a.b.c.d/x, where x indicates the number of bits in the first part of the address
Obtaining a Block of Addresses
Obtaining a Host Address: The Dynamic Host Configuration Protocol
A system administrator will typically manually configure the IP addresses into the router (often remotely, with a network management tool). Host addresses can also be configured manually, but typically this is done using the Dynamic Host Configuration Protocol (DHCP) [RFC 2131].
Because of DHCP’s ability to automate the network-related aspects of connecting a host into a network, it is often referred to as a plug-and-play or zeroconf (zero-configuration) protocol.
DHCP is a client-server protocol. A client is typically a newly arriving host wanting to obtain network configuration information, including an IP address for itself. In the simplest case, each subnet (in the addressing sense of Figure 4.20) will have a DHCP server. If no server is present on the subnet, a DHCP relay agent (typically a router) that knows the address of a DHCP server for that network is needed.
For a newly arriving host, the DHCP protocol is a four-step process, as shown in Figure 4.24 for the network setting shown in Figure 4.23. In this figure, yiaddr (as in “your Internet address”) indicates the address being allocated to the newly arriving client. The four steps are:
- DHCP server discovery. The first task of a newly arriving host is to find a DHCP server with which to interact. This is done using a DHCP discover message, which a client sends within a UDP packet to port 67. The UDP packet is encapsulated in an IP datagram. But to whom should this datagram be sent? The host doesn’t even know the IP address of the network to which it is attaching, much less the address of a DHCP server for this network. Given this, the DHCP client creates an IP datagram containing its DHCP discover message along with the broadcast destination IP address of 255.255.255.255 and a “this host” source IP address of 0.0.0.0. The DHCP client passes the IP datagram to the link layer, which then broadcasts this frame to all nodes attached to the subnet.
- DHCP server offer(s). A DHCP server receiving a DHCP discover message responds to the client with a DHCP offer message that is broadcast to all nodes on the subnet, again using the IP broadcast address of 255.255.255.255. Since several DHCP servers can be present on the subnet, the client may find itself in the enviable position of being able to choose from among several offers. Each server offer message contains the transaction ID of the received discover message, the proposed IP address for the client, the network mask, and an IP address lease time—the amount of time for which the IP address will be valid. It is common for the server to set the lease time to several hours or days.
- DHCP request. The newly arriving client will choose from among one or more server offers and respond to its selected offer with a DHCP request message, echoing back the configuration parameters.
- DHCP ACK. The server responds to the DHCP request message with a DHCP ACK message, confirming the requested parameters.
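A toy walk-through of the four messages, just to keep the fields straight (the transaction ID is echoed throughout, and yiaddr is carried in the offer and ACK). Plain dictionaries stand in for real DHCP packet formats here, and the addresses, lease time, and pool are invented:

```python
import random

LEASE_POOL = ["223.1.2.4", "223.1.2.5", "223.1.2.6"]   # addresses the server may offer

def dhcp_handshake():
    xid = random.getrandbits(32)                    # transaction ID chosen by the client
    discover = {"type": "DISCOVER", "xid": xid,
                "src": "0.0.0.0", "dst": "255.255.255.255"}   # broadcast, no address yet
    offer = {"type": "OFFER", "xid": discover["xid"],
             "yiaddr": LEASE_POOL[0], "lease_secs": 3600,
             "dst": "255.255.255.255"}                        # also broadcast
    request = {"type": "REQUEST", "xid": xid,
               "yiaddr": offer["yiaddr"]}                     # echo the chosen offer
    ack = {"type": "ACK", "xid": xid,
           "yiaddr": request["yiaddr"], "lease_secs": offer["lease_secs"]}
    for msg in (discover, offer, request, ack):
        print(msg)
    return ack["yiaddr"]

print("client configured with", dhcp_handshake())
```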
Since a client may want to use its address beyond the lease’s expiration, DHCP also provides a mechanism that allows a client to renew its lease on an IP address.
From a mobility aspect, DHCP does have one very significant shortcoming. Since a new IP address is obtained from DHCP each time a node connects to a new subnet, a TCP connection to a remote application cannot be maintained as a mobile node moves between subnets. In Chapter 7, we will learn how mobile cellular networks allow a host to retain its IP address and ongoing TCP connections as it moves between base stations in a provider’s cellular network.
4.3.3 Network Address Translation (NAT)
network address translation (NAT) [RFC 2663; RFC 3022;Huston 2004, Zhang 2007; Huston 2017].
Figure 4.25 shows the operation of a NAT-enabled router.
NAT has enjoyed widespread deployment in recent years. But NAT is not without detractors. First, one might argue that port numbers are meant to be used for addressing processes, not for addressing hosts. This violation can indeed cause problems for servers running on the home network, since, as we have seen in Chapter 2, server processes wait for incoming requests at well-known port numbers and peers in a P2P protocol need to accept incoming connections when acting as servers.
How can one peer connect to another peer that is behind a NAT and has only a DHCP-provided NAT address? Technical solutions to these problems include NAT traversal tools [RFC 5389, RFC 5128, Ford 2005].
P.S. Private networks: these ranges are meant for use inside private networks (LANs). Private addresses are not routable on the public Internet, which lets different organizations reuse the same address space internally without worrying about conflicts with anyone else. The most common private address ranges are:
10.0.0.0 to 10.255.255.255
172.16.0.0 to 172.31.255.255
192.168.0.0 to 192.168.255.255
A thought of my own here: if my NAT router has a public IP address, I can configure it to work around this. For example, if a web service runs on port 80 of 10.0.0.1 on the LAN side, I can pin an entry in the router's NAT translation table that binds port 80 of the WAN-side address 138.76.29.7 to port 80 of 10.0.0.1, so that requests from outside reach the right process on the right LAN host (i.e., static port forwarding).
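A sketch of the translation-table logic, including a pinned entry like the port-forwarding idea above. The WAN address 138.76.29.7 and the 10.0.0.x hosts follow the book's example; the function names and the port-allocation rule are my own simplifications:

```python
WAN_IP = "138.76.29.7"

# NAT translation table: (WAN ip, WAN port) <-> (LAN ip, LAN port)
nat_table = {(WAN_IP, 80): ("10.0.0.1", 80)}     # pinned entry: port-forward the web server
next_port = 5001                                  # where dynamically assigned ports start

def translate_outbound(lan_ip, lan_port):
    """Rewrite the source of an outgoing packet, adding a table entry if needed."""
    global next_port
    for wan_key, lan_val in nat_table.items():
        if lan_val == (lan_ip, lan_port):
            return wan_key
    wan_key = (WAN_IP, next_port)
    next_port += 1
    nat_table[wan_key] = (lan_ip, lan_port)
    return wan_key

def translate_inbound(wan_port):
    """Rewrite the destination of an incoming packet using the table (or drop it)."""
    return nat_table.get((WAN_IP, wan_port))

print(translate_outbound("10.0.0.2", 3345))   # e.g. ('138.76.29.7', 5001)
print(translate_inbound(5001))                # back to ('10.0.0.2', 3345)
print(translate_inbound(80))                  # pinned entry: reaches ('10.0.0.1', 80)
print(translate_inbound(9999))                # no entry -> None, packet dropped
```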
More “philosophical” arguments have also been raised against NAT by architectural purists. Here, the concern is that routers are meant to be layer 3 (i.e., network-layer) devices, and should process packets only up to the network layer. NAT violates the principle that hosts should be talking directly with each other, without interfering nodes modifying IP addresses, much less port numbers. We’ll return to this debate later in Section 4.5, when we cover middleboxes.
Additional protection can be provided with an intrusion detection system(IDS). An IDS, typically situated at the network boundary, performs “deep packet inspection,” examining not only header fields but also the payloads in the datagram (including application-layer data). An IDS has a database of packet signatures that are known to be part of attacks. This database is automatically updated as new attacks are discovered. As packets pass through the IDS, the IDS attempts to match header fields and payloads to the signatures in its signature database. If such a match is found, an alert is created. An intrusion prevention system (IPS) is similar to an IDS, except that it actually blocks packets in addition to creating alerts.
4.3.4 IPv6
IPv6 Datagram Format
- Author: liamY
- Link: https://liamy.clovy.top/article/csnet/note/netData
- License: This article is licensed under CC BY-NC-SA 4.0. Please credit the source when reposting.