linux-network-performance-parameters

Linux network performance parameters explained, with tuning methods

This project gives a comprehensive overview of Linux network performance parameters, including network queues, buffers, interrupt coalescing and QDiscs. It explains what each parameter does and provides commands to check and tune them. It also covers more advanced topics such as TCP read/write buffers and congestion control, along with the relevant monitoring tools. It is aimed at system administrators and developers who want a deeper understanding of Linux network stack performance tuning.

Introduction

Sometimes people are looking for sysctl cargo cult values that bring high throughput and low latency with no trade-off and that work on every occasion. That's not realistic, although we can say that the newer kernel versions are very well tuned by default. In fact, you might hurt performance if you mess with the defaults.

This brief tutorial shows where some of the most used and quoted sysctl/network parameters are located in the Linux network flow. It was heavily inspired by the illustrated guide to the Linux networking stack and many of Marek Majkowski's posts.

Feel free to send corrections and suggestions! :)

Linux network queues overview

(diagram: overview of the Linux network queues)

Fitting the sysctl variables into the Linux network flow

Ingress - they're coming

  1. Packets arrive at the NIC
  2. NIC will verify MAC (if not on promiscuous mode) and FCS and decide to drop or to continue
  3. NIC will DMA packets at RAM, in a region previously prepared (mapped) by the driver
  4. NIC will enqueue references to the packets in the receive ring buffer queue rx until the rx-usecs timeout or rx-frames threshold is reached
  5. NIC will raise a hard IRQ
  6. CPU will run the IRQ handler that runs the driver's code
  7. Driver will schedule a NAPI, clear the hard IRQ and return
  8. Driver raises a soft IRQ (NET_RX_SOFTIRQ)
  9. NAPI will poll data from the receive ring buffer until netdev_budget_usecs timeout or netdev_budget and dev_weight packets
  10. Linux will also allocate memory for the sk_buff
  11. Linux fills in the metadata: protocol, interface, sets the MAC header, removes the ethernet header
  12. Linux will pass the skb to the kernel stack (netif_receive_skb)
  13. It will set the network header, clone skb to taps (i.e. tcpdump) and pass it to tc ingress
  14. Packets are handed to a qdisc sized netdev_max_backlog with its algorithm defined by default_qdisc
  15. It calls ip_rcv and packets are handed to IP
  16. It calls netfilter (PREROUTING)
  17. It looks up the routing table to decide whether the packet should be forwarded or delivered locally
  18. If it's local it calls netfilter (LOCAL_IN)
  19. It calls the L4 protocol (for instance tcp_v4_rcv)
  20. It finds the right socket
  21. It goes to the tcp finite state machine
  22. Enqueues the packet to the receive buffer, sized according to the tcp_rmem rules
    1. If tcp_moderate_rcvbuf is enabled the kernel will auto-tune the receive buffer
  23. Kernel signals to the apps that data is available (epoll or any polling system)
  24. Application wakes up and reads the data
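
The ingress path above touches several of the tunables discussed later (rx ring size, rx-usecs/rx-frames, netdev_budget, netdev_budget_usecs, dev_weight, netdev_max_backlog, default_qdisc, tcp_rmem, tcp_moderate_rcvbuf). A minimal read-only sketch to dump their current values, assuming a hypothetical interface name eth0:

# read-only dump of the ingress-side knobs mentioned above (eth0 is a placeholder)
ethtool -g eth0          # rx ring buffer sizes
ethtool -c eth0          # rx-usecs / rx-frames interrupt coalescing
sysctl net.core.netdev_budget net.core.netdev_budget_usecs net.core.dev_weight
sysctl net.core.netdev_max_backlog net.core.default_qdisc
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_moderate_rcvbuf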

Egress - they're leaving

  1. Application sends message (sendmsg or other)
  2. TCP send message allocates an sk_buff
  3. It enqueues skb to the socket write buffer of tcp_wmem size
  4. Builds the TCP header (src and dst port, checksum)
  5. Calls L3 handler (in this case ipv4 on tcp_write_xmit and tcp_transmit_skb)
  6. L3 (ip_queue_xmit) does its work: build ip header and call netfilter (LOCAL_OUT)
  7. Calls output route action
  8. Calls netfilter (POST_ROUTING)
  9. Fragments the packet if needed (ip_output)
  10. Calls L2 send function (dev_queue_xmit)
  11. Feeds the output (QDisc) queue of txqueuelen length with its algorithm default_qdisc
  12. The driver code enqueues the packets at the ring buffer tx
  13. The driver will do a soft IRQ (NET_TX_SOFTIRQ) after tx-usecs timeout or tx-frames
  14. Re-enable hard IRQ to NIC
  15. Driver will map all the packets (to be sent) to some DMA'ed region
  16. NIC fetches the packets (via DMA) from RAM to transmit
  17. After the transmission NIC will raise a hard IRQ to signal its completion
  18. The driver will handle this IRQ (turn it off)
  19. And schedule (soft IRQ) the NAPI poll system
  20. NAPI will handle the receive packets signaling and free the RAM
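
Likewise, a minimal read-only sketch of the egress-side knobs from the steps above (eth0 is again a placeholder):

# read-only dump of the egress-side knobs mentioned above
sysctl net.ipv4.tcp_wmem                 # socket write buffer sizes
ip link show dev eth0                    # the qlen field is txqueuelen
sysctl net.core.default_qdisc
ethtool -g eth0                          # tx ring buffer sizes
ethtool -c eth0                          # tx-usecs / tx-frames interrupt coalescing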

How to check - perf

If you want to trace these network events within Linux you can use perf.

docker run -it --rm --cap-add SYS_ADMIN --entrypoint bash ljishen/perf
apt-get update
apt-get install iputils-ping

# this is going to trace all events (not syscalls) to the subsystem net:* while performing the ping
perf trace --no-syscalls --event 'net:*' ping globo.com -c1 > /dev/null

(screenshot: perf trace output for the net:* events)
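
If bpftrace is available on the host, the same tracepoints can be observed without the container; a minimal sketch that counts receive events per task every second (the tracepoint name is standard, the one-second interval is arbitrary):

# count net:netif_receive_skb events per task, print and reset every second
bpftrace -e 'tracepoint:net:netif_receive_skb { @[comm] = count(); } interval:s:1 { print(@); clear(@); }'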

What, Why and How - network and sysctl parameters

Ring Buffer - rx,tx

  • What - the driver's receive/send queues, a single queue or multiple queues, with a fixed size, usually implemented as FIFOs and located in RAM
  • Why - a buffer to smoothly accept bursts of traffic without dropping packets; you might need to increase these queues when you see drops or overruns, i.e. more packets are arriving than the kernel is able to consume; the side effect may be increased latency (see the sketch after this list).
  • How:
    • Check command: ethtool -g ethX
    • Change command: ethtool -G ethX rx value tx value
    • How to monitor: ethtool -S ethX | grep -e "err" -e "drop" -e "over" -e "miss" -e "timeout" -e "reset" -e "restar" -e "collis" | grep -v "\: 0"
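
A minimal sketch combining the commands above, assuming a hypothetical eth0 and illustrative sizes; never set values above the "Pre-set maximums" reported by ethtool -g:

ethtool -g eth0                          # note the "Pre-set maximums" section
ethtool -G eth0 rx 4096 tx 4096          # illustrative values, bounded by the maximums above
ethtool -S eth0 | grep -i -e drop -e miss -e err | grep -v ": 0"   # re-check the drop counters afterwards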

Interrupt Coalescence (IC) - rx-usecs, tx-usecs, rx-frames, tx-frames (hardware IRQ)

  • What - the number of microseconds/frames to wait before raising a hard IRQ; from the NIC perspective it will DMA data packets until this timeout/number of frames is reached
  • Why - reduces CPU usage and the number of hard IRQs; might increase throughput at the cost of latency (see the sketch after this list).
  • How:
    • Check command: ethtool -c ethX
    • Change command: ethtool -C ethX rx-usecs value tx-usecs value
    • How to monitor: cat /proc/interrupts
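
A minimal sketch of trading a little latency for fewer hard IRQs, assuming eth0 and illustrative values:

grep eth0 /proc/interrupts                   # interrupt counters before the change
ethtool -C eth0 rx-usecs 64 rx-frames 32     # illustrative values; not every NIC supports both knobs
ethtool -c eth0                              # confirm the new settings
watch -d -n1 'grep eth0 /proc/interrupts'    # the per-CPU counters should now grow more slowly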

Interrupt Coalescing (soft IRQ) and Ingress QDisc

  • What - maximum number of microseconds in one NAPI polling cycle. Polling will exit when either netdev_budget_usecs have elapsed during the poll cycle or the number of packets processed reaches netdev_budget.
  • Why - instead of reacting to tons of softIRQs, the driver keeps polling data; keep an eye on dropped (# of packets that were dropped because netdev_max_backlog was exceeded) and squeezed (# of times ksoftirqd ran out of netdev_budget or time slice with work remaining); see the parsing sketch after this list.
  • How:
    • Check command: sysctl net.core.netdev_budget_usecs
    • Change command: sysctl -w net.core.netdev_budget_usecs value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - netdev_budget is the maximum number of packets taken from all interfaces in one polling cycle (NAPI poll). In one polling cycle interfaces which are registered to polling are probed in a round-robin manner. Also, a polling cycle may not exceed netdev_budget_usecs microseconds, even if netdev_budget has not been exhausted.
  • How:
    • Check command: sysctl net.core.netdev_budget
    • Change command: sysctl -w net.core.netdev_budget value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - dev_weight is the maximum number of packets that kernel can handle on a NAPI interrupt, it's a Per-CPU variable. For drivers that support LRO or GRO_HW, a hardware aggregated packet is counted as one packet in this.
  • How:
    • Check command: sysctl net.core.dev_weight
    • Change command: sysctl -w net.core.dev_weight value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
  • What - netdev_max_backlog is the maximum number of packets, queued on the INPUT side (the ingress qdisc), when the interface receives packets faster than kernel can process them.
  • How:
    • Check command: sysctl net.core.netdev_max_backlog
    • Change command: sysctl -w net.core.netdev_max_backlog value
    • How to monitor: cat /proc/net/softnet_stat; or a better tool
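
All the monitoring entries above point to /proc/net/softnet_stat, which is one line per CPU of raw hexadecimal counters. A minimal parsing sketch, assuming GNU awk (for strtonum) and the documented column layout (1 = processed, 2 = dropped, 3 = time_squeeze):

awk '{ printf "cpu%-3d processed=%d dropped=%d squeezed=%d\n", NR-1, strtonum("0x"$1), strtonum("0x"$2), strtonum("0x"$3) }' /proc/net/softnet_stat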

Egress QDisc - txqueuelen and default_qdisc

  • What - txqueuelen is the maximum number of packets, queued on the OUTPUT side.
  • Why - a buffer/queue to absorb bursts of traffic and also the place where tc (traffic control) policies are applied.
  • How:
    • Check command: ip link show dev ethX
    • Change command: ip link set dev ethX txqueuelen N
    • How to monitor: ip -s link
  • What - default_qdisc is the default queuing discipline to use for network devices.
  • Why - each application has a different load profile and traffic control needs; it is also used to fight bufferbloat.
  • How:
    • Check command: sysctl net.core.default_qdisc
    • Change command: sysctl -w net.core.default_qdisc value
    • How to monitor: tc -s qdisc ls dev ethX
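
A minimal sketch of switching the default to fq_codel (a common anti-bufferbloat choice) and bumping txqueuelen, with eth0 as a placeholder and an illustrative length; note that default_qdisc only affects qdiscs created after the change:

sysctl -w net.core.default_qdisc=fq_codel
ip link set dev eth0 txqueuelen 10000        # illustrative value
tc -s qdisc ls dev eth0                      # watch the dropped/overlimits counters over time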

TCP Read and Write Buffers/Queues

The policy that defines what counts as memory pressure is specified in tcp_mem and tcp_moderate_rcvbuf.

  • What - tcp_rmem - min (size used under memory pressure), default (initial size), max (maximum size) - size of receive buffer used by TCP sockets.
  • Why - this is the buffer/queue where received data waits to be read by the application; understanding its consequences can help a lot.
  • How:
    • Check command: sysctl net.ipv4.tcp_rmem
    • Change command: sysctl -w net.ipv4.tcp_rmem="min default max"; when changing default value, remember to restart your user space app (i.e. your web server, nginx, etc)
    • How to monitor: cat /proc/net/sockstat
  • What - tcp_wmem - min (size used under memory pressure), default (initial size), max (maximum size) - size of send buffer used by TCP sockets.
  • How:
    • Check command: sysctl net.ipv4.tcp_wmem
    • Change command: sysctl -w net.ipv4.tcp_wmem="min default max"; when changing default value, remember to restart your user space app (i.e. your web server, nginx, etc)
    • How to monitor: cat /proc/net/sockstat
  • What - tcp_moderate_rcvbuf - if set, TCP performs receive buffer auto-tuning, attempting to automatically size the buffer.
  • How:
    • Check command: sysctl net.ipv4.tcp_moderate_rcvbuf
    • Change command: sysctl -w net.ipv4.tcp_moderate_rcvbuf value
    • How to monitor: cat /proc/net/sockstat
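
Values set with sysctl -w are lost on reboot; a minimal sketch of persisting them via /etc/sysctl.d/ (the file name and the numbers are illustrative, not recommendations):

cat <<'EOF' | sudo tee /etc/sysctl.d/90-tcp-buffers.conf
net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_moderate_rcvbuf = 1
EOF
sudo sysctl --system                         # reload all sysctl configuration files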

Honorable mentions - TCP FSM and congestion algorithm

Accept and SYN Queues are governed by net.core.somaxconn and net.ipv4.tcp_max_syn_backlog. Nowadays net.core.somaxconn caps both queue sizes.

  • sysctl net.core.somaxconn - provides an upper limit on the value of the backlog parameter passed to the listen() function, known in userspace as SOMAXCONN. If you change this value, you should also change your application to a compatible value (i.e. nginx backlog).
  • cat /proc/sys/net/ipv4/tcp_fin_timeout - this specifies the number of seconds to wait for a final FIN packet before the socket is forcibly closed. This is strictly a violation of the TCP specification but required to prevent denial-of-service attacks.
  • cat /proc/sys/net/ipv4/tcp_available_congestion_control - shows the available congestion control choices that are registered.
  • cat /proc/sys/net/ipv4/tcp_congestion_control - sets the congestion control algorithm to be used for new connections.
  • cat /proc/sys/net/ipv4/tcp_max_syn_backlog - sets the maximum number of queued connection requests which have still not received an acknowledgment from the connecting client; if this number is exceeded, the kernel will begin dropping requests.
  • cat /proc/sys/net/ipv4/tcp_syncookies - enables/disables syn cookies, useful for protecting against syn flood attacks.
  • cat /proc/sys/net/ipv4/tcp_slow_start_after_idle - enables/disables restarting TCP slow start (resetting the congestion window) after the connection has been idle.
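
A minimal sketch of switching new connections to BBR, assuming the tcp_bbr module is available for the running kernel:

cat /proc/sys/net/ipv4/tcp_available_congestion_control   # see what is already registered
modprobe tcp_bbr                                           # load BBR if it is not listed
sysctl -w net.ipv4.tcp_congestion_control=bbr
cat /proc/sys/net/ipv4/tcp_congestion_control              # confirm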

How to monitor:

  • netstat -atn | awk '/tcp/ {print $6}' | sort | uniq -c - summary by state
  • ss -neopt state time-wait | wc -l - counters by a specific state: established, syn-sent, syn-recv, fin-wait-1, fin-wait-2, time-wait, closed, close-wait, last-ack, listening, closing
  • netstat -st - tcp stats summary
  • nstat -a - human-friendly tcp stats summary
  • cat /proc/net/sockstat - summarized socket stats
  • cat /proc/net/tcp - detailed stats, see each field meaning at the kernel docs
  • cat /proc/net/netstat - ListenOverflows and ListenDrops are important fields to keep an eye on
    • cat /proc/net/netstat | awk '(f==0) { i=1; while ( i<=NF) {n[i] = $i; i++ }; f=1; next} (f==1){ i=2; while ( i<=NF){ printf "%s = %d\n", n[i], $i; i++}; f=0} ' | grep -v "= 0" - a human readable /proc/net/netstat
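
To watch only the ListenOverflows/ListenDrops counters mentioned above, nstat accepts counter-name patterns (-z also prints counters that are still zero):

nstat -az TcpExtListenOverflows TcpExtListenDrops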

![tcp finite state machine](https://upload.wikimedia.org/wikipedia/commons/a/a2/Tcp_state_diagram_fixed.svg)
