Message ID | 20230517124201.441634-4-imagedong@tencent.com |
---|---|
State | New |
Headers | From: Menglong Dong <imagedong@tencent.com> (sent via menglong8.dong@gmail.com); To: kuba@kernel.org; Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com, dsahern@kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org; Date: Wed, 17 May 2023 20:42:01 +0800; In-Reply-To: <20230517124201.441634-1-imagedong@tencent.com> |
Series | net: tcp: add support of window shrink |
Commit Message
Menglong Dong
May 17, 2023, 12:42 p.m. UTC
From: Menglong Dong <imagedong@tencent.com>

Window shrinking is currently neither allowed nor handled, but it is needed in some cases.

In the original logic, the zero-window probe is triggered only when there is no data at all in the retransmission queue and the receive window cannot hold the data of the first packet in the send queue.

Now, let's change that and trigger the zero-window probe in these cases:

- the retransmission queue has data and its first packet is not within the receive window
- the retransmission queue is empty and the first packet in the send queue ends beyond the end of the receive window

Signed-off-by: Menglong Dong <imagedong@tencent.com>
---
 include/net/tcp.h     | 21 +++++++++++++++++++++
 net/ipv4/tcp_input.c  | 41 +++++++++++++++++++++++++++++++++++++++++
 net/ipv4/tcp_output.c |  3 +--
 net/ipv4/tcp_timer.c  |  4 +---
 4 files changed, 64 insertions(+), 5 deletions(-)
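As a rough illustration (not the patch's actual code; all names below are hypothetical), the two trigger conditions amount to checking whether the head packet's end sequence falls beyond the receiver's advertised window edge, where the head packet is taken from the retransmission queue if it is non-empty and from the send queue otherwise:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the trigger condition described in the commit
 * message. head_end_seq is the end sequence of the first packet of the
 * rtx queue (if non-empty) or of the send queue. */
static bool probe0_needed(uint32_t head_end_seq, /* end seq of head packet */
                          uint32_t snd_una,      /* oldest unacked sequence */
                          uint32_t rcv_wnd)      /* advertised window size */
{
    /* right edge of the window the receiver advertised */
    uint32_t wnd_end = snd_una + rcv_wnd;

    /* serial-number comparison, so 32-bit sequence wraparound is handled */
    return (int32_t)(head_end_seq - wnd_end) > 0;
}
```

If the head packet no longer fits in the (possibly shrunk) window, the sender arms the zero-window probe timer instead of retransmitting into a window that cannot hold the data.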
Comments
On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote:
>
> From: Menglong Dong <imagedong@tencent.com>
>
> Window shrink is not allowed and also not handled for now, but it's
> needed in some case.
>
> In the origin logic, 0 probe is triggered only when there is no any
> data in the retrans queue and the receive window can't hold the data
> of the 1th packet in the send queue.
>
> Now, let's change it and trigger the 0 probe in such cases:
>
> - if the retrans queue has data and the 1th packet in it is not within
>   the receive window
> - no data in the retrans queue and the 1th packet in the send queue is
>   out of the end of the receive window

Sorry, I do not understand.

Please provide packetdrill tests for new behavior like that.

Also, such a fundamental change would need IETF discussion first. We do not want Linux to cause network collapses just because billions of devices send more zero probes.
Hi,

kernel test robot noticed the following build warnings:

[auto build test WARNING on net-next/main]

url: https://github.com/intel-lab-lkp/linux/commits/menglong8-dong-gmail-com/net-tcp-add-sysctl-for-controling-tcp-window-shrink/20230517-204436
base: net-next/main
patch link: https://lore.kernel.org/r/20230517124201.441634-4-imagedong%40tencent.com
patch subject: [PATCH net-next 3/3] net: tcp: handle window shrink properly
config: m68k-allyesconfig
compiler: m68k-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/3712ab843c151a10d6570ded9d8c77ad831a7a41
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review menglong8-dong-gmail-com/net-tcp-add-sysctl-for-controling-tcp-window-shrink/20230517-204436
        git checkout 3712ab843c151a10d6570ded9d8c77ad831a7a41
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash net/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202305180050.jwEMlDOS-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> net/ipv4/tcp_input.c:3477: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
   * This function is called only when there are packets in the rtx queue,

vim +3477 net/ipv4/tcp_input.c

  3475
  3476  /**
> 3477   * This function is called only when there are packets in the rtx queue,
  3478   * which means that the packets out is not 0.
  3479   *
  3480   * NOTE: we only handle window shrink case in this part.
  3481   */
  3482  static void tcp_ack_probe_shrink(struct sock *sk)
  3483  {
  3484          struct inet_connection_sock *icsk = inet_csk(sk);
  3485          unsigned long when;
  3486
  3487          if (!sysctl_tcp_wnd_shrink)
  3488                  return;
  3489
  3490          if (tcp_rtx_overflow(sk)) {
  3491                  when = tcp_probe0_when(sk, TCP_RTO_MAX);
  3492
  3493                  when = tcp_clamp_probe0_to_user_timeout(sk, when);
  3494                  tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, TCP_RTO_MAX);
  3495          } else {
  3496                  /* check if recover from window shrink */
  3497                  if (icsk->icsk_pending != ICSK_TIME_PROBE0)
  3498                          return;
  3499
  3500                  icsk->icsk_backoff = 0;
  3501                  icsk->icsk_probes_tstamp = 0;
  3502                  inet_csk_clear_xmit_timer(sk, ICSK_TIME_PROBE0);
  3503                  if (!tcp_rtx_queue_empty(sk))
  3504                          tcp_retransmit_timer(sk);
  3505          }
  3506  }
  3507
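The warning is purely about comment syntax: per Documentation/doc-guide/kernel-doc.rst, a comment opening with `/**` is treated as kernel-doc and must follow that format. Since this is an ordinary explanatory comment, a minimal fix (a sketch; the stub function name below is hypothetical and only makes the fragment compilable) is to open with `/*` instead:

```c
/* This function is called only when there are packets in the rtx queue,
 * which means that the packets out is not 0.
 *
 * NOTE: we only handle window shrink case in this part.
 */
static int tcp_ack_probe_shrink_comment_demo(void)
{
        return 0; /* real function body elided in this sketch */
}
```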
On Wed, May 17, 2023 at 10:47 PM Eric Dumazet <edumazet@google.com> wrote:
> On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote:
> > [...]
>
> Sorry, I do not understand.
>
> Please provide packetdrill tests for new behavior like that.

Yes. The problem can be reproduced easily:

1. Choose a server machine and decrease its tcp_mem with:
   echo '1024 1500 2048' > /proc/sys/net/ipv4/tcp_mem
2. Call listen() and accept() on a port, such as 8888. We call
   accept() in a loop and never call recv(), so the data stays in
   the receive queue.
3. Choose a client machine and create 100 TCP connections to port
   8888 of the server. Then every connection sends about 1M of data.
4. We can see that some of the connections enter the 0-probe state,
   but some of them keep retransmitting again and again: the server
   has reached tcp_mem[2], so the skb is dropped before the receive
   buffer fills and before the connection can enter the 0-probe
   state. Finally, some of these connections will time out and break.

With this series, all 100 connections enter the 0-probe state and no connection break happens. Data transfer recovers if we increase tcp_mem or call recv() on the sockets in the server.

> Also, such fundamental change would need IETF discussion first.
> We do not want linux to cause network collapses just because billions
> of devices send more zero probes.

I think it may be a good idea to make the connection enter 0-probe rather than drop the skb silently. What 0-probe means is to wait for space to become available when the buffer of the receive queue is full. So maybe we can also use 0-probe when the "buffer" of the TCP protocol as a whole (which means tcp_mem) is full?

Am I right?

Thanks!
Menglong Dong
On Wed, May 17, 2023 at 10:35 PM Menglong Dong <menglong8.dong@gmail.com> wrote:
> [... reproduction steps quoted ...]
>
> With this series, all the 100 connections will enter 0-probe
> status and connection break won't happen.
>
> I think it maybe a good idea to make the connection enter
> 0-probe, rather than drop the skb silently.
> [...]

Thanks for describing the scenario in more detail. (Some kind of packetdrill script or other program to reproduce this issue would be nice, too, as Eric noted.)

You mention in step (4.) above that some of the connections keep retransmitting again and again. Are those connections receiving any ACKs in response to their retransmissions? Perhaps they are receiving dupacks? If so, then perhaps we could solve this problem without depending on a violation of the TCP spec (which says the receive window should not be retracted) in the following way: when a data sender suffers a retransmission timeout, retransmits the first unacknowledged segment, and receives a dupack for SND.UNA instead of an ACK covering the RTO-retransmitted segment, then the data sender should estimate that the receiver doesn't have enough memory to buffer the retransmitted packet. In that case, the data sender should enter the 0-probe state and repeatedly set the ICSK_TIME_PROBE0 timer to call tcp_probe_timer().

Basically we could try to enhance the sender-side logic to distinguish between two kinds of problems:

(a) Repeated data packet loss caused by congestion, routing problems, or connectivity problems. In this case, the data sender uses ICSK_TIME_RETRANS and tcp_retransmit_timer(), and backs off and only retries sysctl_tcp_retries2 times before timing out the connection.

(b) A receiver that is repeatedly sending dupacks but not ACKing retransmitted data because it doesn't have any memory. In this case, the data sender uses ICSK_TIME_PROBE0 and tcp_probe_timer(), and backs off but keeps retrying as long as the data sender receives ACKs.

AFAICT that would be another way to reach the happy state you mention: "all the 100 connections will enter 0-probe status and connection break won't happen", and we could reach that state without violating the TCP protocol spec and without requiring changes on the receiver side (so that this fix could help in scenarios where the memory-constrained receiver is an older stack without special new behavior).

Eric, Yuchung, Menglong: do you think something like that would work?

neal
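The sender-side distinction proposed here could be sketched as a small classifier over the first ACK seen after an RTO retransmission. This is only an illustration of the idea; the enum and function names are hypothetical and do not exist in the kernel:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the proposed heuristic: after an RTO
 * retransmission of the segment starting at SND.UNA, classify the
 * first incoming ACK to choose the follow-up timer. */
enum rto_cause {
    RTO_CAUSE_CONGESTION,   /* case (a): keep ICSK_TIME_RETRANS */
    RTO_CAUSE_RECEIVER_OOM  /* case (b): switch to ICSK_TIME_PROBE0 */
};

static enum rto_cause classify_post_rto_ack(uint32_t ack_seq,
                                            uint32_t snd_una,
                                            bool has_sack_blocks)
{
    /* a pure dupack for SND.UNA with no SACK blocks suggests the
     * receiver saw the retransmission but could not buffer it */
    if (ack_seq == snd_una && !has_sack_blocks)
        return RTO_CAUSE_RECEIVER_OOM;

    /* anything else (forward progress, or SACK info indicating loss
     * patterns) is treated as an ordinary congestion/loss episode */
    return RTO_CAUSE_CONGESTION;
}
```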
On Thu, May 18, 2023 at 9:40 PM Neal Cardwell <ncardwell@google.com> wrote:
> [...]
>
> You mention in step (4.) above that some of the connections keep
> retransmitting again and again. Are those connections receiving any
> ACKs in response to their retransmissions? Perhaps they are receiving
> dupacks?

Actually, these packets are dropped without any reply, not even dupacks. The skb is dropped directly when tcp_try_rmem_schedule() fails in tcp_data_queue(). That's reasonable, as it's useless to reply with an ACK to the sender, which would make the sender fast-retransmit the packet: we are out of memory now, and retransmission can't solve the problem.

> If so, then perhaps we could solve this problem without
> depending on a violation of the TCP spec (which says the receive
> window should not be retracted) [...]
>
> Basically we could try to enhance the sender-side logic to try to
> distinguish between two kinds of problems:
>
> (a) Repeated data packet loss caused by congestion, routing problems,
> or connectivity problems. [...]
>
> (b) A receiver that is repeatedly sending dupacks but not ACKing
> retransmitted data because it doesn't have any memory. [...]

I'm not sure this is an ideal method, as it may not be rigorous to conclude that the receiver is OOM from dupacks. A packet loss can also cause multiple dupacks.

Thanks!
Menglong Dong
On Thu, May 18, 2023 at 10:12 AM Menglong Dong <menglong8.dong@gmail.com> wrote:
> [...]
>
> Actually, these packets are dropped without any reply, even dupacks.
> skb will be dropped directly when tcp_try_rmem_schedule()
> fails in tcp_data_queue(). That's reasonable, as it's
> useless to reply a ack to the sender, which will cause the sender
> fast retrans the packet, because we are out of memory now, and
> retrans can't solve the problem.

I'm not sure I see the problem. If retransmits can't solve the problem, then why are you proposing that data senders keep retransmitting forever (via 0-window probes) in this kind of scenario?

A single dupack without SACK blocks will not cause the sender to fast retransmit. (Only 3 dupacks would trigger fast retransmit.)

Three or more dupacks without SACK blocks will cause the sender to fast retransmit the segment above SND.UNA once if the sender doesn't have SACK support. But in this case AFAICT fast-retransmitting once is a fine strategy, since the sender should keep retrying transmits (with backoff) until the receiver potentially has memory available to receive the packet.

> I'm not sure if this is an ideal method, as it may be not rigorous
> to conclude that the receiver is oom with dupacks. A packet
> loss can also cause multi dupacks.

When a data sender suffers an RTO and retransmits a single data packet, it would be very rare for the data sender to receive multiple pure dupacks without SACKs. This would only happen in the rare case where (a) the connection did not have SACK enabled, and (b) there was a hole in the received sequence space and there were still packets in flight when the (spurious) RTO fired.

But if we want to be paranoid, then this new response could be written to only trigger if SACK is enabled (the vast, vast majority of cases). If SACK is enabled, and an RTO of a data packet starting at sequence S1 results in the receiver sending only a dupack for S1 without SACK blocks, then this clearly shows the issue is not packet loss but suggests a receiver unable to buffer the given data packet, AFAICT.

thanks,
neal
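The paranoid, SACK-gated variant described in the last paragraph could be sketched as follows (hypothetical names, not kernel code; S1 is the start sequence of the RTO-retransmitted segment):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the SACK-gated check: only suspect a
 * memory-constrained receiver when SACK was negotiated, so that a
 * pure dupack cannot be explained by ordinary packet loss. */
static bool suspect_receiver_oom(bool sack_negotiated,
                                 uint32_t ack_seq,
                                 uint32_t s1, /* start seq of RTO-retransmitted segment */
                                 bool ack_has_sack_blocks)
{
    /* without SACK, multiple pure dupacks are ambiguous: skip */
    if (!sack_negotiated)
        return false;

    /* dupack for S1 carrying no SACK blocks: loss would have produced
     * SACK info, so this suggests the receiver could not buffer the
     * retransmitted data */
    return ack_seq == s1 && !ack_has_sack_blocks;
}
```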
On Fri, May 19, 2023 at 12:03 AM Neal Cardwell <ncardwell@google.com> wrote: > > On Thu, May 18, 2023 at 10:12 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > On Thu, May 18, 2023 at 9:40 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > On Wed, May 17, 2023 at 10:35 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > On Wed, May 17, 2023 at 10:47 PM Eric Dumazet <edumazet@google.com> wrote: > > > > > > > > > > On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > From: Menglong Dong <imagedong@tencent.com> > > > > > > > > > > > > Window shrink is not allowed and also not handled for now, but it's > > > > > > needed in some case. > > > > > > > > > > > > In the origin logic, 0 probe is triggered only when there is no any > > > > > > data in the retrans queue and the receive window can't hold the data > > > > > > of the 1th packet in the send queue. > > > > > > > > > > > > Now, let's change it and trigger the 0 probe in such cases: > > > > > > > > > > > > - if the retrans queue has data and the 1th packet in it is not within > > > > > > the receive window > > > > > > - no data in the retrans queue and the 1th packet in the send queue is > > > > > > out of the end of the receive window > > > > > > > > > > Sorry, I do not understand. > > > > > > > > > > Please provide packetdrill tests for new behavior like that. > > > > > > > > > > > > > Yes. The problem can be reproduced easily. > > > > > > > > 1. choose a server machine, decrease it's tcp_mem with: > > > > echo '1024 1500 2048' > /proc/sys/net/ipv4/tcp_mem > > > > 2. call listen() and accept() on a port, such as 8888. We call > > > > accept() looply and without call recv() to make the data stay > > > > in the receive queue. > > > > 3. choose a client machine, and create 100 TCP connection > > > > to the 8888 port of the server. Then, every connection sends > > > > data about 1M. > > > > 4. 
we can see that some of the connections enter the 0-probe > > > > state, but some of them keep retransmitting again and again. As > > > > the server has reached tcp_mem[2], skbs are dropped before > > > > the recv_buf is full and the connection enters the 0-probe state. > > > > Finally, some of these connections will time out and break. > > > > > > > > With this series, all the 100 connections will enter 0-probe > > > > status and connection break won't happen. And the data > > > > transfer will recover if we increase tcp_mem or call 'recv()' > > > > on the sockets in the server. > > > > > > > > > Also, such fundamental change would need IETF discussion first. > > > > > We do not want linux to cause network collapses just because billions > > > > > of devices send more zero probes. > > > > > > > > I think it may be a good idea to make the connection enter > > > > 0-probe, rather than drop the skb silently. What 0-probe > > > > means is to wait for space available when the buffer of the > > > > receive queue is full. And maybe we can also use 0-probe > > > > when the "buffer" of "TCP protocol" (which means tcp_mem) > > > > is full? > > > > > > > > Am I right? > > > > > > > > Thanks! > > > > Menglong Dong > > > Thanks for describing the scenario in more detail. (Some kind of > > > packetdrill script or other program to reproduce this issue would be > > > nice, too, as Eric noted.) > > > > > > You mention in step (4.) above that some of the connections keep > > > retransmitting again and again. Are those connections receiving any > > > ACKs in response to their retransmissions? Perhaps they are receiving > > > dupacks? > > Actually, these packets are dropped without any reply, even dupacks. > > skb will be dropped directly when tcp_try_rmem_schedule() > > fails in tcp_data_queue(). 
That's reasonable, as it's > > useless to reply with an ack to the sender, which would cause the sender > > to fast retransmit the packet, because we are out of memory now, and > > retrans can't solve the problem. > > I'm not sure I see the problem. If retransmits can't solve the > problem, then why are you proposing that data senders keep > retransmitting forever (via 0-window-probes) in this kind of scenario? > Because the connection will break if the count of retransmits reaches tcp_retries2, but probe-0 can continue for a long time. > A single dupack without SACK blocks will not cause the sender to fast > retransmit. (Only 3 dupacks would trigger fast retransmit.) > > Three or more dupacks without SACK blocks will cause the sender to > fast retransmit the segment above SND.UNA once if the sender doesn't > have SACK support. But in this case AFAICT fast-retransmitting once is > a fine strategy, since the sender should keep retrying transmits (with > backoff) until the receiver potentially has memory available to > receive the packet. > > > > > If so, then perhaps we could solve this problem without > > > depending on a violation of the TCP spec (which says the receive > > > window should not be retracted) in the following way: when a data > > > sender suffers a retransmission timeout, and retransmits the first > > > unacknowledged segment, and receives a dupack for SND.UNA instead of > > > an ACK covering the RTO-retransmitted segment, then the data sender > > > should estimate that the receiver doesn't have enough memory to buffer > > > the retransmitted packet. In that case, the data sender should enter > > > the 0-probe state and repeatedly set the ICSK_TIME_PROBE0 timer to > > > call tcp_probe_timer(). > > > > > > Basically we could try to enhance the sender-side logic to try to > > > distinguish between two kinds of problems: > > > > > > (a) Repeated data packet loss caused by congestion, routing problems, > > > or connectivity problems. 
In this case, the data sender uses > > > ICSK_TIME_RETRANS and tcp_retransmit_timer(), and backs off and only > > > retries sysctl_tcp_retries2 times before timing out the connection > > > > > > (b) A receiver that is repeatedly sending dupacks but not ACKing > > > retransmitted data because it doesn't have any memory. In this case, > > > the data sender uses ICSK_TIME_PROBE0 and tcp_probe_timer(), and backs > > > off but keeps retrying as long as the data sender receives ACKs. > > > > > > > I'm not sure if this is an ideal method, as it may be not rigorous > > to conclude that the receiver is oom with dupacks. A packet can > > loss can also cause multi dupacks. > > When a data sender suffers an RTO and retransmits a single data > packet, it would be very rare for the data sender to receive multiple > pure dupacks without SACKs. This would only happen in the rare case > where (a) the connection did not have SACK enabled, and (b) there was > a hole in the received sequence space and there were still packets in > flight when the (spurioius) RTO fired. > > But if we want to be paranoid, then this new response could be written > to only trigger if SACK is enabled (the vast, vast majority of cases). > If SACK is enabled, and an RTO of a data packet starting at sequence > S1 results in the receiver sending only a dupack for S1 without SACK > blocks, then this clearly shows the issue is not packet loss but > suggests a receiver unable to buffer the given data packet, AFAICT. > Yeah, you are right on this point, multi pure dupacks can mean out of memory of the receiver. But we still need to know if the receiver recovers from OOM. Without window shrink, the window in the ack of zero-window probe packet is not zero on OOM. Hi, Eric and kuba, do you have any comments on this case? Thanks! Menglong Dong > thanks, > neal > > > > > Thanks! 
> > Menglong Dong > > > > > AFAICT that would be another way to reach the happy state you mention: > > > "all the 100 connections will enter 0-probe status and connection > > > break won't happen", and we could reach that state without violating > > > the TCP protocol spec and without requiring changes on the receiver > > > side (so that this fix could help in scenarios where the > > > memory-constrained receiver is an older stack without special new > > > behavior). > > > > > > Eric, Yuchung, Menglong: do you think something like that would work? > > > > > > neal
On Sat, May 20, 2023 at 5:08 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > On Fri, May 19, 2023 at 12:03 AM Neal Cardwell <ncardwell@google.com> wrote: > > > > On Thu, May 18, 2023 at 10:12 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > On Thu, May 18, 2023 at 9:40 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > On Wed, May 17, 2023 at 10:35 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > On Wed, May 17, 2023 at 10:47 PM Eric Dumazet <edumazet@google.com> wrote: > > > > > > > > > > > > On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > From: Menglong Dong <imagedong@tencent.com> > > > > > > > > > > > > > > Window shrink is not allowed and also not handled for now, but it's > > > > > > > needed in some case. > > > > > > > > > > > > > > In the origin logic, 0 probe is triggered only when there is no any > > > > > > > data in the retrans queue and the receive window can't hold the data > > > > > > > of the 1th packet in the send queue. > > > > > > > > > > > > > > Now, let's change it and trigger the 0 probe in such cases: > > > > > > > > > > > > > > - if the retrans queue has data and the 1th packet in it is not within > > > > > > > the receive window > > > > > > > - no data in the retrans queue and the 1th packet in the send queue is > > > > > > > out of the end of the receive window > > > > > > > > > > > > Sorry, I do not understand. > > > > > > > > > > > > Please provide packetdrill tests for new behavior like that. > > > > > > > > > > > > > > > > Yes. The problem can be reproduced easily. > > > > > > > > > > 1. choose a server machine, decrease it's tcp_mem with: > > > > > echo '1024 1500 2048' > /proc/sys/net/ipv4/tcp_mem > > > > > 2. call listen() and accept() on a port, such as 8888. We call > > > > > accept() looply and without call recv() to make the data stay > > > > > in the receive queue. > > > > > 3. 
choose a client machine, and create 100 TCP connection > > > > > to the 8888 port of the server. Then, every connection sends > > > > > data about 1M. > > > > > 4. we can see that some of the connection enter the 0-probe > > > > > state, but some of them keep retrans again and again. As > > > > > the server is up to the tcp_mem[2] and skb is dropped before > > > > > the recv_buf full and the connection enter 0-probe state. > > > > > Finially, some of these connection will timeout and break. > > > > > > > > > > With this series, all the 100 connections will enter 0-probe > > > > > status and connection break won't happen. And the data > > > > > trans will recover if we increase tcp_mem or call 'recv()' > > > > > on the sockets in the server. > > > > > > > > > > > Also, such fundamental change would need IETF discussion first. > > > > > > We do not want linux to cause network collapses just because billions > > > > > > of devices send more zero probes. > > > > > > > > > > I think it maybe a good idea to make the connection enter > > > > > 0-probe, rather than drop the skb silently. What 0-probe > > > > > meaning is to wait for space available when the buffer of the > > > > > receive queue is full. And maybe we can also use 0-probe > > > > > when the "buffer" of "TCP protocol" (which means tcp_mem) > > > > > is full? > > > > > > > > > > Am I right? > > > > > > > > > > Thanks! > > > > > Menglong Dong > > > > > > > > Thanks for describing the scenario in more detail. (Some kind of > > > > packetdrill script or other program to reproduce this issue would be > > > > nice, too, as Eric noted.) > > > > > > > > You mention in step (4.) above that some of the connections keep > > > > retransmitting again and again. Are those connections receiving any > > > > ACKs in response to their retransmissions? Perhaps they are receiving > > > > dupacks? > > > > > > Actually, these packets are dropped without any reply, even dupacks. 
> > > skb will be dropped directly when tcp_try_rmem_schedule() > > > fails in tcp_data_queue(). That's reasonable, as it's > > > useless to reply a ack to the sender, which will cause the sender > > > fast retrans the packet, because we are out of memory now, and > > > retrans can't solve the problem. > > > > I'm not sure I see the problem. If retransmits can't solve the > > problem, then why are you proposing that data senders keep > > retransmitting forever (via 0-window-probes) in this kind of scenario? > > > > Because the connection will break if the count of > retransmits up to tcp_retires2, but probe-0 can keep > for a long time. I see. So it sounds like you agree that retransmits can solve the problem, as long as the retransmits are using the zero-window probe state machine (ICSK_TIME_PROBE0, tcp_probe_timer()), which continues as long as the receiver is sending ACKs. And it sounds like when you said "retrans can't solve the problem" you didn't literally mean that retransmits can't solve the problem, but rather you meant that the RTO state machine, specifically (ICSK_TIME_RETRANS, tcp_retransmit_timer(), etc) can't solve the problem. I agree with that assessment that in this scenario tcp_probe_timer() seems like a solution but tcp_retransmit_timer() does not. > > A single dupack without SACK blocks will not cause the sender to fast > > retransmit. (Only 3 dupacks would trigger fast retransmit.) > > > > Three or more dupacks without SACK blocks will cause the sender to > > fast retransmit the segment above SND.UNA once if the sender doesn't > > have SACK support. But in this case AFAICT fast-retransmitting once is > > a fine strategy, since the sender should keep retrying transmits (with > > backoff) until the receiver potentially has memory available to > > receive the packet. 
> > > > > > > > > If so, then perhaps we could solve this problem without > > > > depending on a violation of the TCP spec (which says the receive > > > > window should not be retracted) in the following way: when a data > > > > sender suffers a retransmission timeout, and retransmits the first > > > > unacknowledged segment, and receives a dupack for SND.UNA instead of > > > > an ACK covering the RTO-retransmitted segment, then the data sender > > > > should estimate that the receiver doesn't have enough memory to buffer > > > > the retransmitted packet. In that case, the data sender should enter > > > > the 0-probe state and repeatedly set the ICSK_TIME_PROBE0 timer to > > > > call tcp_probe_timer(). > > > > > > > > Basically we could try to enhance the sender-side logic to try to > > > > distinguish between two kinds of problems: > > > > > > > > (a) Repeated data packet loss caused by congestion, routing problems, > > > > or connectivity problems. In this case, the data sender uses > > > > ICSK_TIME_RETRANS and tcp_retransmit_timer(), and backs off and only > > > > retries sysctl_tcp_retries2 times before timing out the connection > > > > > > > > (b) A receiver that is repeatedly sending dupacks but not ACKing > > > > retransmitted data because it doesn't have any memory. In this case, > > > > the data sender uses ICSK_TIME_PROBE0 and tcp_probe_timer(), and backs > > > > off but keeps retrying as long as the data sender receives ACKs. > > > > > > > > > > I'm not sure if this is an ideal method, as it may be not rigorous > > > to conclude that the receiver is oom with dupacks. A packet can > > > loss can also cause multi dupacks. > > > > When a data sender suffers an RTO and retransmits a single data > > packet, it would be very rare for the data sender to receive multiple > > pure dupacks without SACKs. 
This would only happen in the rare case > > where (a) the connection did not have SACK enabled, and (b) there was > > a hole in the received sequence space and there were still packets in > > flight when the (spurioius) RTO fired. > > > > But if we want to be paranoid, then this new response could be written > > to only trigger if SACK is enabled (the vast, vast majority of cases). > > If SACK is enabled, and an RTO of a data packet starting at sequence > > S1 results in the receiver sending only a dupack for S1 without SACK > > blocks, then this clearly shows the issue is not packet loss but > > suggests a receiver unable to buffer the given data packet, AFAICT. > > > > Yeah, you are right on this point, multi pure dupacks can > mean out of memory of the receiver. But we still need to > know if the receiver recovers from OOM. Without window > shrink, the window in the ack of zero-window probe packet > is not zero on OOM. But do we need a protocol-violating zero-window in this case? Why not use my approach suggested above: conveying the OOM condition by sending an ACK but not ACKing the retransmitted packet? Thanks, neal > Hi, Eric and kuba, do you have any comments on this > case? > > Thanks! > Menglong Dong > > > thanks, > > neal > > > > > > > > Thanks! > > > Menglong Dong > > > > > > > AFAICT that would be another way to reach the happy state you mention: > > > > "all the 100 connections will enter 0-probe status and connection > > > > break won't happen", and we could reach that state without violating > > > > the TCP protocol spec and without requiring changes on the receiver > > > > side (so that this fix could help in scenarios where the > > > > memory-constrained receiver is an older stack without special new > > > > behavior). > > > > > > > > Eric, Yuchung, Menglong: do you think something like that would work? > > > > > > > > neal
On Sat, May 20, 2023 at 10:28 PM Neal Cardwell <ncardwell@google.com> wrote: > > On Sat, May 20, 2023 at 5:08 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > On Fri, May 19, 2023 at 12:03 AM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > On Thu, May 18, 2023 at 10:12 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > On Thu, May 18, 2023 at 9:40 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > > > On Wed, May 17, 2023 at 10:35 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > On Wed, May 17, 2023 at 10:47 PM Eric Dumazet <edumazet@google.com> wrote: > > > > > > > > > > > > > > On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > > > From: Menglong Dong <imagedong@tencent.com> > > > > > > > > > > > > > > > > Window shrink is not allowed and also not handled for now, but it's > > > > > > > > needed in some case. > > > > > > > > > > > > > > > > In the origin logic, 0 probe is triggered only when there is no any > > > > > > > > data in the retrans queue and the receive window can't hold the data > > > > > > > > of the 1th packet in the send queue. > > > > > > > > > > > > > > > > Now, let's change it and trigger the 0 probe in such cases: > > > > > > > > > > > > > > > > - if the retrans queue has data and the 1th packet in it is not within > > > > > > > > the receive window > > > > > > > > - no data in the retrans queue and the 1th packet in the send queue is > > > > > > > > out of the end of the receive window > > > > > > > > > > > > > > Sorry, I do not understand. > > > > > > > > > > > > > > Please provide packetdrill tests for new behavior like that. > > > > > > > > > > > > > > > > > > > Yes. The problem can be reproduced easily. > > > > > > > > > > > > 1. choose a server machine, decrease it's tcp_mem with: > > > > > > echo '1024 1500 2048' > /proc/sys/net/ipv4/tcp_mem > > > > > > 2. call listen() and accept() on a port, such as 8888. 
We call > > > > > > accept() looply and without call recv() to make the data stay > > > > > > in the receive queue. > > > > > > 3. choose a client machine, and create 100 TCP connection > > > > > > to the 8888 port of the server. Then, every connection sends > > > > > > data about 1M. > > > > > > 4. we can see that some of the connection enter the 0-probe > > > > > > state, but some of them keep retrans again and again. As > > > > > > the server is up to the tcp_mem[2] and skb is dropped before > > > > > > the recv_buf full and the connection enter 0-probe state. > > > > > > Finially, some of these connection will timeout and break. > > > > > > > > > > > > With this series, all the 100 connections will enter 0-probe > > > > > > status and connection break won't happen. And the data > > > > > > trans will recover if we increase tcp_mem or call 'recv()' > > > > > > on the sockets in the server. > > > > > > > > > > > > > Also, such fundamental change would need IETF discussion first. > > > > > > > We do not want linux to cause network collapses just because billions > > > > > > > of devices send more zero probes. > > > > > > > > > > > > I think it maybe a good idea to make the connection enter > > > > > > 0-probe, rather than drop the skb silently. What 0-probe > > > > > > meaning is to wait for space available when the buffer of the > > > > > > receive queue is full. And maybe we can also use 0-probe > > > > > > when the "buffer" of "TCP protocol" (which means tcp_mem) > > > > > > is full? > > > > > > > > > > > > Am I right? > > > > > > > > > > > > Thanks! > > > > > > Menglong Dong > > > > > > > > > > Thanks for describing the scenario in more detail. (Some kind of > > > > > packetdrill script or other program to reproduce this issue would be > > > > > nice, too, as Eric noted.) > > > > > > > > > > You mention in step (4.) above that some of the connections keep > > > > > retransmitting again and again. 
Are those connections receiving any > > > > > ACKs in response to their retransmissions? Perhaps they are receiving > > > > > dupacks? > > > > > > > > Actually, these packets are dropped without any reply, even dupacks. > > > > skb will be dropped directly when tcp_try_rmem_schedule() > > > > fails in tcp_data_queue(). That's reasonable, as it's > > > > useless to reply a ack to the sender, which will cause the sender > > > > fast retrans the packet, because we are out of memory now, and > > > > retrans can't solve the problem. > > > > > > I'm not sure I see the problem. If retransmits can't solve the > > > problem, then why are you proposing that data senders keep > > > retransmitting forever (via 0-window-probes) in this kind of scenario? > > > > > > > Because the connection will break if the count of > > retransmits up to tcp_retires2, but probe-0 can keep > > for a long time. > > I see. So it sounds like you agree that retransmits can solve the > problem, as long as the retransmits are using the zero-window probe > state machine (ICSK_TIME_PROBE0, tcp_probe_timer()), which continues > as long as the receiver is sending ACKs. And it sounds like when you > said "retrans can't solve the problem" you didn't literally mean that > retransmits can't solve the problem, but rather you meant that the RTO > state machine, specifically (ICSK_TIME_RETRANS, > tcp_retransmit_timer(), etc) can't solve the problem. I agree with > that assessment that in this scenario tcp_probe_timer() seems like a > solution but tcp_retransmit_timer() does not. > Yes, that is indeed what I want to express. > > > A single dupack without SACK blocks will not cause the sender to fast > > > retransmit. (Only 3 dupacks would trigger fast retransmit.) > > > > > > Three or more dupacks without SACK blocks will cause the sender to > > > fast retransmit the segment above SND.UNA once if the sender doesn't > > > have SACK support. 
But in this case AFAICT fast-retransmitting once is > > > a fine strategy, since the sender should keep retrying transmits (with > > > backoff) until the receiver potentially has memory available to > > > receive the packet. > > > > > > > > > > > > If so, then perhaps we could solve this problem without > > > > > depending on a violation of the TCP spec (which says the receive > > > > > window should not be retracted) in the following way: when a data > > > > > sender suffers a retransmission timeout, and retransmits the first > > > > > unacknowledged segment, and receives a dupack for SND.UNA instead of > > > > > an ACK covering the RTO-retransmitted segment, then the data sender > > > > > should estimate that the receiver doesn't have enough memory to buffer > > > > > the retransmitted packet. In that case, the data sender should enter > > > > > the 0-probe state and repeatedly set the ICSK_TIME_PROBE0 timer to > > > > > call tcp_probe_timer(). > > > > > > > > > > Basically we could try to enhance the sender-side logic to try to > > > > > distinguish between two kinds of problems: > > > > > > > > > > (a) Repeated data packet loss caused by congestion, routing problems, > > > > > or connectivity problems. In this case, the data sender uses > > > > > ICSK_TIME_RETRANS and tcp_retransmit_timer(), and backs off and only > > > > > retries sysctl_tcp_retries2 times before timing out the connection > > > > > > > > > > (b) A receiver that is repeatedly sending dupacks but not ACKing > > > > > retransmitted data because it doesn't have any memory. In this case, > > > > > the data sender uses ICSK_TIME_PROBE0 and tcp_probe_timer(), and backs > > > > > off but keeps retrying as long as the data sender receives ACKs. > > > > > > > > > > > > > I'm not sure if this is an ideal method, as it may be not rigorous > > > > to conclude that the receiver is oom with dupacks. A packet can > > > > loss can also cause multi dupacks. 
> > > > > > When a data sender suffers an RTO and retransmits a single data > > > packet, it would be very rare for the data sender to receive multiple > > > pure dupacks without SACKs. This would only happen in the rare case > > > where (a) the connection did not have SACK enabled, and (b) there was > > > a hole in the received sequence space and there were still packets in > > > flight when the (spurious) RTO fired. > > > > > > But if we want to be paranoid, then this new response could be written > > > to only trigger if SACK is enabled (the vast, vast majority of cases). > > > If SACK is enabled, and an RTO of a data packet starting at sequence > > > S1 results in the receiver sending only a dupack for S1 without SACK > > > blocks, then this clearly shows the issue is not packet loss but > > > suggests a receiver unable to buffer the given data packet, AFAICT. > > > > > > > Yeah, you are right on this point, multi pure dupacks can > > mean out of memory of the receiver. But we still need to > > know if the receiver recovers from OOM. Without window > > shrink, the window in the ack of zero-window probe packet > > is not zero on OOM. > > But do we need a protocol-violating zero-window in this case? Why not > use my approach suggested above: conveying the OOM condition by > sending an ACK but not ACKing the retransmitted packet? > > I agree with you about the approach you mentioned > about conveying the OOM condition. But that approach > can't convey the recovery from OOM, can it? Let's see the process. With 3 pure dupacks for SND.UNA, we infer OOM on the receiver and make the sender enter the zero-window probe state. The sender will keep sending probe0 packets, and the receiver will reply with an ack. However, as we don't actually shrink the window, the window in the ack is not zero on OOM, so we can't know when the receiver has recovered from OOM and we can retransmit the data in the retransmit queue. 
BTW, the probe0 will send the last byte that was already acked, so the ack of the probe0 will be a pure dupack. Did I miss something? Also, a previous patch has explained the need to support window shrink, and argued that it stays within the TCP RFCs: https://lore.kernel.org/netdev/20230308053353.675086-1-mfreemon@cloudflare.com/ Thanks! Menglong Dong > Thanks, > neal > > > Hi, Eric and kuba, do you have any comments on this > > case? > > > > Thanks! > > Menglong Dong > > > > > thanks, > > > neal > > > > > > > > > > > Thanks! > > > > Menglong Dong > > > > > AFAICT that would be another way to reach the happy state you mention: > > > > > "all the 100 connections will enter 0-probe status and connection > > > > > break won't happen", and we could reach that state without violating > > > > > the TCP protocol spec and without requiring changes on the receiver > > > > > side (so that this fix could help in scenarios where the > > > > > memory-constrained receiver is an older stack without special new > > > > > behavior). > > > > > > > > > > Eric, Yuchung, Menglong: do you think something like that would work? > > > > > > > > > > neal
On Sun, May 21, 2023 at 10:55 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > On Sat, May 20, 2023 at 10:28 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > On Sat, May 20, 2023 at 5:08 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > On Fri, May 19, 2023 at 12:03 AM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > On Thu, May 18, 2023 at 10:12 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > On Thu, May 18, 2023 at 9:40 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > > > > > On Wed, May 17, 2023 at 10:35 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > On Wed, May 17, 2023 at 10:47 PM Eric Dumazet <edumazet@google.com> wrote: > > > > > > > > > > > > > > > > On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > > > > > From: Menglong Dong <imagedong@tencent.com> > > > > > > > > > > > > > > > > > > Window shrink is not allowed and also not handled for now, but it's > > > > > > > > > needed in some case. > > > > > > > > > > > > > > > > > > In the origin logic, 0 probe is triggered only when there is no any > > > > > > > > > data in the retrans queue and the receive window can't hold the data > > > > > > > > > of the 1th packet in the send queue. > > > > > > > > > > > > > > > > > > Now, let's change it and trigger the 0 probe in such cases: > > > > > > > > > > > > > > > > > > - if the retrans queue has data and the 1th packet in it is not within > > > > > > > > > the receive window > > > > > > > > > - no data in the retrans queue and the 1th packet in the send queue is > > > > > > > > > out of the end of the receive window > > > > > > > > > > > > > > > > Sorry, I do not understand. > > > > > > > > > > > > > > > > Please provide packetdrill tests for new behavior like that. > > > > > > > > > > > > > > > > > > > > > > Yes. The problem can be reproduced easily. > > > > > > > > > > > > > > 1. 
choose a server machine, decrease it's tcp_mem with: > > > > > > > echo '1024 1500 2048' > /proc/sys/net/ipv4/tcp_mem > > > > > > > 2. call listen() and accept() on a port, such as 8888. We call > > > > > > > accept() looply and without call recv() to make the data stay > > > > > > > in the receive queue. > > > > > > > 3. choose a client machine, and create 100 TCP connection > > > > > > > to the 8888 port of the server. Then, every connection sends > > > > > > > data about 1M. > > > > > > > 4. we can see that some of the connection enter the 0-probe > > > > > > > state, but some of them keep retrans again and again. As > > > > > > > the server is up to the tcp_mem[2] and skb is dropped before > > > > > > > the recv_buf full and the connection enter 0-probe state. > > > > > > > Finially, some of these connection will timeout and break. > > > > > > > > > > > > > > With this series, all the 100 connections will enter 0-probe > > > > > > > status and connection break won't happen. And the data > > > > > > > trans will recover if we increase tcp_mem or call 'recv()' > > > > > > > on the sockets in the server. > > > > > > > > > > > > > > > Also, such fundamental change would need IETF discussion first. > > > > > > > > We do not want linux to cause network collapses just because billions > > > > > > > > of devices send more zero probes. > > > > > > > > > > > > > > I think it maybe a good idea to make the connection enter > > > > > > > 0-probe, rather than drop the skb silently. What 0-probe > > > > > > > meaning is to wait for space available when the buffer of the > > > > > > > receive queue is full. And maybe we can also use 0-probe > > > > > > > when the "buffer" of "TCP protocol" (which means tcp_mem) > > > > > > > is full? > > > > > > > > > > > > > > Am I right? > > > > > > > > > > > > > > Thanks! > > > > > > > Menglong Dong > > > > > > > > > > > > Thanks for describing the scenario in more detail. 
(Some kind of > > > > > > packetdrill script or other program to reproduce this issue would be > > > > > > nice, too, as Eric noted.) > > > > > > > > > > > > You mention in step (4.) above that some of the connections keep > > > > > > retransmitting again and again. Are those connections receiving any > > > > > > ACKs in response to their retransmissions? Perhaps they are receiving > > > > > > dupacks? > > > > > > > > > > Actually, these packets are dropped without any reply, even dupacks. > > > > > skb will be dropped directly when tcp_try_rmem_schedule() > > > > > fails in tcp_data_queue(). That's reasonable, as it's > > > > > useless to reply a ack to the sender, which will cause the sender > > > > > fast retrans the packet, because we are out of memory now, and > > > > > retrans can't solve the problem. > > > > > > > > I'm not sure I see the problem. If retransmits can't solve the > > > > problem, then why are you proposing that data senders keep > > > > retransmitting forever (via 0-window-probes) in this kind of scenario? > > > > > > > > > > Because the connection will break if the count of > > > retransmits up to tcp_retires2, but probe-0 can keep > > > for a long time. > > > > I see. So it sounds like you agree that retransmits can solve the > > problem, as long as the retransmits are using the zero-window probe > > state machine (ICSK_TIME_PROBE0, tcp_probe_timer()), which continues > > as long as the receiver is sending ACKs. And it sounds like when you > > said "retrans can't solve the problem" you didn't literally mean that > > retransmits can't solve the problem, but rather you meant that the RTO > > state machine, specifically (ICSK_TIME_RETRANS, > > tcp_retransmit_timer(), etc) can't solve the problem. I agree with > > that assessment that in this scenario tcp_probe_timer() seems like a > > solution but tcp_retransmit_timer() does not. > > > > Yes, that is indeed what I want to express. 
> > > > > A single dupack without SACK blocks will not cause the sender to fast > > > > retransmit. (Only 3 dupacks would trigger fast retransmit.) > > > > > > > > Three or more dupacks without SACK blocks will cause the sender to > > > > fast retransmit the segment above SND.UNA once if the sender doesn't > > > > have SACK support. But in this case AFAICT fast-retransmitting once is > > > > a fine strategy, since the sender should keep retrying transmits (with > > > > backoff) until the receiver potentially has memory available to > > > > receive the packet. > > > > > > > > > > > > > > > If so, then perhaps we could solve this problem without > > > > > > depending on a violation of the TCP spec (which says the receive > > > > > > window should not be retracted) in the following way: when a data > > > > > > sender suffers a retransmission timeout, and retransmits the first > > > > > > unacknowledged segment, and receives a dupack for SND.UNA instead of > > > > > > an ACK covering the RTO-retransmitted segment, then the data sender > > > > > > should estimate that the receiver doesn't have enough memory to buffer > > > > > > the retransmitted packet. In that case, the data sender should enter > > > > > > the 0-probe state and repeatedly set the ICSK_TIME_PROBE0 timer to > > > > > > call tcp_probe_timer(). > > > > > > > > > > > > Basically we could try to enhance the sender-side logic to try to > > > > > > distinguish between two kinds of problems: > > > > > > > > > > > > (a) Repeated data packet loss caused by congestion, routing problems, > > > > > > or connectivity problems. In this case, the data sender uses > > > > > > ICSK_TIME_RETRANS and tcp_retransmit_timer(), and backs off and only > > > > > > retries sysctl_tcp_retries2 times before timing out the connection > > > > > > > > > > > > (b) A receiver that is repeatedly sending dupacks but not ACKing > > > > > > retransmitted data because it doesn't have any memory. 
In this case, > > > > > > the data sender uses ICSK_TIME_PROBE0 and tcp_probe_timer(), and backs > > > > > > off but keeps retrying as long as the data sender receives ACKs. > > > > > > > > > > > > > > > > I'm not sure if this is an ideal method, as it may not be rigorous > > > > > to conclude that the receiver is OOM from dupacks. A packet > > > > > loss can also cause multiple dupacks. > > > > > > > > When a data sender suffers an RTO and retransmits a single data > > > > packet, it would be very rare for the data sender to receive multiple > > > > pure dupacks without SACKs. This would only happen in the rare case > > > > where (a) the connection did not have SACK enabled, and (b) there was > > > > a hole in the received sequence space and there were still packets in > > > > flight when the (spurious) RTO fired. > > > > > > > > But if we want to be paranoid, then this new response could be written > > > > to only trigger if SACK is enabled (the vast, vast majority of cases). > > > > If SACK is enabled, and an RTO of a data packet starting at sequence > > > > S1 results in the receiver sending only a dupack for S1 without SACK > > > > blocks, then this clearly shows the issue is not packet loss but > > > > suggests a receiver unable to buffer the given data packet, AFAICT. > > > > > > > > > > Yeah, you are right on this point, multiple pure dupacks can > > > mean the receiver is out of memory. But we still need to > > > know if the receiver recovers from OOM.
Yes, my suggested approach can convey the recovery from OOM. The data receiver conveys the recovery from OOM by buffering and ACKing the retransmitted data packet. > Let's see the process. With 3 pure dupacks for SND.UNA, > we deem the receiver OOM and make the sender > enter the zero-window probe state. AFAICT the data sender does not need to wait for 3 pure dupacks for SND.UNA. AFAICT a data sender that suffers an RTO and finds that its RTO gets a response that is a single dupack for SND.UNA could estimate that the receiver is OOM, and enter the zero-window probe state. > The sender will keep sending probe0 packets, and the > receiver will reply with an ack. However, as we don't > actually shrink the window, the window in the ack is > not zero on OOM, so we can't know if the receiver has > recovered from OOM and retransmit the data in the retransmit > queue. As I noted above, in my proposal the data receiver conveys the recovery from OOM by buffering and ACKing the retransmitted data packet. > BTW, the probe0 will send the last byte that was already > acked, so the ack of the probe0 will be a pure dupack. > > Did I miss something? I don't think that's the case. My read of tcp_write_wakeup() is that it will send an skb that is whatever prefix of 1 MSS is allowed by the receiver window. In the scenario we are discussing, that would mean that it sends a full 1 MSS. So AFAICT tcp_write_wakeup() could be used for sending a data "probe" packet for this OOM case. > BTW, a previous patch has explained the need to > support window shrink, which should satisfy the TCP > RFC: > > https://lore.kernel.org/netdev/20230308053353.675086-1-mfreemon@cloudflare.com/ Let's see what Eric thinks about that patch. neal > Thanks! > Menglong Dong > > > Thanks, > > neal > > > > > Hi, Eric and kuba, do you have any comments on this > > > case? > > > > > > Thanks! > > > Menglong Dong > > > > > > > thanks, > > > > neal > > > > > > > > > > > > > > Thanks!
> > > > > Menglong Dong > > > > > > > > > > > AFAICT that would be another way to reach the happy state you mention: > > > > > > "all the 100 connections will enter 0-probe status and connection > > > > > > break won't happen", and we could reach that state without violating > > > > > > the TCP protocol spec and without requiring changes on the receiver > > > > > > side (so that this fix could help in scenarios where the > > > > > > memory-constrained receiver is an older stack without special new > > > > > > behavior). > > > > > > > > > > > > Eric, Yuchung, Menglong: do you think something like that would work? > > > > > > > > > > > > neal
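[Editor's note: the four reproduction steps quoted above can be sketched in a few lines of Python. This is a rough illustration, not the poster's actual test program: the real test also shrinks /proc/sys/net/ipv4/tcp_mem as root and runs 100 clients against port 8888; here a single loopback connection and an ephemeral port stand in for both.]

```python
import socket
import threading

def start_sink_server():
    """Step 2 of the reproduction: accept() in a loop but never
    call recv(), so incoming data stays queued in the kernel."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # an ephemeral port stands in for 8888
    srv.listen(128)
    held = []                    # hold accepted sockets, never recv()

    def loop():
        while True:
            try:
                conn, _ = srv.accept()
                held.append(conn)
            except OSError:
                return           # listening socket was closed

    threading.Thread(target=loop, daemon=True).start()
    return srv, held

def flood(port, total=1 << 20):
    """Step 3: one client pushing ~1M; a non-blocking socket stops
    once the kernel send/receive buffers fill up."""
    payload = b"x" * total
    cli = socket.create_connection(("127.0.0.1", port))
    cli.setblocking(False)
    sent = 0
    for _ in range(64):
        try:
            sent += cli.send(payload[sent:sent + 65536])
        except BlockingIOError:
            break                # buffers full; data is stuck unread
    return cli, sent

srv, held = start_sink_server()
cli, sent = flood(srv.getsockname()[1])
print("bytes queued with no recv():", sent)
```

With tcp_mem shrunk system-wide, 100 such clients push the server past tcp_mem[2], which is where the thread reports some connections retransmitting to death instead of entering 0-probe.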
On Mon, May 22, 2023 at 11:04 PM Neal Cardwell <ncardwell@google.com> wrote: > > On Sun, May 21, 2023 at 10:55 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > On Sat, May 20, 2023 at 10:28 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > On Sat, May 20, 2023 at 5:08 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > On Fri, May 19, 2023 at 12:03 AM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > > > On Thu, May 18, 2023 at 10:12 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > On Thu, May 18, 2023 at 9:40 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > > > > > > > On Wed, May 17, 2023 at 10:35 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > > > On Wed, May 17, 2023 at 10:47 PM Eric Dumazet <edumazet@google.com> wrote: > > > > > > > > > > > > > > > > > > On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > > > > > > > From: Menglong Dong <imagedong@tencent.com> > > > > > > > > > > > > > > > > > > > > Window shrink is not allowed and also not handled for now, but it's > > > > > > > > > > needed in some case. > > > > > > > > > > > > > > > > > > > > In the origin logic, 0 probe is triggered only when there is no any > > > > > > > > > > data in the retrans queue and the receive window can't hold the data > > > > > > > > > > of the 1th packet in the send queue. > > > > > > > > > > > > > > > > > > > > Now, let's change it and trigger the 0 probe in such cases: > > > > > > > > > > > > > > > > > > > > - if the retrans queue has data and the 1th packet in it is not within > > > > > > > > > > the receive window > > > > > > > > > > - no data in the retrans queue and the 1th packet in the send queue is > > > > > > > > > > out of the end of the receive window > > > > > > > > > > > > > > > > > > Sorry, I do not understand. > > > > > > > > > > > > > > > > > > Please provide packetdrill tests for new behavior like that. 
> > > > > > > > > > > > > > > > > > > > > > > > > Yes. The problem can be reproduced easily. > > > > > > > > > > > > > > > > 1. choose a server machine, decrease it's tcp_mem with: > > > > > > > > echo '1024 1500 2048' > /proc/sys/net/ipv4/tcp_mem > > > > > > > > 2. call listen() and accept() on a port, such as 8888. We call > > > > > > > > accept() looply and without call recv() to make the data stay > > > > > > > > in the receive queue. > > > > > > > > 3. choose a client machine, and create 100 TCP connection > > > > > > > > to the 8888 port of the server. Then, every connection sends > > > > > > > > data about 1M. > > > > > > > > 4. we can see that some of the connection enter the 0-probe > > > > > > > > state, but some of them keep retrans again and again. As > > > > > > > > the server is up to the tcp_mem[2] and skb is dropped before > > > > > > > > the recv_buf full and the connection enter 0-probe state. > > > > > > > > Finially, some of these connection will timeout and break. > > > > > > > > > > > > > > > > With this series, all the 100 connections will enter 0-probe > > > > > > > > status and connection break won't happen. And the data > > > > > > > > trans will recover if we increase tcp_mem or call 'recv()' > > > > > > > > on the sockets in the server. > > > > > > > > > > > > > > > > > Also, such fundamental change would need IETF discussion first. > > > > > > > > > We do not want linux to cause network collapses just because billions > > > > > > > > > of devices send more zero probes. > > > > > > > > > > > > > > > > I think it maybe a good idea to make the connection enter > > > > > > > > 0-probe, rather than drop the skb silently. What 0-probe > > > > > > > > meaning is to wait for space available when the buffer of the > > > > > > > > receive queue is full. And maybe we can also use 0-probe > > > > > > > > when the "buffer" of "TCP protocol" (which means tcp_mem) > > > > > > > > is full? > > > > > > > > > > > > > > > > Am I right? 
> > > > > > > > > > > > > > > > Thanks! > > > > > > > > Menglong Dong > > > > > > > > > > > > > > Thanks for describing the scenario in more detail. (Some kind of > > > > > > > packetdrill script or other program to reproduce this issue would be > > > > > > > nice, too, as Eric noted.) > > > > > > > > > > > > > > You mention in step (4.) above that some of the connections keep > > > > > > > retransmitting again and again. Are those connections receiving any > > > > > > > ACKs in response to their retransmissions? Perhaps they are receiving > > > > > > > dupacks? > > > > > > > > > > > > Actually, these packets are dropped without any reply, even dupacks. > > > > > > skb will be dropped directly when tcp_try_rmem_schedule() > > > > > > fails in tcp_data_queue(). That's reasonable, as it's > > > > > > useless to reply a ack to the sender, which will cause the sender > > > > > > fast retrans the packet, because we are out of memory now, and > > > > > > retrans can't solve the problem. > > > > > > > > > > I'm not sure I see the problem. If retransmits can't solve the > > > > > problem, then why are you proposing that data senders keep > > > > > retransmitting forever (via 0-window-probes) in this kind of scenario? > > > > > > > > > > > > > Because the connection will break if the count of > > > > retransmits up to tcp_retires2, but probe-0 can keep > > > > for a long time. > > > > > > I see. So it sounds like you agree that retransmits can solve the > > > problem, as long as the retransmits are using the zero-window probe > > > state machine (ICSK_TIME_PROBE0, tcp_probe_timer()), which continues > > > as long as the receiver is sending ACKs. And it sounds like when you > > > said "retrans can't solve the problem" you didn't literally mean that > > > retransmits can't solve the problem, but rather you meant that the RTO > > > state machine, specifically (ICSK_TIME_RETRANS, > > > tcp_retransmit_timer(), etc) can't solve the problem. 
I agree with > > > that assessment that in this scenario tcp_probe_timer() seems like a > > > solution but tcp_retransmit_timer() does not. > > > > > > > Yes, that is indeed what I want to express. > > > > > > > A single dupack without SACK blocks will not cause the sender to fast > > > > > retransmit. (Only 3 dupacks would trigger fast retransmit.) > > > > > > > > > > Three or more dupacks without SACK blocks will cause the sender to > > > > > fast retransmit the segment above SND.UNA once if the sender doesn't > > > > > have SACK support. But in this case AFAICT fast-retransmitting once is > > > > > a fine strategy, since the sender should keep retrying transmits (with > > > > > backoff) until the receiver potentially has memory available to > > > > > receive the packet. > > > > > > > > > > > > > > > > > > If so, then perhaps we could solve this problem without > > > > > > > depending on a violation of the TCP spec (which says the receive > > > > > > > window should not be retracted) in the following way: when a data > > > > > > > sender suffers a retransmission timeout, and retransmits the first > > > > > > > unacknowledged segment, and receives a dupack for SND.UNA instead of > > > > > > > an ACK covering the RTO-retransmitted segment, then the data sender > > > > > > > should estimate that the receiver doesn't have enough memory to buffer > > > > > > > the retransmitted packet. In that case, the data sender should enter > > > > > > > the 0-probe state and repeatedly set the ICSK_TIME_PROBE0 timer to > > > > > > > call tcp_probe_timer(). > > > > > > > > > > > > > > Basically we could try to enhance the sender-side logic to try to > > > > > > > distinguish between two kinds of problems: > > > > > > > > > > > > > > (a) Repeated data packet loss caused by congestion, routing problems, > > > > > > > or connectivity problems. 
In this case, the data sender uses > > > > > > > ICSK_TIME_RETRANS and tcp_retransmit_timer(), and backs off and only > > > > > > > retries sysctl_tcp_retries2 times before timing out the connection > > > > > > > > > > > > > > (b) A receiver that is repeatedly sending dupacks but not ACKing > > > > > > > retransmitted data because it doesn't have any memory. In this case, > > > > > > > the data sender uses ICSK_TIME_PROBE0 and tcp_probe_timer(), and backs > > > > > > > off but keeps retrying as long as the data sender receives ACKs. > > > > > > > > > > > > > > > > > > > I'm not sure if this is an ideal method, as it may be not rigorous > > > > > > to conclude that the receiver is oom with dupacks. A packet can > > > > > > loss can also cause multi dupacks. > > > > > > > > > > When a data sender suffers an RTO and retransmits a single data > > > > > packet, it would be very rare for the data sender to receive multiple > > > > > pure dupacks without SACKs. This would only happen in the rare case > > > > > where (a) the connection did not have SACK enabled, and (b) there was > > > > > a hole in the received sequence space and there were still packets in > > > > > flight when the (spurioius) RTO fired. > > > > > > > > > > But if we want to be paranoid, then this new response could be written > > > > > to only trigger if SACK is enabled (the vast, vast majority of cases). > > > > > If SACK is enabled, and an RTO of a data packet starting at sequence > > > > > S1 results in the receiver sending only a dupack for S1 without SACK > > > > > blocks, then this clearly shows the issue is not packet loss but > > > > > suggests a receiver unable to buffer the given data packet, AFAICT. > > > > > > > > > > > > > Yeah, you are right on this point, multi pure dupacks can > > > > mean out of memory of the receiver. But we still need to > > > > know if the receiver recovers from OOM. 
Without window > > > > shrink, the window in the ack of zero-window probe packet > > > > is not zero on OOM. > > > > > > But do we need a protocol-violating zero-window in this case? Why not > > > use my approach suggested above: conveying the OOM condition by > > > sending an ACK but not ACKing the retransmitted packet? > > > > > > > I agree with you about the approach you mentioned > > about conveying the OOM condition. But that approach > > can't convey the recovery from OOM, can it? > > Yes, my suggested approach can convey the recovery from OOM. The data > receiver conveys the recovery from OOM by buffering and ACKing the > retransmitted data packet. Oh, I understand what you mean now. You are saying that retransmit that first packet in the retransmit queue instead of zero-window probe packet when OOM of the receiver, isn't it? In other word, retransmit the unacked data and ignore the tcp_retries2 when we find the receiver is in OOM state. That's an option, and we can make the length of the data we send to 1 byte, which means we keep retransmitting the first byte that has not be acked in the retransmit queue. > > > Let's see the process. With 3 pure dupack for SND.UNA, > > we deem the OOM of the receiver and make the sender > > enter zero-window probe state. > > AFAICT the data sender does not need to wait for 3 pure dpacks for > SND.UNA. AFAICT a data sender that suffers an RTO and finds that its > RTO gets a response that is a single dupack for SND.UNA could estimate > that the receiver is OOM, and enter the zero-window probe state. > > > The sender will keep sending probe0 packets, and the > > receiver will reply an ack. However, as we don't > > shrink the window actually, the window in the ack is > > not zero on OOM, so we can't know if the receiver has > > recovered from OOM and retransmit the data in retransmit > > queue. 
> > As I noted above, in my proposal the data receiver conveys the > recovery from OOM by buffering and ACKing the retransmitted data > packet. > > > BTW, the probe0 will send the last byte that was already > > acked, so the ack of the probe0 will be a pure dupack. > > > > Did I miss something? > > I don't think that's the case. My read of tcp_write_wakeup() is that > it will send an skb that is whatever prefix of 1 MSS is allowed by the > receiver window. In the scenario we are discussing, that would mean > that it sends a full 1 MSS. So AFAICT tcp_write_wakeup() could be used > for sending a data "probe" packet for this OOM case. > > > BTW, a previous patch has explained the need to > > support window shrink, which should satisfy the RFC > > of TCP protocol: > > > > https://lore.kernel.org/netdev/20230308053353.675086-1-mfreemon@cloudflare.com/ > > Let's see what Eric thinks about that patch. > > neal > > > > Thanks! > > Menglong Dong > > > > > Thanks, > > > neal > > > > > > > Hi, Eric and kuba, do you have any comments on this > > > > case? > > > > > > > > Thanks! > > > > Menglong Dong > > > > > > > > > thanks, > > > > > neal > > > > > > > > > > > > > > > > > Thanks! > > > > > > Menglong Dong > > > > > > > > > > > > > AFAICT that would be another way to reach the happy state you mention: > > > > > > > "all the 100 connections will enter 0-probe status and connection > > > > > > > break won't happen", and we could reach that state without violating > > > > > > > the TCP protocol spec and without requiring changes on the receiver > > > > > > > side (so that this fix could help in scenarios where the > > > > > > > memory-constrained receiver is an older stack without special new > > > > > > > behavior). > > > > > > > > > > > > > > Eric, Yuchung, Menglong: do you think something like that would work? > > > > > > > > > > > > > > neal
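[Editor's note: the sender-side heuristic discussed in this exchange — enter the probe0 state machine when an RTO retransmission draws only a bare dupack for SND.UNA, and leave it once the retransmitted data is ACKed — can be modeled as below. This is a pure illustration; the `Sender` class and its fields are invented for the sketch and are not kernel structures.]

```python
from dataclasses import dataclass

PROBE0 = "ICSK_TIME_PROBE0"    # probe timer: backs off, keeps going while ACKs arrive
RETRANS = "ICSK_TIME_RETRANS"  # RTO timer: bounded by tcp_retries2

@dataclass
class Sender:
    snd_una: int                     # first unacknowledged sequence number
    timer: str = RETRANS
    rto_retransmitted: bool = False  # did we just RTO-retransmit snd_una?

    def on_rto(self):
        # RTO fired: retransmit the first unacked segment and remember it.
        self.rto_retransmitted = True
        self.timer = RETRANS

    def on_ack(self, ack_seq, sack_blocks):
        if ack_seq > self.snd_una:
            # The retransmission was buffered and ACKed: if the receiver
            # was OOM, this is how it signals recovery.
            self.snd_una = ack_seq
            self.rto_retransmitted = False
            self.timer = RETRANS
        elif self.rto_retransmitted and not sack_blocks:
            # Bare dupack for SND.UNA right after an RTO retransmit, with
            # no SACK blocks: real loss would normally carry SACK info,
            # so estimate the receiver dropped the data for lack of
            # memory and switch to the zero-window-probe state machine.
            self.timer = PROBE0

s = Sender(snd_una=1000)
s.on_rto()
s.on_ack(1000, sack_blocks=[])   # bare dupack -> treat as receiver OOM
print(s.timer)                   # ICSK_TIME_PROBE0
s.on_ack(2460, sack_blocks=[])   # retransmit finally ACKed -> recovered
print(s.timer)                   # ICSK_TIME_RETRANS
```

The key property matches the thread: the connection never burns through tcp_retries2 while the receiver keeps ACKing, and recovery needs no window-shrink signal at all.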
On Mon, May 22, 2023 at 5:04 PM Neal Cardwell <ncardwell@google.com> wrote: > > On Sun, May 21, 2023 at 10:55 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > BTW, a previous patch has explained the need to > > support window shrink, which should satisfy the RFC > > of TCP protocol: > > > > https://lore.kernel.org/netdev/20230308053353.675086-1-mfreemon@cloudflare.com/ > > Let's see what Eric thinks about that patch. I have not received a copy of the patch, so I will not comment on it.
On Tue, May 23, 2023 at 4:59 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > On Mon, May 22, 2023 at 11:04 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > On Sun, May 21, 2023 at 10:55 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > On Sat, May 20, 2023 at 10:28 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > On Sat, May 20, 2023 at 5:08 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > On Fri, May 19, 2023 at 12:03 AM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > > > > > On Thu, May 18, 2023 at 10:12 AM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > On Thu, May 18, 2023 at 9:40 PM Neal Cardwell <ncardwell@google.com> wrote: > > > > > > > > > > > > > > > > On Wed, May 17, 2023 at 10:35 PM Menglong Dong <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > > > > > On Wed, May 17, 2023 at 10:47 PM Eric Dumazet <edumazet@google.com> wrote: > > > > > > > > > > > > > > > > > > > > On Wed, May 17, 2023 at 2:42 PM <menglong8.dong@gmail.com> wrote: > > > > > > > > > > > > > > > > > > > > > > From: Menglong Dong <imagedong@tencent.com> > > > > > > > > > > > > > > > > > > > > > > Window shrink is not allowed and also not handled for now, but it's > > > > > > > > > > > needed in some case. > > > > > > > > > > > > > > > > > > > > > > In the origin logic, 0 probe is triggered only when there is no any > > > > > > > > > > > data in the retrans queue and the receive window can't hold the data > > > > > > > > > > > of the 1th packet in the send queue. 
> > > > > > > > > > > > > > > > > > > > > > Now, let's change it and trigger the 0 probe in such cases: > > > > > > > > > > > > > > > > > > > > > > - if the retrans queue has data and the 1th packet in it is not within > > > > > > > > > > > the receive window > > > > > > > > > > > - no data in the retrans queue and the 1th packet in the send queue is > > > > > > > > > > > out of the end of the receive window > > > > > > > > > > > > > > > > > > > > Sorry, I do not understand. > > > > > > > > > > > > > > > > > > > > Please provide packetdrill tests for new behavior like that. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Yes. The problem can be reproduced easily. > > > > > > > > > > > > > > > > > > 1. choose a server machine, decrease it's tcp_mem with: > > > > > > > > > echo '1024 1500 2048' > /proc/sys/net/ipv4/tcp_mem > > > > > > > > > 2. call listen() and accept() on a port, such as 8888. We call > > > > > > > > > accept() looply and without call recv() to make the data stay > > > > > > > > > in the receive queue. > > > > > > > > > 3. choose a client machine, and create 100 TCP connection > > > > > > > > > to the 8888 port of the server. Then, every connection sends > > > > > > > > > data about 1M. > > > > > > > > > 4. we can see that some of the connection enter the 0-probe > > > > > > > > > state, but some of them keep retrans again and again. As > > > > > > > > > the server is up to the tcp_mem[2] and skb is dropped before > > > > > > > > > the recv_buf full and the connection enter 0-probe state. > > > > > > > > > Finially, some of these connection will timeout and break. > > > > > > > > > > > > > > > > > > With this series, all the 100 connections will enter 0-probe > > > > > > > > > status and connection break won't happen. And the data > > > > > > > > > trans will recover if we increase tcp_mem or call 'recv()' > > > > > > > > > on the sockets in the server. 
> > > > > > > > > > > > > > > > > > > Also, such fundamental change would need IETF discussion first. > > > > > > > > > > We do not want linux to cause network collapses just because billions > > > > > > > > > > of devices send more zero probes. > > > > > > > > > > > > > > > > > > I think it maybe a good idea to make the connection enter > > > > > > > > > 0-probe, rather than drop the skb silently. What 0-probe > > > > > > > > > meaning is to wait for space available when the buffer of the > > > > > > > > > receive queue is full. And maybe we can also use 0-probe > > > > > > > > > when the "buffer" of "TCP protocol" (which means tcp_mem) > > > > > > > > > is full? > > > > > > > > > > > > > > > > > > Am I right? > > > > > > > > > > > > > > > > > > Thanks! > > > > > > > > > Menglong Dong > > > > > > > > > > > > > > > > Thanks for describing the scenario in more detail. (Some kind of > > > > > > > > packetdrill script or other program to reproduce this issue would be > > > > > > > > nice, too, as Eric noted.) > > > > > > > > > > > > > > > > You mention in step (4.) above that some of the connections keep > > > > > > > > retransmitting again and again. Are those connections receiving any > > > > > > > > ACKs in response to their retransmissions? Perhaps they are receiving > > > > > > > > dupacks? > > > > > > > > > > > > > > Actually, these packets are dropped without any reply, even dupacks. > > > > > > > skb will be dropped directly when tcp_try_rmem_schedule() > > > > > > > fails in tcp_data_queue(). That's reasonable, as it's > > > > > > > useless to reply a ack to the sender, which will cause the sender > > > > > > > fast retrans the packet, because we are out of memory now, and > > > > > > > retrans can't solve the problem. > > > > > > > > > > > > I'm not sure I see the problem. If retransmits can't solve the > > > > > > problem, then why are you proposing that data senders keep > > > > > > retransmitting forever (via 0-window-probes) in this kind of scenario? 
> > > > > > > > > > > > > > > > Because the connection will break if the count of > > > > > retransmits up to tcp_retires2, but probe-0 can keep > > > > > for a long time. > > > > > > > > I see. So it sounds like you agree that retransmits can solve the > > > > problem, as long as the retransmits are using the zero-window probe > > > > state machine (ICSK_TIME_PROBE0, tcp_probe_timer()), which continues > > > > as long as the receiver is sending ACKs. And it sounds like when you > > > > said "retrans can't solve the problem" you didn't literally mean that > > > > retransmits can't solve the problem, but rather you meant that the RTO > > > > state machine, specifically (ICSK_TIME_RETRANS, > > > > tcp_retransmit_timer(), etc) can't solve the problem. I agree with > > > > that assessment that in this scenario tcp_probe_timer() seems like a > > > > solution but tcp_retransmit_timer() does not. > > > > > > > > > > Yes, that is indeed what I want to express. > > > > > > > > > A single dupack without SACK blocks will not cause the sender to fast > > > > > > retransmit. (Only 3 dupacks would trigger fast retransmit.) > > > > > > > > > > > > Three or more dupacks without SACK blocks will cause the sender to > > > > > > fast retransmit the segment above SND.UNA once if the sender doesn't > > > > > > have SACK support. But in this case AFAICT fast-retransmitting once is > > > > > > a fine strategy, since the sender should keep retrying transmits (with > > > > > > backoff) until the receiver potentially has memory available to > > > > > > receive the packet. 
> > > > > > > > > > > > > > > > > > > > > If so, then perhaps we could solve this problem without > > > > > > > > depending on a violation of the TCP spec (which says the receive > > > > > > > > window should not be retracted) in the following way: when a data > > > > > > > > sender suffers a retransmission timeout, and retransmits the first > > > > > > > > unacknowledged segment, and receives a dupack for SND.UNA instead of > > > > > > > > an ACK covering the RTO-retransmitted segment, then the data sender > > > > > > > > should estimate that the receiver doesn't have enough memory to buffer > > > > > > > > the retransmitted packet. In that case, the data sender should enter > > > > > > > > the 0-probe state and repeatedly set the ICSK_TIME_PROBE0 timer to > > > > > > > > call tcp_probe_timer(). > > > > > > > > > > > > > > > > Basically we could try to enhance the sender-side logic to try to > > > > > > > > distinguish between two kinds of problems: > > > > > > > > > > > > > > > > (a) Repeated data packet loss caused by congestion, routing problems, > > > > > > > > or connectivity problems. In this case, the data sender uses > > > > > > > > ICSK_TIME_RETRANS and tcp_retransmit_timer(), and backs off and only > > > > > > > > retries sysctl_tcp_retries2 times before timing out the connection > > > > > > > > > > > > > > > > (b) A receiver that is repeatedly sending dupacks but not ACKing > > > > > > > > retransmitted data because it doesn't have any memory. In this case, > > > > > > > > the data sender uses ICSK_TIME_PROBE0 and tcp_probe_timer(), and backs > > > > > > > > off but keeps retrying as long as the data sender receives ACKs. > > > > > > > > > > > > > > > > > > > > > > I'm not sure if this is an ideal method, as it may be not rigorous > > > > > > > to conclude that the receiver is oom with dupacks. A packet can > > > > > > > loss can also cause multi dupacks. 
> > > > > > > > > > > > When a data sender suffers an RTO and retransmits a single data > > > > > > packet, it would be very rare for the data sender to receive multiple > > > > > > pure dupacks without SACKs. This would only happen in the rare case > > > > > > where (a) the connection did not have SACK enabled, and (b) there was > > > > > > a hole in the received sequence space and there were still packets in > > > > > > flight when the (spurioius) RTO fired. > > > > > > > > > > > > But if we want to be paranoid, then this new response could be written > > > > > > to only trigger if SACK is enabled (the vast, vast majority of cases). > > > > > > If SACK is enabled, and an RTO of a data packet starting at sequence > > > > > > S1 results in the receiver sending only a dupack for S1 without SACK > > > > > > blocks, then this clearly shows the issue is not packet loss but > > > > > > suggests a receiver unable to buffer the given data packet, AFAICT. > > > > > > > > > > > > > > > > Yeah, you are right on this point, multi pure dupacks can > > > > > mean out of memory of the receiver. But we still need to > > > > > know if the receiver recovers from OOM. Without window > > > > > shrink, the window in the ack of zero-window probe packet > > > > > is not zero on OOM. > > > > > > > > But do we need a protocol-violating zero-window in this case? Why not > > > > use my approach suggested above: conveying the OOM condition by > > > > sending an ACK but not ACKing the retransmitted packet? > > > > > > > > > > I agree with you about the approach you mentioned > > > about conveying the OOM condition. But that approach > > > can't convey the recovery from OOM, can it? > > > > Yes, my suggested approach can convey the recovery from OOM. The data > > receiver conveys the recovery from OOM by buffering and ACKing the > > retransmitted data packet. > > Oh, I understand what you mean now. 
You are saying that > retransmit that first packet in the retransmit queue instead > of zero-window probe packet when OOM of the receiver, > isn't it? In other word, retransmit the unacked data and ignore > the tcp_retries2 when we find the receiver is in OOM state. Yes. The idea would be to use a heuristic to estimate the receiver is currently OOM and use ICSK_TIME_PROBE0 / tcp_probe_timer() / tcp_write_wakeup() in this case instead of ICSK_TIME_RETRANS / tcp_retransmit_timer(). > That's an option, and we can make the length of the data we > send to 1 byte, which means we keep retransmitting the first > byte that has not be acked in the retransmit queue. I don't think it would be worth adding special-case code to only send 1 byte. If the data receiver is not OOM then for CPU and memory efficiency reasons (as well as simplicity) the data sender should send it a full MSS. So for those reasons I would suggest that in this potential approach tcp_write_wakeup() should stay the same. neal
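[Editor's note: Neal's reading of tcp_write_wakeup() above — the probe carries whatever prefix of one MSS the advertised window allows — can be summarized by a tiny helper. This is a simplified model: `probe_payload_len` is an invented name, and it ignores details such as how the kernel actually builds the 1-byte zero-window probe.]

```python
def probe_payload_len(mss, rcv_window, queued_bytes):
    """How much data a wakeup probe could carry: whatever prefix of
    one MSS the advertised window allows.  With a zero window, the
    classic probe carries a single byte beyond the window edge."""
    if rcv_window == 0:
        return 1
    return min(mss, rcv_window, queued_bytes)

print(probe_payload_len(1460, 5000, 10000))  # full MSS: 1460
print(probe_payload_len(1460, 500, 10000))   # window-limited: 500
print(probe_payload_len(1460, 0, 10000))     # zero-window probe: 1
```

This is why Neal argues against a special 1-byte path: when the receiver is not OOM, sending the full allowed prefix is more CPU- and memory-efficient with no extra code.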
On Tue, May 23, 2023 at 9:27 PM Neal Cardwell <ncardwell@google.com> wrote:
> > Oh, I understand what you mean now. You are saying that we should
> > retransmit the first packet in the retransmit queue instead
> > of the zero-window probe packet when the receiver is OOM,
> > right? In other words, retransmit the unacked data and ignore
> > tcp_retries2 when we find the receiver is in the OOM state.
>
> Yes. The idea would be to use a heuristic to estimate the receiver is
> currently OOM and use ICSK_TIME_PROBE0 / tcp_probe_timer() /
> tcp_write_wakeup() in this case instead of ICSK_TIME_RETRANS /
> tcp_retransmit_timer().
>

Well, I think that maybe we should use ICSK_TIME_PROBE0 /
tcp_probe_timer() / tcp_retransmit_skb() instead, shouldn't we?

What tcp_write_wakeup() does is send new data if the receive window
is available, which means pushing new data into the retransmit
queue. However, what we need to do now is retransmit the first
packet in the rtx queue, isn't it?

In tcp_ack(), we estimate whether the receiver is OOM, mark the
sk with the OOM state, and arm ICSK_TIME_PROBE0.
When new data is acked, we leave the OOM state.

The OOM state can only happen when the rtx queue is not empty;
otherwise the TCP connection will enter the normal zero-window
probe state. So when ICSK_TIME_PROBE0 times out, we need to
retransmit the skb in the rtx queue.

tcp_write_wakeup() doesn't do the job of retransmitting a packet;
it only sends new data.

Am I right?

Thanks!
Menglong Dong

> > That's an option, and we could make the length of the data we
> > send 1 byte, which means we keep retransmitting the first
> > byte that has not been acked in the retransmit queue.
>
> I don't think it would be worth adding special-case code to only send
> 1 byte. If the data receiver is not OOM then for CPU and memory
> efficiency reasons (as well as simplicity) the data sender should send
> it a full MSS. So for those reasons I would suggest that in this
> potential approach tcp_write_wakeup() should stay the same.
>
> neal
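[Editor's note: the state transitions Menglong describes above can be sketched as a small user-space toy model. The names are illustrative stand-ins, not kernel code: enter an "OOM" state when an ACK suggests the receiver is out of memory, leave it once new data is acked, and on each probe0 timeout retransmit the head of the retransmit queue instead of sending new data.]

```c
#include <assert.h>
#include <stdbool.h>

struct oom_probe_state {
	bool oom;		/* receiver estimated to be OOM */
	int head_retransmits;	/* head-of-rtx-queue retransmissions issued */
};

/* Called for every incoming ACK (roughly where tcp_ack() would hook). */
static void on_ack(struct oom_probe_state *st, bool looks_oom,
		   bool new_data_acked)
{
	if (new_data_acked)
		st->oom = false;	/* receiver recovered from OOM */
	else if (looks_oom)
		st->oom = true;		/* arm probe0 instead of the RTO */
}

/* Called when the probe0-style timer fires. The OOM state only exists
 * while the rtx queue is non-empty; with an empty rtx queue this is
 * just the normal zero-window probe. */
static void on_probe0_timeout(struct oom_probe_state *st,
			      bool rtx_queue_empty)
{
	if (st->oom && !rtx_queue_empty)
		st->head_retransmits++;	/* retransmit head of rtx queue */
}
```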
On Wed, May 24, 2023 at 8:16 AM Menglong Dong <menglong8.dong@gmail.com> wrote:
>
> On Tue, May 23, 2023 at 9:27 PM Neal Cardwell <ncardwell@google.com> wrote:
> > > Oh, I understand what you mean now. You are saying that we should
> > > retransmit the first packet in the retransmit queue instead
> > > of the zero-window probe packet when the receiver is OOM,
> > > right? In other words, retransmit the unacked data and ignore
> > > tcp_retries2 when we find the receiver is in the OOM state.
> >
> > Yes. The idea would be to use a heuristic to estimate the receiver is
> > currently OOM and use ICSK_TIME_PROBE0 / tcp_probe_timer() /
> > tcp_write_wakeup() in this case instead of ICSK_TIME_RETRANS /
> > tcp_retransmit_timer().
> >
>
> Well, I think that maybe we should use ICSK_TIME_PROBE0 /
> tcp_probe_timer() / tcp_retransmit_skb() instead, shouldn't we?
>
> What tcp_write_wakeup() does is send new data if the receive window
> is available, which means pushing new data into the retransmit
> queue. However, what we need to do now is retransmit the first
> packet in the rtx queue, isn't it?
>
> In tcp_ack(), we estimate whether the receiver is OOM, mark the
> sk with the OOM state, and arm ICSK_TIME_PROBE0.
> When new data is acked, we leave the OOM state.
>
> The OOM state can only happen when the rtx queue is not empty;
> otherwise the TCP connection will enter the normal zero-window
> probe state. So when ICSK_TIME_PROBE0 times out, we need to
> retransmit the skb in the rtx queue.
>
> tcp_write_wakeup() doesn't do the job of retransmitting a packet;
> it only sends new data.
>
> Am I right?

Yes, that's a good point that the tcp_write_wakeup() code is not
currently a good fit for the OOM case, since it currently can only
send unsent data.

cheers,
neal

> Thanks!
> Menglong Dong
>
> > > That's an option, and we could make the length of the data we
> > > send 1 byte, which means we keep retransmitting the first
> > > byte that has not been acked in the retransmit queue.
> >
> > I don't think it would be worth adding special-case code to only send
> > 1 byte. If the data receiver is not OOM then for CPU and memory
> > efficiency reasons (as well as simplicity) the data sender should send
> > it a full MSS. So for those reasons I would suggest that in this
> > potential approach tcp_write_wakeup() should stay the same.
> >
> > neal
diff --git a/include/net/tcp.h b/include/net/tcp.h
index a6cf6d823e34..9625d0bf00e1 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1910,6 +1910,27 @@ static inline void tcp_add_write_queue_tail(struct sock *sk, struct sk_buff *skb
 	tcp_chrono_start(sk, TCP_CHRONO_BUSY);
 }
 
+static inline bool tcp_rtx_overflow(const struct sock *sk)
+{
+	struct sk_buff *rtx_head = tcp_rtx_queue_head(sk);
+
+	return rtx_head && after(TCP_SKB_CB(rtx_head)->end_seq,
+				 tcp_wnd_end(tcp_sk(sk)));
+}
+
+static inline bool tcp_probe0_needed(const struct sock *sk)
+{
+	/* for the normal case */
+	if (!tcp_sk(sk)->packets_out && !tcp_write_queue_empty(sk))
+		return true;
+
+	if (!sysctl_tcp_wnd_shrink)
+		return false;
+
+	/* for the window shrink case */
+	return tcp_rtx_overflow(sk);
+}
+
 /* Insert new before skb on the write queue of sk.  */
 static inline void tcp_insert_write_queue_before(struct sk_buff *new,
 						 struct sk_buff *skb,
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 56e395cb4554..a9ac295502ee 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3188,6 +3188,14 @@ void tcp_rearm_rto(struct sock *sk)
 /* Try to schedule a loss probe; if that doesn't work, then schedule an RTO. */
 static void tcp_set_xmit_timer(struct sock *sk)
 {
+	/* Check if we are already in probe0 state, which means it's
+	 * not needed to schedule the RTO. The normal probe0 can't reach
+	 * here, so it must be window-shrink probe0 case here.
+	 */
+	if (unlikely(inet_csk(sk)->icsk_pending == ICSK_TIME_PROBE0) &&
+	    sysctl_tcp_wnd_shrink)
+		return;
+
 	if (!tcp_schedule_loss_probe(sk, true))
 		tcp_rearm_rto(sk);
 }
@@ -3465,6 +3473,38 @@ static void tcp_ack_probe(struct sock *sk)
 	}
 }
 
+/**
+ * This function is called only when there are packets in the rtx queue,
+ * which means that the packets out is not 0.
+ *
+ * NOTE: we only handle window shrink case in this part.
+ */
+static void tcp_ack_probe_shrink(struct sock *sk)
+{
+	struct inet_connection_sock *icsk = inet_csk(sk);
+	unsigned long when;
+
+	if (!sysctl_tcp_wnd_shrink)
+		return;
+
+	if (tcp_rtx_overflow(sk)) {
+		when = tcp_probe0_when(sk, TCP_RTO_MAX);
+
+		when = tcp_clamp_probe0_to_user_timeout(sk, when);
+		tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, TCP_RTO_MAX);
+	} else {
+		/* check if recover from window shrink */
+		if (icsk->icsk_pending != ICSK_TIME_PROBE0)
+			return;
+
+		icsk->icsk_backoff = 0;
+		icsk->icsk_probes_tstamp = 0;
+		inet_csk_clear_xmit_timer(sk, ICSK_TIME_PROBE0);
+		if (!tcp_rtx_queue_empty(sk))
+			tcp_retransmit_timer(sk);
+	}
+}
+
 static inline bool tcp_ack_is_dubious(const struct sock *sk, const int flag)
 {
 	return !(flag & FLAG_NOT_DUP) || (flag & FLAG_CA_ALERT) ||
@@ -3908,6 +3948,7 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
 	if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP))
 		sk_dst_confirm(sk);
 
+	tcp_ack_probe_shrink(sk);
 	delivered = tcp_newly_delivered(sk, delivered, flag);
 	lost = tp->lost - lost;			/* freshly marked lost */
 	rs.is_ack_delayed = !!(flag & FLAG_ACK_MAYBE_DELAYED);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 21dc4f7e0a12..eac0532edb61 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -4089,14 +4089,13 @@ int tcp_write_wakeup(struct sock *sk, int mib)
 void tcp_send_probe0(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
-	struct tcp_sock *tp = tcp_sk(sk);
 	struct net *net = sock_net(sk);
 	unsigned long timeout;
 	int err;
 
 	err = tcp_write_wakeup(sk, LINUX_MIB_TCPWINPROBE);
 
-	if (tp->packets_out || tcp_write_queue_empty(sk)) {
+	if (!tcp_probe0_needed(sk)) {
 		/* Cancel probe timer, if it is not required. */
 		icsk->icsk_probes_out = 0;
 		icsk->icsk_backoff = 0;
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index b839c2f91292..a28606291b7e 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -350,11 +350,9 @@ static void tcp_delack_timer(struct timer_list *t)
 static void tcp_probe_timer(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
-	struct sk_buff *skb = tcp_send_head(sk);
-	struct tcp_sock *tp = tcp_sk(sk);
 	int max_probes;
 
-	if (tp->packets_out || !skb) {
+	if (!tcp_probe0_needed(sk)) {
 		icsk->icsk_probes_out = 0;
 		icsk->icsk_probes_tstamp = 0;
 		return;
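[Editor's note: the core of the patch's tcp_rtx_overflow() test can be modeled in user space as follows. This is an illustrative sketch, not kernel code: rtx_overflow() and seq_after() are stand-ins for the patch's helper and the kernel's after() macro, and the window end is computed as snd_una + snd_wnd, which is what tcp_wnd_end() returns.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Wraparound-safe "seq1 is after seq2", mirroring the kernel's after()
 * comparison on 32-bit TCP sequence numbers. */
static bool seq_after(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq2 - seq1) < 0;
}

/* Model of the patch's tcp_rtx_overflow() check: the head of the
 * retransmit queue no longer fits the (possibly shrunken) receive
 * window when its end_seq lies beyond snd_una + snd_wnd. */
static bool rtx_overflow(bool rtx_queue_nonempty, uint32_t head_end_seq,
			 uint32_t snd_una, uint32_t snd_wnd)
{
	uint32_t wnd_end = snd_una + snd_wnd;	/* tcp_wnd_end() */

	return rtx_queue_nonempty && seq_after(head_end_seq, wnd_end);
}
```

In the patch, this condition both keeps tcp_send_probe0() re-arming the probe timer (via tcp_probe0_needed()) and tells tcp_ack_probe_shrink() whether the connection is still stuck behind the shrunken window or can fall back to the normal retransmission path.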