Message ID | 39b2e9fd-601b-189d-39a9-914e5574524c@sberdevices.ru |
---|---|
From | Arseniy Krasnov <AVKrasnov@sberdevices.ru> |
To | Stefano Garzarella <sgarzare@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>, edumazet@google.com, "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com> |
Cc | linux-kernel@vger.kernel.org, netdev@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm@vger.kernel.org, kernel@sberdevices.ru, Krasnov Arseniy <oxffffaa@gmail.com> |
Subject | [RFC PATCH v1 0/2] virtio/vsock: fix mutual rx/tx hungup |
Date | Sat, 17 Dec 2022 19:42:04 +0000 |
Series |
virtio/vsock: fix mutual rx/tx hungup
Message
Arseniy Krasnov
Dec. 17, 2022, 7:42 p.m. UTC
Hello,

It seems I found a strange thing (maybe a bug) where the sender ('tx' below) and the receiver ('rx' below) can get stuck forever. A potential fix is in the first patch; the second patch contains a reproducer based on the vsock test suite. The reproducer is simple: tx sends data to rx with the 'write()' syscall, rx dequeues it with the 'read()' syscall and waits using 'poll()'. I run the server in the host and the client in the guest.

rx side params:
1) SO_VM_SOCKETS_BUFFER_SIZE is 256Kb (i.e. the default).
2) SO_RCVLOWAT is 128Kb.

What happens in the reproducer, step by step:
1) tx tries to send 256Kb + 1 byte (in a single 'write()')
2) tx sends 256Kb, data reaches rx (rx_bytes == 256Kb)
3) tx waits for space in 'write()' to send the last 1 byte
4) rx does poll(), (rx_bytes >= rcvlowat) 256Kb >= 128Kb, POLLIN is set
5) rx reads 64Kb, credit update is not sent due to *
6) rx does poll(), (rx_bytes >= rcvlowat) 192Kb >= 128Kb, POLLIN is set
7) rx reads 64Kb, credit update is not sent due to *
8) rx does poll(), (rx_bytes >= rcvlowat) 128Kb >= 128Kb, POLLIN is set
9) rx reads 64Kb, credit update is not sent due to *
10) rx does poll(), (rx_bytes < rcvlowat) 64Kb < 128Kb, rx waits in poll()

* is the optimization in 'virtio_transport_stream_do_dequeue()' which sends OP_CREDIT_UPDATE only when there is not much free space left: less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.

Now the tx side waits for space inside 'write()' and rx waits in 'poll()' for 'rx_bytes' to reach the SO_RCVLOWAT value. Both sides will wait forever. I think a possible fix is to send a credit update not only when little free space is left, but also when the number of bytes in the receive queue is smaller than SO_RCVLOWAT and thus not enough to wake up the sleeping reader. I'm not sure about the correctness of this idea, but in any case I think the problem above exists. What do you think?

The patchset was rebased and tested on the skbuff v7 patch from Bobby Eshleman:
https://lore.kernel.org/netdev/20221213192843.421032-1-bobby.eshleman@bytedance.com/

Arseniy Krasnov (2):
  virtio/vsock: send credit update depending on SO_RCVLOWAT
  vsock_test: mutual hungup reproducer

 net/vmw_vsock/virtio_transport_common.c |  9 +++-
 tools/testing/vsock/vsock_test.c        | 78 +++++++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+), 2 deletions(-)

-- 
2.25.1
Comments
Hi Arseniy,

On Sat, Dec 17, 2022 at 8:42 PM Arseniy Krasnov <AVKrasnov@sberdevices.ru> wrote:
>
> Hello,
>
> seems I found strange thing(may be a bug) where sender('tx' later) and
> receiver('rx' later) could stuck forever. [...]
>
> rx side params:
> 1) SO_VM_SOCKETS_BUFFER_SIZE is 256Kb(e.g. default).
> 2) SO_RCVLOWAT is 128Kb.
>
> What happens in the reproducer step by step:

I put the values of the variables involved to facilitate understanding:

RX: buf_alloc = 256 KB; fwd_cnt = 0; last_fwd_cnt = 0;
    free_space = buf_alloc - (fwd_cnt - last_fwd_cnt) = 256 KB

The credit update is sent if
free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE [64 KB]

> 1) tx tries to send 256Kb + 1 byte (in a single 'write()')
> 2) tx sends 256Kb, data reaches rx (rx_bytes == 256Kb)
> 3) tx waits for space in 'write()' to send last 1 byte
> 4) rx does poll(), (rx_bytes >= rcvlowat) 256Kb >= 128Kb, POLLIN is set
> 5) rx reads 64Kb, credit update is not sent due to *

RX: buf_alloc = 256 KB; fwd_cnt = 64 KB; last_fwd_cnt = 0;
    free_space = 192 KB

> 6) rx does poll(), (rx_bytes >= rcvlowat) 192Kb >= 128Kb, POLLIN is set
> 7) rx reads 64Kb, credit update is not sent due to *

RX: buf_alloc = 256 KB; fwd_cnt = 128 KB; last_fwd_cnt = 0;
    free_space = 128 KB

> 8) rx does poll(), (rx_bytes >= rcvlowat) 128Kb >= 128Kb, POLLIN is set
> 9) rx reads 64Kb, credit update is not sent due to *

Right, (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) is still false.

RX: buf_alloc = 256 KB; fwd_cnt = 192 KB; last_fwd_cnt = 0;
    free_space = 64 KB

> 10) rx does poll(), (rx_bytes < rcvlowat) 64Kb < 128Kb, rx waits in poll()

I agree that the TX is stuck because we are not sending the credit
update, but also if RX sends the credit update at step 9, RX won't be
woken up at step 10, right?

> * is optimization in 'virtio_transport_stream_do_dequeue()' which
> sends OP_CREDIT_UPDATE only when we have not too much space -
> less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
>
> Now tx side waits for space inside write() and rx waits in poll() for
> 'rx_bytes' to reach SO_RCVLOWAT value. Both sides will wait forever. [...]
> What do You think?

I'm not sure, I have to think more about it, but if RX reads less than
SO_RCVLOWAT, I expect it's normal to get to a case of stuck.

In this case we are only unstucking TX, but even if it sends that single
byte, RX is still stuck and not consuming it, so it was useless to wake
up TX if RX won't consume it anyway, right?

If RX woke up (e.g. SO_RCVLOWAT = 64KB) and read the remaining 64KB,
then it would still send the credit update even without this patch and
TX will send the 1 byte.

Thanks,
Stefano
On 19.12.2022 18:41, Stefano Garzarella wrote:

Hello!

> I agree that the TX is stuck because we are not sending the credit
> update, but also if RX sends the credit update at step 9, RX won't be
> woken up at step 10, right?

Yes, RX will sleep, but TX will wake up, and since we inform TX how much
free space we have, there are now two cases for TX:
1) send a "small" rest of the data (i.e. without blocking again), leave
   'write()' and continue execution. RX still waits in 'poll()'. Later TX
   will send enough data to wake up RX.
2) send a "big" rest of the data: if the rest is too big to leave 'write()',
   TX will wait again for free space, and it will be able to send enough
   data to wake up RX, since we compared 'rx_bytes' with the rcvlowat value
   on the RX side.

> In this case we are only unstucking TX, but even if it sends that single
> byte, RX is still stuck and not consuming it, so it was useless to wake
> up TX if RX won't consume it anyway, right?

1) I think it is not useless, because we inform (not just wake up) TX that
there is free space at the RX side, as I mentioned above.
2) Anyway, I think this situation is a little bit strange: TX thinks that
there is no free space at RX and waits for it, but there is free space at
RX! At the same time, RX waits in poll() forever: it is ready to get a new
portion of data and return POLLIN, but TX "thinks" exactly the opposite,
that RX is full of data. Of course, if these were just stalls in TX data
handling it would be ok, just performance degradation, but TX is stuck
forever.

> If RX woke up (e.g. SO_RCVLOWAT = 64KB) and read the remaining 64KB,
> then it would still send the credit update even without this patch and
> TX will send the 1 byte.

But how will RX wake up in this case? E.g. it calls poll() without a
timeout, the connection is established, and RX ignores signals.

Thanks,
Arseniy
On Tue, Dec 20, 2022 at 07:14:27AM +0000, Arseniy Krasnov wrote:
>Yes, RX will sleep, but TX will wake up and as we inform TX how much
>free space we have, now there are two cases for TX:
>1) send "small" rest of data(e.g. without blocking again), leave 'write()'
>   and continue execution. RX still waits in 'poll()'. Later TX will
>   send enough data to wake up RX.
>2) send "big" rest of data - if rest is too big to leave 'write()' and TX
>   will wait again for the free space - it will be able to send enough data
>   to wake up RX as we compared 'rx_bytes' with rcvlowat value in RX.

Right, so I'd update the test to behave like this. And I'd explain
better the problem we are going to fix in the commit message.

>1) I think it is not useless, because we inform(not just wake up) TX that
>there is free space at RX side - as i mentioned above.
>2) Anyway i think that this situation is a little bit strange: TX thinks that
>there is no free space at RX and waits for it, but there is free space at RX!
>[...]

We did it to avoid a lot of credit update messages.

Anyway I think here the main point is why RX is setting SO_RCVLOWAT to
128 KB and then reads only half of it?

So I think if the user sets SO_RCVLOWAT to a value and then RX reads
less than it, it is expected to get stuck.

Anyway, since the change will not impact the default behaviour
(SO_RCVLOWAT = 1) we can merge this patch, but IMHO we need to explain
the case better and improve the test.

>But how RX will wake up in this case? E.g. it calls poll() without timeout,
>connection is established, RX ignores signal

RX will wake up because SO_RCVLOWAT is 64KB and there are 64 KB in the
buffer. Then RX will read it and send the credit update to TX because
free_space is 0.

Thanks,
Stefano
On 20.12.2022 11:33, Stefano Garzarella wrote:

> Right, so I'd update the test to behave like this.

Sorry, you mean vsock_test? To cover TX waiting for free space at RX,
thus checking this kernel patch's logic?

> And I'd explain better the problem we are going to fix in the commit message.

Ok

> We did it to avoid a lot of credit update messages.

Yes, I see.

> Anyway I think here the main point is why RX is setting SO_RCVLOWAT to
> 128 KB and then reads only half of it?
>
> So I think if the users set SO_RCVLOWAT to a value and then RX reads
> less then it, is expected to get stuck.

That is a really interesting question; I've found nothing about this
case in Google (not 100% sure) or POSIX. But I can modify the
reproducer: it sets SO_RCVLOWAT to 128Kb BEFORE entering its last poll
where it will get stuck. In this case the behaviour looks more legal: it
uses the default SO_RCVLOWAT of 1 and reads 64Kb each time. Finally it
sets SO_RCVLOWAT to 128Kb (imagine that it prepares a 128Kb 'read()'
buffer) and enters poll(): we get the same effect, TX waits for space
and RX waits in 'poll()'.

> Anyway, since the change will not impact the default behaviour
> (SO_RCVLOWAT = 1) we can merge this patch, but IMHO we need to explain
> the case better and improve the test.

I see; of course I'm not sure about this change, I just want to ask
someone who knows this code better.

> If RX woke up (e.g. SO_RCVLOWAT = 64KB) and read the remaining 64KB,
> then it would still send the credit update even without this patch and
> TX will send the 1 byte.
> [...]
> RX will wake up because SO_RCVLOWAT is 64KB and there are 64 KB in the
> buffer. Then RX will read it and send the credit update to TX because
> free_space is 0.

IIUC, I'm talking about the 10 steps above, i.e. RX will never wake up,
because TX is waiting for space.

Thanks,
Arseniy
On Tue, Dec 20, 2022 at 09:23:17AM +0000, Arseniy Krasnov wrote:
>Sorry, You mean vsock_test? To cover TX waiting for free space at RX,
>thus checking this kernel patch logic?

Yep, I mean the test that you added in this series.

>That a really interesting question, I've found nothing about this case
>in Google(not sure for 100%) or POSIX. But, i can modify reproducer: it
>sets SO_RCVLOWAT to 128Kb BEFORE entering its last poll where it will
>stuck. [...]

Good point!

>I see, of course I'm not sure about this change, just want to ask
>someone who knows this code better

Yes, it's an RFC, so you did well! :-)

>IIUC, i'm talking about 10 steps above, e.g. RX will never wake up,
>because TX is waiting for space.

Yep, but if RX uses SO_RCVLOWAT = 64 KB instead of 128 KB (I mean, if RX
reads all the bytes it is waiting for, as specified in SO_RCVLOWAT),
then RX will send the credit message. But there is the case that you
mentioned, when SO_RCVLOWAT is changed while executing.

Thanks,
Stefano
On 20.12.2022 13:43, Stefano Garzarella wrote:
> On Tue, Dec 20, 2022 at 09:23:17AM +0000, Arseniy Krasnov wrote:
>> On 20.12.2022 11:33, Stefano Garzarella wrote:
>>> On Tue, Dec 20, 2022 at 07:14:27AM +0000, Arseniy Krasnov wrote:
>>>> On 19.12.2022 18:41, Stefano Garzarella wrote:
>>>>
>>>> Hello!
>>>>
>>>>> Hi Arseniy,
>>>>>
>>>>> On Sat, Dec 17, 2022 at 8:42 PM Arseniy Krasnov <AVKrasnov@sberdevices.ru> wrote:
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> it seems I found a strange thing (maybe a bug) where the sender
>>>>>> ('tx' below) and the receiver ('rx' below) can get stuck forever.
>>>>>> A potential fix is in the first patch; the second patch contains a
>>>>>> reproducer based on the vsock test suite. The reproducer is
>>>>>> simple: tx just sends data to rx with the 'write()' syscall, rx
>>>>>> dequeues it using the 'read()' syscall and uses 'poll()' for
>>>>>> waiting. I run the server in the host and the client in the guest.
>>>>>>
>>>>>> rx side params:
>>>>>> 1) SO_VM_SOCKETS_BUFFER_SIZE is 256 KB (i.e. the default).
>>>>>> 2) SO_RCVLOWAT is 128 KB.
>>>>>>
>>>>>> What happens in the reproducer, step by step:
>>>>>>
>>>>>
>>>>> I put in the values of the variables involved to facilitate
>>>>> understanding:
>>>>>
>>>>> RX: buf_alloc = 256 KB; fwd_cnt = 0; last_fwd_cnt = 0;
>>>>> free_space = buf_alloc - (fwd_cnt - last_fwd_cnt) = 256 KB
>>>>>
>>>>> The credit update is sent if
>>>>> free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE [64 KB]
>>>>>
>>>>>> 1) tx tries to send 256 KB + 1 byte (in a single 'write()')
>>>>>> 2) tx sends 256 KB, the data reaches rx (rx_bytes == 256 KB)
>>>>>> 3) tx waits for space in 'write()' to send the last 1 byte
>>>>>> 4) rx does poll(), (rx_bytes >= rcvlowat) 256 KB >= 128 KB, POLLIN is set
>>>>>> 5) rx reads 64 KB, a credit update is not sent due to *
>>>>>
>>>>> RX: buf_alloc = 256 KB; fwd_cnt = 64 KB; last_fwd_cnt = 0;
>>>>> free_space = 192 KB
>>>>>
>>>>>> 6) rx does poll(), (rx_bytes >= rcvlowat) 192 KB >= 128 KB, POLLIN is set
>>>>>> 7) rx reads 64 KB, a credit update is not sent due to *
>>>>>
>>>>> RX: buf_alloc = 256 KB; fwd_cnt = 128 KB; last_fwd_cnt = 0;
>>>>> free_space = 128 KB
>>>>>
>>>>>> 8) rx does poll(), (rx_bytes >= rcvlowat) 128 KB >= 128 KB, POLLIN is set
>>>>>> 9) rx reads 64 KB, a credit update is not sent due to *
>>>>>
>>>>> Right, (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) is still false.
>>>>>
>>>>> RX: buf_alloc = 256 KB; fwd_cnt = 192 KB; last_fwd_cnt = 0;
>>>>> free_space = 64 KB
>>>>>
>>>>>> 10) rx does poll(), (rx_bytes < rcvlowat) 64 KB < 128 KB, rx waits in poll()
>>>>>
>>>>> I agree that TX is stuck because we are not sending the credit
>>>>> update, but even if RX sends the credit update at step 9, RX won't
>>>>> be woken up at step 10, right?
>>>>
>>>> Yes, RX will sleep, but TX will wake up, and since we inform TX how
>>>> much free space we have, there are now two cases for TX:
>>>> 1) send a "small" rest of the data (i.e. without blocking again),
>>>> leave 'write()' and continue execution. RX still waits in 'poll()'.
>>>> Later TX will send enough data to wake up RX.
>>>> 2) send a "big" rest of the data - if the rest is too big to leave
>>>> 'write()' and TX has to wait again for free space, it will be able
>>>> to send enough data to wake up RX, since we compared 'rx_bytes' with
>>>> the rcvlowat value on the RX side.
>>>
>>> Right, so I'd update the test to behave like this.
>> Sorry, you mean vsock_test? To cover TX waiting for free space at RX,
>> thus checking this kernel patch's logic?
>
> Yep, I mean the test that you added in this series.

Ok

>
>>> And I'd explain the problem we are going to fix better in the commit
>>> message.
>> Ok
>>>
>>>>>
>>>>>>
>>>>>> * is an optimization in 'virtio_transport_stream_do_dequeue()'
>>>>>> which sends OP_CREDIT_UPDATE only when we do not have too much
>>>>>> free space left - less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.
>>>>>>
>>>>>> Now the tx side waits for space inside write() and rx waits in
>>>>>> poll() for 'rx_bytes' to reach the SO_RCVLOWAT value. Both sides
>>>>>> will wait forever. I think a possible fix is to send a credit
>>>>>> update not only when we have too little space, but also when the
>>>>>> number of bytes in the receive queue is smaller than SO_RCVLOWAT
>>>>>> and thus not enough to wake up the sleeping reader. I'm not sure
>>>>>> about the correctness of this idea, but anyway - I think the
>>>>>> problem above exists. What do you think?
>>>>>
>>>>> I'm not sure, I have to think more about it, but if RX reads less
>>>>> than SO_RCVLOWAT, I expect it's normal to end up stuck.
>>>>>
>>>>> In this case we are only unsticking TX, but even if it sends that
>>>>> single byte, RX is still stuck and not consuming it, so it was
>>>>> useless to wake up TX if RX won't consume it anyway, right?
>>>>
>>>> 1) I think it is not useless, because we inform (not just wake up)
>>>> TX that there is free space at the RX side - as I mentioned above.
>>>> 2) Anyway, I think this situation is a little bit strange: TX thinks
>>>> that there is no free space at RX and waits for it, but there IS
>>>> free space at RX! At the same time, RX waits in poll() forever - it
>>>> is ready to get a new portion of data and return POLLIN, but TX
>>>> "thinks" exactly the opposite - that RX is full of data. Of course,
>>>> if these were just stalls in the TX data handling it would be ok -
>>>> just a performance degradation - but TX is stuck forever.
>>>
>>> We did it to avoid a lot of credit update messages.
>> Yes, I see
>>> Anyway, I think the main point here is: why is RX setting SO_RCVLOWAT
>>> to 128 KB and then reading only half of it?
>>>
>>> So I think if the user sets SO_RCVLOWAT to a value and then RX reads
>>> less than it, it is expected to get stuck.
>> That's a really interesting question; I've found nothing about this
>> case in Google (not 100% sure) or POSIX. But I can modify the
>> reproducer: it sets SO_RCVLOWAT to 128 KB just BEFORE entering its
>> last poll() where it will get stuck. In this case the behaviour looks
>> more legal: it uses the default SO_RCVLOWAT of 1 and reads 64 KB each
>> time. Finally it sets SO_RCVLOWAT to 128 KB (imagine that it prepares
>> a 128 KB 'read()' buffer) and enters poll() - we will get the same
>> effect: TX waits for space, RX waits in 'poll()'.
>
> Good point!
>
>>>
>>> Anyway, since the change will not impact the default behaviour
>>> (SO_RCVLOWAT = 1) we can merge this patch, but IMHO we need to
>>> explain the case better and improve the test.
>> I see; of course I'm not sure about this change, I just wanted to ask
>> someone who knows this code better
>
> Yes, it's an RFC, so you did well! :-)

So ok, I'll prepare an RFC version of this patchset (i.e. a cover
letter with the explanation, the kernel patch and a test for it)

>
>>>
>>>>
>>>>>
>>>>> If RX woke up (e.g. SO_RCVLOWAT = 64 KB) and read the remaining
>>>>> 64 KB, then it would still send the credit update even without this
>>>>> patch and TX would send the 1 byte.
>>>>
>>>> But how will RX wake up in this case? E.g. it calls poll() without a
>>>> timeout, the connection is established, RX ignores signals
>>>
>>> RX will wake up because SO_RCVLOWAT is 64 KB and there are 64 KB in
>>> the buffer. Then RX will read them and send the credit update to TX
>>> because free_space is 0.
>> IIUC, I'm talking about the 10 steps above, i.e. RX will never wake
>> up, because TX is waiting for space.
>
> Yep, but if RX uses SO_RCVLOWAT = 64 KB instead of 128 KB (I mean, if
> RX reads all the bytes that it is waiting for, as specified in
> SO_RCVLOWAT), then RX will send the credit message.
>
> But there is the case that you mentioned, when SO_RCVLOWAT is changed
> while executing.

I'll use this case for the test

>
> Thanks,
> Stefano
>

Thanks,
Arseniy