[RESEND,net-next,5/5] net/sched: taprio: dump class stats for the actual q->qdiscs[]
Message ID: <20230602103750.2290132-6-vladimir.oltean@nxp.com>
State: New
From: Vladimir Oltean <vladimir.oltean@nxp.com>
To: netdev@vger.kernel.org
Cc: "David S. Miller" <davem@davemloft.net>, Eric Dumazet <edumazet@google.com>,
    Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
    Jamal Hadi Salim <jhs@mojatatu.com>, Cong Wang <xiyou.wangcong@gmail.com>,
    Jiri Pirko <jiri@resnulli.us>, Vinicius Costa Gomes <vinicius.gomes@intel.com>,
    linux-kernel@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
    Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com>,
    Peilin Ye <yepeilin.cs@gmail.com>, Pedro Tammela <pctammela@mojatatu.com>
Subject: [PATCH RESEND net-next 5/5] net/sched: taprio: dump class stats for the actual q->qdiscs[]
Date: Fri, 2 Jun 2023 13:37:50 +0300
Message-Id: <20230602103750.2290132-6-vladimir.oltean@nxp.com>
In-Reply-To: <20230602103750.2290132-1-vladimir.oltean@nxp.com>
References: <20230602103750.2290132-1-vladimir.oltean@nxp.com>
Series: Improve the taprio qdisc's relationship with its children
Commit Message
Vladimir Oltean
June 2, 2023, 10:37 a.m. UTC
This makes a difference for the software scheduling mode, where
dev_queue->qdisc_sleeping is the same as the taprio root Qdisc itself.
But when we're talking about which Qdisc and stats get reported for a
traffic class, the root taprio isn't what comes to mind - q->qdiscs[]
is.
To understand the difference, I've attempted to send 100 packets in
software mode through traffic class 0 (they are in the Qdisc's backlog),
and recorded the stats before and after the change.
Here is before:
$ tc -s class show dev eth0
class taprio 8001:1 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:2 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:3 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:4 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:5 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:6 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:7 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:8 root leaf 8001:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
and here is after:
class taprio 8001:1 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 9400b 100p requeues 0
Window drops: 0
class taprio 8001:2 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Window drops: 0
class taprio 8001:3 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Window drops: 0
class taprio 8001:4 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Window drops: 0
class taprio 8001:5 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Window drops: 0
class taprio 8001:6 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Window drops: 0
class taprio 8001:7 root leaf 8010:
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Window drops: 0
class taprio 8001:8 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Window drops: 0
The most glaring (and expected) difference is that before, all class
stats reported the global stats, whereas now, they really report just
the counters for that traffic class.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
---
net/sched/sch_taprio.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
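
For context on why every class used to report the global counters: in
software scheduling mode, taprio itself is attached to every TX queue, so a
lookup through the queue's qdisc_sleeping pointer lands back on the root.
A minimal sketch of the lookup being replaced, condensed from the lines
removed in the diff at the bottom of this page:

    /* Old per-class lookup, condensed: in software mode, each TX queue's
     * qdisc_sleeping is the taprio root itself, so every class "sees"
     * the root's global stats.
     */
    static struct Qdisc *old_class_lookup(struct Qdisc *sch, unsigned long cl)
    {
            struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);

            return dev_queue->qdisc_sleeping; /* == sch in software mode */
    }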
Comments
On Fri, Jun 2, 2023 at 6:38 AM Vladimir Oltean <vladimir.oltean@nxp.com> wrote:
>
> This makes a difference for the software scheduling mode, where
> dev_queue->qdisc_sleeping is the same as the taprio root Qdisc itself.
> But when we're talking about which Qdisc and stats get reported for a
> traffic class, the root taprio isn't what comes to mind - q->qdiscs[]
> is.
>
> To understand the difference, I've attempted to send 100 packets in
> software mode through traffic class 0 (they are in the Qdisc's backlog),
> and recorded the stats before and after the change.
> [...]

Other than the refcount issue, I think the approach looks reasonable to me.

The stats before/after you are showing below though are interesting: are
you showing a transient phase where packets are temporarily in the backlog?
Typically the backlog is a transient phase which lasts a very short period.
Maybe it works differently for taprio? I took a quick look at the code and
do see it decrement the backlog in the dequeue, so if it is not transient
then some code path is not being hit.

Aside: I realize you are busy - but if you get time and provide some sample
tc command lines for testing, we could help create the tests for you, at
least the first time. The advantage of putting these tests in
tools/testing/selftests/tc-testing/ is that there are test tools out there
that run these tests, so regressions are easier to catch sooner.

cheers,
jamal
On Thu, Jun 08, 2023 at 02:44:46PM -0400, Jamal Hadi Salim wrote:
> Other than the refcount issue, I think the approach looks reasonable
> to me. The stats before/after you are showing below though are
> interesting: are you showing a transient phase where packets are
> temporarily in the backlog? [...] I took a quick look at the code and
> do see it decrement the backlog in the dequeue, so if it is not
> transient then some code path is not being hit.

It's a fair concern. The thing is that I put very aggressive time slots in
the schedule that I'm testing with, and my kernel has a lot of debugging
stuff which bogs it down (kasan, kmemleak, lockdep, DMA API debug etc).
Not to mention that the CPU isn't the fastest to begin with.

The way taprio works is that there's an hrtimer which fires at the
expiration time of the current schedule entry and sets up the gates for
the next one. Each schedule entry has a gate for each traffic class, which
determines what traffic classes are eligible for dequeue() and which ones
aren't.

The dequeue() procedure, though also invoked by the advance_schedule()
hrtimer -> __netif_schedule(), is also time-sensitive. By the time
taprio_dequeue() runs, the taprio_entry_allows_tx() function might return
false when the system is so bogged down that it wasn't able to make enough
progress to dequeue() an skb in time. When that happens, there is
currently no mechanism to age out packets that stood too long in the TX
queues (and what does "too long" mean, anyway?).

Whereas enqueue() is technically not time-sensitive: you can enqueue
whenever you want and the Qdisc will dequeue whenever it can. Though in
practice, to make this scheduling technique useful, the user space enqueue
should also be time-aware (though you can't capture this with ping).

If I increase all my sched-entry intervals by a factor of 100, the backlog
issue goes away and the system can make forward progress.

So yeah, sorry, I didn't pay too much attention to the data I was
presenting for illustrative purposes.

> Aside: I realize you are busy - but if you get time and provide some
> sample tc command lines for testing, we could help create the tests
> for you, at least the first time. [...]

Yeah, ok. The script posted in a reply on the cover letter is still what
I'm working with. The things it intends to capture are:
- attaching a custom Qdisc to one of taprio's classes doesn't fail
- attaching taprio to one of taprio's classes fails
- sending packets through one queue increases the counters (any counters)
  of just that queue

All of the above, replicated once for the software scheduling case and
once for the offload case. Currently netdevsim doesn't attempt to emulate
taprio offload.

Is there a way to skip tests? I may look into tdc, but I honestly don't
have time for unrelated stuff such as figuring out why my kernel isn't
configured for the other tests to pass - and it seems that once one test
fails, the others are completely skipped, see below. Also, by which rule
are the test IDs created?
root@debian:~# cd selftests/tc-testing/
root@debian:~/selftests/tc-testing# ./tdc.sh
considering category qdisc
 -- ns/SubPlugin.__init__
Test 0582: Create QFQ with default setting
Test c9a3: Create QFQ with class weight setting
Test d364: Test QFQ with max class weight setting
Test 8452: Create QFQ with class maxpkt setting
Test 22df: Test QFQ class maxpkt setting lower bound
Test 92ee: Test QFQ class maxpkt setting upper bound
Test d920: Create QFQ with multiple class setting
Test 0548: Delete QFQ with handle
Test 5901: Show QFQ class
Test 0385: Create DRR with default setting
Test 2375: Delete DRR with handle
Test 3092: Show DRR class
Test 3460: Create CBQ with default setting
exit: 2
exit: 0
Error: Specified qdisc kind is unknown.

-----> teardown stage *** Could not execute: "$TC qdisc del dev $DUMMY handle 1: root"
-----> teardown stage *** Error message: "Error: Invalid handle.
"
returncode 2; expected [0]
-----> teardown stage *** Aborting test run.

<_io.BufferedReader name=3> *** stdout ***
<_io.BufferedReader name=5> *** stderr ***
"-----> teardown stage" did not complete successfully
Exception <class '__main__.PluginMgrTestFail'> ('teardown', 'Error: Specified qdisc kind is unknown.\n', '"-----> teardown stage" did not complete successfully') (caught in test_runner, running test 14 3460 Create CBQ with default setting stage teardown)
---------------
traceback
  File "/root/selftests/tc-testing/./tdc.py", line 495, in test_runner
    res = run_one_test(pm, args, index, tidx)
  File "/root/selftests/tc-testing/./tdc.py", line 434, in run_one_test
    prepare_env(args, pm, 'teardown', '-----> teardown stage', tidx['teardown'], procout)
  File "/root/selftests/tc-testing/./tdc.py", line 245, in prepare_env
    raise PluginMgrTestFail(
---------------
accumulated output for this test:
Error: Specified qdisc kind is unknown.
---------------

All test results:

1..336
ok 1 0582 - Create QFQ with default setting
ok 2 c9a3 - Create QFQ with class weight setting
ok 3 d364 - Test QFQ with max class weight setting
ok 4 8452 - Create QFQ with class maxpkt setting
ok 5 22df - Test QFQ class maxpkt setting lower bound
ok 6 92ee - Test QFQ class maxpkt setting upper bound
ok 7 d920 - Create QFQ with multiple class setting
ok 8 0548 - Delete QFQ with handle
ok 9 5901 - Show QFQ class
ok 10 0385 - Create DRR with default setting
ok 11 2375 - Delete DRR with handle
ok 12 3092 - Show DRR class
ok 13 3460 - Create CBQ with default setting # skipped - "-----> teardown stage" did not complete successfully
ok 14 0592 - Create CBQ with mpu # skipped - skipped - previous teardown failed 14 3460
ok 15 4684 - Create CBQ with valid cell num # skipped - skipped - previous teardown failed 14 3460
ok 16 4345 - Create CBQ with invalid cell num # skipped - skipped - previous teardown failed 14 3460
ok 17 4525 - Create CBQ with valid ewma # skipped - skipped - previous teardown failed 14 3460
ok 18 6784 - Create CBQ with invalid ewma # skipped - skipped - previous teardown failed 14 3460
ok 19 5468 - Delete CBQ with handle # skipped - skipped - previous teardown failed 14 3460
ok 20 492a - Show CBQ class # skipped - skipped - previous teardown failed 14 3460
ok 21 9903 - Add mqprio Qdisc to multi-queue device (8 queues) # skipped - skipped - previous teardown failed 14 3460
ok 22 453a - Delete nonexistent mqprio Qdisc # skipped - skipped - previous teardown failed 14 3460
ok 23 5292 - Delete mqprio Qdisc twice # skipped - skipped - previous teardown failed 14 3460
ok 24 45a9 - Add mqprio Qdisc to single-queue device # skipped - skipped - previous teardown failed 14 3460
ok 25 2ba9 - Show mqprio class # skipped - skipped - previous teardown failed 14 3460
ok 26 4812 - Create HHF with default setting # skipped - skipped - previous teardown failed 14 3460
ok 27 8a92 - Create HHF with limit setting # skipped - skipped - previous teardown failed 14 3460
ok 28 3491 - Create HHF with quantum setting # skipped - skipped - previous teardown failed 14 3460
(...)
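
The gate mechanism Vladimir describes in the message above maps, roughly,
onto a check like the following in the dequeue path. This is a simplified
sketch: the gate_mask and end_time field names follow sch_taprio.c loosely,
and the real taprio_entry_allows_tx() also folds in per-entry budget
accounting omitted here.

    /* Sketch of the dequeue-time gate check: a frame may go out only if
     * its traffic class's gate is open in the active schedule entry and
     * it can finish transmitting before the entry expires.
     */
    static bool entry_allows_tx_sketch(ktime_t skb_end_time,
                                       const struct sched_entry *entry,
                                       int tc)
    {
            if (!(entry->gate_mask & BIT(tc)))
                    return false;

            return ktime_before(skb_end_time, entry->end_time);
    }

When the system is too bogged down to dequeue in time, this check keeps
failing and the skb simply stays backlogged, which is the behavior seen in
the commit message's stats.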
On 09/06/2023 09:10, Vladimir Oltean wrote:
> Is there a way to skip tests? I may look into tdc, but I honestly don't
> have time for unrelated stuff such as figuring out why my kernel isn't
> configured for the other tests to pass - and it seems that once one test
> fails, the others are completely skipped, see below.

You can tell tdc to run a specific test file by providing the "-f" option.
For example, if you want to run only taprio tests, you can issue the
following command:

./tdc.py -f tc-tests/qdiscs/taprio.json

This is also described in tdc's README.

> Also, by which rule are the test IDs created?

When creating a test case in tdc, you must have an ID field. What we do to
generate the IDs is leave the "id" field as an empty string in the test
case description, for example:

{
    "id": "",
    "name": "My dummy test case",
    ...
}

and run the following:

./tdc.py -i

This will automatically fill in the "id" field in the JSON with an
appropriate ID.

> Error: Specified qdisc kind is unknown.

As you stated, this likely means you are missing a config option. However,
you can just ask tdc to run a specific test file to avoid this.

cheers,
Victor
On Fri, Jun 9, 2023 at 8:10 AM Vladimir Oltean <vladimir.oltean@nxp.com> wrote:
>
> It's a fair concern. The thing is that I put very aggressive time slots
> in the schedule that I'm testing with, and my kernel has a lot of
> debugging stuff which bogs it down (kasan, kmemleak, lockdep, DMA API
> debug etc). [...]
>
> If I increase all my sched-entry intervals by a factor of 100, the
> backlog issue goes away and the system can make forward progress.
>
> So yeah, sorry, I didn't pay too much attention to the data I was
> presenting for illustrative purposes.

So it seems to me it is a transient phase, and that at some point the
backlog will clear up and the sent stats will go up. Maybe just say so in
your commit, or show the final result after the packets are gone.

I have to admit, I don't know much about taprio - that's why I am asking
all these leading questions. You spoke of gates etc. and that's Klingon to
me; but IIUC there's some time-sensitive stuff that needs to be sent out
within a deadline.

Q: What should happen to skbs that are no longer valid? On the aging thing
which you say is missing, shouldn't the hrtimer or schedule kick be able
to dequeue timestamped packets and just drop them?

cheers,
jamal
On Fri, Jun 09, 2023 at 12:19:12PM -0400, Jamal Hadi Salim wrote:
> So it seems to me it is a transient phase, and that at some point the
> backlog will clear up and the sent stats will go up. Maybe just say so
> in your commit, or show the final result after the packets are gone.

I will re-collect some stats where there is nothing backlogged.

> I have to admit, I don't know much about taprio - that's why I am
> asking all these leading questions. You spoke of gates etc. and that's
> Klingon to me; but IIUC there's some time-sensitive stuff that needs
> to be sent out within a deadline.

If sch_taprio.c is Klingon to you, you can just imagine how sch_api.c
reads to me :)

> Q: What should happen to skbs that are no longer valid? On the aging
> thing which you say is missing, shouldn't the hrtimer or schedule kick
> be able to dequeue timestamped packets and just drop them?

I think the skbs being "valid" is an application-defined metric (except in
the txtime assist mode, where skbs do truly have a transmit deadline). The
user space could reasonably enqueue 100 packets at a time, fully aware
that the cycle length is 1 second and that there's only room in one cycle
to send one packet, so it would take 100 seconds for all those packets to
be dequeued and sent. It could also be that this isn't the case.

I guess what could be auto-detected and warned about is when a cycle
passes (sort of like an RCU grace period), the backlog was non-zero, the
gates were open, but still no skb was dequeued. After one cycle, the
schedule repeats itself identically. But then what? Why drop? It's a
system issue.
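
The heuristic floated above could be sketched as below, evaluated when the
schedule wraps to a new cycle. Everything in it is hypothetical: the
gates_opened and dequeues counters do not exist in sch_taprio.c and would
have to be added, and what to do upon detection remains the open question.

    /* Hypothetical per-cycle progress check: warn if a full cycle went by
     * with packets backlogged and their gates opened, yet nothing was
     * dequeued. The counters here are made-up bookkeeping.
     */
    static void taprio_check_cycle_progress(struct Qdisc *sch,
                                            struct taprio_sched *q)
    {
            if (sch->qstats.backlog && q->gates_opened && !q->dequeues)
                    net_warn_ratelimited("taprio: no dequeue progress over a full cycle\n");

            q->gates_opened = 0;
            q->dequeues = 0;
    }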
On Fri, Jun 09, 2023 at 11:56:35AM -0300, Victor Nogueira wrote:
> You can tell tdc to run a specific test file by providing the "-f" option.
> For example, if you want to run only taprio tests, you can issue the
> following command:
>
> ./tdc.py -f tc-tests/qdiscs/taprio.json

Thanks. I've been able to make some progress with this. Unfortunately, the
updated series conflicts with this in-flight patch set, so I wouldn't want
to post it just yet:

https://patchwork.kernel.org/project/netdevbpf/cover/20230609135917.1084327-1-vladimir.oltean@nxp.com/

However, I did add taprio offload to the netdevsim driver so that this
code path could be covered with tdc. I'd like Jamal and Paolo, who asked
for it, to comment on whether this is what they were looking to see:

https://github.com/vladimiroltean/linux/commits/sch-taprio-relationship-children-v2
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index cc7ff98e5e86..23b98c3af8b2 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -2452,11 +2452,11 @@ static unsigned long taprio_find(struct Qdisc *sch, u32 classid)
 static int taprio_dump_class(struct Qdisc *sch, unsigned long cl,
 			     struct sk_buff *skb, struct tcmsg *tcm)
 {
-	struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
+	struct Qdisc *child = taprio_leaf(sch, cl);
 
 	tcm->tcm_parent = TC_H_ROOT;
 	tcm->tcm_handle |= TC_H_MIN(cl);
-	tcm->tcm_info = dev_queue->qdisc_sleeping->handle;
+	tcm->tcm_info = child->handle;
 
 	return 0;
 }
@@ -2466,8 +2466,7 @@ static int taprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
 	__releases(d->lock)
 	__acquires(d->lock)
 {
-	struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
-	struct Qdisc *child = dev_queue->qdisc_sleeping;
+	struct Qdisc *child = taprio_leaf(sch, cl);
 	struct tc_taprio_qopt_offload offload = {
 		.cmd = TAPRIO_CMD_TC_STATS,
 		.tc_stats = {
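
For reference, taprio_leaf() is the helper introduced earlier in this
series that both dump paths now use. In rough terms, it resolves a class
to taprio's own per-queue child in q->qdiscs[] instead of dereferencing
dev_queue->qdisc_sleeping. A simplified sketch of the idea (not the exact
helper from the series):

    /* Sketch: return taprio's own child Qdisc for the class. In software
     * mode this is distinct from the root qdisc attached to the TX queue,
     * which is what the old lookup returned.
     */
    static struct Qdisc *taprio_leaf_sketch(struct Qdisc *sch, unsigned long cl)
    {
            struct taprio_sched *q = qdisc_priv(sch);
            struct net_device *dev = qdisc_dev(sch);

            /* classids are 1-based, one per TX queue */
            if (!cl || cl > dev->num_tx_queues)
                    return NULL;

            return q->qdiscs[cl - 1];
    }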