From patchwork Mon Jun 26 11:56:36 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112909
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org,
    damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
    adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com,
    peterz@infradead.org, will@kernel.org, tglx@linutronix.de,
    rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org,
    daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com,
    tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com,
    amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org,
    minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com,
    sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com,
    penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
    ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org,
    josef@toxicpanda.com, linux-fsdevel@vger.kernel.org,
    viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org,
    dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org,
    dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com,
    melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com,
    chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com,
    max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com,
    hdanton@sina.com, her0gyugyu@gmail.com
Subject: [PATCH v10 01/25] llist: Move llist_{head,node} definition to types.h
Date: Mon, 26 Jun 2023 20:56:36 +0900
Message-Id: <20230626115700.13873-2-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>
ntAmFB5Cb4WRmWcsxIhudyC0XyxEiY7yZ6wJKXhaaFwgPu/v5eaDRcIu0f1ODjEjRIt33z8L lZXCOvFEkcx+k64UW2zOkChcWC923LOgeVYFb4o83XheKgql4WJv8zn8rbBU/KvJzVQgZT0K a0YqnT4nQ6NLT4pPy9PrjsUfysywo+BbWfPn9rejGdfebiTwSB2hTFhxQatiNTlZeRndSORp 9WLlkg9mrUqp1eT9JhkyDxiy06WsbrSMZ9Q/KNf6c7Uq4VfNUemwJB2RDN9Tig+PLEAHU/x1 MdH++KhNq1Yn9cSoT9lsFVand6Y3T3a69HVxOam/GKIyxRF56x53L309rrhT9TjCZKC25IaV 1Wxr3Jn/OmxfILkuRVu4OvZA9gt+YO2OycpdyY6NsbL3/spIRUGpsYgUjwy9ySzcEHnrWuqe jup8194rZm5RirS9T+bVTFaaJjGWNmRpvgKDLJEiUgMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfX/PHWc/p/FbJG6alklN2cfDzGzWb7aMabPlD27uxx13J3dE hvVw0bNq1Ull18WVinLXH1GXdimVh6KWtBNuJilFukgP3GX+ee+1vT+f119vBpeYSB9GqTkj aDUylZQSEaK92xI3nLhUIA+eTQmC7PRgcE0kE1BUXUVB171KBFW18RgMtYTD68kRBNPPO3Ew 5HUhKPnwFofa1gEEtvIECro/LoYe1xgF7XlpFCSWVlPwcngGA0d+DgaVlgh4mmXCoGlqkADD EAWFhkTMHZ8xmDJX0GCO8wdn+Q0aZj6EQPtALwnNxe0k2PrXQ8FNBwUNtnYCWuucGHQ/LKJg oOoPCU9b2wjoys4g4e6oiYLhSTMOZtcYDa+ajBjU6N22LzM2DK78mCPhSUaTm27dx6DnTT2C xuT3GFiqeilodo1gYLXk4fC7rAWBM/MrDUnpUzQUxmciSEvKJ6Bz9gkJekcYTP8qonZu55tH xnBebz3H2yaNBN9h4vgHN97SvL6xn+aNlrO8tTyQL20YwviScRfJWypSKN4ynkPzqV97MN7R 20Dxoy9e0Hzb9Wlin2+UaLtcUCljBO3GHUdEimu9xSi6WHQ+N6mGjEONTCryYjg2lHNm6QkP U+w6rq9vCvewN7uas2Z8IlORiMHZ0oXcYNtj2lMsZfdyfRO2eSZYf67j56f5ZzEbxl1NsJH/ pH5cZU3TvMiL3czVPzMhD0vcNwkOO5WFREa0oAJ5KzUxaplSFRakO6mI1SjPBx09pbYg93LM l2ay69BEd7gdsQySLhIHr7oul5CyGF2s2o44Bpd6i5f9MsglYrks9oKgPXVYe1Yl6OxoBUNI l4v3HBSOSNjjsjPCSUGIFrT/W4zx8olDzpYtNfFWdpP22JLIH6cvzoUNWu60DNvfdT8PTFsj Kwl5F3AgP9OLe+n7zO+0oj567YOjzDH/jq3NF3Mybi//lrR+X1ms2tAwQupuce+jVIFD6ZVR +wsGCg91+n//vnvxXZWPJiYgV7GyTtJRFr01UpTuN16NIi47ri6ofjQxatwVKiV0CllIIK7V yf4Ci/xDFDUDAAA= X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769767861076127411?= X-GMAIL-MSGID: =?utf-8?q?1769767861076127411?= llist_head and llist_node can be used by very primitives. For example, Dept for tracking dependency uses llist things in its header. To avoid header dependency, move those to types.h. 
Signed-off-by: Byungchul Park
---
 include/linux/llist.h | 8 --------
 include/linux/types.h | 8 ++++++++
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/llist.h b/include/linux/llist.h
index 85bda2d02d65..99cc3c30f79c 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -53,14 +53,6 @@
 #include
 #include
 
-struct llist_head {
-	struct llist_node *first;
-};
-
-struct llist_node {
-	struct llist_node *next;
-};
-
 #define LLIST_HEAD_INIT(name)	{ NULL }
 #define LLIST_HEAD(name)	struct llist_head name = LLIST_HEAD_INIT(name)

diff --git a/include/linux/types.h b/include/linux/types.h
index ea8cf60a8a79..b12a44400877 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -187,6 +187,14 @@ struct hlist_node {
 	struct hlist_node *next, **pprev;
 };
 
+struct llist_head {
+	struct llist_node *first;
+};
+
+struct llist_node {
+	struct llist_node *next;
+};
+
 struct ustat {
 	__kernel_daddr_t	f_tfree;
 #ifdef CONFIG_ARCH_32BIT_USTAT_F_TINODE

From patchwork Mon Jun 26 11:56:37 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112924
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org,
    damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org,
    adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com,
    peterz@infradead.org, will@kernel.org, tglx@linutronix.de,
    rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org,
    daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com,
    tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com,
    amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org,
    minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com,
    sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com,
    penberg@kernel.org, rientjes@google.com, vbabka@suse.cz,
    ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org,
    josef@toxicpanda.com, linux-fsdevel@vger.kernel.org,
    viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org,
    dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org,
    dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com,
    melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com,
    chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com,
    max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com,
    hdanton@sina.com, her0gyugyu@gmail.com
Subject: [PATCH v10 02/25] dept: Implement Dept(Dependency Tracker)
Date: Mon, 26 Jun 2023 20:56:37 +0900
Message-Id: <20230626115700.13873-3-byungchul@sk.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

CURRENT STATUS
--------------

Lockdep tracks the acquisition order of locks in order to detect
deadlock, and tracks IRQ and IRQ enable/disable state as well, to take
accidental acquisitions in IRQ context into account. Lockdep must be
turned off once it detects and reports a deadlock, since its data
structures and algorithm are not reusable after detection because of
the complex design.

PROBLEM
-------

*Waits* and their *events* that are never reached are what eventually
cause deadlock. However, Lockdep is only interested in lock acquisition
order, forcing us to emulate lock acquisition even for plain waits and
events that have nothing to do with a real lock. Even worse, nobody
likes Lockdep's false positives, because a false positive stops Lockdep
and prevents further reports that might be more valuable. That's why
kernel developers are so sensitive to Lockdep false positives.

Besides that, by tracking acquisition order, Lockdep cannot correctly
deal with read locks and cross-context events, e.g.
wait_for_completion()/complete(), for deadlock detection. Lockdep is no
longer a good tool for that purpose.

SOLUTION
--------

Again, *waits* and their *events* that are never reached are what
eventually cause deadlock. The new solution, Dept (DEPendency Tracker),
focuses on waits and events themselves. Dept tracks waits and events
and reports it if any event can never be reached.

Dept does the following:

 . Works with read locks in the right way.
 . Works with any wait and event, i.e. cross-events.
 . Continues to work even after reporting multiple times.
 . Provides simple and intuitive APIs.
 . Does exactly what a dependency checker should do.

Q & A
-----

Q. Is this the first attempt ever to address the problem?

A. No. The cross-release feature (b09be676e0ff2 locking/lockdep:
   Implement the 'crossrelease' feature) addressed it 2 years ago. It
   was a Lockdep extension and was merged, but was reverted shortly
   afterwards because cross-release started to report valuable hidden
   problems but also started to give false positive reports. For sure,
   no one likes Lockdep's false positive reports, since they make
   Lockdep stop, preventing it from reporting further real problems.

Q. Why was Dept not developed as an extension of Lockdep?

A. Lockdep definitely embodies all the effort great developers have put
   in over a long time, so it is quite stable. But I had to design and
   implement Dept anew because of the following:

   1) Lockdep was designed to track lock acquisition order. The APIs
      and implementation do not fit the wait-event model.

   2) Lockdep is turned off on detection, including false positives,
      which is terrible and prevents developing any extension for
      stronger detection.

Q. Do you intend to totally replace Lockdep?

A. No. Lockdep also checks whether lock usage is correct. Of course,
   the dependency check routine should eventually be replaced, but the
   other functionality should remain.

Q. Do you mean the dependency check routine should be replaced right
   away?

A. No. I admit Lockdep is stable enough thanks to the great efforts
   kernel developers have made. Both Lockdep and Dept should stay in
   the kernel until Dept is considered stable.

Q. Stronger detection capability gives more false positive reports,
   which was a big problem when cross-release was introduced. Is that
   ok with Dept?

A. It's ok. Dept allows multiple reports thanks to its simple and quite
   generalized design. Of course, false positive reports should still
   be fixed, but they are no longer as critical a problem as they used
   to be.
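
To make the wait/event model concrete, here is a minimal, hypothetical
illustration (the context functions are invented, not taken from the
patch) of a deadlock that involves no inverted lock order at all, which
is why lock-order tracking alone cannot express it while a wait/event
tracker can:

	#include <linux/mutex.h>
	#include <linux/completion.h>

	static DEFINE_MUTEX(m);
	static DECLARE_COMPLETION(done);

	/* Context A: waits for the event while holding the mutex. */
	static void context_a(void)
	{
		mutex_lock(&m);
		wait_for_completion(&done);	/* wait W: never satisfied */
		mutex_unlock(&m);
	}

	/* Context B: must take the mutex before it can trigger the event. */
	static void context_b(void)
	{
		mutex_lock(&m);			/* blocks behind context A */
		complete(&done);		/* event E: never reached */
		mutex_unlock(&m);
	}

Only one lock is ever taken, so there is no acquisition-order cycle for
Lockdep to see; but tracking "wait W needs event E" together with
"event E sits behind the mutex held across wait W" closes exactly the
kind of circle Dept is designed to report.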
Signed-off-by: Byungchul Park --- include/linux/dept.h | 577 ++++++ include/linux/hardirq.h | 3 + include/linux/sched.h | 3 + init/init_task.c | 2 + init/main.c | 2 + kernel/Makefile | 1 + kernel/dependency/Makefile | 3 + kernel/dependency/dept.c | 3009 +++++++++++++++++++++++++++++++ kernel/dependency/dept_hash.h | 10 + kernel/dependency/dept_object.h | 13 + kernel/exit.c | 1 + kernel/fork.c | 2 + kernel/module/main.c | 2 + kernel/sched/core.c | 9 + lib/Kconfig.debug | 27 + lib/locking-selftest.c | 2 + 16 files changed, 3666 insertions(+) create mode 100644 include/linux/dept.h create mode 100644 kernel/dependency/Makefile create mode 100644 kernel/dependency/dept.c create mode 100644 kernel/dependency/dept_hash.h create mode 100644 kernel/dependency/dept_object.h diff --git a/include/linux/dept.h b/include/linux/dept.h new file mode 100644 index 000000000000..b6d45b4b1fd6 --- /dev/null +++ b/include/linux/dept.h @@ -0,0 +1,577 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * DEPT(DEPendency Tracker) - runtime dependency tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __LINUX_DEPT_H +#define __LINUX_DEPT_H + +#ifdef CONFIG_DEPT + +#include + +struct task_struct; + +#define DEPT_MAX_STACK_ENTRY 16 +#define DEPT_MAX_WAIT_HIST 64 +#define DEPT_MAX_ECXT_HELD 48 + +#define DEPT_MAX_SUBCLASSES 16 +#define DEPT_MAX_SUBCLASSES_EVT 2 +#define DEPT_MAX_SUBCLASSES_USR (DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT) +#define DEPT_MAX_SUBCLASSES_CACHE 2 + +#define DEPT_SIRQ 0 +#define DEPT_HIRQ 1 +#define DEPT_IRQS_NR 2 +#define DEPT_SIRQF (1UL << DEPT_SIRQ) +#define DEPT_HIRQF (1UL << DEPT_HIRQ) + +struct dept_ecxt; +struct dept_iecxt { + struct dept_ecxt *ecxt; + int enirq; + /* + * for preventing to add a new ecxt + */ + bool staled; +}; + +struct dept_wait; +struct dept_iwait { + struct dept_wait *wait; + int irq; + /* + * for preventing to add a new wait + */ + bool staled; + bool touched; +}; + +struct dept_class { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * unique information about the class + */ + const char *name; + unsigned long key; + int sub_id; + + /* + * for BFS + */ + unsigned int bfs_gen; + int bfs_dist; + struct dept_class *bfs_parent; + + /* + * for hashing this object + */ + struct hlist_node hash_node; + + /* + * for linking all classes + */ + struct list_head all_node; + + /* + * for associating its dependencies + */ + struct list_head dep_head; + struct list_head dep_rev_head; + + /* + * for tracking IRQ dependencies + */ + struct dept_iecxt iecxt[DEPT_IRQS_NR]; + struct dept_iwait iwait[DEPT_IRQS_NR]; + + /* + * classified by a map embedded in task_struct, + * not an explicit map + */ + bool sched_map; + }; + }; +}; + +struct dept_key { + union { + /* + * Each byte-wise address will be used as its key. + */ + char base[DEPT_MAX_SUBCLASSES]; + + /* + * for caching the main class pointer + */ + struct dept_class *classes[DEPT_MAX_SUBCLASSES_CACHE]; + }; +}; + +struct dept_map { + const char *name; + struct dept_key *keys; + + /* + * subclass that can be set from user + */ + int sub_u; + + /* + * It's local copy for fast access to the associated classes. + * Also used for dept_key for static maps. 
+ */ + struct dept_key map_key; + + /* + * wait timestamp associated to this map + */ + unsigned int wgen; + + /* + * whether this map should be going to be checked or not + */ + bool nocheck; +}; + +#define DEPT_MAP_INITIALIZER(n, k) \ +{ \ + .name = #n, \ + .keys = (struct dept_key *)(k), \ + .sub_u = 0, \ + .map_key = { .classes = { NULL, } }, \ + .wgen = 0U, \ + .nocheck = false, \ +} + +struct dept_stack { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * backtrace entries + */ + unsigned long raw[DEPT_MAX_STACK_ENTRY]; + int nr; + }; + }; +}; + +struct dept_ecxt { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * function that entered to this ecxt + */ + const char *ecxt_fn; + + /* + * event function + */ + const char *event_fn; + + /* + * associated class + */ + struct dept_class *class; + + /* + * flag indicating which IRQ has been + * enabled within the event context + */ + unsigned long enirqf; + + /* + * where the IRQ-enabled happened + */ + unsigned long enirq_ip[DEPT_IRQS_NR]; + struct dept_stack *enirq_stack[DEPT_IRQS_NR]; + + /* + * where the event context started + */ + unsigned long ecxt_ip; + struct dept_stack *ecxt_stack; + + /* + * where the event triggered + */ + unsigned long event_ip; + struct dept_stack *event_stack; + }; + }; +}; + +struct dept_wait { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * function causing this wait + */ + const char *wait_fn; + + /* + * the associated class + */ + struct dept_class *class; + + /* + * which IRQ the wait was placed in + */ + unsigned long irqf; + + /* + * where the IRQ wait happened + */ + unsigned long irq_ip[DEPT_IRQS_NR]; + struct dept_stack *irq_stack[DEPT_IRQS_NR]; + + /* + * where the wait happened + */ + unsigned long wait_ip; + struct dept_stack *wait_stack; + + /* + * whether this wait is for commit in scheduler + */ + bool sched_sleep; + }; + }; +}; + +struct dept_dep { + union { + struct llist_node pool_node; + struct { + /* + * reference counter for object management + */ + atomic_t ref; + + /* + * key data of dependency + */ + struct dept_ecxt *ecxt; + struct dept_wait *wait; + + /* + * This object can be referred without dept_lock + * held but with IRQ disabled, e.g. for hash + * lookup. So deferred deletion is needed. + */ + struct rcu_head rh; + + /* + * for BFS + */ + struct list_head bfs_node; + + /* + * for hashing this object + */ + struct hlist_node hash_node; + + /* + * for linking to a class object + */ + struct list_head dep_node; + struct list_head dep_rev_node; + }; + }; +}; + +struct dept_hash { + /* + * hash table + */ + struct hlist_head *table; + + /* + * size of the table e.i. 
2^bits + */ + int bits; +}; + +struct dept_pool { + const char *name; + + /* + * object size + */ + size_t obj_sz; + + /* + * the number of the static array + */ + atomic_t obj_nr; + + /* + * offset of ->pool_node + */ + size_t node_off; + + /* + * pointer to the pool + */ + void *spool; + struct llist_head boot_pool; + struct llist_head __percpu *lpool; +}; + +struct dept_ecxt_held { + /* + * associated event context + */ + struct dept_ecxt *ecxt; + + /* + * unique key for this dept_ecxt_held + */ + struct dept_map *map; + + /* + * class of the ecxt of this dept_ecxt_held + */ + struct dept_class *class; + + /* + * the wgen when the event context started + */ + unsigned int wgen; + + /* + * subclass that only works in the local context + */ + int sub_l; +}; + +struct dept_wait_hist { + /* + * associated wait + */ + struct dept_wait *wait; + + /* + * unique id of all waits system-wise until wrapped + */ + unsigned int wgen; + + /* + * local context id to identify IRQ context + */ + unsigned int ctxt_id; +}; + +struct dept_task { + /* + * all event contexts that have entered and before exiting + */ + struct dept_ecxt_held ecxt_held[DEPT_MAX_ECXT_HELD]; + int ecxt_held_pos; + + /* + * ring buffer holding all waits that have happened + */ + struct dept_wait_hist wait_hist[DEPT_MAX_WAIT_HIST]; + int wait_hist_pos; + + /* + * sequential id to identify each IRQ context + */ + unsigned int irq_id[DEPT_IRQS_NR]; + + /* + * for tracking IRQ-enabled points with cross-event + */ + unsigned int wgen_enirq[DEPT_IRQS_NR]; + + /* + * for keeping up-to-date IRQ-enabled points + */ + unsigned long enirq_ip[DEPT_IRQS_NR]; + + /* + * current effective IRQ-enabled flag + */ + unsigned long eff_enirqf; + + /* + * for reserving a current stack instance at each operation + */ + struct dept_stack *stack; + + /* + * for preventing recursive call into DEPT engine + */ + int recursive; + + /* + * for staging data to commit a wait + */ + struct dept_map stage_m; + bool stage_sched_map; + const char *stage_w_fn; + unsigned long stage_ip; + + /* + * the number of missing ecxts + */ + int missing_ecxt; + + /* + * for tracking IRQ-enable state + */ + bool hardirqs_enabled; + bool softirqs_enabled; + + /* + * whether the current is on do_exit() + */ + bool task_exit; + + /* + * whether the current is running __schedule() + */ + bool in_sched; +}; + +#define DEPT_TASK_INITIALIZER(t) \ +{ \ + .wait_hist = { { .wait = NULL, } }, \ + .ecxt_held_pos = 0, \ + .wait_hist_pos = 0, \ + .irq_id = { 0U }, \ + .wgen_enirq = { 0U }, \ + .enirq_ip = { 0UL }, \ + .eff_enirqf = 0UL, \ + .stack = NULL, \ + .recursive = 0, \ + .stage_m = DEPT_MAP_INITIALIZER((t)->stage_m, NULL), \ + .stage_sched_map = false, \ + .stage_w_fn = NULL, \ + .stage_ip = 0UL, \ + .missing_ecxt = 0, \ + .hardirqs_enabled = false, \ + .softirqs_enabled = false, \ + .task_exit = false, \ + .in_sched = false, \ +} + +extern void dept_on(void); +extern void dept_off(void); +extern void dept_init(void); +extern void dept_task_init(struct task_struct *t); +extern void dept_task_exit(struct task_struct *t); +extern void dept_free_range(void *start, unsigned int sz); +extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_map_copy(struct dept_map *to, struct dept_map *from); + +extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l); +extern void dept_stage_wait(struct 
dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn); +extern void dept_request_event_wait_commit(void); +extern void dept_clean_stage(void); +extern void dept_stage_event(struct task_struct *t, unsigned long ip); +extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l); +extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f); +extern void dept_request_event(struct dept_map *m); +extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn); +extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); +extern void dept_sched_enter(void); +extern void dept_sched_exit(void); + +static inline void dept_ecxt_enter_nokeep(struct dept_map *m) +{ + dept_ecxt_enter(m, 0UL, 0UL, NULL, NULL, 0); +} + +/* + * for users who want to manage external keys + */ +extern void dept_key_init(struct dept_key *k); +extern void dept_key_destroy(struct dept_key *k); +extern void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, struct dept_key *new_k, unsigned long new_e_f, unsigned long new_ip, const char *new_c_fn, const char *new_e_fn, int new_sub_l); + +extern void dept_softirq_enter(void); +extern void dept_hardirq_enter(void); +extern void dept_softirqs_on_ip(unsigned long ip); +extern void dept_hardirqs_on(void); +extern void dept_hardirqs_on_ip(unsigned long ip); +extern void dept_softirqs_off_ip(unsigned long ip); +extern void dept_hardirqs_off(void); +extern void dept_hardirqs_off_ip(unsigned long ip); +#else /* !CONFIG_DEPT */ +struct dept_key { }; +struct dept_map { }; +struct dept_task { }; + +#define DEPT_MAP_INITIALIZER(n, k) { } +#define DEPT_TASK_INITIALIZER(t) { } + +#define dept_on() do { } while (0) +#define dept_off() do { } while (0) +#define dept_init() do { } while (0) +#define dept_task_init(t) do { } while (0) +#define dept_task_exit(t) do { } while (0) +#define dept_free_range(s, sz) do { } while (0) +#define dept_map_init(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_map_copy(t, f) do { } while (0) + +#define dept_wait(m, w_f, ip, w_fn, sl) do { (void)(w_fn); } while (0) +#define dept_stage_wait(m, k, ip, w_fn) do { (void)(k); (void)(w_fn); } while (0) +#define dept_request_event_wait_commit() do { } while (0) +#define dept_clean_stage() do { } while (0) +#define dept_stage_event(t, ip) do { } while (0) +#define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); (void)(e_fn); } while (0) +#define dept_ecxt_holding(m, e_f) false +#define dept_request_event(m) do { } while (0) +#define dept_event(m, e_f, ip, e_fn) do { (void)(e_fn); } while (0) +#define dept_ecxt_exit(m, e_f, ip) do { } while (0) +#define dept_sched_enter() do { } while (0) +#define dept_sched_exit() do { } while (0) +#define dept_ecxt_enter_nokeep(m) do { } while (0) +#define dept_key_init(k) do { (void)(k); } while (0) +#define dept_key_destroy(k) do { (void)(k); } while (0) +#define dept_map_ecxt_modify(m, e_f, n_k, n_e_f, n_ip, n_c_fn, n_e_fn, n_sl) do { (void)(n_k); (void)(n_c_fn); (void)(n_e_fn); } while (0) + +#define dept_softirq_enter() do { } while (0) +#define dept_hardirq_enter() do { } while (0) +#define dept_softirqs_on_ip(ip) do { } while (0) +#define dept_hardirqs_on() do { } while (0) +#define dept_hardirqs_on_ip(ip) do { } while (0) +#define dept_softirqs_off_ip(ip) do { } while (0) +#define dept_hardirqs_off() do { } 
while (0) +#define dept_hardirqs_off_ip(ip) do { } while (0) +#endif +#endif /* __LINUX_DEPT_H */ diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h index d57cab4d4c06..bb279dbbe748 100644 --- a/include/linux/hardirq.h +++ b/include/linux/hardirq.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -106,6 +107,7 @@ void irq_exit_rcu(void); */ #define __nmi_enter() \ do { \ + dept_off(); \ lockdep_off(); \ arch_nmi_enter(); \ BUG_ON(in_nmi() == NMI_MASK); \ @@ -128,6 +130,7 @@ void irq_exit_rcu(void); __preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET); \ arch_nmi_exit(); \ lockdep_on(); \ + dept_on(); \ } while (0) #define nmi_exit() \ diff --git a/include/linux/sched.h b/include/linux/sched.h index 853d08f7562b..fcb009900134 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -37,6 +37,7 @@ #include #include #include +#include /* task_struct member predeclarations (sorted alphabetically): */ struct audit_context; @@ -1168,6 +1169,8 @@ struct task_struct { struct held_lock held_locks[MAX_LOCK_DEPTH]; #endif + struct dept_task dept_task; + #if defined(CONFIG_UBSAN) && !defined(CONFIG_UBSAN_TRAP) unsigned int in_ubsan; #endif diff --git a/init/init_task.c b/init/init_task.c index ff6c4b9bfe6b..eb36ad68c912 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -12,6 +12,7 @@ #include #include #include +#include #include @@ -194,6 +195,7 @@ struct task_struct init_task .curr_chain_key = INITIAL_CHAIN_KEY, .lockdep_recursion = 0, #endif + .dept_task = DEPT_TASK_INITIALIZER(init_task), #ifdef CONFIG_FUNCTION_GRAPH_TRACER .ret_stack = NULL, .tracing_graph_pause = ATOMIC_INIT(0), diff --git a/init/main.c b/init/main.c index e1c3911d7c70..6e5b492640b2 100644 --- a/init/main.c +++ b/init/main.c @@ -66,6 +66,7 @@ #include #include #include +#include #include #include #include @@ -1080,6 +1081,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) panic_param); lockdep_init(); + dept_init(); /* * Need to run this when irqs are enabled, because it wants diff --git a/kernel/Makefile b/kernel/Makefile index 10ef068f598d..d1eb49eaa739 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -51,6 +51,7 @@ obj-y += livepatch/ obj-y += dma/ obj-y += entry/ obj-$(CONFIG_MODULES) += module/ +obj-y += dependency/ obj-$(CONFIG_KCMP) += kcmp.o obj-$(CONFIG_FREEZER) += freezer.o diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile new file mode 100644 index 000000000000..b5cfb8a03c0c --- /dev/null +++ b/kernel/dependency/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 + +obj-$(CONFIG_DEPT) += dept.o diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c new file mode 100644 index 000000000000..8ec638254e5f --- /dev/null +++ b/kernel/dependency/dept.c @@ -0,0 +1,3009 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DEPT(DEPendency Tracker) - Runtime dependency tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + * + * DEPT provides a general way to detect deadlock possibility in runtime + * and the interest is not limited to typical lock but to every + * syncronization primitives. + * + * The following ideas were borrowed from LOCKDEP: + * + * 1) Use a graph to track relationship between classes. + * 2) Prevent performance regression using hash. + * + * The following items were enhanced from LOCKDEP: + * + * 1) Cover more deadlock cases. + * 2) Allow muliple reports. 
+ * + * TODO: Both LOCKDEP and DEPT should co-exist until DEPT is considered + * stable. Then the dependency check routine should be replaced with + * DEPT after. It should finally look like: + * + * + * + * As is: + * + * LOCKDEP + * +-----------------------------------------+ + * | Lock usage correctness check | <-> locks + * | | + * | | + * | +-------------------------------------+ | + * | | Dependency check | | + * | | (by tracking lock acquisition order)| | + * | +-------------------------------------+ | + * | | + * +-----------------------------------------+ + * + * DEPT + * +-----------------------------------------+ + * | Dependency check | <-> waits/events + * | (by tracking wait and event context) | + * +-----------------------------------------+ + * + * + * + * To be: + * + * LOCKDEP + * +-----------------------------------------+ + * | Lock usage correctness check | <-> locks + * | | + * | | + * | (Request dependency check) | + * | T | + * +--------------------|--------------------+ + * | + * DEPT V + * +-----------------------------------------+ + * | Dependency check | <-> waits/events + * | (by tracking wait and event context) | + * +-----------------------------------------+ + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +static int dept_stop; +static int dept_per_cpu_ready; + +#define DEPT_READY_WARN (!oops_in_progress) + +/* + * Make all operations using DEPT_WARN_ON() fail on oops_in_progress and + * prevent warning message. + */ +#define DEPT_WARN_ON_ONCE(c) \ + ({ \ + int __ret = 0; \ + \ + if (likely(DEPT_READY_WARN)) \ + __ret = WARN_ONCE(c, "DEPT_WARN_ON_ONCE: " #c); \ + __ret; \ + }) + +#define DEPT_WARN_ONCE(s...) \ + ({ \ + if (likely(DEPT_READY_WARN)) \ + WARN_ONCE(1, "DEPT_WARN_ONCE: " s); \ + }) + +#define DEPT_WARN_ON(c) \ + ({ \ + int __ret = 0; \ + \ + if (likely(DEPT_READY_WARN)) \ + __ret = WARN(c, "DEPT_WARN_ON: " #c); \ + __ret; \ + }) + +#define DEPT_WARN(s...) \ + ({ \ + if (likely(DEPT_READY_WARN)) \ + WARN(1, "DEPT_WARN: " s); \ + }) + +#define DEPT_STOP(s...) \ + ({ \ + WRITE_ONCE(dept_stop, 1); \ + if (likely(DEPT_READY_WARN)) \ + WARN(1, "DEPT_STOP: " s); \ + }) + +#define DEPT_INFO_ONCE(s...) pr_warn_once("DEPT_INFO_ONCE: " s) + +static arch_spinlock_t dept_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; +static arch_spinlock_t stage_spin = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; + +/* + * DEPT internal engine should be careful in using outside functions + * e.g. printk at reporting since that kind of usage might cause + * untrackable deadlock. 
+ */ +static atomic_t dept_outworld = ATOMIC_INIT(0); + +static inline void dept_outworld_enter(void) +{ + atomic_inc(&dept_outworld); +} + +static inline void dept_outworld_exit(void) +{ + atomic_dec(&dept_outworld); +} + +static inline bool dept_outworld_entered(void) +{ + return atomic_read(&dept_outworld); +} + +static inline bool dept_lock(void) +{ + while (!arch_spin_trylock(&dept_spin)) + if (unlikely(dept_outworld_entered())) + return false; + return true; +} + +static inline void dept_unlock(void) +{ + arch_spin_unlock(&dept_spin); +} + +/* + * whether to stack-trace on every wait or every ecxt + */ +static bool rich_stack = true; + +enum bfs_ret { + BFS_CONTINUE, + BFS_CONTINUE_REV, + BFS_DONE, + BFS_SKIP, +}; + +static inline bool after(unsigned int a, unsigned int b) +{ + return (int)(b - a) < 0; +} + +static inline bool before(unsigned int a, unsigned int b) +{ + return (int)(a - b) < 0; +} + +static inline bool valid_stack(struct dept_stack *s) +{ + return s && s->nr > 0; +} + +static inline bool valid_class(struct dept_class *c) +{ + return c->key; +} + +static inline void invalidate_class(struct dept_class *c) +{ + c->key = 0UL; +} + +static inline struct dept_ecxt *dep_e(struct dept_dep *d) +{ + return d->ecxt; +} + +static inline struct dept_wait *dep_w(struct dept_dep *d) +{ + return d->wait; +} + +static inline struct dept_class *dep_fc(struct dept_dep *d) +{ + return dep_e(d)->class; +} + +static inline struct dept_class *dep_tc(struct dept_dep *d) +{ + return dep_w(d)->class; +} + +static inline const char *irq_str(int irq) +{ + if (irq == DEPT_SIRQ) + return "softirq"; + if (irq == DEPT_HIRQ) + return "hardirq"; + return "(unknown)"; +} + +static inline struct dept_task *dept_task(void) +{ + return ¤t->dept_task; +} + +/* + * Dept doesn't work either when it's stopped by DEPT_STOP() or in a nmi + * context. + */ +static inline bool dept_working(void) +{ + return !READ_ONCE(dept_stop) && !in_nmi(); +} + +/* + * Even k == NULL is considered as a valid key because it would use + * &->map_key as the key in that case. + */ +struct dept_key __dept_no_validate__; +static inline bool valid_key(struct dept_key *k) +{ + return &__dept_no_validate__ != k; +} + +/* + * Pool + * ===================================================================== + * DEPT maintains pools to provide objects in a safe way. + * + * 1) Static pool is used at the beginning of booting time. + * 2) Local pool is tried first before the static pool. Objects that + * have been freed will be placed. + */ + +enum object_t { +#define OBJECT(id, nr) OBJECT_##id, + #include "dept_object.h" +#undef OBJECT + OBJECT_NR, +}; + +#define OBJECT(id, nr) \ +static struct dept_##id spool_##id[nr]; \ +static DEFINE_PER_CPU(struct llist_head, lpool_##id); + #include "dept_object.h" +#undef OBJECT + +static struct dept_pool pool[OBJECT_NR] = { +#define OBJECT(id, nr) { \ + .name = #id, \ + .obj_sz = sizeof(struct dept_##id), \ + .obj_nr = ATOMIC_INIT(nr), \ + .node_off = offsetof(struct dept_##id, pool_node), \ + .spool = spool_##id, \ + .lpool = &lpool_##id, }, + #include "dept_object.h" +#undef OBJECT +}; + +/* + * Can use llist no matter whether CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG is + * enabled or not because NMI and other contexts in the same CPU never + * run inside of DEPT concurrently by preventing reentrance. + */ +static void *from_pool(enum object_t t) +{ + struct dept_pool *p; + struct llist_head *h; + struct llist_node *n; + + /* + * llist_del_first() doesn't allow concurrent access e.g. 
+ * between process and IRQ context. + */ + if (DEPT_WARN_ON(!irqs_disabled())) + return NULL; + + p = &pool[t]; + + /* + * Try local pool first. + */ + if (likely(dept_per_cpu_ready)) + h = this_cpu_ptr(p->lpool); + else + h = &p->boot_pool; + + n = llist_del_first(h); + if (n) + return (void *)n - p->node_off; + + /* + * Try static pool. + */ + if (atomic_read(&p->obj_nr) > 0) { + int idx = atomic_dec_return(&p->obj_nr); + + if (idx >= 0) + return p->spool + (idx * p->obj_sz); + } + + DEPT_INFO_ONCE("---------------------------------------------\n" + " Some of Dept internal resources are run out.\n" + " Dept might still work if the resources get freed.\n" + " However, the chances are Dept will suffer from\n" + " the lack from now. Needs to extend the internal\n" + " resource pools. Ask max.byungchul.park@gmail.com\n"); + return NULL; +} + +static void to_pool(void *o, enum object_t t) +{ + struct dept_pool *p = &pool[t]; + struct llist_head *h; + + preempt_disable(); + if (likely(dept_per_cpu_ready)) + h = this_cpu_ptr(p->lpool); + else + h = &p->boot_pool; + + llist_add(o + p->node_off, h); + preempt_enable(); +} + +#define OBJECT(id, nr) \ +static void (*ctor_##id)(struct dept_##id *a); \ +static void (*dtor_##id)(struct dept_##id *a); \ +static inline struct dept_##id *new_##id(void) \ +{ \ + struct dept_##id *a; \ + \ + a = (struct dept_##id *)from_pool(OBJECT_##id); \ + if (unlikely(!a)) \ + return NULL; \ + \ + atomic_set(&a->ref, 1); \ + \ + if (ctor_##id) \ + ctor_##id(a); \ + \ + return a; \ +} \ + \ +static inline struct dept_##id *get_##id(struct dept_##id *a) \ +{ \ + atomic_inc(&a->ref); \ + return a; \ +} \ + \ +static inline void put_##id(struct dept_##id *a) \ +{ \ + if (!atomic_dec_return(&a->ref)) { \ + if (dtor_##id) \ + dtor_##id(a); \ + to_pool(a, OBJECT_##id); \ + } \ +} \ + \ +static inline void del_##id(struct dept_##id *a) \ +{ \ + put_##id(a); \ +} \ + \ +static inline bool id##_consumed(struct dept_##id *a) \ +{ \ + return a && atomic_read(&a->ref) > 1; \ +} +#include "dept_object.h" +#undef OBJECT + +#define SET_CONSTRUCTOR(id, f) \ +static void (*ctor_##id)(struct dept_##id *a) = f + +static void initialize_dep(struct dept_dep *d) +{ + INIT_LIST_HEAD(&d->bfs_node); + INIT_LIST_HEAD(&d->dep_node); + INIT_LIST_HEAD(&d->dep_rev_node); +} +SET_CONSTRUCTOR(dep, initialize_dep); + +static void initialize_class(struct dept_class *c) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_iecxt *ie = &c->iecxt[i]; + struct dept_iwait *iw = &c->iwait[i]; + + ie->ecxt = NULL; + ie->enirq = i; + ie->staled = false; + + iw->wait = NULL; + iw->irq = i; + iw->staled = false; + iw->touched = false; + } + c->bfs_gen = 0U; + + INIT_LIST_HEAD(&c->all_node); + INIT_LIST_HEAD(&c->dep_head); + INIT_LIST_HEAD(&c->dep_rev_head); +} +SET_CONSTRUCTOR(class, initialize_class); + +static void initialize_ecxt(struct dept_ecxt *e) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + e->enirq_stack[i] = NULL; + e->enirq_ip[i] = 0UL; + } + e->ecxt_ip = 0UL; + e->ecxt_stack = NULL; + e->enirqf = 0UL; + e->event_ip = 0UL; + e->event_stack = NULL; +} +SET_CONSTRUCTOR(ecxt, initialize_ecxt); + +static void initialize_wait(struct dept_wait *w) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) { + w->irq_stack[i] = NULL; + w->irq_ip[i] = 0UL; + } + w->wait_ip = 0UL; + w->wait_stack = NULL; + w->irqf = 0UL; +} +SET_CONSTRUCTOR(wait, initialize_wait); + +static void initialize_stack(struct dept_stack *s) +{ + s->nr = 0; +} +SET_CONSTRUCTOR(stack, initialize_stack); + +#define 
OBJECT(id, nr) \ +static void (*ctor_##id)(struct dept_##id *a); + #include "dept_object.h" +#undef OBJECT + +#undef SET_CONSTRUCTOR + +#define SET_DESTRUCTOR(id, f) \ +static void (*dtor_##id)(struct dept_##id *a) = f + +static void destroy_dep(struct dept_dep *d) +{ + if (dep_e(d)) + put_ecxt(dep_e(d)); + if (dep_w(d)) + put_wait(dep_w(d)); +} +SET_DESTRUCTOR(dep, destroy_dep); + +static void destroy_ecxt(struct dept_ecxt *e) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (e->enirq_stack[i]) + put_stack(e->enirq_stack[i]); + if (e->class) + put_class(e->class); + if (e->ecxt_stack) + put_stack(e->ecxt_stack); + if (e->event_stack) + put_stack(e->event_stack); +} +SET_DESTRUCTOR(ecxt, destroy_ecxt); + +static void destroy_wait(struct dept_wait *w) +{ + int i; + + for (i = 0; i < DEPT_IRQS_NR; i++) + if (w->irq_stack[i]) + put_stack(w->irq_stack[i]); + if (w->class) + put_class(w->class); + if (w->wait_stack) + put_stack(w->wait_stack); +} +SET_DESTRUCTOR(wait, destroy_wait); + +#define OBJECT(id, nr) \ +static void (*dtor_##id)(struct dept_##id *a); + #include "dept_object.h" +#undef OBJECT + +#undef SET_DESTRUCTOR + +/* + * Caching and hashing + * ===================================================================== + * DEPT makes use of caching and hashing to improve performance. Each + * object can be obtained in O(1) with its key. + * + * NOTE: Currently we assume all the objects in the hashs will never be + * removed. Implement it when needed. + */ + +/* + * Some information might be lost but it's only for hashing key. + */ +static inline unsigned long mix(unsigned long a, unsigned long b) +{ + int halfbits = sizeof(unsigned long) * 8 / 2; + unsigned long halfmask = (1UL << halfbits) - 1UL; + + return (a << halfbits) | (b & halfmask); +} + +static bool cmp_dep(struct dept_dep *d1, struct dept_dep *d2) +{ + return dep_fc(d1)->key == dep_fc(d2)->key && + dep_tc(d1)->key == dep_tc(d2)->key; +} + +static unsigned long key_dep(struct dept_dep *d) +{ + return mix(dep_fc(d)->key, dep_tc(d)->key); +} + +static bool cmp_class(struct dept_class *c1, struct dept_class *c2) +{ + return c1->key == c2->key; +} + +static unsigned long key_class(struct dept_class *c) +{ + return c->key; +} + +#define HASH(id, bits) \ +static struct hlist_head table_##id[1 << (bits)]; \ + \ +static inline struct hlist_head *head_##id(struct dept_##id *a) \ +{ \ + return table_##id + hash_long(key_##id(a), bits); \ +} \ + \ +static inline struct dept_##id *hash_lookup_##id(struct dept_##id *a) \ +{ \ + struct dept_##id *b; \ + \ + hlist_for_each_entry_rcu(b, head_##id(a), hash_node) \ + if (cmp_##id(a, b)) \ + return b; \ + return NULL; \ +} \ + \ +static inline void hash_add_##id(struct dept_##id *a) \ +{ \ + get_##id(a); \ + hlist_add_head_rcu(&a->hash_node, head_##id(a)); \ +} \ + \ +static inline void hash_del_##id(struct dept_##id *a) \ +{ \ + hlist_del_rcu(&a->hash_node); \ + put_##id(a); \ +} +#include "dept_hash.h" +#undef HASH + +static inline struct dept_dep *lookup_dep(struct dept_class *fc, + struct dept_class *tc) +{ + struct dept_ecxt onetime_e = { .class = fc }; + struct dept_wait onetime_w = { .class = tc }; + struct dept_dep onetime_d = { .ecxt = &onetime_e, + .wait = &onetime_w }; + return hash_lookup_dep(&onetime_d); +} + +static inline struct dept_class *lookup_class(unsigned long key) +{ + struct dept_class onetime_c = { .key = key }; + + return hash_lookup_class(&onetime_c); +} + +/* + * Report + * ===================================================================== + * DEPT prints 
useful information to help debuging on detection of + * problematic dependency. + */ + +static inline void print_ip_stack(unsigned long ip, struct dept_stack *s) +{ + if (ip) + print_ip_sym(KERN_WARNING, ip); + + if (valid_stack(s)) { + pr_warn("stacktrace:\n"); + stack_trace_print(s->raw, s->nr, 5); + } + + if (!ip && !valid_stack(s)) + pr_warn("(N/A)\n"); +} + +#define print_spc(spc, fmt, ...) \ + pr_warn("%*c" fmt, (spc) * 4, ' ', ##__VA_ARGS__) + +static void print_diagram(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + bool firstline = true; + int spc = 1; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + if (!firstline) + pr_warn("\nor\n\n"); + firstline = false; + + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, " <%s interrupt>\n", irq_str(irq)); + print_spc(spc + 1, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } + + if (!irqf) { + print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id); + print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id); + print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id); + } +} + +static void print_dep(struct dept_dep *d) +{ + struct dept_ecxt *e = dep_e(d); + struct dept_wait *w = dep_w(d); + struct dept_class *fc = dep_fc(d); + struct dept_class *tc = dep_tc(d); + unsigned long irqf; + int irq; + const char *w_fn = w->wait_fn ?: "(unknown)"; + const char *e_fn = e->event_fn ?: "(unknown)"; + const char *c_fn = e->ecxt_fn ?: "(unknown)"; + const char *fc_n = fc->sched_map ? "" : (fc->name ?: "(unknown)"); + const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)"); + + irqf = e->enirqf & w->irqf; + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + pr_warn("%s has been enabled:\n", irq_str(irq)); + print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]); + pr_warn("\n"); + + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d) in %s context:\n", + w_fn, tc_n, tc->sub_id, irq_str(irq)); + print_ip_stack(w->irq_ip[irq], w->irq_stack[irq]); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } + + if (!irqf) { + pr_warn("[S] %s(%s:%d):\n", c_fn, fc_n, fc->sub_id); + print_ip_stack(e->ecxt_ip, e->ecxt_stack); + pr_warn("\n"); + + pr_warn("[W] %s(%s:%d):\n", w_fn, tc_n, tc->sub_id); + print_ip_stack(w->wait_ip, w->wait_stack); + pr_warn("\n"); + + pr_warn("[E] %s(%s:%d):\n", e_fn, fc_n, fc->sub_id); + print_ip_stack(e->event_ip, e->event_stack); + } +} + +static void save_current_stack(int skip); + +/* + * Print all classes in a circle. 
+ */ +static void print_circle(struct dept_class *c) +{ + struct dept_class *fc = c->bfs_parent; + struct dept_class *tc = c; + int i; + + dept_outworld_enter(); + save_current_stack(6); + + pr_warn("===================================================\n"); + pr_warn("DEPT: Circular dependency has been detected.\n"); + pr_warn("%s %.*s %s\n", init_utsname()->release, + (int)strcspn(init_utsname()->version, " "), + init_utsname()->version, + print_tainted()); + pr_warn("---------------------------------------------------\n"); + pr_warn("summary\n"); + pr_warn("---------------------------------------------------\n"); + + if (fc == tc) + pr_warn("*** AA DEADLOCK ***\n\n"); + else + pr_warn("*** DEADLOCK ***\n\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + if (fc != c) + pr_warn("\n"); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("\n"); + pr_warn("[S]: start of the event context\n"); + pr_warn("[W]: the wait blocked\n"); + pr_warn("[E]: the event not reachable\n"); + + i = 0; + do { + struct dept_dep *d = lookup_dep(fc, tc); + + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c's detail\n", 'A' + i); + pr_warn("---------------------------------------------------\n"); + pr_warn("context %c\n", 'A' + (i++)); + print_diagram(d); + pr_warn("\n"); + print_dep(d); + + tc = fc; + fc = fc->bfs_parent; + } while (tc != c); + + pr_warn("---------------------------------------------------\n"); + pr_warn("information that might be helpful\n"); + pr_warn("---------------------------------------------------\n"); + dump_stack(); + + dept_outworld_exit(); +} + +/* + * BFS(Breadth First Search) + * ===================================================================== + * Whenever a new dependency is added into the graph, search the graph + * for a new circular dependency. + */ + +static inline void enqueue(struct list_head *h, struct dept_dep *d) +{ + list_add_tail(&d->bfs_node, h); +} + +static inline struct dept_dep *dequeue(struct list_head *h) +{ + struct dept_dep *d; + + d = list_first_entry(h, struct dept_dep, bfs_node); + list_del(&d->bfs_node); + return d; +} + +static inline bool empty(struct list_head *h) +{ + return list_empty(h); +} + +static void extend_queue(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_head, dep_node) { + struct dept_class *next = dep_tc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +static void extend_queue_rev(struct list_head *h, struct dept_class *cur) +{ + struct dept_dep *d; + + list_for_each_entry(d, &cur->dep_rev_head, dep_rev_node) { + struct dept_class *next = dep_fc(d); + + if (cur->bfs_gen == next->bfs_gen) + continue; + next->bfs_gen = cur->bfs_gen; + next->bfs_dist = cur->bfs_dist + 1; + next->bfs_parent = cur; + enqueue(h, d); + } +} + +typedef enum bfs_ret bfs_f(struct dept_dep *d, void *in, void **out); +static unsigned int bfs_gen; + +/* + * NOTE: Must be called with dept_lock held. + */ +static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out) +{ + LIST_HEAD(q); + enum bfs_ret ret; + + if (DEPT_WARN_ON(!cb)) + return; + + /* + * Avoid zero bfs_gen. 
+ */ + bfs_gen = bfs_gen + 1 ?: 1; + + c->bfs_gen = bfs_gen; + c->bfs_dist = 0; + c->bfs_parent = c; + + ret = cb(NULL, in, out); + if (ret == BFS_DONE) + return; + if (ret == BFS_SKIP) + return; + if (ret == BFS_CONTINUE) + extend_queue(&q, c); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, c); + + while (!empty(&q)) { + struct dept_dep *d = dequeue(&q); + + ret = cb(d, in, out); + if (ret == BFS_DONE) + break; + if (ret == BFS_SKIP) + continue; + if (ret == BFS_CONTINUE) + extend_queue(&q, dep_tc(d)); + if (ret == BFS_CONTINUE_REV) + extend_queue_rev(&q, dep_fc(d)); + } + + while (!empty(&q)) + dequeue(&q); +} + +/* + * Main operations + * ===================================================================== + * Add dependencies - Each new dependency is added into the graph and + * checked if it forms a circular dependency. + * + * Track waits - Waits are queued into the ring buffer for later use to + * generate appropriate dependencies with cross-event. + * + * Track event contexts(ecxt) - Event contexts are pushed into local + * stack for later use to generate appropriate dependencies with waits. + */ + +static inline unsigned long cur_enirqf(void); +static inline int cur_irq(void); +static inline unsigned int cur_ctxt_id(void); + +static inline struct dept_iecxt *iecxt(struct dept_class *c, int irq) +{ + return &c->iecxt[irq]; +} + +static inline struct dept_iwait *iwait(struct dept_class *c, int irq) +{ + return &c->iwait[irq]; +} + +static inline void stale_iecxt(struct dept_iecxt *ie) +{ + if (ie->ecxt) + put_ecxt(ie->ecxt); + + WRITE_ONCE(ie->ecxt, NULL); + WRITE_ONCE(ie->staled, true); +} + +static inline void set_iecxt(struct dept_iecxt *ie, struct dept_ecxt *e) +{ + /* + * ->ecxt will never be updated once getting set until the class + * gets removed. + */ + if (ie->ecxt) + DEPT_WARN_ON(1); + else + WRITE_ONCE(ie->ecxt, get_ecxt(e)); +} + +static inline void stale_iwait(struct dept_iwait *iw) +{ + if (iw->wait) + put_wait(iw->wait); + + WRITE_ONCE(iw->wait, NULL); + WRITE_ONCE(iw->staled, true); +} + +static inline void set_iwait(struct dept_iwait *iw, struct dept_wait *w) +{ + /* + * ->wait will never be updated once getting set until the class + * gets removed. + */ + if (iw->wait) + DEPT_WARN_ON(1); + else + WRITE_ONCE(iw->wait, get_wait(w)); + + iw->touched = true; +} + +static inline void touch_iwait(struct dept_iwait *iw) +{ + iw->touched = true; +} + +static inline void untouch_iwait(struct dept_iwait *iw) +{ + iw->touched = false; +} + +static inline struct dept_stack *get_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + return s ? get_stack(s) : NULL; +} + +static inline void prepare_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + /* + * The dept_stack is already ready. + */ + if (s && !stack_consumed(s)) { + s->nr = 0; + return; + } + + if (s) + put_stack(s); + + s = dept_task()->stack = new_stack(); + if (!s) + return; + + get_stack(s); + del_stack(s); +} + +static void save_current_stack(int skip) +{ + struct dept_stack *s = dept_task()->stack; + + if (!s) + return; + if (valid_stack(s)) + return; + + s->nr = stack_trace_save(s->raw, DEPT_MAX_STACK_ENTRY, skip); +} + +static void finish_current_stack(void) +{ + struct dept_stack *s = dept_task()->stack; + + if (stack_consumed(s)) + save_current_stack(2); +} + +/* + * FIXME: For now, disable LOCKDEP while DEPT is working. 
+ * + * Both LOCKDEP and DEPT report it on a deadlock detection using + * printk taking the risk of another deadlock that might be caused by + * locks of console or printk between inside and outside of them. + * + * For DEPT, it's no problem since multiple reports are allowed. But it + * would be a bad idea for LOCKDEP since it will stop even on a singe + * report. So we need to prevent LOCKDEP from its reporting the risk + * DEPT would take when reporting something. + */ +#include + +void dept_off(void) +{ + dept_task()->recursive++; + lockdep_off(); +} + +void dept_on(void) +{ + dept_task()->recursive--; + lockdep_on(); +} + +static inline unsigned long dept_enter(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + dept_off(); + prepare_current_stack(); + return flags; +} + +static inline void dept_exit(unsigned long flags) +{ + finish_current_stack(); + dept_on(); + arch_local_irq_restore(flags); +} + +static inline unsigned long dept_enter_recursive(void) +{ + unsigned long flags; + + flags = arch_local_irq_save(); + return flags; +} + +static inline void dept_exit_recursive(unsigned long flags) +{ + arch_local_irq_restore(flags); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static struct dept_dep *__add_dep(struct dept_ecxt *e, + struct dept_wait *w) +{ + struct dept_dep *d; + + if (DEPT_WARN_ON(!valid_class(e->class))) + return NULL; + + if (DEPT_WARN_ON(!valid_class(w->class))) + return NULL; + + if (lookup_dep(e->class, w->class)) + return NULL; + + d = new_dep(); + if (unlikely(!d)) + return NULL; + + d->ecxt = get_ecxt(e); + d->wait = get_wait(w); + + /* + * Add the dependency into hash and graph. + */ + hash_add_dep(d); + list_add(&d->dep_node, &dep_fc(d)->dep_head); + list_add(&d->dep_rev_node, &dep_tc(d)->dep_rev_head); + return d; +} + +static enum bfs_ret cb_check_dl(struct dept_dep *d, + void *in, void **out) +{ + struct dept_dep *new = (struct dept_dep *)in; + + /* + * initial condition for this BFS search + */ + if (!d) { + dep_tc(new)->bfs_parent = dep_fc(new); + + if (dep_tc(new) != dep_fc(new)) + return BFS_CONTINUE; + + /* + * AA circle does not make additional deadlock. We don't + * have to continue this BFS search. + */ + print_circle(dep_tc(new)); + return BFS_DONE; + } + + /* + * Allow multiple reports. + */ + if (dep_tc(d) == dep_fc(new)) + print_circle(dep_tc(new)); + + return BFS_CONTINUE; +} + +/* + * This function is actually in charge of reporting. + */ +static inline void check_dl_bfs(struct dept_dep *d) +{ + bfs(dep_tc(d), cb_check_dl, (void *)d, NULL); +} + +static enum bfs_ret cb_find_iw(struct dept_dep *d, void *in, void **out) +{ + int irq = *(int *)in; + struct dept_class *fc; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE_REV; + + fc = dep_fc(d); + iw = iwait(fc, irq); + + /* + * If any parent's ->wait was set, then the children would've + * been touched. + */ + if (!iw->touched) + return BFS_SKIP; + + if (!iw->wait) + return BFS_CONTINUE_REV; + + *out = iw; + return BFS_DONE; +} + +static struct dept_iwait *find_iw_bfs(struct dept_class *c, int irq) +{ + struct dept_iwait *iw = iwait(c, irq); + struct dept_iwait *found = NULL; + + if (iw->wait) + return iw; + + /* + * '->touched == false' guarantees there's no parent that has + * been set ->wait. 
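+ * In that case the backward BFS below can be skipped entirely.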
+ */ + if (!iw->touched) + return NULL; + + bfs(c, cb_find_iw, (void *)&irq, (void **)&found); + + if (found) + return found; + + untouch_iwait(iw); + return NULL; +} + +static enum bfs_ret cb_touch_iw_find_ie(struct dept_dep *d, void *in, + void **out) +{ + int irq = *(int *)in; + struct dept_class *tc; + struct dept_iecxt *ie; + struct dept_iwait *iw; + + if (DEPT_WARN_ON(!out)) + return BFS_DONE; + + /* + * initial condition for this BFS search + */ + if (!d) + return BFS_CONTINUE; + + tc = dep_tc(d); + ie = iecxt(tc, irq); + iw = iwait(tc, irq); + + touch_iwait(iw); + + if (!ie->ecxt) + return BFS_CONTINUE; + + if (!*out) + *out = ie; + + return BFS_CONTINUE; +} + +static struct dept_iecxt *touch_iw_find_ie_bfs(struct dept_class *c, + int irq) +{ + struct dept_iecxt *ie = iecxt(c, irq); + struct dept_iwait *iw = iwait(c, irq); + struct dept_iecxt *found = ie->ecxt ? ie : NULL; + + touch_iwait(iw); + bfs(c, cb_touch_iw_find_ie, (void *)&irq, (void **)&found); + return found; +} + +/* + * Should be called with dept_lock held. + */ +static void __add_idep(struct dept_iecxt *ie, struct dept_iwait *iw) +{ + struct dept_dep *new; + + /* + * There's nothing to do. + */ + if (!ie || !iw || !ie->ecxt || !iw->wait) + return; + + new = __add_dep(ie->ecxt, iw->wait); + + /* + * Deadlock detected. Let check_dl_bfs() report it. + */ + if (new) { + check_dl_bfs(new); + stale_iecxt(ie); + stale_iwait(iw); + } + + /* + * If !new, it would be the case of lack of object resource. + * Just let it go and get checked by other chances. Retrying is + * meaningless in that case. + */ +} + +static void set_check_iecxt(struct dept_class *c, int irq, + struct dept_ecxt *e) +{ + struct dept_iecxt *ie = iecxt(c, irq); + + set_iecxt(ie, e); + __add_idep(ie, find_iw_bfs(c, irq)); +} + +static void set_check_iwait(struct dept_class *c, int irq, + struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + set_iwait(iw, w); + __add_idep(touch_iw_find_ie_bfs(c, irq), iw); +} + +static void add_iecxt(struct dept_class *c, int irq, struct dept_ecxt *e, + bool stack) +{ + /* + * This access is safe since we ensure e->class has set locally. + */ + struct dept_task *dt = dept_task(); + struct dept_iecxt *ie = iecxt(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(ie->staled))) + return; + + /* + * Skip add_iecxt() if ie->ecxt has ever been set at least once. + * Which means it has a valid ->ecxt or been staled. + */ + if (READ_ONCE(ie->ecxt)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(ie->staled)) + goto unlock; + if (ie->ecxt) + goto unlock; + + e->enirqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * enirq_{ip,stack}[irq] have ever set. + */ + DEPT_WARN_ON(e->enirq_ip[irq]); + DEPT_WARN_ON(e->enirq_stack[irq]); + + e->enirq_ip[irq] = dt->enirq_ip[irq]; + e->enirq_stack[irq] = stack ? get_current_stack() : NULL; + + set_check_iecxt(c, irq, e); +unlock: + dept_unlock(); +} + +static void add_iwait(struct dept_class *c, int irq, struct dept_wait *w) +{ + struct dept_iwait *iw = iwait(c, irq); + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (unlikely(READ_ONCE(iw->staled))) + return; + + /* + * Skip add_iwait() if iw->wait has ever been set at least once. + * Which means it has a valid ->wait or been staled. 
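+ * The lockless check here is repeated under dept_lock() below.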
+ */ + if (READ_ONCE(iw->wait)) + return; + + if (unlikely(!dept_lock())) + return; + + if (unlikely(iw->staled)) + goto unlock; + if (iw->wait) + goto unlock; + + w->irqf |= (1UL << irq); + + /* + * Should be NULL since it's the first time that these + * irq_{ip,stack}[irq] have ever set. + */ + DEPT_WARN_ON(w->irq_ip[irq]); + DEPT_WARN_ON(w->irq_stack[irq]); + + w->irq_ip[irq] = w->wait_ip; + w->irq_stack[irq] = get_current_stack(); + + set_check_iwait(c, irq, w); +unlock: + dept_unlock(); +} + +static inline struct dept_wait_hist *hist(int pos) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist + (pos % DEPT_MAX_WAIT_HIST); +} + +static inline int hist_pos_next(void) +{ + struct dept_task *dt = dept_task(); + + return dt->wait_hist_pos % DEPT_MAX_WAIT_HIST; +} + +static inline void hist_advance(void) +{ + struct dept_task *dt = dept_task(); + + dt->wait_hist_pos++; + dt->wait_hist_pos %= DEPT_MAX_WAIT_HIST; +} + +static inline struct dept_wait_hist *new_hist(void) +{ + struct dept_wait_hist *wh = hist(hist_pos_next()); + + hist_advance(); + return wh; +} + +static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id) +{ + struct dept_wait_hist *wh = new_hist(); + + if (likely(wh->wait)) + put_wait(wh->wait); + + wh->wait = get_wait(w); + wh->wgen = wg; + wh->ctxt_id = ctxt_id; +} + +/* + * Should be called after setting up e's iecxt and w's iwait. + */ +static void add_dep(struct dept_ecxt *e, struct dept_wait *w) +{ + struct dept_class *fc = e->class; + struct dept_class *tc = w->class; + struct dept_dep *d; + int i; + + if (lookup_dep(fc, tc)) + return; + + if (unlikely(!dept_lock())) + return; + + /* + * __add_dep() will lookup_dep() again with lock held. + */ + d = __add_dep(e, w); + if (d) { + check_dl_bfs(d); + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_iwait *fiw = iwait(fc, i); + struct dept_iecxt *found_ie; + struct dept_iwait *found_iw; + + /* + * '->touched == false' guarantees there's no + * parent that has been set ->wait. + */ + if (!fiw->touched) + continue; + + /* + * find_iw_bfs() will untouch the iwait if + * not found. + */ + found_iw = find_iw_bfs(fc, i); + + if (!found_iw) + continue; + + found_ie = touch_iw_find_ie_bfs(tc, i); + __add_idep(found_ie, found_iw); + } + } + dept_unlock(); +} + +static atomic_t wgen = ATOMIC_INIT(1); + +static void add_wait(struct dept_class *c, unsigned long ip, + const char *w_fn, int sub_l, bool sched_sleep) +{ + struct dept_task *dt = dept_task(); + struct dept_wait *w; + unsigned int wg = 0U; + int irq; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + w = new_wait(); + if (unlikely(!w)) + return; + + WRITE_ONCE(w->class, get_class(c)); + w->wait_ip = ip; + w->wait_fn = w_fn; + w->wait_stack = get_current_stack(); + w->sched_sleep = sched_sleep; + + irq = cur_irq(); + if (irq < DEPT_IRQS_NR) + add_iwait(c, irq, w); + + /* + * Avoid adding dependency between user aware nested ecxt and + * wait. + */ + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + + /* + * the case of invalid key'ed one + */ + if (!eh->ecxt) + continue; + + if (eh->ecxt->class != c || eh->sub_l == sub_l) + add_dep(eh->ecxt, w); + } + + if (!wait_consumed(w) && !rich_stack) { + if (w->wait_stack) + put_stack(w->wait_stack); + w->wait_stack = NULL; + } + + /* + * Avoid zero wgen. 
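+ * Dept reserves a wgen of zero to mean 'invalid', so take one more
+ * increment if the counter wraps around to zero.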
+ */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + add_hist(w, wg, cur_ctxt_id()); + + del_wait(w); +} + +static bool add_ecxt(struct dept_map *m, struct dept_class *c, + unsigned long ip, const char *c_fn, + const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_ecxt *e; + unsigned long irqf; + int irq; + + if (DEPT_WARN_ON(!valid_class(c))) + return false; + + if (DEPT_WARN_ON_ONCE(dt->ecxt_held_pos >= DEPT_MAX_ECXT_HELD)) + return false; + + if (m->nocheck) { + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = NULL; + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + return true; + } + + e = new_ecxt(); + if (unlikely(!e)) + return false; + + e->class = get_class(c); + e->ecxt_ip = ip; + e->ecxt_stack = ip && rich_stack ? get_current_stack() : NULL; + e->event_fn = e_fn; + e->ecxt_fn = c_fn; + + eh = dt->ecxt_held + (dt->ecxt_held_pos++); + eh->ecxt = get_ecxt(e); + eh->map = m; + eh->class = get_class(c); + eh->wgen = atomic_read(&wgen); + eh->sub_l = sub_l; + + irqf = cur_enirqf(); + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) + add_iecxt(c, irq, e, false); + + del_ecxt(e); + return true; +} + +static int find_ecxt_pos(struct dept_map *m, struct dept_class *c, + bool newfirst) +{ + struct dept_task *dt = dept_task(); + int i; + + if (newfirst) { + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } else { + for (i = 0; i < dt->ecxt_held_pos; i++) { + struct dept_ecxt_held *eh; + + eh = dt->ecxt_held + i; + if (eh->map == m && eh->class == c) + return i; + } + } + return -1; +} + +static bool pop_ecxt(struct dept_map *m, struct dept_class *c) +{ + struct dept_task *dt = dept_task(); + int pos; + int i; + + pos = find_ecxt_pos(m, c, true); + if (pos == -1) + return false; + + if (dt->ecxt_held[pos].class) + put_class(dt->ecxt_held[pos].class); + + if (dt->ecxt_held[pos].ecxt) + put_ecxt(dt->ecxt_held[pos].ecxt); + + dt->ecxt_held_pos--; + + for (i = pos; i < dt->ecxt_held_pos; i++) + dt->ecxt_held[i] = dt->ecxt_held[i + 1]; + return true; +} + +static inline bool good_hist(struct dept_wait_hist *wh, unsigned int wg) +{ + return wh->wait != NULL && before(wg, wh->wgen); +} + +/* + * Binary-search the ring buffer for the earliest valid wait. + */ +static int find_hist_pos(unsigned int wg) +{ + int oldest; + int l; + int r; + int pos; + + oldest = hist_pos_next(); + if (unlikely(good_hist(hist(oldest), wg))) { + DEPT_INFO_ONCE("Need to expand the ring buffer.\n"); + return oldest; + } + + l = oldest + 1; + r = oldest + DEPT_MAX_WAIT_HIST - 1; + for (pos = (l + r) / 2; l <= r; pos = (l + r) / 2) { + struct dept_wait_hist *p = hist(pos - 1); + struct dept_wait_hist *wh = hist(pos); + + if (!good_hist(p, wg) && good_hist(wh, wg)) + return pos % DEPT_MAX_WAIT_HIST; + if (good_hist(wh, wg)) + r = pos - 1; + else + l = pos + 1; + } + return -1; +} + +static void do_event(struct dept_map *m, struct dept_class *c, + unsigned int wg, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + struct dept_wait_hist *wh; + struct dept_ecxt_held *eh; + unsigned int ctxt_id; + int end; + int pos; + int i; + + if (DEPT_WARN_ON(!valid_class(c))) + return; + + if (m->nocheck) + return; + + /* + * The event was triggered before wait. 
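+ * A zero wgen also covers the case where the map has been disabled
+ * by writing 0 to ->wgen.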
+ */ + if (!wg) + return; + + pos = find_ecxt_pos(m, c, false); + if (pos == -1) + return; + + eh = dt->ecxt_held + pos; + + if (DEPT_WARN_ON(!eh->ecxt)) + return; + + eh->ecxt->event_ip = ip; + eh->ecxt->event_stack = get_current_stack(); + + /* + * The ecxt already has done what it needs. + */ + if (!before(wg, eh->wgen)) + return; + + pos = find_hist_pos(wg); + if (pos == -1) + return; + + ctxt_id = cur_ctxt_id(); + end = hist_pos_next(); + end = end > pos ? end : end + DEPT_MAX_WAIT_HIST; + for (wh = hist(pos); pos < end; wh = hist(++pos)) { + if (after(wh->wgen, eh->wgen)) + break; + + if (dt->in_sched && wh->wait->sched_sleep) + continue; + + if (wh->ctxt_id == ctxt_id) + add_dep(eh->ecxt, wh->wait); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + struct dept_ecxt *e; + + if (before(dt->wgen_enirq[i], wg)) + continue; + + e = eh->ecxt; + add_iecxt(e->class, i, e, false); + } +} + +static void del_dep_rcu(struct rcu_head *rh) +{ + struct dept_dep *d = container_of(rh, struct dept_dep, rh); + + preempt_disable(); + del_dep(d); + preempt_enable(); +} + +/* + * NOTE: Must be called with dept_lock held. + */ +static void disconnect_class(struct dept_class *c) +{ + struct dept_dep *d, *n; + int i; + + list_for_each_entry_safe(d, n, &c->dep_head, dep_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + list_for_each_entry_safe(d, n, &c->dep_rev_head, dep_rev_node) { + list_del_rcu(&d->dep_node); + list_del_rcu(&d->dep_rev_node); + hash_del_dep(d); + call_rcu(&d->rh, del_dep_rcu); + } + + for (i = 0; i < DEPT_IRQS_NR; i++) { + stale_iecxt(iecxt(c, i)); + stale_iwait(iwait(c, i)); + } +} + +/* + * Context control + * ===================================================================== + * Whether a wait is in {hard,soft}-IRQ context or whether + * {hard,soft}-IRQ has been enabled on the way to an event is very + * important to check dependency. All those things should be tracked. + */ + +static inline unsigned long cur_enirqf(void) +{ + struct dept_task *dt = dept_task(); + int he = dt->hardirqs_enabled; + int se = dt->softirqs_enabled; + + if (he) + return DEPT_HIRQF | (se ? DEPT_SIRQF : 0UL); + return 0UL; +} + +static inline int cur_irq(void) +{ + if (lockdep_softirq_context(current)) + return DEPT_SIRQ; + if (lockdep_hardirq_context()) + return DEPT_HIRQ; + return DEPT_IRQS_NR; +} + +static inline unsigned int cur_ctxt_id(void) +{ + struct dept_task *dt = dept_task(); + int irq = cur_irq(); + + /* + * Normal process context + */ + if (irq == DEPT_IRQS_NR) + return 0U; + + return dt->irq_id[irq] | (1UL << irq); +} + +static void enirq_transition(int irq) +{ + struct dept_task *dt = dept_task(); + int i; + + /* + * READ wgen >= wgen of an event with IRQ enabled has been + * observed on the way to the event means, the IRQ can cut in + * within the ecxt. Used for cross-event detection. 
+ * + * wait context event context(ecxt) + * ------------ ------------------- + * wait event + * WRITE wgen + * observe IRQ enabled + * READ wgen + * keep the wgen locally + * + * on the event + * check the local wgen + */ + dt->wgen_enirq[irq] = atomic_read(&wgen); + + for (i = dt->ecxt_held_pos - 1; i >= 0; i--) { + struct dept_ecxt_held *eh; + struct dept_ecxt *e; + + eh = dt->ecxt_held + i; + e = eh->ecxt; + if (e) + add_iecxt(e->class, irq, e, true); + } +} + +static void enirq_update(unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long irqf; + unsigned long prev; + int irq; + + prev = dt->eff_enirqf; + irqf = cur_enirqf(); + dt->eff_enirqf = irqf; + + /* + * Do enirq_transition() only on an OFF -> ON transition. + */ + for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) { + if (prev & (1UL << irq)) + continue; + + dt->enirq_ip[irq] = ip; + enirq_transition(irq); + } +} + +/* + * Ensure it has been called on ON/OFF transition. + */ +static void dept_enirq_transition(unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + /* + * IRQ ON/OFF transition might happen while Dept is working. + * We cannot handle recursive entrance. Just ingnore it. + * Only transitions outside of Dept will be considered. + */ + if (dt->recursive) + return; + + flags = dept_enter(); + + enirq_update(ip); + + dept_exit(flags); +} + +void dept_softirqs_on_ip(unsigned long ip) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->softirqs_enabled = true; + dept_enirq_transition(ip); +} + +void dept_hardirqs_on(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = true; + dept_enirq_transition(_RET_IP_); +} +EXPORT_SYMBOL_GPL(dept_hardirqs_on); + +void dept_hardirqs_on_ip(unsigned long ip) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = true; + dept_enirq_transition(ip); +} +EXPORT_SYMBOL_GPL(dept_hardirqs_on_ip); + +void dept_softirqs_off_ip(unsigned long ip) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->softirqs_enabled = false; + dept_enirq_transition(ip); +} + +void dept_hardirqs_off(void) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = false; + dept_enirq_transition(_RET_IP_); +} +EXPORT_SYMBOL_GPL(dept_hardirqs_off); + +void dept_hardirqs_off_ip(unsigned long ip) +{ + /* + * Assumes that it's called with IRQ disabled so that accessing + * current's fields is not racy. + */ + dept_task()->hardirqs_enabled = false; + dept_enirq_transition(ip); +} +EXPORT_SYMBOL_GPL(dept_hardirqs_off_ip); + +/* + * Ensure it's the outmost softirq context. + */ +void dept_softirq_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR; +} + +/* + * Ensure it's the outmost hardirq context. 
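+ * Advancing irq_id[DEPT_HIRQ] gives each outermost hardirq section a
+ * distinct context id (see cur_ctxt_id()).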
+ */ +void dept_hardirq_enter(void) +{ + struct dept_task *dt = dept_task(); + + dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR; +} + +void dept_sched_enter(void) +{ + dept_task()->in_sched = true; +} + +void dept_sched_exit(void) +{ + dept_task()->in_sched = false; +} + +/* + * Exposed APIs + * ===================================================================== + */ + +static inline void clean_classes_cache(struct dept_key *k) +{ + int i; + + for (i = 0; i < DEPT_MAX_SUBCLASSES_CACHE; i++) { + if (!READ_ONCE(k->classes[i])) + continue; + + WRITE_ONCE(k->classes[i], NULL); + } +} + +void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u < 0)) { + m->nocheck = true; + return; + } + + if (DEPT_WARN_ON(sub_u >= DEPT_MAX_SUBCLASSES_USR)) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + clean_classes_cache(&m->map_key); + + m->keys = k; + m->sub_u = sub_u; + m->name = n; + m->wgen = 0U; + m->nocheck = !valid_key(k); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_init); + +void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, + const char *n) +{ + unsigned long flags; + + if (unlikely(!dept_working())) { + m->nocheck = true; + return; + } + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + if (k) { + clean_classes_cache(&m->map_key); + m->keys = k; + m->nocheck = !valid_key(k); + } + + if (sub_u >= 0 && sub_u < DEPT_MAX_SUBCLASSES_USR) + m->sub_u = sub_u; + + if (n) + m->name = n; + + m->wgen = 0U; + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_map_reinit); + +void dept_map_copy(struct dept_map *to, struct dept_map *from) +{ + if (unlikely(!dept_working())) { + to->nocheck = true; + return; + } + + *to = *from; + + /* + * XXX: 'to' might be in a stack or something. Using the address + * in a stack segment as a key is meaningless. Just ignore the + * case for now. + */ + if (!to->keys) { + to->nocheck = true; + return; + } + + /* + * Since the class cache can be modified concurrently we could + * observe half pointers (64bit arch using 32bit copy insns). + * Therefore clear the caches and take the performance hit. + * + * XXX: Doesn't work well with lockdep_set_class_and_subclass() + * since that relies on cache abuse. + */ + clean_classes_cache(&to->map_key); +} + +static LIST_HEAD(classes); + +static inline bool within(const void *addr, void *start, unsigned long size) +{ + return addr >= start && addr < start + size; +} + +void dept_free_range(void *start, unsigned int sz) +{ + struct dept_task *dt = dept_task(); + struct dept_class *c, *n; + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Failed to successfully free Dept objects.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_free_range() should not fail. + * + * FIXME: Should be fixed if dept_free_range() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + list_for_each_entry_safe(c, n, &classes, all_node) { + if (!within((void *)c->key, start, sz) && + !within(c->name, start, sz)) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). 
+ */ + del_class(c); + } + dept_unlock(); + dept_exit(flags); + + /* + * Wait until even lockless hash_lookup_class() for the class + * returns NULL. + */ + might_sleep(); + synchronize_rcu(); +} + +static inline int sub_id(struct dept_map *m, int e) +{ + return (m ? m->sub_u : 0) + e * DEPT_MAX_SUBCLASSES_USR; +} + +static struct dept_class *check_new_class(struct dept_key *local, + struct dept_key *k, int sub_id, + const char *n, bool sched_map) +{ + struct dept_class *c = NULL; + + if (DEPT_WARN_ON(sub_id >= DEPT_MAX_SUBCLASSES)) + return NULL; + + if (DEPT_WARN_ON(!k)) + return NULL; + + /* + * XXX: Assume that users prevent the map from using if any of + * the cached keys has been invalidated. If not, the cache, + * local->classes should not be used because it would be racy + * with class deletion. + */ + if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE) + c = READ_ONCE(local->classes[sub_id]); + + if (c) + return c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (c) + goto caching; + + if (unlikely(!dept_lock())) + return NULL; + + c = lookup_class((unsigned long)k->base + sub_id); + if (unlikely(c)) + goto unlock; + + c = new_class(); + if (unlikely(!c)) + goto unlock; + + c->name = n; + c->sched_map = sched_map; + c->sub_id = sub_id; + c->key = (unsigned long)(k->base + sub_id); + hash_add_class(c); + list_add(&c->all_node, &classes); +unlock: + dept_unlock(); +caching: + if (local && sub_id < DEPT_MAX_SUBCLASSES_CACHE) + WRITE_ONCE(local->classes[sub_id], c); + + return c; +} + +/* + * Called between dept_enter() and dept_exit(). + */ +static void __dept_wait(struct dept_map *m, unsigned long w_f, + unsigned long ip, const char *w_fn, int sub_l, + bool sched_sleep, bool sched_map) +{ + int e; + + /* + * Be as conservative as possible. In case of mulitple waits for + * a single dept_map, we are going to keep only the last wait's + * wgen for simplicity - keeping all wgens seems overengineering. + * + * Of course, it might cause missing some dependencies that + * would rarely, probabily never, happen but it helps avoid + * false positive report. + */ + for_each_set_bit(e, &w_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, sched_map); + if (!c) + continue; + + add_wait(c, ip, w_fn, sub_l, sched_sleep); + } +} + +/* + * Called between dept_enter() and dept_exit(). + */ +static void __dept_event(struct dept_map *m, unsigned long e_f, + unsigned long ip, const char *e_fn, + bool sched_map) +{ + struct dept_class *c; + struct dept_key *k; + int e; + + e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (DEPT_WARN_ON(e >= DEPT_MAX_SUBCLASSES_EVT)) + goto exit; + + /* + * An event is an event. If the caller passed more than single + * event, then warn it and handle the event corresponding to + * the first bit anyway. + */ + DEPT_WARN_ON(1UL << e != e_f); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map); + + if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) { + do_event(m, c, READ_ONCE(m->wgen), ip); + pop_ecxt(m, c); + } +exit: + /* + * Keep the map diabled until the next sleep. 
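+ * A zero wgen makes do_event() bail out early until
+ * dept_request_event() or dept_request_event_wait_commit() installs
+ * a fresh wgen.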
+ */ + WRITE_ONCE(m->wgen, 0U); +} + +void dept_wait(struct dept_map *m, unsigned long w_f, + unsigned long ip, const char *w_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + if (m->nocheck) + return; + + flags = dept_enter(); + + __dept_wait(m, w_f, ip, w_fn, sub_l, false, false); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_wait); + +void dept_stage_wait(struct dept_map *m, struct dept_key *k, + unsigned long ip, const char *w_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (m && m->nocheck) + return; + + /* + * Either m or k should be passed. Which means Dept relies on + * either its own map or the caller's position in the code when + * determining its class. + */ + if (DEPT_WARN_ON(!m && !k)) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + arch_spin_lock(&stage_spin); + + /* + * Ensure the outmost dept_stage_wait() works. + */ + if (dt->stage_m.keys) + goto unlock; + + if (m) { + dt->stage_m = *m; + + /* + * Ensure dt->stage_m.keys != NULL and it works with the + * map's map_key, not stage_m's one when ->keys == NULL. + */ + if (!m->keys) + dt->stage_m.keys = &m->map_key; + } else { + dt->stage_m.name = w_fn; + dt->stage_sched_map = true; + } + + /* + * dept_map_reinit() includes WRITE_ONCE(->wgen, 0U) that + * effectively disables the map just in case real sleep won't + * happen. dept_request_event_wait_commit() will enable it. + */ + dept_map_reinit(&dt->stage_m, k, -1, NULL); + + dt->stage_w_fn = w_fn; + dt->stage_ip = ip; +unlock: + arch_spin_unlock(&stage_spin); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_stage_wait); + +void dept_clean_stage(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + arch_spin_lock(&stage_spin); + memset(&dt->stage_m, 0x0, sizeof(struct dept_map)); + dt->stage_sched_map = false; + dt->stage_w_fn = NULL; + dt->stage_ip = 0UL; + arch_spin_unlock(&stage_spin); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_clean_stage); + +/* + * Always called from __schedule(). + */ +void dept_request_event_wait_commit(void) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + unsigned int wg; + unsigned long ip; + const char *w_fn; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + /* + * It's impossible that __schedule() is called while Dept is + * working that already disabled IRQ at the entrance. + */ + if (DEPT_WARN_ON(dt->recursive)) + return; + + flags = dept_enter(); + + /* + * Checks if current has staged a wait. + */ + if (!dt->stage_m.keys) + goto exit; + + w_fn = dt->stage_w_fn; + ip = dt->stage_ip; + sched_map = dt->stage_sched_map; + + /* + * Avoid zero wgen. + */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(dt->stage_m.wgen, wg); + + __dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map); +exit: + dept_exit(flags); +} + +/* + * Always called from try_to_wake_up(). 
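+ * The waker turns the wait staged by t via dept_stage_wait() into an
+ * event on its behalf.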
+ */ +void dept_stage_event(struct task_struct *t, unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + struct dept_map m; + bool sched_map; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) + return; + + flags = dept_enter(); + + arch_spin_lock(&stage_spin); + m = t->dept_task.stage_m; + sched_map = t->dept_task.stage_sched_map; + arch_spin_unlock(&stage_spin); + + /* + * ->stage_m.keys should not be NULL if it's in use. Should + * make sure that it's not NULL when staging a valid map. + */ + if (!m.keys) + goto exit; + + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map); +exit: + dept_exit(flags); +} + +/* + * Modifies the latest ecxt corresponding to m and e_f. + */ +void dept_map_ecxt_modify(struct dept_map *m, unsigned long e_f, + struct dept_key *new_k, unsigned long new_e_f, + unsigned long new_ip, const char *new_c_fn, + const char *new_e_fn, int new_sub_l) +{ + struct dept_task *dt = dept_task(); + struct dept_ecxt_held *eh; + struct dept_class *c; + struct dept_key *k; + unsigned long flags; + int pos = -1; + int new_e; + int e; + + if (unlikely(!dept_working())) + return; + + /* + * XXX: Couldn't handle re-enterance cases. Ingore it for now. + */ + if (dt->recursive) + return; + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When it found an ecxt for any event in e_f, done. + */ + pos = find_ecxt_pos(m, c, true); + if (pos != -1) + break; + } + + if (unlikely(pos == -1)) + goto exit; + + eh = dt->ecxt_held + pos; + new_sub_l = new_sub_l >= 0 ? new_sub_l : eh->sub_l; + + new_e = find_first_bit(&new_e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (new_e < DEPT_MAX_SUBCLASSES_EVT) + /* + * Let it work with the first bit anyway. + */ + DEPT_WARN_ON(1UL << new_e != new_e_f); + else + new_e = e; + + pop_ecxt(m, c); + + /* + * Apply the key to the map. + */ + if (new_k) + dept_map_reinit(m, new_k, -1, NULL); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, new_e), m->name, false); + + if (c && add_ecxt(m, c, new_ip, new_c_fn, new_e_fn, new_sub_l)) + goto exit; + + /* + * Successfully pop_ecxt()ed but failed to add_ecxt(). + */ + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_map_ecxt_modify); + +void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, + const char *c_fn, const char *e_fn, int sub_l) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + struct dept_class *c; + struct dept_key *k; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt++; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + e = find_first_bit(&e_f, DEPT_MAX_SUBCLASSES_EVT); + + if (e >= DEPT_MAX_SUBCLASSES_EVT) + goto missing_ecxt; + + /* + * An event is an event. If the caller passed more than single + * event, then warn it and handle the event corresponding to + * the first bit anyway. 
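+ * The warning below fires whenever e_f has more than one bit set.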
+ */ + DEPT_WARN_ON(1UL << e != e_f); + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, false); + + if (c && add_ecxt(m, c, ip, c_fn, e_fn, sub_l)) + goto exit; +missing_ecxt: + dt->missing_ecxt++; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_enter); + +bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + bool ret = false; + int e; + + if (unlikely(!dept_working())) + return false; + + if (dt->recursive) + return false; + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + if (find_ecxt_pos(m, c, true) != -1) { + ret = true; + break; + } + } + + dept_exit(flags); + + return ret; +} +EXPORT_SYMBOL_GPL(dept_ecxt_holding); + +void dept_request_event(struct dept_map *m) +{ + unsigned long flags; + unsigned int wg; + + if (unlikely(!dept_working())) + return; + + if (m->nocheck) + return; + + /* + * Allow recursive entrance. + */ + flags = dept_enter_recursive(); + + /* + * Avoid zero wgen. + */ + wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); + WRITE_ONCE(m->wgen, wg); + + dept_exit_recursive(flags); +} +EXPORT_SYMBOL_GPL(dept_request_event); + +void dept_event(struct dept_map *m, unsigned long e_f, + unsigned long ip, const char *e_fn) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + /* + * Dept won't work with this even though an event + * context has been asked. Don't make it confused at + * handling the event. Disable it until the next. + */ + WRITE_ONCE(m->wgen, 0U); + return; + } + + if (m->nocheck) + return; + + flags = dept_enter(); + + __dept_event(m, e_f, ip, e_fn, false); + + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_event); + +void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, + unsigned long ip) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int e; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + dt->missing_ecxt--; + return; + } + + /* + * Should go ahead no matter whether ->nocheck == true or not + * because ->nocheck value can be changed within the ecxt area + * delimitated by dept_ecxt_enter() and dept_ecxt_exit(). + */ + + flags = dept_enter(); + + for_each_set_bit(e, &e_f, DEPT_MAX_SUBCLASSES_EVT) { + struct dept_class *c; + struct dept_key *k; + + k = m->keys ?: &m->map_key; + c = check_new_class(&m->map_key, k, + sub_id(m, e), m->name, false); + if (!c) + continue; + + /* + * When it found an ecxt for any event in e_f, done. 
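+ * pop_ecxt() returns true only when a matching ecxt was actually
+ * being held.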
+ */ + if (pop_ecxt(m, c)) + goto exit; + } + + dt->missing_ecxt--; +exit: + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_ecxt_exit); + +void dept_task_exit(struct task_struct *t) +{ + struct dept_task *dt = &t->dept_task; + int i; + + if (unlikely(!dept_working())) + return; + + raw_local_irq_disable(); + + if (dt->stack) + put_stack(dt->stack); + + for (i = 0; i < dt->ecxt_held_pos; i++) { + if (dt->ecxt_held[i].class) + put_class(dt->ecxt_held[i].class); + if (dt->ecxt_held[i].ecxt) + put_ecxt(dt->ecxt_held[i].ecxt); + } + + for (i = 0; i < DEPT_MAX_WAIT_HIST; i++) + if (dt->wait_hist[i].wait) + put_wait(dt->wait_hist[i].wait); + + dt->task_exit = true; + dept_off(); + + raw_local_irq_enable(); +} + +void dept_task_init(struct task_struct *t) +{ + memset(&t->dept_task, 0x0, sizeof(struct dept_task)); +} + +void dept_key_init(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive) { + DEPT_STOP("Key initialization fails.\n"); + return; + } + + flags = dept_enter(); + + clean_classes_cache(k); + + /* + * dept_key_init() should not fail. + * + * FIXME: Should be fixed if dept_key_init() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + DEPT_STOP("The class(%s/%d) has not been removed.\n", + c->name, sub_id); + break; + } + + dept_unlock(); + dept_exit(flags); +} +EXPORT_SYMBOL_GPL(dept_key_init); + +void dept_key_destroy(struct dept_key *k) +{ + struct dept_task *dt = dept_task(); + unsigned long flags; + int sub_id; + + if (unlikely(!dept_working())) + return; + + if (dt->recursive == 1 && dt->task_exit) { + /* + * Need to allow to go ahead in this case where + * ->recursive has been set to 1 by dept_off() in + * dept_task_exit() and ->task_exit has been set to + * true in dept_task_exit(). + */ + } else if (dt->recursive) { + DEPT_STOP("Key destroying fails.\n"); + return; + } + + flags = dept_enter(); + + /* + * dept_key_destroy() should not fail. + * + * FIXME: Should be fixed if dept_key_destroy() causes deadlock + * with dept_lock(). + */ + while (unlikely(!dept_lock())) + cpu_relax(); + + for (sub_id = 0; sub_id < DEPT_MAX_SUBCLASSES; sub_id++) { + struct dept_class *c; + + c = lookup_class((unsigned long)k->base + sub_id); + if (!c) + continue; + + hash_del_class(c); + disconnect_class(c); + list_del(&c->all_node); + invalidate_class(c); + + /* + * Actual deletion will happen on the rcu callback + * that has been added in disconnect_class(). + */ + del_class(c); + } + + dept_unlock(); + dept_exit(flags); + + /* + * Wait until even lockless hash_lookup_class() for the class + * returns NULL. + */ + might_sleep(); + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(dept_key_destroy); + +static void move_llist(struct llist_head *to, struct llist_head *from) +{ + struct llist_node *first = llist_del_all(from); + struct llist_node *last; + + if (!first) + return; + + for (last = first; last->next; last = last->next); + llist_add_batch(first, last, to); +} + +static void migrate_per_cpu_pool(void) +{ + const int boot_cpu = 0; + int i; + + /* + * The boot CPU has been using the temperal local pool so far. + * From now on that per_cpu areas have been ready, use the + * per_cpu local pool instead. 
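+ * Whatever is left in each boot_pool is moved into the boot CPU's
+ * per-cpu lpool.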
+ */ + DEPT_WARN_ON(smp_processor_id() != boot_cpu); + for (i = 0; i < OBJECT_NR; i++) { + struct llist_head *from; + struct llist_head *to; + + from = &pool[i].boot_pool; + to = per_cpu_ptr(pool[i].lpool, boot_cpu); + move_llist(to, from); + } +} + +#define B2KB(B) ((B) / 1024) + +/* + * Should be called after setup_per_cpu_areas() and before no non-boot + * CPUs have been on. + */ +void __init dept_init(void) +{ + size_t mem_total = 0; + + local_irq_disable(); + dept_per_cpu_ready = 1; + migrate_per_cpu_pool(); + local_irq_enable(); + +#define HASH(id, bits) BUILD_BUG_ON(1 << (bits) <= 0); + #include "dept_hash.h" +#undef HASH +#define OBJECT(id, nr) mem_total += sizeof(struct dept_##id) * nr; + #include "dept_object.h" +#undef OBJECT +#define HASH(id, bits) mem_total += sizeof(struct hlist_head) * (1 << (bits)); + #include "dept_hash.h" +#undef HASH + + pr_info("DEPendency Tracker: Copyright (c) 2020 LG Electronics, Inc., Byungchul Park\n"); + pr_info("... DEPT_MAX_STACK_ENTRY: %d\n", DEPT_MAX_STACK_ENTRY); + pr_info("... DEPT_MAX_WAIT_HIST : %d\n", DEPT_MAX_WAIT_HIST); + pr_info("... DEPT_MAX_ECXT_HELD : %d\n", DEPT_MAX_ECXT_HELD); + pr_info("... DEPT_MAX_SUBCLASSES : %d\n", DEPT_MAX_SUBCLASSES); +#define OBJECT(id, nr) \ + pr_info("... memory used by %s: %zu KB\n", \ + #id, B2KB(sizeof(struct dept_##id) * nr)); + #include "dept_object.h" +#undef OBJECT +#define HASH(id, bits) \ + pr_info("... hash list head used by %s: %zu KB\n", \ + #id, B2KB(sizeof(struct hlist_head) * (1 << (bits)))); + #include "dept_hash.h" +#undef HASH + pr_info("... total memory used by objects and hashs: %zu KB\n", B2KB(mem_total)); + pr_info("... per task memory footprint: %zu bytes\n", sizeof(struct dept_task)); +} diff --git a/kernel/dependency/dept_hash.h b/kernel/dependency/dept_hash.h new file mode 100644 index 000000000000..fd85aab1fdfb --- /dev/null +++ b/kernel/dependency/dept_hash.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * HASH(id, bits) + * + * id : Id for the object of struct dept_##id. + * bits: 1UL << bits is the hash table size. + */ + +HASH(dep, 12) +HASH(class, 12) diff --git a/kernel/dependency/dept_object.h b/kernel/dependency/dept_object.h new file mode 100644 index 000000000000..0b7eb16fe9fb --- /dev/null +++ b/kernel/dependency/dept_object.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * OBJECT(id, nr) + * + * id: Id for the object of struct dept_##id. + * nr: # of the object that should be kept in the pool. 
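+ * These counts also determine the memory footprint reported by
+ * dept_init().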
+ */ + +OBJECT(dep, 1024 * 8) +OBJECT(class, 1024 * 8) +OBJECT(stack, 1024 * 32) +OBJECT(ecxt, 1024 * 16) +OBJECT(wait, 1024 * 32) diff --git a/kernel/exit.c b/kernel/exit.c index 15dc2ec80c46..0f48752c8b4c 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -916,6 +916,7 @@ void __noreturn do_exit(long code) exit_tasks_rcu_finish(); lockdep_free_task(tsk); + dept_task_exit(tsk); do_task_dead(); } diff --git a/kernel/fork.c b/kernel/fork.c index 9f7fe3541897..1d33fc3868f1 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -97,6 +97,7 @@ #include #include #include +#include #include #include @@ -2219,6 +2220,7 @@ static __latent_entropy struct task_struct *copy_process( #ifdef CONFIG_LOCKDEP lockdep_init_task(p); #endif + dept_task_init(p); #ifdef CONFIG_DEBUG_MUTEXES p->blocked_on = NULL; /* not blocked yet */ diff --git a/kernel/module/main.c b/kernel/module/main.c index 48568a0f5651..2882ea2e1b5a 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1194,6 +1194,7 @@ static void free_module(struct module *mod) /* Free lock-classes; relies on the preceding sync_rcu(). */ lockdep_free_key_range(mod->data_layout.base, mod->data_layout.size); + dept_free_range(mod->data_layout.base, mod->data_layout.size); /* Finally, free the core (containing the module structure) */ module_memfree(mod->core_layout.base); @@ -2893,6 +2894,7 @@ static int load_module(struct load_info *info, const char __user *uargs, free_module: /* Free lock-classes; relies on the preceding sync_rcu() */ lockdep_free_key_range(mod->data_layout.base, mod->data_layout.size); + dept_free_range(mod->data_layout.base, mod->data_layout.size); module_deallocate(mod, info); free_copy: diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 25b582b6ee5f..0dc066caed9a 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -64,6 +64,7 @@ #include #include #include +#include #ifdef CONFIG_PREEMPT_DYNAMIC # ifdef CONFIG_GENERIC_ENTRY @@ -4070,6 +4071,7 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) int cpu, success = 0; preempt_disable(); + dept_stage_event(p, _RET_IP_); if (p == current) { /* * We're waking current, this means 'p->on_rq' and 'task_cpu(p) @@ -6446,6 +6448,12 @@ static void __sched notrace __schedule(unsigned int sched_mode) rq = cpu_rq(cpu); prev = rq->curr; + prev_state = READ_ONCE(prev->__state); + if (sched_mode != SM_PREEMPT && prev_state & TASK_NORMAL) + dept_request_event_wait_commit(); + + dept_sched_enter(); + schedule_debug(prev, !!sched_mode); if (sched_feat(HRTICK) || sched_feat(HRTICK_DL)) @@ -6560,6 +6568,7 @@ static void __sched notrace __schedule(unsigned int sched_mode) __balance_callbacks(rq); raw_spin_rq_unlock_irq(rq); } + dept_sched_exit(); } void __noreturn do_task_dead(void) diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 881c3f84e88a..611fd01751a7 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1255,6 +1255,33 @@ config DEBUG_PREEMPT menu "Lock Debugging (spinlocks, mutexes, etc...)" +config DEPT + bool "Dependency tracking (EXPERIMENTAL)" + depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT + select DEBUG_SPINLOCK + select DEBUG_MUTEXES + select DEBUG_RT_MUTEXES if RT_MUTEXES + select DEBUG_RWSEMS + select DEBUG_WW_MUTEX_SLOWPATH + select DEBUG_LOCK_ALLOC + select TRACE_IRQFLAGS + select STACKTRACE + select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86 + select KALLSYMS + select KALLSYMS_ALL + select PROVE_LOCKING + default n + help + Check dependencies between wait and event and report it if + 
a deadlock possibility has been detected. Multiple reports are
+	  allowed if there is more than a single problem.
+
+	  This feature is EXPERIMENTAL and might produce false positive
+	  reports because dependencies that have never been tracked
+	  before start to be tracked. It's worth noting that multiple
+	  reporting is supported to mitigate the impact of the false
+	  positives.
+
 config LOCK_DEBUGGING_SUPPORT
 	bool
 	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT

diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 8d24279fad05..cd89138d62ba 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -1398,6 +1398,8 @@ static void reset_locks(void)
 	local_irq_disable();
 	lockdep_free_key_range(&ww_lockdep.acquire_key, 1);
 	lockdep_free_key_range(&ww_lockdep.mutex_key, 1);
+	dept_free_range(&ww_lockdep.acquire_key, 1);
+	dept_free_range(&ww_lockdep.mutex_key, 1);
 	I1(A); I1(B); I1(C); I1(D);
 	I1(X1); I1(X2); I1(Y1); I1(Y2); I1(Z1); I1(Z2);

From patchwork Mon Jun 26 11:56:38 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112911
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 03/25] dept: Add single event dependency tracker APIs
Date: Mon, 26 Jun 2023 20:56:38 +0900
Message-Id: <20230626115700.13873-4-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>
Wrapped the base APIs for easier annotation on wait and event. Start
with supporting waiters on each single event. More general support for
multiple events is future work. Do more when the need arises.

How to annotate (the simplest way):

1. Initialize a map for the interesting wait.

	/*
	 * Recommended to place along with the wait instance.
	 */
	struct dept_map my_wait;

	/*
	 * Recommended to place in the initialization code.
	 */
	sdt_map_init(&my_wait);

2. Place the following at the wait code.

	sdt_wait(&my_wait);

3. Place the following at the event code.

	sdt_event(&my_wait);

That's it!
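For example, annotating a wait on a hypothetical "boot done" condition
could look like the sketch below. The structure and the helper names
(boot_ctrl, wait_for_boot(), complete_boot()) are made up for
illustration only; the real APIs used are just the three above.

	struct boot_ctrl {
		...
		struct dept_map boot_done;
	};

	/* initialization code */
	sdt_map_init(&ctrl->boot_done);

	/* waiter side, annotated right at the wait code */
	sdt_wait(&ctrl->boot_done);
	wait_for_boot(ctrl);

	/* waker side, annotated right at the event code */
	sdt_event(&ctrl->boot_done);
	complete_boot(ctrl);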
Signed-off-by: Byungchul Park
---
 include/linux/dept_sdt.h | 62 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)
 create mode 100644 include/linux/dept_sdt.h

diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h
new file mode 100644
index 000000000000..12a793b90c7e
--- /dev/null
+++ b/include/linux/dept_sdt.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Single-event Dependency Tracker
+ *
+ * Started by Byungchul Park :
+ *
+ * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park
+ */
+
+#ifndef __LINUX_DEPT_SDT_H
+#define __LINUX_DEPT_SDT_H
+
+#include
+#include
+
+#ifdef CONFIG_DEPT
+#define sdt_map_init(m)						\
+	do {							\
+		static struct dept_key __key;			\
+		dept_map_init(m, &__key, 0, #m);		\
+	} while (0)
+
+#define sdt_map_init_key(m, k)		dept_map_init(m, k, 0, #m)
+
+#define sdt_wait(m)						\
+	do {							\
+		dept_request_event(m);				\
+		dept_wait(m, 1UL, _THIS_IP_, __func__, 0);	\
+	} while (0)
+
+/*
+ * sdt_might_sleep() and its family will be committed in __schedule()
+ * when it actually gets to __schedule(). Both dept_request_event() and
+ * dept_wait() will be performed on the commit.
+ */
+
+/*
+ * Use the code location as the class key if an explicit map is not used.
+ */
+#define sdt_might_sleep_start(m)				\
+	do {							\
+		struct dept_map *__m = m;			\
+		static struct dept_key __key;			\
+		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__);\
+	} while (0)
+
+#define sdt_might_sleep_end()		dept_clean_stage()
+
+#define sdt_ecxt_enter(m)		dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0)
+#define sdt_event(m)			dept_event(m, 1UL, _THIS_IP_, __func__)
+#define sdt_ecxt_exit(m)		dept_ecxt_exit(m, 1UL, _THIS_IP_)
+#else /* !CONFIG_DEPT */
+#define sdt_map_init(m)			do { } while (0)
+#define sdt_map_init_key(m, k)		do { (void)(k); } while (0)
+#define sdt_wait(m)			do { } while (0)
+#define sdt_might_sleep_start(m)	do { } while (0)
+#define sdt_might_sleep_end()		do { } while (0)
+#define sdt_ecxt_enter(m)		do { } while (0)
+#define sdt_event(m)			do { } while (0)
+#define sdt_ecxt_exit(m)		do { } while (0)
+#endif
+#endif /* __LINUX_DEPT_SDT_H */

From patchwork Mon Jun 26 11:56:39 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112910
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 04/25] dept: Add lock dependency tracker APIs
Date: Mon, 26 Jun 2023 20:56:39 +0900
Message-Id: <20230626115700.13873-5-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Wrapped the base APIs for easier annotation of typical locks.
Signed-off-by: Byungchul Park --- include/linux/dept_ldt.h | 77 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 include/linux/dept_ldt.h diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h new file mode 100644 index 000000000000..062613e89fc3 --- /dev/null +++ b/include/linux/dept_ldt.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Lock Dependency Tracker + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __LINUX_DEPT_LDT_H +#define __LINUX_DEPT_LDT_H + +#include + +#ifdef CONFIG_DEPT +#define LDT_EVT_L 1UL +#define LDT_EVT_R 2UL +#define LDT_EVT_W 1UL +#define LDT_EVT_RW (LDT_EVT_R | LDT_EVT_W) +#define LDT_EVT_ALL (LDT_EVT_L | LDT_EVT_RW) + +#define ldt_init(m, k, su, n) dept_map_init(m, k, su, n) +#define ldt_lock(m, sl, t, n, i) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\ + else { \ + dept_wait(m, LDT_EVT_L, i, "lock", sl); \ + dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\ + } \ + } while (0) + +#define ldt_rlock(m, sl, t, n, i, q) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\ + else { \ + dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\ + dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\ + } \ + } while (0) + +#define ldt_wlock(m, sl, t, n, i) \ + do { \ + if (n) \ + dept_ecxt_enter_nokeep(m); \ + else if (t) \ + dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\ + else { \ + dept_wait(m, LDT_EVT_RW, i, "write_lock", sl); \ + dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\ + } \ + } while (0) + +#define ldt_unlock(m, i) dept_ecxt_exit(m, LDT_EVT_ALL, i) + +#define ldt_downgrade(m, i) \ + do { \ + if (dept_ecxt_holding(m, LDT_EVT_W)) \ + dept_map_ecxt_modify(m, LDT_EVT_W, NULL, LDT_EVT_R, i, "downgrade", "read_unlock", -1);\ + } while (0) + +#define ldt_set_class(m, n, k, sl, i) dept_map_ecxt_modify(m, LDT_EVT_ALL, k, 0UL, i, "lock_set_class", "(any)unlock", sl) +#else /* !CONFIG_DEPT */ +#define ldt_init(m, k, su, n) do { (void)(k); } while (0) +#define ldt_lock(m, sl, t, n, i) do { } while (0) +#define ldt_rlock(m, sl, t, n, i, q) do { } while (0) +#define ldt_wlock(m, sl, t, n, i) do { } while (0) +#define ldt_unlock(m, i) do { } while (0) +#define ldt_downgrade(m, i) do { } while (0) +#define ldt_set_class(m, n, k, sl, i) do { } while (0) +#endif +#endif /* __LINUX_DEPT_LDT_H */ From patchwork Mon Jun 26 11:56:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112915 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7445857vqr; Mon, 26 Jun 2023 05:30:38 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4CC0qxSKZMFY/OLQqM4vKWEL30L7g+fHHKBK6Fg1RNLmma/bHgqa4jq2NVsgNH7O0uKQN9 X-Received: by 2002:a05:6a20:9191:b0:10b:9dc1:c5e5 with SMTP id v17-20020a056a20919100b0010b9dc1c5e5mr27080059pzd.34.1687782637805; Mon, 26 Jun 2023 05:30:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687782637; cv=none; d=google.com; s=arc-20160816; b=JSZ5kzVNbvVR6m3FqebvIbUVr6BRk/Na/QTiwXlRxBpkxXE8itWWjzCGB8TG+VOEjU NYXLGcDULVaj9RPZWFU0FoyGZijpI4hJWyLG1DNEBbgU1E+JGbgBmI0Po5S8bTZZf9bp esPa4Arh0j2SYzmUNKABa70EIusfWhlXDy2i8p5lrxM2hnFx3hr+7a96231rq4kGSFUF 
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 05/25] dept: Tie to Lockdep and IRQ tracing
Date: Mon, 26 Jun 2023 20:56:40 +0900
Message-Id: <20230626115700.13873-6-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Yes, the way Dept is placed in here looks ugly, but it is inevitable as
long as Dept relies on Lockdep. The integration should be enhanced
gradually.

1. Dept basically relies on Lockdep to track typical locks and IRQ
   state.

2. Dept cannot recognize the IRQ state on its own, so it generates
   false alarms when the raw_local_irq_*() APIs are used. Make Dept
   track those too (see the sketch after this list).

3. Lockdep does not track the outermost {hard,soft}irq entrances, but
   Dept makes use of them. Make Dept track those too.
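To make point 2 concrete, here is a minimal sketch of a caller of the
raw IRQ API; the function name is hypothetical, and the comments only
describe what the wrapped macros in the irqflags.h hunk below add.

	#include <linux/irqflags.h>

	static void my_raw_irq_user(void)
	{
		unsigned long flags;

		/* The raw variants skip Lockdep's IRQ tracing hooks ... */
		raw_local_irq_save(flags);	/* ... but now also call dept_hardirqs_off() */

		/* critical section with IRQs masked */

		raw_local_irq_restore(flags);	/* dept_hardirqs_on() if 'flags' had IRQs enabled */
	}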
Signed-off-by: Byungchul Park --- include/linux/irqflags.h | 22 +++++- include/linux/local_lock_internal.h | 1 + include/linux/lockdep.h | 102 ++++++++++++++++++++++------ include/linux/lockdep_types.h | 3 + include/linux/mutex.h | 1 + include/linux/percpu-rwsem.h | 2 +- include/linux/rtmutex.h | 1 + include/linux/rwlock_types.h | 1 + include/linux/rwsem.h | 1 + include/linux/seqlock.h | 2 +- include/linux/spinlock_types_raw.h | 3 + include/linux/srcu.h | 2 +- kernel/dependency/dept.c | 4 +- kernel/locking/lockdep.c | 23 +++++++ 14 files changed, 139 insertions(+), 29 deletions(-) diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h index 5ec0fa71399e..0ebc5ec2dbd4 100644 --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -13,6 +13,7 @@ #define _LINUX_TRACE_IRQFLAGS_H #include +#include #include #include @@ -60,8 +61,10 @@ extern void trace_hardirqs_off(void); # define lockdep_softirqs_enabled(p) ((p)->softirqs_enabled) # define lockdep_hardirq_enter() \ do { \ - if (__this_cpu_inc_return(hardirq_context) == 1)\ + if (__this_cpu_inc_return(hardirq_context) == 1) { \ current->hardirq_threaded = 0; \ + dept_hardirq_enter(); \ + } \ } while (0) # define lockdep_hardirq_threaded() \ do { \ @@ -136,6 +139,8 @@ do { \ # define lockdep_softirq_enter() \ do { \ current->softirq_context++; \ + if (current->softirq_context == 1) \ + dept_softirq_enter(); \ } while (0) # define lockdep_softirq_exit() \ do { \ @@ -170,17 +175,28 @@ extern void warn_bogus_irq_restore(void); /* * Wrap the arch provided IRQ routines to provide appropriate checks. */ -#define raw_local_irq_disable() arch_local_irq_disable() -#define raw_local_irq_enable() arch_local_irq_enable() +#define raw_local_irq_disable() \ + do { \ + arch_local_irq_disable(); \ + dept_hardirqs_off(); \ + } while (0) +#define raw_local_irq_enable() \ + do { \ + dept_hardirqs_on(); \ + arch_local_irq_enable(); \ + } while (0) #define raw_local_irq_save(flags) \ do { \ typecheck(unsigned long, flags); \ flags = arch_local_irq_save(); \ + dept_hardirqs_off(); \ } while (0) #define raw_local_irq_restore(flags) \ do { \ typecheck(unsigned long, flags); \ raw_check_bogus_irq_restore(); \ + if (!arch_irqs_disabled_flags(flags)) \ + dept_hardirqs_on(); \ arch_local_irq_restore(flags); \ } while (0) #define raw_local_save_flags(flags) \ diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h index 975e33b793a7..39f67788fd95 100644 --- a/include/linux/local_lock_internal.h +++ b/include/linux/local_lock_internal.h @@ -21,6 +21,7 @@ typedef struct { .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ .lock_type = LD_LOCK_PERCPU, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ }, \ .owner = NULL, diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index 1f1099dac3f0..99961026ba43 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -12,6 +12,7 @@ #include #include +#include #include struct task_struct; @@ -39,6 +40,8 @@ static inline void lockdep_copy_map(struct lockdep_map *to, */ for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++) to->class_cache[i] = NULL; + + dept_map_copy(&to->dmap, &from->dmap); } /* @@ -441,7 +444,8 @@ enum xhlock_context_t { * Note that _name must not be NULL. 
*/ #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \ - { .name = (_name), .key = (void *)(_key), } + { .name = (_name), .key = (void *)(_key), \ + .dmap = DEPT_MAP_INITIALIZER(_name, _key) } static inline void lockdep_invariant_state(bool force) {} static inline void lockdep_free_task(struct task_struct *task) {} @@ -523,33 +527,89 @@ extern bool read_lock_is_recursive(void); #define lock_acquire_shared(l, s, t, n, i) lock_acquire(l, s, t, 1, 1, n, i) #define lock_acquire_shared_recursive(l, s, t, n, i) lock_acquire(l, s, t, 2, 1, n, i) -#define spin_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define spin_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define spin_release(l, i) lock_release(l, i) - -#define rwlock_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) +#define spin_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define spin_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define spin_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define rwlock_acquire(l, s, t, i) \ +do { \ + ldt_wlock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) #define rwlock_acquire_read(l, s, t, i) \ do { \ + ldt_rlock(&(l)->dmap, s, t, NULL, i, !read_lock_is_recursive());\ if (read_lock_is_recursive()) \ lock_acquire_shared_recursive(l, s, t, NULL, i); \ else \ lock_acquire_shared(l, s, t, NULL, i); \ } while (0) - -#define rwlock_release(l, i) lock_release(l, i) - -#define seqcount_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define seqcount_acquire_read(l, s, t, i) lock_acquire_shared_recursive(l, s, t, NULL, i) -#define seqcount_release(l, i) lock_release(l, i) - -#define mutex_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define mutex_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define mutex_release(l, i) lock_release(l, i) - -#define rwsem_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define rwsem_acquire_nest(l, s, t, n, i) lock_acquire_exclusive(l, s, t, n, i) -#define rwsem_acquire_read(l, s, t, i) lock_acquire_shared(l, s, t, NULL, i) -#define rwsem_release(l, i) lock_release(l, i) +#define rwlock_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define seqcount_acquire(l, s, t, i) \ +do { \ + ldt_wlock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define seqcount_acquire_read(l, s, t, i) \ +do { \ + ldt_rlock(&(l)->dmap, s, t, NULL, i, false); \ + lock_acquire_shared_recursive(l, s, t, NULL, i); \ +} while (0) +#define seqcount_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define mutex_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define mutex_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define mutex_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) +#define rwsem_acquire(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_exclusive(l, s, t, NULL, i); \ +} while (0) +#define rwsem_acquire_nest(l, s, t, n, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, n, 
i); \ + lock_acquire_exclusive(l, s, t, n, i); \ +} while (0) +#define rwsem_acquire_read(l, s, t, i) \ +do { \ + ldt_lock(&(l)->dmap, s, t, NULL, i); \ + lock_acquire_shared(l, s, t, NULL, i); \ +} while (0) +#define rwsem_release(l, i) \ +do { \ + ldt_unlock(&(l)->dmap, i); \ + lock_release(l, i); \ +} while (0) #define lock_map_acquire(l) lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_) #define lock_map_acquire_read(l) lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_) diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h index d22430840b53..50c887967dd8 100644 --- a/include/linux/lockdep_types.h +++ b/include/linux/lockdep_types.h @@ -11,6 +11,7 @@ #define __LINUX_LOCKDEP_TYPES_H #include +#include #define MAX_LOCKDEP_SUBCLASSES 8UL @@ -76,6 +77,7 @@ struct lock_class_key { struct hlist_node hash_entry; struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES]; }; + struct dept_key dkey; }; extern struct lock_class_key __lockdep_no_validate__; @@ -185,6 +187,7 @@ struct lockdep_map { int cpu; unsigned long ip; #endif + struct dept_map dmap; }; struct pin_cookie { unsigned int val; }; diff --git a/include/linux/mutex.h b/include/linux/mutex.h index 8f226d460f51..58bf314eddeb 100644 --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -25,6 +25,7 @@ , .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } #else # define __DEP_MAP_MUTEX_INITIALIZER(lockname) diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h index 36b942b67b7d..e871aca04645 100644 --- a/include/linux/percpu-rwsem.h +++ b/include/linux/percpu-rwsem.h @@ -21,7 +21,7 @@ struct percpu_rw_semaphore { }; #ifdef CONFIG_DEBUG_LOCK_ALLOC -#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }, +#define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) }, #else #define __PERCPU_RWSEM_DEP_MAP_INIT(lockname) #endif diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h index 7d049883a08a..35889ac5eeae 100644 --- a/include/linux/rtmutex.h +++ b/include/linux/rtmutex.h @@ -81,6 +81,7 @@ do { \ .dep_map = { \ .name = #mutexname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(mutexname, NULL),\ } #else #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname) diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h index 1948442e7750..6e58dfc84997 100644 --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -10,6 +10,7 @@ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL), \ } #else # define RW_DEP_MAP_INIT(lockname) diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h index efa5c324369a..4f856e745dce 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -21,6 +21,7 @@ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SLEEP, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ }, #else # define __RWSEM_DEP_MAP_INIT(lockname) diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index 3926e9027947..6ba00bcbc11a 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -81,7 +81,7 @@ static inline void __seqcount_init(seqcount_t *s, const char *name, #ifdef CONFIG_DEBUG_LOCK_ALLOC # define SEQCOUNT_DEP_MAP_INIT(lockname) \ - .dep_map = { .name = #lockname } + .dep_map = { .name = #lockname, .dmap = DEPT_MAP_INITIALIZER(lockname, NULL) } /** * seqcount_init() 
- runtime initializer for seqcount_t diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h index 91cb36b65a17..3dcc551ded25 100644 --- a/include/linux/spinlock_types_raw.h +++ b/include/linux/spinlock_types_raw.h @@ -31,11 +31,13 @@ typedef struct raw_spinlock { .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_SPIN, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } # define SPIN_DEP_MAP_INIT(lockname) \ .dep_map = { \ .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } # define LOCAL_SPIN_DEP_MAP_INIT(lockname) \ @@ -43,6 +45,7 @@ typedef struct raw_spinlock { .name = #lockname, \ .wait_type_inner = LD_WAIT_CONFIG, \ .lock_type = LD_LOCK_PERCPU, \ + .dmap = DEPT_MAP_INITIALIZER(lockname, NULL),\ } #else # define RAW_SPIN_DEP_MAP_INIT(lockname) diff --git a/include/linux/srcu.h b/include/linux/srcu.h index 9b9d0bbf1d3c..c934158ed4e8 100644 --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -35,7 +35,7 @@ int __init_srcu_struct(struct srcu_struct *ssp, const char *name, __init_srcu_struct((ssp), #ssp, &__srcu_key); \ }) -#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name }, +#define __SRCU_DEP_MAP_INIT(srcu_name) .dep_map = { .name = #srcu_name, .dmap = DEPT_MAP_INITIALIZER(srcu_name, NULL) }, #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ int init_srcu_struct(struct srcu_struct *ssp); diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 8ec638254e5f..d3b6d2f4cd7b 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -245,10 +245,10 @@ static inline bool dept_working(void) * Even k == NULL is considered as a valid key because it would use * &->map_key as the key in that case. */ -struct dept_key __dept_no_validate__; +extern struct lock_class_key __lockdep_no_validate__; static inline bool valid_key(struct dept_key *k) { - return &__dept_no_validate__ != k; + return &__lockdep_no_validate__.dkey != k; } /* diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index e3375bc40dad..7ff3fb4d2735 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -1220,6 +1220,8 @@ void lockdep_register_key(struct lock_class_key *key) struct lock_class_key *k; unsigned long flags; + dept_key_init(&key->dkey); + if (WARN_ON_ONCE(static_obj(key))) return; hash_head = keyhashentry(key); @@ -4327,6 +4329,8 @@ void noinstr lockdep_hardirqs_on(unsigned long ip) { struct irqtrace_events *trace = ¤t->irqtrace; + dept_hardirqs_on_ip(ip); + if (unlikely(!debug_locks)) return; @@ -4392,6 +4396,8 @@ EXPORT_SYMBOL_GPL(lockdep_hardirqs_on); */ void noinstr lockdep_hardirqs_off(unsigned long ip) { + dept_hardirqs_off_ip(ip); + if (unlikely(!debug_locks)) return; @@ -4436,6 +4442,8 @@ void lockdep_softirqs_on(unsigned long ip) { struct irqtrace_events *trace = ¤t->irqtrace; + dept_softirqs_on_ip(ip); + if (unlikely(!lockdep_enabled())) return; @@ -4474,6 +4482,9 @@ void lockdep_softirqs_on(unsigned long ip) */ void lockdep_softirqs_off(unsigned long ip) { + + dept_softirqs_off_ip(ip); + if (unlikely(!lockdep_enabled())) return; @@ -4806,6 +4817,8 @@ void lockdep_init_map_type(struct lockdep_map *lock, const char *name, { int i; + ldt_init(&lock->dmap, &key->dkey, subclass, name); + for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++) lock->class_cache[i] = NULL; @@ -5544,6 +5557,12 @@ void lock_set_class(struct lockdep_map *lock, const char *name, { unsigned long flags; + /* + * dept_map_(re)init() might be called twice redundantly. 
But + * there's no choice as long as Dept relies on Lockdep. + */ + ldt_set_class(&lock->dmap, name, &key->dkey, subclass, ip); + if (unlikely(!lockdep_enabled())) return; @@ -5561,6 +5580,8 @@ void lock_downgrade(struct lockdep_map *lock, unsigned long ip) { unsigned long flags; + ldt_downgrade(&lock->dmap, ip); + if (unlikely(!lockdep_enabled())) return; @@ -6333,6 +6354,8 @@ void lockdep_unregister_key(struct lock_class_key *key) unsigned long flags; bool found = false; + dept_key_destroy(&key->dkey); + might_sleep(); if (WARN_ON_ONCE(static_obj(key))) From patchwork Mon Jun 26 11:56:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112894 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7436444vqr; Mon, 26 Jun 2023 05:15:11 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7HgylKzPEyYz159zpZx0k65PIomk0HqDACJhfjLXd05Vmn6/f0gM2+LNEf3qBdHY+CWl6+ X-Received: by 2002:a17:906:4793:b0:974:32e:7de9 with SMTP id cw19-20020a170906479300b00974032e7de9mr26322379ejc.56.1687781711166; Mon, 26 Jun 2023 05:15:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687781711; cv=none; d=google.com; s=arc-20160816; b=eLid+aDcBx7KONnov/U4AXQmrzIqTQeSRRIqxWkA/Xv7l57D5TwyV0qW0rDofLHqTq PtACT1sBCIzKpuiN2STCid3yJKpqcaaD/UZPyDi7NKsL6RWApSvf2+VHO3wE6AoaTfCe eWwjeDB0oT9kOaU/89tWzrVhrilBTtP5mxvrQSZ51k00ZRbhQBb5PYDW9YjppgATYSqT a/B9rIai6YQTFD6QGaz8y+Q/taJwW4ieILP1JKxYfoHQbwTP7lZsm8qRIy72NK5Sw3HU 9rnPik2M0RVTHCx+Rti0+9S8jFjOKwdRsvhNh3MomYzXmbw5Y5p3Ythx34IIFM4m+EFh XGLw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=Fei6JF8fFGpwRZZZ1z/H1CbAv0E6kC20RDaUButsOpw=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=SyVsa1NR5peB+smM9w9e80jpPBi6Uzt/vmGmnNAdQujUiPbROL2JtWBawWoqZ1EDM6 D4rjQc1dLntXcX/RpySL01uT1wmrkcZeku2VNk6TC+ZOy/WOo7Tv9ZKGLzp0C7/ATgMj DTMwMoGZ8dtFv7ke9GndG83S8qcnbOjgWuEbjW67CBN5DLizmxQiqmxCTFg61kC+sw6T TMO7YueBzFNEIhmeBNhp0nuQCWtIS0mLVfinyRFxIMdDPAgQwq0z4g29/zIICl/hE1zw BkIdf3f+CTFfExUbAPch50aVvBqqMvNjqoiSvnEWBRWP1GeI35P+1RYclvL6uOIVTM8O Gaqw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 06/25] dept: Add proc knobs to show stats and dependency graph
Date: Mon, 26 Jun 2023 20:56:41 +0900
Message-Id: <20230626115700.13873-7-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>
6OujiKe/FZNXXV0ccZdPMtsX7Fas00p6XYZkWLX+B0WK7W4lTh9IyPrNOz8HNa8sRDwvCmtF r/xNIYqaxuMFMhthLMSIAwNBOsKzhEWitehFeK/gaaH2C/Gl+y4XCb4SEsXxKXm6xAhLxIfm iyjCSiFe9Dg76U/ShWLDNec0RwkJYstD83RHFe7kelw4IhWFs1FiUZWL+nQwV7xdN8CcRspq NKMeqXRpGakanX5tbEp2mi4rdu/+VBmFv8pyJLTHhvzdSS4k8EgdrYz7ulyrYjUZxuxUFxJ5 Wj1LOftdmVal1Gqyf5YM+5MNB/WS0YXm84x6jnLNRKZWJfykOSDtk6R0yfA5pfioeTno0vM8 T5U/vXPns/zJ63MTt+5MOLStOdO3ZXH2nMGKJ8M9K5qv7Pg9Nu7PnpjNW75/C9ZnOSWKArd6 MRdjSo622NYl/vi863AzGdNl2d774n2/tJFdvf1JmjVfGuXc2cum6l+fOrRh1WuDWK5LMd53 bKIdQ01Lg8XbW2xm/+WQ2/tYzRhTNKuX0waj5iM2ypM5UQMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSe1CMexjH/d7L792Wbd5ZDW86TmaHYULKKI/RuMyZo5cZBsMYd5t9Rzu1 YZfIjJkum1MqZGRD2N2a1VRubyG07JQuq1SUJFu0DNLNpO1IG2fLnH+e+cz3+5nvX4+ElJvo aRJ1zCFBG6OMVmApJV23NGl+1PELqqDEuzMhMz0IXIMpFOTcLMLQeKMQQVFJAgFdleHwaqgH wcizBhIMWY0ITJ3tJJRUdSCw5idiaPrgDc2ufgz2rDQMSbk3MTzvdhPgOH+WgEJxLdSeMRNg G/5EgaELwyVDEuE5nwkYthQwYImfBc78iwy4O4PB3tFCQ8VlOw3Wtrlw4YoDQ5nVTkFVqZOA pgc5GDqKftFQW1VDQWNmBg3X+8wYuocsJFhc/Qy8sBkJuKX3rH1xWwk48e0nDdUZNg/l3Sag +fVDBI9S3hEgFrVgqHD1EFAsZpHw41olAuepXgaS04cZuJRwCkFa8nkKGkaradA7QmDkew5e EcZX9PSTvL74CG8dMlL8UzPH37/YzvD6R20MbxQP88X5AXxuWRfBmwZcNC8WpGJeHDjL8Cd7 mwne0VKG+b76eoavyR6h1k/fJg1TCdHqWEG7YNkeaWTpkxx8oDX06NVuv3h0b95J5CXh2EXc P6kiPcaYnc21tg6TY+zDzuCKMz56cqmEZHMncp9qnjBjxWR2I9f/UxyXKHYWV2e+jMZYxoZw Dtsz8veoP1d4yzbOXmwo97DOPO7IPU6ioxyfQVIjmlCAfNQxsRqlOjokUBcVGRejPhq4d79G RJ7HsRx3Z5aiwabwcsRKkGKSLOjPbJWcVsbq4jTliJOQCh/ZlO8GlVymUsYdE7T7d2sPRwu6 cuQnoRRTZWu2CHvk7D7lISFKEA4I2v9bQuI1LR69eT99oe9Bjf/OXyl/OJc/NyVcIz/3em9u SKYqe6pHS1X5g6pFC1cGa5MsaQ/ur8m7s91gn7OUg1WJ/9b7LR7YER5n3tC39fUJVl8rjZgc EXau8/HWujeheYFr0YDdbGzXvNq4d/Yu378nznXqUv8qCfB3X/nq/TLZJKzedDr97WjEEgWl i1QGB5BanfI/rX9FNzQDAAA= X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769767395670845299?= X-GMAIL-MSGID: =?utf-8?q?1769767395670845299?= It'd be useful to show Dept internal stats and dependency graph on runtime via proc for better information. Introduced the knobs. Signed-off-by: Byungchul Park --- kernel/dependency/Makefile | 1 + kernel/dependency/dept.c | 24 +++----- kernel/dependency/dept_internal.h | 26 +++++++++ kernel/dependency/dept_proc.c | 95 +++++++++++++++++++++++++++++++ 4 files changed, 131 insertions(+), 15 deletions(-) create mode 100644 kernel/dependency/dept_internal.h create mode 100644 kernel/dependency/dept_proc.c diff --git a/kernel/dependency/Makefile b/kernel/dependency/Makefile index b5cfb8a03c0c..92f165400187 100644 --- a/kernel/dependency/Makefile +++ b/kernel/dependency/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DEPT) += dept.o +obj-$(CONFIG_DEPT) += dept_proc.o diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index d3b6d2f4cd7b..c5e23e9184b8 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -74,6 +74,7 @@ #include #include #include +#include "dept_internal.h" static int dept_stop; static int dept_per_cpu_ready; @@ -261,20 +262,13 @@ static inline bool valid_key(struct dept_key *k) * have been freed will be placed. 
*/ -enum object_t { -#define OBJECT(id, nr) OBJECT_##id, - #include "dept_object.h" -#undef OBJECT - OBJECT_NR, -}; - #define OBJECT(id, nr) \ static struct dept_##id spool_##id[nr]; \ static DEFINE_PER_CPU(struct llist_head, lpool_##id); #include "dept_object.h" #undef OBJECT -static struct dept_pool pool[OBJECT_NR] = { +struct dept_pool dept_pool[OBJECT_NR] = { #define OBJECT(id, nr) { \ .name = #id, \ .obj_sz = sizeof(struct dept_##id), \ @@ -304,7 +298,7 @@ static void *from_pool(enum object_t t) if (DEPT_WARN_ON(!irqs_disabled())) return NULL; - p = &pool[t]; + p = &dept_pool[t]; /* * Try local pool first. @@ -339,7 +333,7 @@ static void *from_pool(enum object_t t) static void to_pool(void *o, enum object_t t) { - struct dept_pool *p = &pool[t]; + struct dept_pool *p = &dept_pool[t]; struct llist_head *h; preempt_disable(); @@ -2136,7 +2130,7 @@ void dept_map_copy(struct dept_map *to, struct dept_map *from) clean_classes_cache(&to->map_key); } -static LIST_HEAD(classes); +LIST_HEAD(dept_classes); static inline bool within(const void *addr, void *start, unsigned long size) { @@ -2168,7 +2162,7 @@ void dept_free_range(void *start, unsigned int sz) while (unlikely(!dept_lock())) cpu_relax(); - list_for_each_entry_safe(c, n, &classes, all_node) { + list_for_each_entry_safe(c, n, &dept_classes, all_node) { if (!within((void *)c->key, start, sz) && !within(c->name, start, sz)) continue; @@ -2244,7 +2238,7 @@ static struct dept_class *check_new_class(struct dept_key *local, c->sub_id = sub_id; c->key = (unsigned long)(k->base + sub_id); hash_add_class(c); - list_add(&c->all_node, &classes); + list_add(&c->all_node, &dept_classes); unlock: dept_unlock(); caching: @@ -2958,8 +2952,8 @@ static void migrate_per_cpu_pool(void) struct llist_head *from; struct llist_head *to; - from = &pool[i].boot_pool; - to = per_cpu_ptr(pool[i].lpool, boot_cpu); + from = &dept_pool[i].boot_pool; + to = per_cpu_ptr(dept_pool[i].lpool, boot_cpu); move_llist(to, from); } } diff --git a/kernel/dependency/dept_internal.h b/kernel/dependency/dept_internal.h new file mode 100644 index 000000000000..007c1eec6bab --- /dev/null +++ b/kernel/dependency/dept_internal.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Dept(DEPendency Tracker) - runtime dependency tracker internal header + * + * Started by Byungchul Park : + * + * Copyright (c) 2020 LG Electronics, Inc., Byungchul Park + */ + +#ifndef __DEPT_INTERNAL_H +#define __DEPT_INTERNAL_H + +#ifdef CONFIG_DEPT + +enum object_t { +#define OBJECT(id, nr) OBJECT_##id, + #include "dept_object.h" +#undef OBJECT + OBJECT_NR, +}; + +extern struct list_head dept_classes; +extern struct dept_pool dept_pool[]; + +#endif +#endif /* __DEPT_INTERNAL_H */ diff --git a/kernel/dependency/dept_proc.c b/kernel/dependency/dept_proc.c new file mode 100644 index 000000000000..7d61dfbc5865 --- /dev/null +++ b/kernel/dependency/dept_proc.c @@ -0,0 +1,95 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Procfs knobs for Dept(DEPendency Tracker) + * + * Started by Byungchul Park : + * + * Copyright (C) 2021 LG Electronics, Inc. , Byungchul Park + */ +#include +#include +#include +#include "dept_internal.h" + +static void *l_next(struct seq_file *m, void *v, loff_t *pos) +{ + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. + */ + return seq_list_next(v, &dept_classes, pos); +} + +static void *l_start(struct seq_file *m, loff_t *pos) +{ + /* + * XXX: Serialize list traversal if needed. 
The following might + * give a wrong information on contention. + */ + return seq_list_start_head(&dept_classes, *pos); +} + +static void l_stop(struct seq_file *m, void *v) +{ +} + +static int l_show(struct seq_file *m, void *v) +{ + struct dept_class *fc = list_entry(v, struct dept_class, all_node); + struct dept_dep *d; + const char *prefix; + + if (v == &dept_classes) { + seq_puts(m, "All classes:\n\n"); + return 0; + } + + prefix = fc->sched_map ? " " : ""; + seq_printf(m, "[%p] %s%s\n", (void *)fc->key, prefix, fc->name); + + /* + * XXX: Serialize list traversal if needed. The following might + * give a wrong information on contention. + */ + list_for_each_entry(d, &fc->dep_head, dep_node) { + struct dept_class *tc = d->wait->class; + + prefix = tc->sched_map ? " " : ""; + seq_printf(m, " -> [%p] %s%s\n", (void *)tc->key, prefix, tc->name); + } + seq_puts(m, "\n"); + + return 0; +} + +static const struct seq_operations dept_deps_ops = { + .start = l_start, + .next = l_next, + .stop = l_stop, + .show = l_show, +}; + +static int dept_stats_show(struct seq_file *m, void *v) +{ + int r; + + seq_puts(m, "Availability in the static pools:\n\n"); +#define OBJECT(id, nr) \ + r = atomic_read(&dept_pool[OBJECT_##id].obj_nr); \ + if (r < 0) \ + r = 0; \ + seq_printf(m, "%s\t%d/%d(%d%%)\n", #id, r, nr, (r * 100) / (nr)); + #include "dept_object.h" +#undef OBJECT + + return 0; +} + +static int __init dept_proc_init(void) +{ + proc_create_seq("dept_deps", S_IRUSR, NULL, &dept_deps_ops); + proc_create_single("dept_stats", S_IRUSR, NULL, dept_stats_show); + return 0; +} + +__initcall(dept_proc_init); From patchwork Mon Jun 26 11:56:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112905 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7440502vqr; Mon, 26 Jun 2023 05:21:34 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4xF96HAJCAWI/4jag22hvE7I57lQCXN4xhKkdwlrxJLy3aQQ++wIAwewIaeNB/aX0BlsGv X-Received: by 2002:a05:6a20:748d:b0:126:92de:b893 with SMTP id p13-20020a056a20748d00b0012692deb893mr7089314pzd.31.1687782094240; Mon, 26 Jun 2023 05:21:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687782094; cv=none; d=google.com; s=arc-20160816; b=c8jstA4tUses+N9fQbw8has9KaSDEfMYwQbxQTy7nh861Z4MF8pmg+r1VnLfvdtzLy MuwKcocxPLEP9qAoOApOoO9DYU/CVEUBRs677MWCLaeOnGU5PqQFPGc+yOOEUjSJnAxg yeNBzHyUDKZFAtCvOZj0gyFazEVqqw11VkggDLzDPQ3qnJ4hqUHKZpM6c5vcDyZZSUCg eVIMNlq+FBSrVTJ/fwbOKjJkPfxgLTjf7C17lV7ktBqshdXmBpPGdRVx461/rijbSGRT FSgnG8wCTT0S2HH87h4TkWzB6oFXS9nzIt5YDeyJMKooOVfHFvDByC3iRiiNEmCPTmXG hT2g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=1StrxiJxnfIog1Mu4omcy0uRqYka5+9bBl5OVrHzIJk=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=hEDAjEBngYlC9hWjqZoUfDu/32j3wLhRoenle2IXsJ0xUrOzQuthCMRSn3+/uHOaaY i3e2s3XrlnEbXB522IIzO5lc4nG2jE0M33L5wdrblXHx6NELD+fOMzkL5JDhGBvOp+ds 5jXZVeZO1UJWTZf7lrqSVLL3mbAOQYHUNH9i9OzenG0ii/KTBxC08K5iUH7OrrxPfgxg On8XESIgEvS6Y4C1k3i/2yGt7LFSpyFRYVUcKPhrLNaXLKkz4yjt3IIk9K/ureN/Sk6u nluitYxumarire9eHHu2j5m1u7psJF3DddgIwXAHDR7+M884WCRUgykRD9rhUmwZASyi E/gA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email 
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 07/25] dept: Apply sdt_might_sleep_{start,end}() to wait_for_completion()/complete()
Date: Mon, 26 Jun 2023 20:56:42 +0900
Message-Id: <20230626115700.13873-8-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>
cdfOfOe8Ri8dYsIvEfv6AlSIZ/ELxdrc/4OOkqP4suniYMd9NhtxXASvE9utiSGk+WjRkrco pKv4lWKxLx9/Sy4Qq2qav2YU/Cqx4ZENhVgIOhZPCwklRT5fIX6cHKS/HcwV/67oo08jVSma VokEvdGcpNEb4mN16UZ9WuzefUkOFPwq++Gp3fVo3JXQgngOqcNUcVHntQKjMSenJ7UgkaPU s1Q/vLdqBZVWk35QNu37zZRikJNb0DyOVs9R/TyRqhX4PzQH5ERZ3i+bvm8xp4jMQGnCou1b r86OX3qqfNOQ111UcyeyIYJ/xM2u3/vWGLMhqnBj4e7M/pkzfN05CbSQeGvbxZrWl3veWHcM da3enhAetTHCycTO2TyQv+TmmiJL30HSv97dZYo0+y7N8+6MXqEIuG+8WydYdqa4dery1MVm 7SH87PaNrdmWLSuom1c+pavpZJ1meQxlStZ8AYYYLbRRAwAA X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0xTZxzGfd9zpVpzrGQ70+iWGmbCNoRkLP8EYvxgwhsTCS5EE2OinT2x DVBNy3ULCxRQVoUIBst1KcVUhDpYSyJKqwSQy0ToBJmTyoS4YVcuCbOEjoq2Gr88+SXPk9+n h6cUFmYHr9VlS3qdKlPJymhZalLJVxmFdep493QcVF2Kh8CrchoaO+wseH5pR2DvKsbgu58C f6wuIFh/OE6BucaDoHn2GQVdgzMI3K1GFiZebIXJwDILIzUXWShp6WDhd38Ig/dqNYZ2x2F4 cNmKoTc4T4PZx0KDuQSH4yWGoK2NA1tRDMy11nMQmk2AkZkpBvqbRhhwP/0C6n72suByj9Aw 2D2HYeJOIwsz9jcMPBgcpsFTVcHAzSUrC/5VGwW2wDIHj3otGDpLw7Z/Q24M5//bYGCoojdM 137FMPlnD4K75c8xOOxTLPQHFjA4HTUU/H/9PoK5ykUOyi4FOWgorkRwsewqDeOvhxgo9SbC +lojeyCZ9C8sU6TUmUfcqxaa/GYVye36ZxwpvfuUIxZHDnG2xpIWlw+T5pUAQxxtP7HEsVLN EdPiJCbeKRdLlsbGODJcu06n7TouS1ZLmdpcSb9v/ymZxu9e5M7d25bf5BngilDXVhOK4kXh a7HSNUNHmBX2ik+eBKkIRwufic6KfxgTkvGU0LJZnB8e4EyI57cLGnHInBFBWogRjZV7InO5 kCg2+K7g98pPxfbO3neaKOEbsWfUiiKsCG+M3j72MpJZ0KY2FK3V5WaptJmJcYYMTYFOmx93 +myWA4V/YysMVXWjVxMpfUjgkXKLPH53rVrBqHINBVl9SOQpZbT8ozWzWiFXqwq+l/RnT+pz MiVDH9rJ08qP5YeOSacUwhlVtpQhSeck/YcW81E7ilA2/iTQZFtT5M+lpfr1BX9X9QSNLToy O/Gt74Znant62cnj6ZoX1F+PX36Z0J13PqbrwHzzeHaeufqG1TV2dLTZ9FDv3J3y+U2j/8St nluHpjeX/Di6b4C6Xu9MXklL2mUvzzmi+a7wynNe+XiobuPC9MYPb5JOHBxrV4T2PvK3xaYr aYNGlRBL6Q2qt1umRrEzAwAA X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769767796828605556?= X-GMAIL-MSGID: =?utf-8?q?1769767796828605556?= Makes Dept able to track dependencies by wait_for_completion()/complete(). Signed-off-by: Byungchul Park --- include/linux/completion.h | 30 +++++++++++++++++++++++++----- 1 file changed, 25 insertions(+), 5 deletions(-) diff --git a/include/linux/completion.h b/include/linux/completion.h index 62b32b19e0a8..32d535abebf3 100644 --- a/include/linux/completion.h +++ b/include/linux/completion.h @@ -10,6 +10,7 @@ */ #include +#include /* * struct completion - structure used to maintain state for a "completion" @@ -26,14 +27,33 @@ struct completion { unsigned int done; struct swait_queue_head wait; + struct dept_map dmap; }; +#define init_completion(x) \ +do { \ + sdt_map_init(&(x)->dmap); \ + __init_completion(x); \ +} while (0) + +/* + * XXX: No use cases for now. Fill the body when needed. 
+ */ #define init_completion_map(x, m) init_completion(x) -static inline void complete_acquire(struct completion *x) {} -static inline void complete_release(struct completion *x) {} + +static inline void complete_acquire(struct completion *x) +{ + sdt_might_sleep_start(&x->dmap); +} + +static inline void complete_release(struct completion *x) +{ + sdt_might_sleep_end(); +} #define COMPLETION_INITIALIZER(work) \ - { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) } + { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait), \ + .dmap = DEPT_MAP_INITIALIZER(work, NULL), } #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \ (*({ init_completion_map(&(work), &(map)); &(work); })) @@ -75,13 +95,13 @@ static inline void complete_release(struct completion *x) {} #endif /** - * init_completion - Initialize a dynamically allocated completion + * __init_completion - Initialize a dynamically allocated completion * @x: pointer to completion structure that is to be initialized * * This inline function will initialize a dynamically created completion * structure. */ -static inline void init_completion(struct completion *x) +static inline void __init_completion(struct completion *x) { x->done = 0; init_swait_queue_head(&x->wait); From patchwork Mon Jun 26 11:56:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112926 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7460317vqr; Mon, 26 Jun 2023 05:56:57 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4+RWJU5eqlBcTtc1ZREh+aV05xgydzOvT7c9ghtFX5LBYs79Ii3ilEEhRmiTiDjMS7BR7h X-Received: by 2002:a05:6358:515b:b0:132:dc7a:7d06 with SMTP id 27-20020a056358515b00b00132dc7a7d06mr7348463rwj.8.1687784216512; Mon, 26 Jun 2023 05:56:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687784216; cv=none; d=google.com; s=arc-20160816; b=jXRDCJtDgXDXIw2UBHVajUUYCQb+EArtUtpPhS8TwGrayJ2cxQm05QB0SXvXVS+Mqs HFonhXpMI83Vc8Bn5lLcrBungdzKfnxmzXEEH6Xuj08elFGdhziZGTLrr1XBHxmR/2nc nhQAa2ADMUwKJslDu6gBJKXdyDmvuMm7laZVEDaW1vJbyNo9C6Ua6joQ9z9wic3JHf9R 9Afolj6/JRg1yqrZRyKkZ4HQZkyAOMa9XutF8D2PephCNwJQVQiU0RhRXnUurZccpM+/ pKLso3hO/VIP8R4UnLGHA7NXOel7d4D5pXz7Ukicr3a7Lvc+eI67VDBukE4T3c17nfqF 0yxA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=LneQcjYWam7erajbXAnMaHyH4lCwHSIeexYxaiflgjM=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=ab+HhT+IPK0FHtcJa7ZQMgvxyextbm1+EtJnUFqCPukpFdf3aGZk8ailXUPFN2CRIv 5dkT0KewDSqCv8bFlDN0UQ/Ckkj8rl8/Mu/Fujww6elkD6cjvX98GEE/2MThasaJj2tg s8y+Vij8T886SESObBVDxq6zQBRz3pqSB923tLKFu9H1yHUvHyS511rhEjo6SreuVwh4 qGAnQD22E+UKoDjVDBDuNgAlwCzVzpTuWeCZhgn3MAuYhD9mB8WA/pMwV2M9uDz/h2By d1gJ/sy2YgVcCNiBuaA+z3Gd8wMy41btvdRcpmDKaXyNxckCI5a7FXWnFNrT5UuuXQq3 4LvQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
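Looking back at the completion hunk above: with wait_for_completion()
now annotated through complete_acquire()/complete_release(), a
dependency like the one sketched below becomes visible to Dept. The
thread_a()/thread_b() functions are hypothetical, and whether this
exact case gets reported depends on the rest of the series; plain
Lockdep cannot connect the two legs because it does not model waits on
completions. In the interleaving where thread_a() takes the mutex
first, the two threads deadlock.

	#include <linux/completion.h>
	#include <linux/mutex.h>

	static DEFINE_MUTEX(my_mutex);
	static DECLARE_COMPLETION(my_done);	/* dmap set up via COMPLETION_INITIALIZER() */

	static void thread_a(void)
	{
		mutex_lock(&my_mutex);
		wait_for_completion(&my_done);	/* wait side, via complete_acquire() */
		mutex_unlock(&my_mutex);
	}

	static void thread_b(void)
	{
		mutex_lock(&my_mutex);		/* blocks while thread_a() holds the mutex ... */
		mutex_unlock(&my_mutex);
		complete(&my_done);		/* ... so the event thread_a() waits for never fires */
	}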
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 08/25] dept: Apply sdt_might_sleep_{start,end}() to PG_{locked,writeback} wait
Date: Mon, 26 Jun 2023 20:56:43 +0900
Message-Id: <20230626115700.13873-9-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>
mHz5nP3I46NWEmLCrxYGBvxUiKP4lUKjcYwpRDKO4qvDhfGutsVCJP+T8K+rH4WY5lcJvVMl OMRyfpMwbypHH6SxQt1N56IojP9CuPundXGvCN7keVtJSCrw5jDBb3hMPhSWC3/UDNDnkNyC PqlFCk1aRqpKo02MT85O02TFHzycakfBv7KdCOy5jWZd37cinkPKCHnCZ+VqBaPK0GentiKB o5RR8mVvzGqFXK3K/lXSHd6vS9dK+lYUzdHKT+Ub5zLVCv4X1TEpRZKOSLqPKebCVuSgq7An NyoyPaat5bXJWPqqrrjzd+HgsDtru2P8Tso+NU76sW+soHvnWcOVpIicmIaKXZk/PHRsfZpY tHA/+sDof4fmjSm+8KZYXN0wtia2abDot8i+ywoVtzxRHxe/5OSXlr3uo+1JHm9+j2ddPvk2 T7s0Ojdm27Ofv6tgOEcbPn5dSeuTVRviKJ1e9R5ycJ/5UwMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0xTZxjH957Le0qxy0lFPTqnS51ZZBFhse6JM8YPZr4xbDFmusuHSbce pVIKaRHFRINcRJGqRblaZylLR7jJDiRjozWEm1S51EHASyVSdQhWSdQ2VOqldfHLk1/+/39+ nx4ZrbSzy2Q6Q5ZoNGj0Kixn5N9+lb827UiVNnG8+kuwlCRC4MUJBqyXGzF4mhsQNLYdo2C6 dxuMB/0I5geHaago8yCombxLQ1vfBAJXXR6GkQcfwmhgFoO77BSG/NrLGG48DlPgLS+loEH6 Bq6ftVPQGZpioGIaw4WKfCpyHlEQctRz4MhdDb66ag7Ck0ngnhhjofuimwXX7c+h6jcvBqfL zUBfu4+CkX+sGCYa37Bwva+fAY/FzELTUzuGx0EHDY7ALAf/dtooaCmI2GbCLgqOP3/NwlVz Z4R+/5OC0VsdCK6cuEeB1DiGoTvgp6BVKqPh5R+9CHynn3BQWBLi4MKx0whOFZYzMPzqKgsF XjXMz1nxlk2k2z9Lk4LWg8QVtDHkml0gf1ff5UjBldscsUkHSGtdPKl1TlOk5lmAJVL9SUyk Z6UcKX4yShHvmBOTp0NDHOmvnGd2fPyTfJNW1OuyReO6zSny1EHPQ5wZVhw6f6Yc56I7scUo Ribw64Wph3YcZcx/Jty8GaKjHMd/IrSa/2OLkVxG87WxwlR/DxctFvK/CPc9IyjKDL9aGPRb qCgreLXwsrwS/S9dKTS0dL4TxfAbhI4B+7tcGdnkebvwWSS3oQ/qUZzOkJ2u0enVCaa01ByD 7lDCrxnpEop8juNI2NKOXoxs60K8DKkWKBJXVGqVrCbblJPehQQZrYpTLJ6r0CoVWk3OYdGY scd4QC+autBHMka1RLH9ezFFye/TZIlpopgpGt+3lCxmWS4aSFYHDUf/muxJWOw7Gbtf/vMi 5L+03rrUuXtjafV3w+Z17r13JGeob3NvyhqDNZ7ZRWe0+7L5zMPNs492fpHfFDTua1k18+kc tlUWZZKU2sRdWR2K5EmlPrnwR3fJD2vZ0f1Q8vXMraI9XE2b2Zen3hrbU5Q1d2m5pVSQTP5z KsaUqkmKp40mzVu761OUNQMAAA== X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769770022175573916?= X-GMAIL-MSGID: =?utf-8?q?1769770022175573916?= Makes Dept able to track dependencies by PG_{locked,writeback} waits. Signed-off-by: Byungchul Park --- mm/filemap.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/mm/filemap.c b/mm/filemap.c index c4d4ace9cc70..adc49cb59db6 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -42,6 +42,7 @@ #include #include #include +#include #include #include #include "internal.h" @@ -1215,6 +1216,9 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr, /* How many times do we accept lock stealing from under a waiter? 
  */
 int sysctl_page_lock_unfairness = 5;
 
+static struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL);
+static struct dept_map __maybe_unused PG_writeback_map = DEPT_MAP_INITIALIZER(PG_writeback_map, NULL);
+
 static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 		int state, enum behavior behavior)
 {
@@ -1226,6 +1230,11 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 	unsigned long pflags;
 	bool in_thrashing;
 
+	if (bit_nr == PG_locked)
+		sdt_might_sleep_start(&PG_locked_map);
+	else if (bit_nr == PG_writeback)
+		sdt_might_sleep_start(&PG_writeback_map);
+
 	if (bit_nr == PG_locked &&
 	    !folio_test_uptodate(folio) && folio_test_workingset(folio)) {
 		delayacct_thrashing_start(&in_thrashing);
@@ -1327,6 +1336,8 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 	 */
 	finish_wait(q, wait);
 
+	sdt_might_sleep_end();
+
 	if (thrashing) {
 		delayacct_thrashing_end(&in_thrashing);
 		psi_memstall_leave(&pflags);
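The hunk above also shows the general recipe for annotating a custom wait site: declare a dept_map with DEPT_MAP_INITIALIZER() and wrap the sleeping section with sdt_might_sleep_start()/sdt_might_sleep_end(). A hypothetical sketch mirroring the PG_locked/PG_writeback maps; every name except the Dept/SDT helpers is made up:

static struct dept_map my_flag_map = DEPT_MAP_INITIALIZER(my_flag_map, NULL);
static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static bool my_flag;

static void my_wait_for_flag(void)
{
	sdt_might_sleep_start(&my_flag_map);	/* announce the potential sleep */
	wait_event(my_wq, READ_ONCE(my_flag));	/* the actual wait */
	sdt_might_sleep_end();			/* the wait site is done */
}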
From patchwork Mon Jun 26 11:56:44 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112934
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 09/25] dept: Apply sdt_might_sleep_{start,end}() to swait
Date: Mon, 26 Jun 2023 20:56:44 +0900
Message-Id: <20230626115700.13873-10-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Makes Dept able to track dependencies by swaits.

Signed-off-by: Byungchul Park
---
 include/linux/swait.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/swait.h b/include/linux/swait.h
index 6a8c22b8c2a5..02848211cef5 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 
 /*
@@ -161,6 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
 	struct swait_queue __wait; \
 	long __ret = ret; \
 \
+	sdt_might_sleep_start(NULL); \
 	INIT_LIST_HEAD(&__wait.task_list); \
 	for (;;) { \
 		long __int = prepare_to_swait_event(&wq, &__wait, state);\
@@ -176,6 +178,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
 		cmd; \
 	} \
 	finish_swait(&wq, &__wait); \
+	sdt_might_sleep_end(); \
 __out:	__ret; \
 })
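Since every swait_event*() flavor expands through ___swait_event(), existing users are covered without further changes. A usage sketch with made-up names; only the swait API calls themselves are the in-tree ones:

static DECLARE_SWAIT_QUEUE_HEAD(req_wq);
static bool req_done;

static int wait_for_request(void)
{
	/* Expands through ___swait_event(), so the sleep is reported to Dept. */
	return swait_event_interruptible_exclusive(req_wq, READ_ONCE(req_done));
}

static void finish_request(void)
{
	WRITE_ONCE(req_done, true);
	swake_up_one(&req_wq);
}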
From patchwork Mon Jun 26 11:56:45 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112925
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 10/25] dept: Apply sdt_might_sleep_{start,end}() to waitqueue wait
Date: Mon, 26 Jun 2023 20:56:45 +0900
Message-Id: <20230626115700.13873-11-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Makes Dept able to track dependencies by waitqueue waits.

Signed-off-by: Byungchul Park
---
 include/linux/wait.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index a0307b516b09..ff349e609da7 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -303,6 +304,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
 	struct wait_queue_entry __wq_entry; \
 	long __ret = ret;	/* explicit shadow */ \
 \
+	sdt_might_sleep_start(NULL); \
 	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0); \
 	for (;;) { \
 		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
@@ -318,6 +320,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);
 		cmd; \
 	} \
 	finish_wait(&wq_head, &__wq_entry); \
+	sdt_might_sleep_end(); \
 __out:	__ret; \
 })
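Likewise, the wait_event*() variants all funnel through ___wait_event(), so ordinary waitqueue users pick up the annotation automatically. A sketch with made-up names:

static DECLARE_WAIT_QUEUE_HEAD(io_wq);
static bool io_done;

static int wait_for_io(void)
{
	/* Sleeps inside ___wait_event(); bracketed by the calls added above. */
	return wait_event_interruptible(io_wq, READ_ONCE(io_done));
}

static void complete_io(void)
{
	WRITE_ONCE(io_done, true);
	wake_up(&io_wq);
}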
From patchwork Mon Jun 26 11:56:46 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112927
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 11/25] dept: Apply sdt_might_sleep_{start,end}() to hashed-waitqueue wait
Date: Mon, 26 Jun 2023 20:56:46 +0900
Message-Id: <20230626115700.13873-12-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Makes Dept able to track dependencies by hashed-waitqueue waits.

Signed-off-by: Byungchul Park
---
 include/linux/wait_bit.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h
index 7725b7579b78..fe89282c3e96 100644
--- a/include/linux/wait_bit.h
+++ b/include/linux/wait_bit.h
@@ -6,6 +6,7 @@
  * Linux wait-bit related types and methods:
  */
 #include
+#include
 
 struct wait_bit_key {
 	void *flags;
@@ -246,6 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 	struct wait_bit_queue_entry __wbq_entry; \
 	long __ret = ret;	/* explicit shadow */ \
 \
+	sdt_might_sleep_start(NULL); \
 	init_wait_var_entry(&__wbq_entry, var, \
 			    exclusive ? WQ_FLAG_EXCLUSIVE : 0); \
 	for (;;) { \
@@ -263,6 +265,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p);
 		cmd; \
 	} \
 	finish_wait(__wq_head, &__wbq_entry.wq_entry); \
+	sdt_might_sleep_end(); \
 __out:	__ret; \
 })
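wait_var_event()/wake_up_var() are built on the macro patched above, so waits on hashed waitqueues are reported to Dept as well. A sketch with a made-up flag word:

static unsigned long my_flags;
#define MY_BUSY	0

static void wait_until_idle(void)
{
	/* Expands through the patched macro, so the sleep is reported to Dept. */
	wait_var_event(&my_flags, !test_bit(MY_BUSY, &my_flags));
}

static void mark_idle(void)
{
	clear_bit(MY_BUSY, &my_flags);
	wake_up_var(&my_flags);
}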
From patchwork Mon Jun 26 11:56:47 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112943
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 12/25] dept: Distinguish each syscall context from another
Date: Mon, 26 Jun 2023 20:56:47 +0900
Message-Id: <20230626115700.13873-13-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

The kernel is entered on each syscall, and each syscall handling should
be considered independent from Dept's point of view. Otherwise, Dept
may wrongly track dependencies across different syscalls. Such a
dependency might be real when seen from user mode; however, now that
Dept has only just started to work, conservatively keep Dept from
tracking dependencies across different syscalls.

Signed-off-by: Byungchul Park
---
 arch/arm64/kernel/syscall.c |  2 ++
 arch/x86/entry/common.c     |  4 +++
 include/linux/dept.h        | 39 ++++++++++++---------
 kernel/dependency/dept.c    | 67 +++++++++++++++++++------------------
 4 files changed, 63 insertions(+), 49 deletions(-)

diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index a5de47e3df2b..e26d0cab0657 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -105,6 +106,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	 */
 	local_daif_restore(DAIF_PROCCTX);
 
+	dept_kernel_enter();
 	if (flags & _TIF_MTE_ASYNC_FAULT) {
 		/*
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 6c2826417b33..7cdd27abe529 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_XEN_PV
 #include
@@ -72,6 +73,7 @@ static __always_inline bool do_syscall_x32(struct pt_regs *regs, int nr)
 
 __visible noinstr void do_syscall_64(struct pt_regs *regs, int nr)
 {
+	dept_kernel_enter();
 	add_random_kstack_offset();
 	nr = syscall_enter_from_user_mode(regs, nr);
@@ -120,6 +122,7 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
 {
 	int nr = syscall_32_enter(regs);
 
+	dept_kernel_enter();
 	add_random_kstack_offset();
 	/*
 	 * Subtlety here: if ptrace pokes something larger than 2^31-1 into
@@ -140,6 +143,7 @@ static noinstr bool __do_fast_syscall_32(struct pt_regs *regs)
 	int nr = syscall_32_enter(regs);
 	int res;
 
+	dept_kernel_enter();
 	add_random_kstack_offset();
 	/*
 	 * This cannot use syscall_enter_from_user_mode() as it has to
diff --git a/include/linux/dept.h b/include/linux/dept.h
index b6d45b4b1fd6..f62c7b6f42c6 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -25,11 +25,16 @@ struct task_struct;
 #define DEPT_MAX_SUBCLASSES_USR	(DEPT_MAX_SUBCLASSES / DEPT_MAX_SUBCLASSES_EVT)
 #define DEPT_MAX_SUBCLASSES_CACHE	2
 
-#define DEPT_SIRQ	0
-#define DEPT_HIRQ	1
-#define DEPT_IRQS_NR	2
-#define DEPT_SIRQF	(1UL << DEPT_SIRQ)
-#define DEPT_HIRQF	(1UL << DEPT_HIRQ)
+enum {
+	DEPT_CXT_SIRQ = 0,
+	DEPT_CXT_HIRQ,
+	DEPT_CXT_IRQS_NR,
+	DEPT_CXT_PROCESS = DEPT_CXT_IRQS_NR,
+	DEPT_CXTS_NR
+};
+
+#define DEPT_SIRQF	(1UL << DEPT_CXT_SIRQ)
+#define DEPT_HIRQF	(1UL << DEPT_CXT_HIRQ)
 
 struct dept_ecxt;
 struct dept_iecxt {
@@ -94,8 +99,8 @@ struct dept_class {
 	/*
 	 * for tracking IRQ dependencies
 	 */
-	struct dept_iecxt iecxt[DEPT_IRQS_NR];
-	struct dept_iwait iwait[DEPT_IRQS_NR];
+	struct dept_iecxt iecxt[DEPT_CXT_IRQS_NR];
+	struct dept_iwait iwait[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * classified by a map embedded in task_struct,
@@ -207,8 +212,8 @@ struct dept_ecxt {
 	/*
 	 * where the IRQ-enabled happened
 	 */
-	unsigned long enirq_ip[DEPT_IRQS_NR];
-	struct dept_stack *enirq_stack[DEPT_IRQS_NR];
+	unsigned long enirq_ip[DEPT_CXT_IRQS_NR];
+	struct dept_stack *enirq_stack[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * where the event context started
@@ -252,8 +257,8 @@ struct dept_wait {
 	/*
 	 * where the IRQ wait happened
 	 */
-	unsigned long irq_ip[DEPT_IRQS_NR];
-	struct dept_stack *irq_stack[DEPT_IRQS_NR];
+	unsigned long irq_ip[DEPT_CXT_IRQS_NR];
+	struct dept_stack *irq_stack[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * where the wait happened
@@ -406,19 +411,19 @@ struct dept_task {
 	int wait_hist_pos;
 
 	/*
-	 * sequential id to identify each IRQ context
+	 * sequential id to identify each context
 	 */
-	unsigned int irq_id[DEPT_IRQS_NR];
+	unsigned int cxt_id[DEPT_CXTS_NR];
 
 	/*
 	 * for tracking IRQ-enabled points with cross-event
 	 */
-	unsigned int wgen_enirq[DEPT_IRQS_NR];
+	unsigned int wgen_enirq[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * for keeping up-to-date IRQ-enabled points
 	 */
-	unsigned long enirq_ip[DEPT_IRQS_NR];
+	unsigned long enirq_ip[DEPT_CXT_IRQS_NR];
 
 	/*
 	 * current effective IRQ-enabled flag
@@ -470,7 +475,7 @@ struct dept_task {
 	.wait_hist = { { .wait = NULL, } },		\
 	.ecxt_held_pos = 0,				\
 	.wait_hist_pos = 0,				\
-	.irq_id = { 0U },				\
+	.cxt_id = { 0U },				\
 	.wgen_enirq = { 0U },				\
 	.enirq_ip = { 0UL },				\
 	.eff_enirqf = 0UL,				\
@@ -509,6 +514,7 @@ extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip,
 extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip);
 extern void dept_sched_enter(void);
 extern void dept_sched_exit(void);
+extern void dept_kernel_enter(void);
 
 static inline void dept_ecxt_enter_nokeep(struct dept_map *m)
 {
@@ -560,6 +566,7 @@ struct dept_task { };
 #define dept_ecxt_exit(m, e_f, ip)		do { } while (0)
 #define dept_sched_enter()			do { } while (0)
 #define dept_sched_exit()			do { } while (0)
+#define dept_kernel_enter()			do { } while (0)
 #define dept_ecxt_enter_nokeep(m)		do { } while (0)
 #define dept_key_init(k)			do { (void)(k); } while (0)
 #define dept_key_destroy(k)			do { (void)(k); } while (0)
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index c5e23e9184b8..4165cacf4ebb 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -221,9 +221,9 @@ static inline struct dept_class *dep_tc(struct dept_dep *d)
 
 static inline const char *irq_str(int irq)
 {
-	if (irq == DEPT_SIRQ)
+	if (irq == DEPT_CXT_SIRQ)
 		return "softirq";
-	if (irq == DEPT_HIRQ)
+	if (irq == DEPT_CXT_HIRQ)
 		return "hardirq";
 	return "(unknown)";
 }
@@ -407,7 +407,7 @@ static void initialize_class(struct dept_class *c)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		struct dept_iecxt *ie = &c->iecxt[i];
 		struct dept_iwait *iw = &c->iwait[i];
@@ -432,7 +432,7 @@ static void initialize_ecxt(struct dept_ecxt *e)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		e->enirq_stack[i] = NULL;
 		e->enirq_ip[i] = 0UL;
 	}
@@ -448,7 +448,7 @@ static void initialize_wait(struct dept_wait *w)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		w->irq_stack[i] = NULL;
 		w->irq_ip[i] = 0UL;
 	}
@@ -487,7 +487,7 @@ static void destroy_ecxt(struct dept_ecxt *e)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++)
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++)
 		if (e->enirq_stack[i])
 			put_stack(e->enirq_stack[i]);
 	if (e->class)
@@ -503,7 +503,7 @@ static void destroy_wait(struct dept_wait *w)
 {
 	int i;
 
-	for (i = 0; i < DEPT_IRQS_NR; i++)
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++)
 		if (w->irq_stack[i])
 			put_stack(w->irq_stack[i]);
 	if (w->class)
@@ -652,7 +652,7 @@ static void print_diagram(struct dept_dep *d)
 	const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)");
 
 	irqf = e->enirqf & w->irqf;
-	for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) {
+	for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) {
 		if (!firstline)
 			pr_warn("\nor\n\n");
 		firstline = false;
@@ -685,7 +685,7 @@ static void print_dep(struct dept_dep *d)
 	const char *tc_n = tc->sched_map ? "" : (tc->name ?: "(unknown)");
 
 	irqf = e->enirqf & w->irqf;
-	for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) {
+	for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) {
 		pr_warn("%s has been enabled:\n", irq_str(irq));
 		print_ip_stack(e->enirq_ip[irq], e->enirq_stack[irq]);
 		pr_warn("\n");
@@ -911,7 +911,7 @@ static void bfs(struct dept_class *c, bfs_f *cb, void *in, void **out)
  */
 
 static inline unsigned long cur_enirqf(void);
-static inline int cur_irq(void);
+static inline int cur_cxt(void);
 static inline unsigned int cur_ctxt_id(void);
 
 static inline struct dept_iecxt *iecxt(struct dept_class *c, int irq)
@@ -1459,7 +1459,7 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w)
 	if (d) {
 		check_dl_bfs(d);
 
-		for (i = 0; i < DEPT_IRQS_NR; i++) {
+		for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 			struct dept_iwait *fiw = iwait(fc, i);
 			struct dept_iecxt *found_ie;
 			struct dept_iwait *found_iw;
@@ -1495,7 +1495,7 @@ static void add_wait(struct dept_class *c, unsigned long ip,
 	struct dept_task *dt = dept_task();
 	struct dept_wait *w;
 	unsigned int wg = 0U;
-	int irq;
+	int cxt;
 	int i;
 
 	if (DEPT_WARN_ON(!valid_class(c)))
@@ -1511,9 +1511,9 @@ static void add_wait(struct dept_class *c, unsigned long ip,
 	w->wait_stack = get_current_stack();
 	w->sched_sleep = sched_sleep;
 
-	irq = cur_irq();
-	if (irq < DEPT_IRQS_NR)
-		add_iwait(c, irq, w);
+	cxt = cur_cxt();
+	if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ)
+		add_iwait(c, cxt, w);
 
 	/*
 	 * Avoid adding dependency between user aware nested ecxt and
@@ -1594,7 +1594,7 @@ static bool add_ecxt(struct dept_map *m, struct dept_class *c,
 	eh->sub_l = sub_l;
 
 	irqf = cur_enirqf();
-	for_each_set_bit(irq, &irqf, DEPT_IRQS_NR)
+	for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR)
 		add_iecxt(c, irq, e, false);
 
 	del_ecxt(e);
@@ -1746,7 +1746,7 @@ static void do_event(struct dept_map *m, struct dept_class *c,
 		add_dep(eh->ecxt, wh->wait);
 	}
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		struct dept_ecxt *e;
 
 		if (before(dt->wgen_enirq[i], wg))
@@ -1788,7 +1788,7 @@ static void disconnect_class(struct dept_class *c)
 		call_rcu(&d->rh, del_dep_rcu);
 	}
 
-	for (i = 0; i < DEPT_IRQS_NR; i++) {
+	for (i = 0; i < DEPT_CXT_IRQS_NR; i++) {
 		stale_iecxt(iecxt(c, i));
 		stale_iwait(iwait(c, i));
 	}
@@ -1813,27 +1813,21 @@ static inline unsigned long cur_enirqf(void)
 	return 0UL;
 }
 
-static inline int cur_irq(void)
+static inline int cur_cxt(void)
 {
 	if (lockdep_softirq_context(current))
-		return DEPT_SIRQ;
+		return DEPT_CXT_SIRQ;
 	if (lockdep_hardirq_context())
-		return DEPT_HIRQ;
-	return DEPT_IRQS_NR;
+		return DEPT_CXT_HIRQ;
+	return DEPT_CXT_PROCESS;
 }
 
 static inline unsigned int cur_ctxt_id(void)
 {
 	struct dept_task *dt = dept_task();
-	int irq = cur_irq();
+	int cxt = cur_cxt();
 
-	/*
-	 * Normal process context
-	 */
-	if (irq == DEPT_IRQS_NR)
-		return 0U;
-
-	return dt->irq_id[irq] | (1UL << irq);
+	return dt->cxt_id[cxt] | (1UL << cxt);
 }
 
 static void enirq_transition(int irq)
@@ -1884,7 +1878,7 @@ static void enirq_update(unsigned long ip)
 	/*
 	 * Do enirq_transition() only on an OFF -> ON transition.
 	 */
-	for_each_set_bit(irq, &irqf, DEPT_IRQS_NR) {
+	for_each_set_bit(irq, &irqf, DEPT_CXT_IRQS_NR) {
 		if (prev & (1UL << irq))
 			continue;
@@ -1983,6 +1977,13 @@ void dept_hardirqs_off_ip(unsigned long ip)
 }
 EXPORT_SYMBOL_GPL(dept_hardirqs_off_ip);
 
+void dept_kernel_enter(void)
+{
+	struct dept_task *dt = dept_task();
+
+	dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR;
+}
+
 /*
  * Ensure it's the outmost softirq context.
  */
@@ -1990,7 +1991,7 @@ void dept_softirq_enter(void)
 {
 	struct dept_task *dt = dept_task();
 
-	dt->irq_id[DEPT_SIRQ] += 1UL << DEPT_IRQS_NR;
+	dt->cxt_id[DEPT_CXT_SIRQ] += 1UL << DEPT_CXTS_NR;
 }
 
 /*
@@ -2000,7 +2001,7 @@ void dept_hardirq_enter(void)
 {
 	struct dept_task *dt = dept_task();
 
-	dt->irq_id[DEPT_HIRQ] += 1UL << DEPT_IRQS_NR;
+	dt->cxt_id[DEPT_CXT_HIRQ] += 1UL << DEPT_CXTS_NR;
 }
 
 void dept_sched_enter(void)
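As a worked illustration of the renamed counters, here is a stand-alone model (plain C, not kernel code) of the context-id scheme in the hunks above: the low DEPT_CXTS_NR bits tag the kind of context, and each kernel/softirq/hardirq entry bumps the per-kind counter above those bits, so two different syscalls never share a context id:

#include <stdio.h>

enum { CXT_SIRQ, CXT_HIRQ, CXT_IRQS_NR, CXT_PROCESS = CXT_IRQS_NR, CXTS_NR };

static unsigned long cxt_id[CXTS_NR];

static unsigned long cur_ctxt_id(int cxt)
{
	return cxt_id[cxt] | (1UL << cxt);	/* counter | kind tag */
}

static void kernel_enter(void)			/* models dept_kernel_enter() */
{
	cxt_id[CXT_PROCESS] += 1UL << CXTS_NR;
}

int main(void)
{
	kernel_enter();
	printf("syscall #1 ctxt id: %#lx\n", cur_ctxt_id(CXT_PROCESS));	/* 0xc */
	kernel_enter();
	printf("syscall #2 ctxt id: %#lx\n", cur_ctxt_id(CXT_PROCESS));	/* 0x14 */
	return 0;
}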
From patchwork Mon Jun 26 11:56:48 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112933
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 13/25] dept: Distinguish each work from another
Date: Mon, 26 Jun 2023 20:56:48 +0900
Message-Id: <20230626115700.13873-14-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Workqueue already provides concurrency control. Because of that, a wait
in one work item does not prevent events in other work items once that
control has kicked in. Thus, each work item is better considered a
separate context. So let Dept assign a different context id to each
work.

Signed-off-by: Byungchul Park
---
 include/linux/dept.h     |  2 ++
 kernel/dependency/dept.c | 10 ++++++++++
 kernel/workqueue.c       |  3 +++
 3 files changed, 15 insertions(+)

diff --git a/include/linux/dept.h b/include/linux/dept.h
index f62c7b6f42c6..d9ca9dd50219 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -515,6 +515,7 @@ extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long
 extern void dept_sched_enter(void);
 extern void dept_sched_exit(void);
 extern void dept_kernel_enter(void);
+extern void dept_work_enter(void);
 
 static inline void dept_ecxt_enter_nokeep(struct dept_map *m)
 {
@@ -567,6 +568,7 @@ struct dept_task { };
 #define dept_sched_enter()			do { } while (0)
 #define dept_sched_exit()			do { } while (0)
 #define dept_kernel_enter()			do { } while (0)
+#define dept_work_enter()			do { } while (0)
 #define dept_ecxt_enter_nokeep(m)		do { } while (0)
 #define dept_key_init(k)			do { (void)(k); } while (0)
 #define dept_key_destroy(k)			do { (void)(k); } while (0)
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 4165cacf4ebb..6cf17f206b78 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -1977,6 +1977,16 @@ void dept_hardirqs_off_ip(unsigned long ip)
 }
 EXPORT_SYMBOL_GPL(dept_hardirqs_off_ip);
 
+/*
+ * Assign a different context id to each work.
+ */
+void dept_work_enter(void)
+{
+	struct dept_task *dt = dept_task();
+
+	dt->cxt_id[DEPT_CXT_PROCESS] += 1UL << DEPT_CXTS_NR;
+}
+
 void dept_kernel_enter(void)
 {
 	struct dept_task *dt = dept_task();
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 07895deca271..69c4f464d017 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -51,6 +51,7 @@
 #include
 #include
 #include
+#include
 
 #include "workqueue_internal.h"
 
@@ -2199,6 +2200,8 @@ __acquires(&pool->lock)
 	lockdep_copy_map(&lockdep_map, &work->lockdep_map);
 #endif
 
+	dept_work_enter();
+
 	/* ensure we're on the correct CPU */
 	WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
 		     raw_smp_processor_id() != pool->cpu);
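To illustrate the reasoning above with made-up names (not from the patch): on a multi-threaded workqueue, a sleep in one work item does not keep another work item from running and delivering the wake-up, so treating both as one context would record a spurious dependency. With dept_work_enter() placed at the top of process_one_work() as in the hunk above, each item runs under its own context id:

static DECLARE_COMPLETION(data_ready);

static void work_a_fn(struct work_struct *work)
{
	wait_for_completion(&data_ready);	/* sleeps in one work item */
}

static void work_b_fn(struct work_struct *work)
{
	complete(&data_ready);			/* satisfied from another one */
}

static DECLARE_WORK(work_a, work_a_fn);
static DECLARE_WORK(work_b, work_b_fn);

static void kick_both(void)
{
	schedule_work(&work_a);
	schedule_work(&work_b);	/* can run while work_a is still sleeping */
}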
From patchwork Mon Jun 26 11:56:50 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112914
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 15/25] locking/lockdep, cpu/hotplug: Use a weaker annotation in AP thread
Date: Mon, 26 Jun 2023 20:56:50 +0900
Message-Id: <20230626115700.13873-16-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

cb92173d1f0 ("locking/lockdep, cpu/hotplug: Annotate AP thread") was introduced to make lockdep_assert_cpus_held() work in the AP thread. However, that annotation is stronger than necessary for the purpose; a trylock annotation is sufficient. Furthermore, now that Dept has been introduced, the stronger annotation triggered false positive reports. Replace it with a trylock annotation.
Signed-off-by: Byungchul Park
---
 kernel/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6c0a92ca6bb5..6a9b9c3d90a1 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -356,7 +356,7 @@ int lockdep_is_cpus_held(void)
 
 static void lockdep_acquire_cpus_lock(void)
 {
-	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 0, _THIS_IP_);
+	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0, 1, _THIS_IP_);
 }
 
 static void lockdep_release_cpus_lock(void)
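[Editor's note: for readers unfamiliar with the annotation API, the third argument of rwsem_acquire() is the trylock flag; that is the usual lockdep convention and is stated here as an assumption. A trylock acquisition never blocks, so no wait dependency is recorded for it, yet lockdep_assert_cpus_held() still sees the lock as held. A side-by-side sketch of the two annotations, using hypothetical helper names:]

static void cpus_lock_annotate_strong(void)
{
	/* before: annotated as a blocking acquisition, enough to trip Dept */
	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0 /* subclass */,
		      0 /* trylock */, _THIS_IP_);
}

static void cpus_lock_annotate_weak(void)
{
	/* after: annotated as a trylock, no wait is recorded */
	rwsem_acquire(&cpu_hotplug_lock.dep_map, 0 /* subclass */,
		      1 /* trylock */, _THIS_IP_);
}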
From patchwork Mon Jun 26 11:56:51 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112917
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 16/25] dept: Apply sdt_might_sleep_{start,end}() to dma fence wait
Date: Mon, 26 Jun 2023 20:56:51 +0900
Message-Id: <20230626115700.13873-17-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Makes Dept able to track dma fence waits.
Signed-off-by: Byungchul Park
---
 drivers/dma-buf/dma-fence.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 406b4e26f538..1db4bc0e8adc 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #define CREATE_TRACE_POINTS
 #include
 
@@ -782,6 +783,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 	cb.task = current;
 	list_add(&cb.base.node, &fence->cb_list);
 
+	sdt_might_sleep_start(NULL);
 	while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
 		if (intr)
 			__set_current_state(TASK_INTERRUPTIBLE);
@@ -795,6 +797,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
 		if (ret > 0 && intr && signal_pending(current))
 			ret = -ERESTARTSYS;
 	}
+	sdt_might_sleep_end();
 
 	if (!list_empty(&cb.base.node))
 		list_del(&cb.base.node);
@@ -884,6 +887,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		}
 	}
 
+	sdt_might_sleep_start(NULL);
 	while (ret > 0) {
 		if (intr)
 			set_current_state(TASK_INTERRUPTIBLE);
@@ -898,6 +902,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 		if (ret > 0 && intr && signal_pending(current))
 			ret = -ERESTARTSYS;
 	}
+	sdt_might_sleep_end();
 
 	__set_current_state(TASK_RUNNING);
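[Editor's note: a caller-side sketch of the annotation pattern applied above. The function and its loop are made up for illustration; only the sdt_*() calls from this series and the dma_fence/scheduler helpers are real kernel APIs.]

#include <linux/dma-fence.h>
#include <linux/sched.h>
#include <linux/dept_sdt.h>

static long toy_wait_for_signal(struct dma_fence *fence, long timeout)
{
	/* stage a Dept wait; NULL keys the wait class by this call site */
	sdt_might_sleep_start(NULL);
	while (!dma_fence_is_signaled(fence) && timeout > 0) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		timeout = schedule_timeout(timeout);
	}
	/* drop the staged wait once the sleeping section is done */
	sdt_might_sleep_end();
	__set_current_state(TASK_RUNNING);
	return timeout;
}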
From patchwork Mon Jun 26 11:56:52 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112912
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 17/25] dept: Track timeout waits separately with a new Kconfig
Date: Mon, 26 Jun 2023 20:56:52 +0900
Message-Id: <20230626115700.13873-18-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Waits with valid timeouts don't actually cause deadlocks. However, Dept has been reporting such cases as well, since informing of the circular dependency is still worthwhile in some cases, for example when a timeout is used to avoid a deadlock and is not meant to expire. That said, there are many more cases where a timeout is used for its obvious purpose and is meant to expire. Let Dept report these as information rather than shouting DEADLOCK. Plus, introduce a CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT Kconfig option to make this behavior optional, so that reports involving waits with timeouts can be turned on or off depending on the purpose.
Signed-off-by: Byungchul Park
---
 include/linux/dept.h     | 15 ++++---
 include/linux/dept_ldt.h |  6 ++--
 include/linux/dept_sdt.h | 12 +++++---
 kernel/dependency/dept.c | 66 ++++++++++++++++++++++++++++++++++------
 lib/Kconfig.debug        | 10 ++++++
 5 files changed, 89 insertions(+), 20 deletions(-)

diff --git a/include/linux/dept.h b/include/linux/dept.h
index 583e8fe2dd7b..0aa8d90558a9 100644
--- a/include/linux/dept.h
+++ b/include/linux/dept.h
@@ -270,6 +270,11 @@ struct dept_wait {
			 * whether this wait is for commit in scheduler
			 */
			bool sched_sleep;
+
+			/*
+			 * whether a timeout is set
+			 */
+			bool timeout;
		};
	};
 };
@@ -458,6 +463,7 @@ struct dept_task {
	bool stage_sched_map;
	const char *stage_w_fn;
	unsigned long stage_ip;
+	bool stage_timeout;
 
	/*
	 * the number of missing ecxts
@@ -496,6 +502,7 @@ struct dept_task {
	.stage_sched_map = false,			\
	.stage_w_fn = NULL,				\
	.stage_ip = 0UL,				\
+	.stage_timeout = false,				\
	.missing_ecxt = 0,				\
	.hardirqs_enabled = false,			\
	.softirqs_enabled = false,			\
@@ -513,8 +520,8 @@ extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, con
 extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n);
 extern void dept_map_copy(struct dept_map *to, struct dept_map *from);
 
-extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l);
-extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn);
+extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout);
+extern void dept_stage_wait(struct dept_map *m, struct dept_key *k, unsigned long ip, const char *w_fn, long timeout);
 extern void dept_request_event_wait_commit(void);
 extern void dept_clean_stage(void);
 extern void dept_stage_event(struct task_struct *t, unsigned long ip);
@@ -566,8 +573,8 @@ struct dept_task { };
 #define dept_map_reinit(m, k, su, n)		do { (void)(n); (void)(k); } while (0)
 #define dept_map_copy(t, f)			do { } while (0)
-#define dept_wait(m, w_f, ip, w_fn, sl)		do { (void)(w_fn); } while (0)
-#define dept_stage_wait(m, k, ip, w_fn)		do { (void)(k); (void)(w_fn); } while (0)
+#define dept_wait(m, w_f, ip, w_fn, sl, t)	do { (void)(w_fn); } while (0)
+#define dept_stage_wait(m, k, ip, w_fn, t)	do { (void)(k); (void)(w_fn); } while (0)
 #define dept_request_event_wait_commit()	do { } while (0)
 #define dept_clean_stage()			do { } while (0)
 #define dept_stage_event(t, ip)			do { } while (0)
diff --git a/include/linux/dept_ldt.h b/include/linux/dept_ldt.h
index 062613e89fc3..8adf298dfcb8 100644
--- a/include/linux/dept_ldt.h
+++ b/include/linux/dept_ldt.h
@@ -27,7 +27,7 @@
		else if (t)					\
			dept_ecxt_enter(m, LDT_EVT_L, i, "trylock", "unlock", sl);\
		else {						\
-			dept_wait(m, LDT_EVT_L, i, "lock", sl);	\
+			dept_wait(m, LDT_EVT_L, i, "lock", sl, false);	\
			dept_ecxt_enter(m, LDT_EVT_L, i, "lock", "unlock", sl);\
		}						\
	} while (0)
@@ -39,7 +39,7 @@
		else if (t)					\
			dept_ecxt_enter(m, LDT_EVT_R, i, "read_trylock", "read_unlock", sl);\
		else {						\
-			dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl);\
+			dept_wait(m, q ? LDT_EVT_RW : LDT_EVT_W, i, "read_lock", sl, false);\
			dept_ecxt_enter(m, LDT_EVT_R, i, "read_lock", "read_unlock", sl);\
		}						\
	} while (0)
@@ -51,7 +51,7 @@
		else if (t)					\
			dept_ecxt_enter(m, LDT_EVT_W, i, "write_trylock", "write_unlock", sl);\
		else {						\
-			dept_wait(m, LDT_EVT_RW, i, "write_lock", sl);	\
+			dept_wait(m, LDT_EVT_RW, i, "write_lock", sl, false);\
			dept_ecxt_enter(m, LDT_EVT_W, i, "write_lock", "write_unlock", sl);\
		}						\
	} while (0)
diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h
index 12a793b90c7e..21fce525f031 100644
--- a/include/linux/dept_sdt.h
+++ b/include/linux/dept_sdt.h
@@ -22,11 +22,12 @@
 #define sdt_map_init_key(m, k)		dept_map_init(m, k, 0, #m)
 
-#define sdt_wait(m)							\
+#define sdt_wait_timeout(m, t)						\
	do {								\
		dept_request_event(m);					\
-		dept_wait(m, 1UL, _THIS_IP_, __func__, 0);		\
+		dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t);		\
	} while (0)
+#define sdt_wait(m)		sdt_wait_timeout(m, -1L)
 
 /*
  * sdt_might_sleep() and its family will be committed in __schedule()
@@ -37,12 +38,13 @@
 /*
  * Use the code location as the class key if an explicit map is not used.
  */
-#define sdt_might_sleep_start(m)					\
+#define sdt_might_sleep_start_timeout(m, t)				\
	do {								\
		struct dept_map *__m = m;				\
		static struct dept_key __key;				\
-		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__);\
+		dept_stage_wait(__m, __m ? NULL : &__key, _THIS_IP_, __func__, t);\
	} while (0)
+#define sdt_might_sleep_start(m)	sdt_might_sleep_start_timeout(m, -1L)
 
 #define sdt_might_sleep_end()		dept_clean_stage()
 
@@ -52,7 +54,9 @@
 #else /* !CONFIG_DEPT */
 #define sdt_map_init(m)			do { } while (0)
 #define sdt_map_init_key(m, k)		do { (void)(k); } while (0)
+#define sdt_wait_timeout(m, t)		do { } while (0)
 #define sdt_wait(m)			do { } while (0)
+#define sdt_might_sleep_start_timeout(m, t) do { } while (0)
 #define sdt_might_sleep_start(m)	do { } while (0)
 #define sdt_might_sleep_end()		do { } while (0)
 #define sdt_ecxt_enter(m)		do { } while (0)
diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c
index 8454f0a14d67..52537c099b68 100644
--- a/kernel/dependency/dept.c
+++ b/kernel/dependency/dept.c
@@ -740,6 +740,8 @@ static void print_diagram(struct dept_dep *d)
	if (!irqf) {
		print_spc(spc, "[S] %s(%s:%d)\n", c_fn, fc_n, fc->sub_id);
		print_spc(spc, "[W] %s(%s:%d)\n", w_fn, tc_n, tc->sub_id);
+		if (w->timeout)
+			print_spc(spc, "--------------- >8 timeout ---------------\n");
		print_spc(spc, "[E] %s(%s:%d)\n", e_fn, fc_n, fc->sub_id);
	}
 }
@@ -793,6 +795,24 @@ static void print_dep(struct dept_dep *d)
 
 static void save_current_stack(int skip);
 
+static bool is_timeout_wait_circle(struct dept_class *c)
+{
+	struct dept_class *fc = c->bfs_parent;
+	struct dept_class *tc = c;
+
+	do {
+		struct dept_dep *d = lookup_dep(fc, tc);
+
+		if (d->wait->timeout)
+			return true;
+
+		tc = fc;
+		fc = fc->bfs_parent;
+	} while (tc != c);
+
+	return false;
+}
+
 /*
  * Print all classes in a circle.
  */
@@ -815,10 +835,14 @@ static void print_circle(struct dept_class *c)
	pr_warn("summary\n");
	pr_warn("---------------------------------------------------\n");
 
-	if (fc == tc)
+	if (is_timeout_wait_circle(c)) {
+		pr_warn("NOT A DEADLOCK BUT A CIRCULAR DEPENDENCY\n");
+		pr_warn("CHECK IF THE TIMEOUT IS INTENDED\n\n");
+	} else if (fc == tc) {
		pr_warn("*** AA DEADLOCK ***\n\n");
-	else
+	} else {
		pr_warn("*** DEADLOCK ***\n\n");
+	}
 
	i = 0;
	do {
@@ -1564,7 +1588,8 @@ static void add_dep(struct dept_ecxt *e, struct dept_wait *w)
 static atomic_t wgen = ATOMIC_INIT(1);
 
 static void add_wait(struct dept_class *c, unsigned long ip,
-		     const char *w_fn, int sub_l, bool sched_sleep)
+		     const char *w_fn, int sub_l, bool sched_sleep,
+		     bool timeout)
 {
	struct dept_task *dt = dept_task();
	struct dept_wait *w;
@@ -1584,6 +1609,7 @@ static void add_wait(struct dept_class *c, unsigned long ip,
	w->wait_fn = w_fn;
	w->wait_stack = get_current_stack();
	w->sched_sleep = sched_sleep;
+	w->timeout = timeout;
 
	cxt = cur_cxt();
	if (cxt == DEPT_CXT_HIRQ || cxt == DEPT_CXT_SIRQ)
@@ -2338,7 +2364,7 @@ static struct dept_class *check_new_class(struct dept_key *local,
  */
 static void __dept_wait(struct dept_map *m, unsigned long w_f,
			unsigned long ip, const char *w_fn, int sub_l,
-			bool sched_sleep, bool sched_map)
+			bool sched_sleep, bool sched_map, bool timeout)
 {
	int e;
 
@@ -2361,7 +2387,7 @@
		if (!c)
			continue;
 
-		add_wait(c, ip, w_fn, sub_l, sched_sleep);
+		add_wait(c, ip, w_fn, sub_l, sched_sleep, timeout);
	}
 }
 
@@ -2403,14 +2429,23 @@ static void __dept_event(struct dept_map *m, unsigned long e_f,
 }
 
 void dept_wait(struct dept_map *m, unsigned long w_f,
-	       unsigned long ip, const char *w_fn, int sub_l)
+	       unsigned long ip, const char *w_fn, int sub_l,
+	       long timeoutval)
 {
	struct dept_task *dt = dept_task();
	unsigned long flags;
+	bool timeout;
 
	if (unlikely(!dept_working()))
		return;
 
+	timeout = timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT;
+
+#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT)
+	if (timeout)
+		return;
+#endif
+
	if (dt->recursive)
		return;
 
@@ -2419,21 +2454,30 @@ void dept_wait(struct dept_map *m, unsigned long w_f,
 
	flags = dept_enter();
 
-	__dept_wait(m, w_f, ip, w_fn, sub_l, false, false);
+	__dept_wait(m, w_f, ip, w_fn, sub_l, false, false, timeout);
 
	dept_exit(flags);
 }
 EXPORT_SYMBOL_GPL(dept_wait);
 
 void dept_stage_wait(struct dept_map *m, struct dept_key *k,
-		     unsigned long ip, const char *w_fn)
+		     unsigned long ip, const char *w_fn,
+		     long timeoutval)
 {
	struct dept_task *dt = dept_task();
	unsigned long flags;
+	bool timeout;
 
	if (unlikely(!dept_working()))
		return;
 
+	timeout = timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT;
+
+#if !defined(CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT)
+	if (timeout)
+		return;
+#endif
+
	if (m && m->nocheck)
		return;
 
@@ -2481,6 +2525,7 @@ void dept_stage_wait(struct dept_map *m, struct dept_key *k,
 
	dt->stage_w_fn = w_fn;
	dt->stage_ip = ip;
+	dt->stage_timeout = timeout;
 unlock:
	arch_spin_unlock(&stage_spin);
 
@@ -2506,6 +2551,7 @@ void dept_clean_stage(void)
	dt->stage_sched_map = false;
	dt->stage_w_fn = NULL;
	dt->stage_ip = 0UL;
+	dt->stage_timeout = false;
	arch_spin_unlock(&stage_spin);
 
	dept_exit_recursive(flags);
@@ -2523,6 +2569,7 @@ void dept_request_event_wait_commit(void)
	unsigned long ip;
	const char *w_fn;
	bool sched_map;
+	bool timeout;
 
	if (unlikely(!dept_working()))
		return;
@@ -2545,6 +2592,7 @@ void dept_request_event_wait_commit(void)
	w_fn = dt->stage_w_fn;
	ip = dt->stage_ip;
	sched_map = dt->stage_sched_map;
+	timeout = dt->stage_timeout;
 
	/*
	 * Avoid zero wgen.
@@ -2552,7 +2600,7 @@
	wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen);
	WRITE_ONCE(dt->stage_m.wgen, wg);
 
-	__dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map);
+	__dept_wait(&dt->stage_m, 1UL, ip, w_fn, 0, true, sched_map, timeout);
 exit:
	dept_exit(flags);
 }
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 611fd01751a7..912309bd30ba 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1282,6 +1282,16 @@ config DEPT
	  noting, to mitigate the impact by the false positives, multi
	  reporting has been supported.
 
+config DEPT_AGGRESSIVE_TIMEOUT_WAIT
+	bool "Aggressively track even timeout waits"
+	depends on DEPT
+	default n
+	help
+	  Timeout wait doesn't contribute to a deadlock. However,
+	  informing a circular dependency might be helpful for cases
+	  that timeout is used to avoid a deadlock. Say N if you'd like
+	  to avoid verbose reports.
+
 config LOCK_DEBUGGING_SUPPORT
	bool
	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
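[Editor's note: the rule the series uses to decide whether a wait counts as a timeout wait is small enough to restate on its own. The helper name below is illustrative; the check itself is the one added to dept_wait()/dept_stage_wait() above.]

#include <linux/sched.h>	/* MAX_SCHEDULE_TIMEOUT */

/*
 * Only a finite, positive timeout is treated as a timeout wait.  Without
 * CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT such waits are skipped entirely;
 * with it, they are tracked but reported as information, not DEADLOCK.
 */
static bool is_timeout_wait(long timeoutval)
{
	return timeoutval > 0 && timeoutval < MAX_SCHEDULE_TIMEOUT;
}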
From patchwork Mon Jun 26 11:56:53 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112923
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 18/25] dept: Apply timeout consideration to wait_for_completion()/complete()
Date: Mon, 26 Jun 2023 20:56:53 +0900
Message-Id: <20230626115700.13873-19-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply the timeout consideration to wait_for_completion()/complete().
Signed-off-by: Byungchul Park
---
 include/linux/completion.h | 4 ++--
 kernel/sched/completion.c  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/completion.h b/include/linux/completion.h
index 32d535abebf3..15eede01a451 100644
--- a/include/linux/completion.h
+++ b/include/linux/completion.h
@@ -41,9 +41,9 @@ do {									\
  */
 #define init_completion_map(x, m) init_completion(x)
 
-static inline void complete_acquire(struct completion *x)
+static inline void complete_acquire(struct completion *x, long timeout)
 {
-	sdt_might_sleep_start(&x->dmap);
+	sdt_might_sleep_start_timeout(&x->dmap, timeout);
 }
 
 static inline void complete_release(struct completion *x)
diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
index d57a5c1c1cd9..261807fa7118 100644
--- a/kernel/sched/completion.c
+++ b/kernel/sched/completion.c
@@ -100,7 +100,7 @@ __wait_for_common(struct completion *x,
 {
	might_sleep();
 
-	complete_acquire(x);
+	complete_acquire(x, timeout);
 
	raw_spin_lock_irq(&x->wait.lock);
	timeout = do_wait_for_common(x, action, timeout, state);
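[Editor's note: a caller-side sketch, not part of the patch. The timeout a caller passes to wait_for_completion_timeout() is the value that now reaches complete_acquire(), which lets Dept distinguish a bounded wait from an unbounded one.]

#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static int toy_wait_bounded(struct completion *done)
{
	/* finite timeout: classified by Dept as a timeout wait */
	if (!wait_for_completion_timeout(done, msecs_to_jiffies(100)))
		return -ETIMEDOUT;
	return 0;
}

static void toy_wait_unbounded(struct completion *done)
{
	/* uses MAX_SCHEDULE_TIMEOUT internally: still a deadlock candidate */
	wait_for_completion(done);
}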
From patchwork Mon Jun 26 11:56:54 2023
X-Patchwork-Submitter: Byungchul Park
X-Patchwork-Id: 112913
From: Byungchul Park
To: linux-kernel@vger.kernel.org
Subject: [PATCH v10 19/25] dept: Apply timeout consideration to swait
Date: Mon, 26 Jun 2023 20:56:54 +0900
Message-Id: <20230626115700.13873-20-byungchul@sk.com>
In-Reply-To: <20230626115700.13873-1-byungchul@sk.com>
References: <20230626115700.13873-1-byungchul@sk.com>

Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT has been introduced, apply the timeout consideration to swait, assuming the input 'ret' in the ___swait_event() macro is used as a timeout value.
Signed-off-by: Byungchul Park
---
 include/linux/swait.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/swait.h b/include/linux/swait.h
index 02848211cef5..def1e47bb678 100644
--- a/include/linux/swait.h
+++ b/include/linux/swait.h
@@ -162,7 +162,7 @@ extern void finish_swait(struct swait_queue_head *q, struct swait_queue *wait);
	struct swait_queue __wait;					\
	long __ret = ret;						\
									\
-	sdt_might_sleep_start(NULL);					\
+	sdt_might_sleep_start_timeout(NULL, __ret);			\
	INIT_LIST_HEAD(&__wait.task_list);				\
	for (;;) {							\
		long __int = prepare_to_swait_event(&wq, &__wait, state);\
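[Editor's note: a caller-side sketch, assuming swait_event_timeout_exclusive() as the caller-facing macro in include/linux/swait.h. Its 'timeout' argument becomes the local '__ret' inside ___swait_event(), which is exactly what is now passed to sdt_might_sleep_start_timeout() above.]

#include <linux/swait.h>
#include <linux/jiffies.h>

static long toy_swait_bounded(struct swait_queue_head *wq, bool *flag)
{
	/* bounded wait: Dept sees __ret == HZ and treats it as a timeout wait */
	return swait_event_timeout_exclusive(*wq, READ_ONCE(*flag), HZ);
}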
[2620:137:e000::1:20]) by mx.google.com with ESMTP id t16-20020a170906269000b00977ebf6b906si2488797ejc.887.2023.06.26.05.37.40; Mon, 26 Jun 2023 05:38:05 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230509AbjFZMUi (ORCPT + 99 others); Mon, 26 Jun 2023 08:20:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231193AbjFZMUF (ORCPT ); Mon, 26 Jun 2023 08:20:05 -0400 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id B8E13E53; Mon, 26 Jun 2023 05:19:28 -0700 (PDT) X-AuditID: a67dfc5b-d85ff70000001748-e3-64997d6d0714 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v10 20/25] dept: Apply timeout consideration to waitqueue wait Date: Mon, 26 Jun 2023 20:56:55 +0900 Message-Id: <20230626115700.13873-21-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230626115700.13873-1-byungchul@sk.com> References: <20230626115700.13873-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSW0xTWRSGZ5/LPqfVmmMlelSik0Y00ahoxKyo8TLRuOdhZlDfvASrPUK1 VNIiiNERsRoHxAiRAtqZlEsqAgq2qCiUYBEEUUElWEklUq8dQBRtBenotEZfVr78a/3f0+Jp ZQM7jdfqkyWDXq1TYTkjHxxfNF9/qFAT/cKMIOdkNPg/nmDAUlWJofNSBYLKmiMU+JrXw+PA AIKxex005Od1Iijqe0pDTUsvAmdZBoZHLyZAl38IQ1teFoajJVUYHvQHKfCYcymosP8G7aeL KWgcfc1Avg/DufyjVGi8oWDUVs6BLT0KvGVnOQj2LYK23m4WnD3zoPAfD4Z6ZxsDLbVeCh7d sGDorfzKQntLKwOdOdksXHxbjKE/YKPB5h/i4GGjlYJqU0j0b9BJwfEPX1i4nd0YotLLFHQ9 qUPQcOIZBfbKbgxN/gEKHPY8Gj6fb0bgPTXIwbGToxycO3IKQdYxMwMd/91mweSJgbERC169 nDQNDNHE5EglzoCVIXeKRXL97FOOmBp6OGK17yOOsrmkpN5HkaJhP0vs5X9hYh/O5UjmYBdF PN31mLy9f58jrQVjTGzkZvkKjaTTpkiGhSu3yxOs74fYpCvc/r99unRkwplIxovCErHgeDX3 
gz8XDbJhxsIc0e0epcMcIfwsOrJfhXI5Twsl48TXrbe+FSYJv4vvex6HjnieEaJEc/CXcKwQ lopNroHv/pliRXXjN48slNfdLUZhVgoxYobHhcNOUciSia/On/lemCreLHMzp5HCin4qR0qt PiVRrdUtWZCQptfuX7Bzb6Idhb7Kdii4pRYNd25yIYFHqvGK6BkFGiWrTjGmJbqQyNOqCMXk kXyNUqFRpx2QDHvjDPt0ktGFpvOMaopicSBVoxTi1cnSHklKkgw/thQvm5aOYhLdq0YueimX OcPE/8HdvdqzkR6bfTDJV9NxuDfe8uFC7MTd1/5Mbs5MuVEbzNO2L5tpiXu3NWpNqXHHE/ev u/wrNqU6vDnpCatzJsVfnRiHm6sKxedrbm6z1Qk1nyakvazNiMwiazf0n/kyf1bkV39fvOS7 HKGxrssNzJ4KpbJYFWNMUC+aSxuM6v8BdImIxFEDAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTcRjG+5/L/xxXi9OSOhjdFhYVlV2UF4qIvngKkvpSYDdXO+VoW7aZ aWRZrjJNUclLpjU1lsxVNgssXY1Z6rpaMy2bktJtpdnFDS/L2qK+vPx4noffp5clZUY6jFVp E0WdVqGWYwkliVmZvkiTel4Z0XQ6HPLORoB3MIOC0usWDK3XqhFYbh4nwPMgGjp8fQhGnzwj oaigFUF5TxcJN5u6EdiqTmBwvZsIbd4BDM6CLAzpldcxPP/iJ8BdmE9AtXUDPMqtIMA+/JGC Ig+GC0XpROB8ImDYZGbAlBYOvVUlDPh7loKzu52GxjInDbbOhXD+ohtDg81JQVNdLwGuO6UY ui2/aXjU1EJBa142DVe/VmD44jORYPIOMPDCbiSgxhCwffbbCDj1c4yG5mx7gC7fIKDtdT2C uxlvCbBa2jE0evsIqLUWkDBy5QGC3px+Bk6eHWbgwvEcBFknCyl49quZBoM7EkaHSvGaVUJj 3wApGGoPCTafkRIeVvDC7ZIuRjDc7WQEo/WgUFu1QKhs8BBC+Q8vLVjNZ7Bg/ZHPCJn9bYTg bm/AwtenTxmhpXiU2jg9VrJKKapVSaJuyeo4Sbzx+wCdcItJLvOo05ABZ6IQludW8CPl/XSQ MTePf/VqmAxyKDeLr83+EMglLMlVjuc/ttxngsVkLob/3tkRGLEsxYXzhf61wVjKRfGNjr5/ zpl8dY39ryckkNc/rkBBlnGR/Am3A+ciiRGNM6NQlTZJo1CpIxfr98WnaFXJi3fv11hR4G9M qf68OjToinYgjkXyCdKIGcVKGa1I0qdoHIhnSXmodMpQkVImVSpSDou6/Tt1B9Wi3oGmsZR8 qnT9FjFOxu1VJIr7RDFB1P1vCTYkLA3FTlhvZtkCza51rs1HoqJjLh5w0TXnjmrndEXVDDab O9q3f54KL9GbnDHnpSnLX0/37dH7difEWu8ZjJna3Cuu+cfqw4qrrm6wvDclzvZ1zq2fsWz+ ZVmMbNuSmSNZYWO/G0pS3+2waOx14bu2ZWf8mr3s28jWtcuH8j3cppWTpC/j5JQ+XrF0AanT K/4AG6vq3jMDAAA= X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769768836414368952?= X-GMAIL-MSGID: =?utf-8?q?1769768836414368952?= Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to waitqueue wait, assuming an input 'ret' in ___wait_event() macro is used as a timeout value. Signed-off-by: Byungchul Park --- include/linux/wait.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/wait.h b/include/linux/wait.h index ff349e609da7..aa1bd964be1e 100644 --- a/include/linux/wait.h +++ b/include/linux/wait.h @@ -304,7 +304,7 @@ extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags); struct wait_queue_entry __wq_entry; \ long __ret = ret; /* explicit shadow */ \ \ - sdt_might_sleep_start(NULL); \ + sdt_might_sleep_start_timeout(NULL, __ret); \ init_wait_entry(&__wq_entry, exclusive ? 
WQ_FLAG_EXCLUSIVE : 0); \ for (;;) { \ long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\ From patchwork Mon Jun 26 11:56:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112920 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7452380vqr; Mon, 26 Jun 2023 05:41:40 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5Ak2yxq9ma+Ony7lhxMUkOeQEKcvDUcOxSpC66PruK0V+uq5x/K0xZnHAtNtrunQSUWCOw X-Received: by 2002:a05:6402:45a:b0:51a:2013:583 with SMTP id p26-20020a056402045a00b0051a20130583mr18957794edw.13.1687783300087; Mon, 26 Jun 2023 05:41:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687783300; cv=none; d=google.com; s=arc-20160816; b=B5eQ0BBZY0wpmRbAf4uWq0bsfbcsMZBhuhwvWdhj9Iz1fTRk8RATENGdkEcngnqb8N AyqPuO5+j9wSpmmjtIZJj4NpoaoxXNAtxQ7kh4yNDk6mz1hVXIXvtkvHo5UtrLOFSwPp FzLblTF5YnxtuJGXaTqQFNYim0kXatYsqlVt8zksAXcomQevU7t/kzLcSzPeNlBAh4Pu TX7A1yoegZOiZpjJ96E930V48ELnVcWbeyxJ4bAMt4DSm2knlxZxj7vRD3wZl3KLcM3B zHAtynL43JBMZMunmoelxuAEUCXlOfwh2A0HJmckK5BBwJka7hw4AfUztNgjUaPIjhfJ 72Bw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=09EUZJPmXW7lOZvbeu5TNUPkzhdC2Dat/aeOn/0RYwc=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=fifkGPpZI+K1BmCLymWXVqyovLYMC9mHoF7AJ3QmXtGxRawMG/M1fR8Cgcte+qz776 bVjlHx6W7SLjPekhaLwGlcMWLxvXBp7j8uINCggWVIM9G4TkGSmi+MG5v8TXXNKrQ+8M M3RIcQe8wsx9lcKN7XSiU72YjxnDWjral04Cn00LUGD0ZM/YX+rcNKK4gQB0na0ioRsM LnUSJlc/k/g0YgMfyuarrQgdN/Mql88zFVdBRrztlLC2CDlAPdYHY3DC5e1g8LYi+Nhv a5GcMlEhWn5yxmzEELiHsy7B0M4zPsVa0mMKmjLuf7i9pvJtR0Bx6mZ2wE6IcRbmHiwz vXAA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
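The ___wait_event() change above means a timed wait now tells Dept its bound: the timeout handed to wait_event_timeout() is exactly the 'ret' that ___wait_event() passes on to sdt_might_sleep_start_timeout(). A minimal sketch with made-up names (my_wq, my_done, my_wait); this is an illustration, not code from the series:

#include <linux/wait.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);	/* hypothetical wait queue */
static bool my_done;			/* hypothetical condition, set by the waker */

static int my_wait(void)
{
	/*
	 * __ret inside ___wait_event() starts out as HZ here, so the
	 * annotation becomes sdt_might_sleep_start_timeout(NULL, HZ)
	 * instead of the timeout-less sdt_might_sleep_start(NULL).
	 */
	if (!wait_event_timeout(my_wq, my_done, HZ))
		return -ETIMEDOUT;
	return 0;
}

The waker side would simply do 'my_done = true; wake_up(&my_wq);'.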
[2620:137:e000::1:20]) by mx.google.com with ESMTP id i15-20020aa7dd0f000000b0051d97a21bf6si1362352edv.72.2023.06.26.05.41.16; Mon, 26 Jun 2023 05:41:40 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230115AbjFZMU2 (ORCPT + 99 others); Mon, 26 Jun 2023 08:20:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54374 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231197AbjFZMUF (ORCPT ); Mon, 26 Jun 2023 08:20:05 -0400 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id DC8C39B; Mon, 26 Jun 2023 05:19:32 -0700 (PDT) X-AuditID: a67dfc5b-d85ff70000001748-f4-64997d6d30d7 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v10 21/25] dept: Apply timeout consideration to hashed-waitqueue wait Date: Mon, 26 Jun 2023 20:56:56 +0900 Message-Id: <20230626115700.13873-22-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230626115700.13873-1-byungchul@sk.com> References: <20230626115700.13873-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSbUxTZxTHee7Lc2+rNTdV5yMum2nCJBoVGCwn0RD3wXBntmzJ1A9bjDT0 RhqhmlZ5S4ggRRQpEQZUgUwsWyHQibaQoFDFIm8iUpUwJIUALiJCYbK1oYK4FrMvJ7+c88/v fPnztPI+G85rdWckvU6dosJyRu5db9mty76miSrrlkFJURT4/r3IQHWTDYP7ZiMCW3MuBTNd CfCnfw7B8sAgDeZyN4Ibk2M0NHePI3DWn8fw/K8NMORbwNBXfhlDXm0ThqezKxR4KkopaLR/ B/1XLBR0BKYZMM9gqDLnUcHxmoKAtYEDa04ETNVXcrAyGQ1948MsOEd3wbVfPRjanX0MdLdO UfD8bjWGcdsHFvq7exlwl5hY+GPegmHWb6XB6lvg4FlHDQW3jEHRmxUnBRf+WWWhx9QRpN9u UzD0og3BvYsTFNhtwxg6fXMUOOzlNLyr60IwVezlIL8owEFVbjGCy/kVDAy+72HB6ImD5aVq fGCf2Dm3QItGR7ro9Ncw4iMLEe9UjnGi8d4oJ9bYz4qO+p1ibfsMJd5Y9LGiveESFu2LpZxY 6B2iRM9wOxbnnzzhxN6ry8wPn/4k36+RUrRpkn5vfKI8+XFeEzpdzGX87TFROWiSLUQ8T4RY 
0l8eWYhka+h+Y+JCjIUdZGQkQId4k7CdOEyvgnE5Twu168h078O10EbhCBkpGGdDzAgRZHrQ iEOsEL4izRMTzEfp56TxVseaSBbctz22oBArhThy3uPCISkRCmTklevjZyJsJQ/qR5grSFGD whqQUqtLS1VrU2L3JGfqtBl7kk6l2lGwVtbslZ9b0aL7RxcSeKRar4j67KpGyarTDJmpLkR4 WrVJ8cmSWaNUaNSZWZL+1HH92RTJ4ELbeEa1RRHjT9cohRPqM9JJSTot6f+/UrwsPAdlzfo7 q3pelh5aXQzMhr1wv+1pIXsjChib4nrCutG0smx6e2S649uEgfASb+y7DV8e8Ma3HMwqy6uN +Ho++e77w4+SwvSXMppzVXX7Cnd9sWXb0VTz7+cM5JfEmCH8fdsOU9/mioHIQ8JyzLGx6Mpw +zexE3EHP7xExdcXumx3rPIiFWNIVkfvpPUG9X+WuxbJUgMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0iTYRTHe97L826rxcuyerErC5HsooHGgSQKujwUhX2IqIxc7SWXOm1T S6Hwsm7abEpmXiqbsYZay+kHK1eiZS5LLcUspjiJzEsa2iRTs2n05fDjnP/5ffpLaIWZ9ZVo tPGiTquKVmIZI9u/JX1DzPl8dZDRvh6yrwWB5+cVBops5RhaH5UhKK9KpaD/1W74OD6EYPJd Cw15ua0I7rm7aKhq6EbgsKZhaPuyENo9IxicuZkY0ktsGN4PTlHguplDQZl9HzSZzBTUTvQx kNePoTAvnfKObxRMWEo5sKT4Qa+1gIMp9yZwdnewUH/byYLj8zrIv+PCUONwMtBQ3UtB29Mi DN3lMyw0NTQy0JptZOHhsBnD4LiFBotnhIMPtcUUPDZ4bQNTDgoujf1h4bWx1kv3Kyho//QM wfMrPRTYyzsw1HuGKKi059Lw+8ErBL1Z3zm4eG2Cg8LULASZF28y0DL9mgWDKwQmfxXhbaGk fmiEJobKs8QxXsyQN2aBPCno4ojh+WeOFNsTSKU1gJTU9FPk3qiHJfbSq5jYR3M4kvG9nSKu jhpMhpubOdJ4a5IJW3FEFqoWozWJoi5wa4Qs8m26DcVlced+uIxUCnKzGUgqEfhgoXXAyM0y 5v2Fzs4JepZ9+NVCpfGrNyOT0HzJfKGv8eVcaBF/UOi83D33zPB+Ql+LAc+ynN8sVPX0MP+k q4Syx7VzIql3/+ytGc2ygg8R0lx12IRkxWheKfLRaBNjVJrokI36qMgkrebcxpOxMXbkbY7l /FR2NfrZtrsO8RKkXCAPWnlLrWBVifqkmDokSGilj3zJrzy1Qq5WJSWLutjjuoRoUV+HlkkY 5VL5nkNihII/pYoXo0QxTtT9v1ISqW8KIhnX7x5YOqYeP/N1JXSNvuhI2+L2tsA2dkGfPGyr DgyJ53t2FcZVnG5vCsZW4wnT6WPhgjo/PKpv+VpTQoInM3Q6OQu7tCaF23p3RhomXRZ0I/Co /wXf3wZnwUBOwOBen2+GL22pO9eETYfuWOxk5pXOrGWV4cGHcxZvDy+yKBl9pGpTAK3Tq/4C 5Kvx3TUDAAA= X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769769061541864725?= X-GMAIL-MSGID: =?utf-8?q?1769769061541864725?= Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to hashed-waitqueue wait, assuming an input 'ret' in ___wait_var_event() macro is used as a timeout value. Signed-off-by: Byungchul Park --- include/linux/wait_bit.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/wait_bit.h b/include/linux/wait_bit.h index fe89282c3e96..3ef450d9a7c5 100644 --- a/include/linux/wait_bit.h +++ b/include/linux/wait_bit.h @@ -247,7 +247,7 @@ extern wait_queue_head_t *__var_waitqueue(void *p); struct wait_bit_queue_entry __wbq_entry; \ long __ret = ret; /* explicit shadow */ \ \ - sdt_might_sleep_start(NULL); \ + sdt_might_sleep_start_timeout(NULL, __ret); \ init_wait_var_entry(&__wbq_entry, var, \ exclusive ? 
WQ_FLAG_EXCLUSIVE : 0); \ for (;;) { \ From patchwork Mon Jun 26 11:56:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112919 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7452246vqr; Mon, 26 Jun 2023 05:41:24 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6/cRjvcEL1vCFIG7ud3ORzqLH7gnDwc5+KBLhDgSKXXbbA4wnmDj+ZcGdef3jM8m/+IXeY X-Received: by 2002:a17:907:7ba5:b0:982:4b35:c0b6 with SMTP id ne37-20020a1709077ba500b009824b35c0b6mr28109515ejc.1.1687783284572; Mon, 26 Jun 2023 05:41:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687783284; cv=none; d=google.com; s=arc-20160816; b=jUlahin26UsFKG93lJRWOCkF2sRuL66nGQ+UuT6/xeQKcuV8V1lVHur9zMgLM40HTX mOZMt/mrhsAevf/YjZAdd++frQPdwWxyaTyFKOadqbo80CKOp7D9uU4OlMXUscp5D9sn kcT2Hd9Vl8crupYyezmSqpAwJILaPhmc6lbCcWW8s38PJ89WdLsk9oJZ/8bd6w0o5T1o ugPoPApHTPlDhH123+i+c4LE4HUhaG3gf7IDHwL4Lyt4SAx7lvJKSW4aJMGPh0zXcjQa 8qifJhOA5Gpa3lQlVHPSfGBH90skmDxRi30hYXUg4XlZkdtQ5TqS7nFMdgRWofUobEUN 9dSA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=h3di72I7R2VgeiF8z6AYHKaLQ80w8Bd5V6CJo9uw7fQ=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=xe1+Qm8gCcYGqQy+hXfdnFH/at9nJlg53VnoWnsgzndPBvfDIGYEamV3nnO1Gp7K4W kGhTStwgaEnJyks9d6x22w58vpAoRKl5/CA8xf7m/fStlzLGUt4XtGcXwSdniL17bPxz fESt2OSEC43MGpeDy35TK3q8DwVgzFLJyT3Y9HRDRWKsGdGdopTeGh3ewXnp72wQvEQB 75i5tN5lE9kq7BSOinr/a1lmT+up2gC2YxH8FmFSoRmn6SfuduOzgGOx+jDglFyDR0qB o2Jc1NOrSffhBFqVe8xqhfwUpr61MLSqtAyauEFVZp/pnS31BySAV7YnxgRD+2weZGiS Rx2w== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id mc27-20020a170906eb5b00b00991d19c47e2si751196ejb.520.2023.06.26.05.40.58; Mon, 26 Jun 2023 05:41:24 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231192AbjFZMVp (ORCPT + 99 others); Mon, 26 Jun 2023 08:21:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229757AbjFZMU1 (ORCPT ); Mon, 26 Jun 2023 08:20:27 -0400 Received: from invmail4.hynix.com (exvmail4.skhynix.com [166.125.252.92]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 4E5CB1990; Mon, 26 Jun 2023 05:20:05 -0700 (PDT) X-AuditID: a67dfc5b-d85ff70000001748-06-64997d6d445e From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v10 22/25] dept: Apply timeout consideration to dma fence wait Date: Mon, 26 Jun 2023 20:56:57 +0900 Message-Id: <20230626115700.13873-23-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230626115700.13873-1-byungchul@sk.com> References: <20230626115700.13873-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0xTdxjG/Z/L/5x21pzUyw6M6FKDJl5hUfNmzsXED/4THbpo1OgHbehR qqWyoiguKgreqhDUAApMazGlFlAsqCjU1Co3FYGBtZJSJxMVuS3MVi7dpdT45c0vz/s8z6eH p5VONpLX6vdIBr1ap8JyRt4/0Txff/CiJqbJJcDZMzHg/3iSgcIbpRharpcgKK08QkFP7Up4 EehDMNbUTENeTguCK687aais8yFwWI9iaHszCdr9gxgac05jSC+6gaG1N0iBN/ccBSX2n+BJ tpkC58g7BvJ6MBTkpVOh856CEYuNA0taNHRZ8zkIvo6FRp+bBUfHXLh4yYuhxtHIQF1VFwVt 9wox+Er/Y+FJXQMDLWczWSgbMGPoDVhosPgHOfjdaaKgPCNU9CHooOD43/+yUJ/pDNHVmxS0 v6xGcP/kHxTYS90YHvr7KKiw59AwWlyLoCurn4NjZ0Y4KDiSheD0sVwGmv+pZyHDuxjGhgvx 8qXkYd8gTTIq9hFHwMSQx2aR3M3v5EjG/Q6OmOx7SYV1Dimq6aHIlSE/S+y2U5jYh85xxNjf ThGvuwaTgWfPONJwYYxZG7VZ/oNG0mlTJMPCH7fJEx6N2dikNn6/57kNp6ESzohkvCgsEqu7 
XfQXNl3rZcYZC7NFj2ckrE8RvhUrMt+yRiTnaaHoK/Fdw6NweLIQJ9Z6ysImRogWy//KosZZ ISwRK0dN6HPpDLGk3Bn2yEJ69VNzWFcKi8WjXhceLxWF8zKx+Wkm8zkQIT6wephspDChCTak 1OpTEtVa3aIFCal67f4F8bsT7Si0K8vB4JYqNNSyzoUEHqkmKmKmX9AoWXVKcmqiC4k8rZqi mDacp1EqNOrUA5Jh91bDXp2U7ELf8Izqa8V3gX0apbBDvUfaJUlJkuHLl+JlkWlIGf1b66uO jT2W69r8VT9PvXPgTcTmD2/XyOJWPK7yde5M37S9bFn0su7RWPf61ZjT5W6s/6g69GfHiWJ3 xC/kUHzU2tbvzYFNp+Ydjqcbgxua9Nt86bd+zT3syMqfFWWsjIgxFt+eZXV/ulxHz0yKz74a OVA4L9HavaTLU2CZELdi57CKSU5Qx86hDcnq/wHaIHOuUwMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSf0iTeRzH7/t9nuf7PC4XD0vqodBiFEKRGWV8oIgiwieP6zqIO7grcrWH XG6rNlsaBpYWnTVTyaZpMafsbLOyzT+snIi/l3dpabZkWdrPpenRNWlO66bRP29evD/v9/uv D0cpbMxiTqPPkAx6lVZJZLRs58bc1bqTZerEztfroOhCIgQ/naOh4lYtgd6bTgS19acwBNqT 4cnkGILwPz0UWEp6EVQOP6OgvmMIgafmNIG+V/OhPzhBwFtynkBu1S0CD0enMfgvF2Nwun6C 7kIbhubQWxosAQLlllwckXcYQnYHC/acFTBSc4WF6eG14B0aYKD1qpcBz+AqKLvmJ9Do8dLQ 0TCCoe9uBYGh2q8MdHd00dBbZGbgxriNwOiknQJ7cIKFR81WDHV5kbX30x4MZ//7wkCnuTlC 1bcx9D+9h6Dp3AsMrtoBAq3BMQxuVwkFU3+1Ixgp+MDCmQshFspPFSA4f+YyDT0znQzk+ZMg /LmCbNkkto5NUGKe+7jombTS4n2bIN658owV85oGWdHqOia6a1aKVY0BLFZ+DDKiy/EnEV0f i1kx/0M/Fv0DjUQcf/CAFbtKw/Su2N9lm9SSVmOSDGs2p8rS2sIO5kgfl+l77CA5yMnmoyhO 4NcL1uuj9CwTPl7w+ULULMfwywS3+Q2Tj2QcxVfNE952tc0VFvA7hXbfjbkQza8Q6v4twLMs 5zcI9VNW9G10qeCsa57LREX8e3/b5nwFnySc9reQQiSzoh8cKEajN+lUGm1SgjE9LUuvyUw4 cFjnQpHPsZ+cLmpAn/qSWxDPIWW0PDGuVK1gVCZjlq4FCRyljJEv/GxRK+RqVdYJyXB4n+GY VjK2oCUcrVwkT/lNSlXwB1UZUrokHZEM36+Yi1qcg6LLeQXhMyoafzTrtVUjgbK9hTteL+81 jCUvmfTW52b/OmOMLYl+uS7z6bgJF3elzO/ZuqV0z66bqzQJvO9S+FAgOrtta7fOm/rLxe3b 4hyWyod/+PfY02LLB2emqhfGhZ5Xx7tfOFtTSrObUtIPDHNHs813GnQ79nc0TT3/md9tUtLG NNXalZTBqPofGOA1JDUDAAA= X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769769045448271240?= X-GMAIL-MSGID: =?utf-8?q?1769769045448271240?= Now that CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT was introduced, apply the consideration to dma fence wait. 
Signed-off-by: Byungchul Park --- drivers/dma-buf/dma-fence.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c index 1db4bc0e8adc..a1ede7b467cd 100644 --- a/drivers/dma-buf/dma-fence.c +++ b/drivers/dma-buf/dma-fence.c @@ -783,7 +783,7 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout) cb.task = current; list_add(&cb.base.node, &fence->cb_list); - sdt_might_sleep_start(NULL); + sdt_might_sleep_start_timeout(NULL, timeout); while (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) { if (intr) __set_current_state(TASK_INTERRUPTIBLE); @@ -887,7 +887,7 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count, } } - sdt_might_sleep_start(NULL); + sdt_might_sleep_start_timeout(NULL, timeout); while (ret > 0) { if (intr) set_current_state(TASK_INTERRUPTIBLE); From patchwork Mon Jun 26 11:56:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112921 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7452993vqr; Mon, 26 Jun 2023 05:42:45 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7Mc9P+Sx50NKBFjZ+3wKJFoYmCwEFplxQO9F/siRRG8fM7oc4iWY3TdL4I94AZVqfH0vd3 X-Received: by 2002:a17:907:360a:b0:94a:35d1:59a with SMTP id bk10-20020a170907360a00b0094a35d1059amr24513999ejc.14.1687783365277; Mon, 26 Jun 2023 05:42:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687783365; cv=none; d=google.com; s=arc-20160816; b=wBw0BKjEOI/PUDBxp4WcLUuDJWbsV73gmedI/9l3M96+oJw3/gG930E6Hk8KR9naBG uchu2At4AKcOagt61let6bwBlZEwsVkZ2aNsJGFtFKK8p+PxKe0aelcNBRfKIqqpvNGP 0jF0idnc5YnE3T7SaLd6KcrXtzpr4djNmlDOKNAmXGjT7xvZCdmvKxp5Pql5+C9sLQ19 7ZdhjadvZTBheLHaRqsRQHTb9QaNEK2csZ6T3VQLa4FqrNDyBcBtEKID9Enr2HV++fhw iyx9+VZXfe9ldVbdTv4wU3S2v7wc0VN6qz/slkUL48dTF0oHpFrZP46z+WU8zWhcKqAU ERLg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=7VSmtydfVXVr/jsL1vG1bxg8h8ezelWaDitAIzEFS3w=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=BXrixGjxr4OOpkxNND1JF5ekk+bClmMlFvdc5OILmBaCM1IG7oV5Uk0y0+cOSroUS8 P2z+2+iuRGA6S/u3SLfW0i3huBCknSmeiruu4MNOr/XRALQ+c8kDGT96i++aSbtKGjAy V/L5ZgIdr7MTLKthZ5MrqIq6B27/m5Zit8Mn8r4xcJIm/0kJEEdvSYIOwSvg/cZU8YeB 6wfic9hyOjku3l0l5VHRdAhoeIuf8SAE2Tp7Y88yX7iWVJDHMOV2zQLLU/JrDDHugeXi NnwQ9yKp3fSpyz74YbUEhsNYnU2hIniaIj5XsdzuapvQX+LpDS8/HnEpDdeL3odS3K0J m+Gw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
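The two hunks above annotate open-coded sleep loops, so the shape being described is roughly the hypothetical helper below (my_wq, my_done and my_wait_timeout() are made up; the sdt_* calls are the annotations provided by this series). This is a sketch of the pattern, not the actual dma-fence code:

#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/dept_sdt.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);	/* hypothetical wait queue */
static bool my_done;			/* hypothetical condition, set by the waker */

static signed long my_wait_timeout(signed long timeout)
{
	DEFINE_WAIT(wait);

	/*
	 * Passing the remaining timeout, instead of the plain
	 * sdt_might_sleep_start(NULL) used before this patch, lets
	 * CONFIG_DEPT_AGGRESSIVE_TIMEOUT_WAIT decide whether this
	 * bounded wait still takes part in dependency tracking.
	 */
	sdt_might_sleep_start_timeout(NULL, timeout);
	for (;;) {
		prepare_to_wait(&my_wq, &wait, TASK_UNINTERRUPTIBLE);
		if (my_done || timeout <= 0)
			break;
		timeout = schedule_timeout(timeout);
	}
	finish_wait(&my_wq, &wait);
	sdt_might_sleep_end();

	return timeout;
}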
[2620:137:e000::1:20]) by mx.google.com with ESMTP id f21-20020a170906561500b009889a257cb3si2687839ejq.602.2023.06.26.05.42.20; Mon, 26 Jun 2023 05:42:45 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229679AbjFZMXF (ORCPT + 99 others); Mon, 26 Jun 2023 08:23:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54370 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230192AbjFZMWr (ORCPT ); Mon, 26 Jun 2023 08:22:47 -0400 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id D76612D79; Mon, 26 Jun 2023 05:21:06 -0700 (PDT) X-AuditID: a67dfc5b-d85ff70000001748-17-64997d6e5ce2 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v10 23/25] dept: Record the latest one out of consecutive waits of the same class Date: Mon, 26 Jun 2023 20:56:58 +0900 Message-Id: <20230626115700.13873-24-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230626115700.13873-1-byungchul@sk.com> References: <20230626115700.13873-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0hTYRjHe8/lPWfLyWHdThcqBhEVlXZ9oIj8EL0QURBR2Ie22kFXOmuW aRRYWth0lpJOTcpLLfGSNosuOllWpmWlqdNiWdnFtKlhznRbly3py8OP//95fp8enlbWsbN4 nf6IZNBrolRYzsgHggqX6k/makOqXDMgIy0E3CMpDORXlmNouVGGoPzWKQr6Hm+GzlEXAu/z lzSYs1oQFH54S8Othm4EtpLTGNo+BUO7ewhDU1YqhqTiSgyt33wUOLMzKSizboVnF4oosI/3 MmDuw3DJnET5x1cKxi2lHFgSF0BPSR4Hvg+h0NTtYMH2ZgnkXnZiqLU1MdBwt4eCtvv5GLrL /7DwrKGRgZYMEwsVg0UYvo1aaLC4hzh4ZS+goCrZL+r32Sg4++M3C09Mdj9dvUlB++saBHUp 7ymwljswPHS7KKi2ZtHguf4YQU/6AAdn0sY5uHQqHUHqmWwGXv56wkKyczV4x/LxxnXkoWuI JsnVx4httIAhT4tEci/vLUeS695wpMB6lFSXLCbFtX0UKRx2s8Raeg4T63AmR4wD7RRxOmox GXzxgiONOV5m+5xw+XqtFKWLkwzLN6jlkV/yc9hDPyfHm1rrmETUJjMiGS8Kq8TmwXH6P/d6 
jFSAsbBQ7OqayKcK88Vq0xfWiOQ8LRRPFnsbH3FGxPNTBI14cSgmsMMIC8SqYgcKxAphjeis CJtQzhPLquz/NDJ/XNNchAKsFFaLp531OKAUhfMyMd3rwBMHM8UHJV3MBaQoQJNKkVKnj4vW 6KJWLYtM0Ovil+2PibYi/1dZTvr23EXDLTvqkcAjVZAiZG6OVslq4mITouuRyNOqqYrpY2at UqHVJByXDDF7DUejpNh6NJtnVDMUK0aPaZVChOaIdFCSDkmG/y3Fy2YlomzX2QM3TT2DuWE7 44PUFWrSPJKaZ+u4Funpx9uEK+yruR519sd3WYsM8ginYnro7UfhvzqD1dNOrG0Ne5pkT9Fu Eb1tEbYtuZ8SP4/4jP0Rh+/omzM6Qu2792V2fv3+08xNkdUsCQr2/AhbE20gK3d1D8+pZDLe d5zblBZ/cOuYiomN1IQupg2xmr8IlczpUQMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSfUzMcRzHfb+/xzsuv53GD0NunlaT2mSf0TxtrR8bM3+wmdHN/dSpK+4S ebxU1HEp06NwXZx0IZc/UNdy0YNIdEvshJhEOouLHsQd8897r70/n73+erOE3ERNY9XxiaI2 XhmnoKWkdP2y1IWaw4WqkMevl0HOqRDwfM8gofhGBQ1t160IKm6lYOh9EAnPB/sQjDx+QkB+ bhuCkrevCLjV0IXAXnaMhvb3fuD0uGlozj1JQ2rpDRqefh7F4Mo7g8FqWwct2WYMdUM9JOT3 0nAuPxV74yOGIUs5Axb9XOguK2Jg9G0oNHd1UFB/vpkC+8sgKLzgoqHG3kxCw+1uDO13i2no qvhNQUtDEwltOUYKrvWbafg8aCHA4nEz8KzOhKEyzWv7NGrHcPzbGAWNxjovXbqJwfmiGkFt xhsMtooOGuo9fRiqbLkEDF95gKA76wsD6aeGGDiXkoXgZHoeCU9+NVKQ5gqDkZ/F9Mpwob7P TQhpVfsE+6CJFB6aeeFO0StGSKt9yQgm216hqixQKK3pxULJgIcSbOWZtGAbOMMIhi9OLLg6 amihv7WVEZoKRsgNM7ZIw1VinDpJ1C5aHiWN+VBcQO3+MX6/8WktqUftEgOSsDy3mO8ZNmAf 09x8vrNziPCxPxfAVxk/UAYkZQmudDzf03SfMSCWncQp+bPuBN8Pyc3lK0s7kK+WcUt417VV /5SzeGtl3V+NxFtXPzIjH8u5MP6Yy0FnI6kJjStH/ur4JI1SHRcWrIuNSY5X7w/ekaCxIe9u LIdHc26j7+2RDsSxSDFBFjKzQCWnlEm6ZI0D8Syh8JdN/pmvkstUyuQDojZhu3ZvnKhzoOks qZgiW7tZjJJz0cpEMVYUd4va/1fMSqbpUezl00GujfMCLm1fmtsTYNfPOc7fjFrgrJfUpLQE XwkcHjP5rdjkMK+bmKhfvbMxepdk/td3+MhU/fCqkg3ZR4x7Ln4svKoJjzm0JvFgbeTsQEV/ Z9bR68Gh807cu+O6GzGhwW1C2yLSX2/J3GolWvdt6l5kUY85Dffzipb/sE40TJmqIHUxytBA QqtT/gH3/dofMwMAAA== X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769769129901572899?= X-GMAIL-MSGID: =?utf-8?q?1769769129901572899?= The current code records all the waits for later use to track relation between waits and events in each context. However, since the same class is handled the same way, it'd be okay to record only one on behalf of the others if they all have the same class. Even though it's the ideal to search the whole history buffer for that, since it'd cost too high, alternatively, let's keep the latest one at least when the same class'ed waits consecutively appear. Signed-off-by: Byungchul Park --- kernel/dependency/dept.c | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index 52537c099b68..cdfda4acff58 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -1522,9 +1522,28 @@ static inline struct dept_wait_hist *new_hist(void) return wh; } +static inline struct dept_wait_hist *last_hist(void) +{ + int pos_n = hist_pos_next(); + struct dept_wait_hist *wh_n = hist(pos_n); + + /* + * This is the first try. 
+ */ + if (!pos_n && !wh_n->wait) + return NULL; + + return hist(pos_n + DEPT_MAX_WAIT_HIST - 1); +} + static void add_hist(struct dept_wait *w, unsigned int wg, unsigned int ctxt_id) { - struct dept_wait_hist *wh = new_hist(); + struct dept_wait_hist *wh; + + wh = last_hist(); + + if (!wh || wh->wait->class != w->class || wh->ctxt_id != ctxt_id) + wh = new_hist(); if (likely(wh->wait)) put_wait(wh->wait); From patchwork Mon Jun 26 11:56:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112916 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7446007vqr; Mon, 26 Jun 2023 05:30:49 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ58ZMGISsCj88j2kMeAYlifU8glkmBGHFV5WHMsJvgTKCuQTiPwb81jqE22Zt6d5OaTZOwQ X-Received: by 2002:a05:6402:4d8:b0:50b:d421:a0f1 with SMTP id n24-20020a05640204d800b0050bd421a0f1mr15382396edw.41.1687782649264; Mon, 26 Jun 2023 05:30:49 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687782649; cv=none; d=google.com; s=arc-20160816; b=nHoiv9Gfda9XM+i9WagDAlglNluvgtlkJsMfADkYpS3h1gGJ0YSEb5OuJD8fsNpUXI Zy86oIz9uICmXN8CpsIPHr/l8kUPM0vOVgkysqO3+vFFfnYyJ+nKGh2l78IyKsDieMrs SavV2XqANrTbbhvVqCyXPFeQ4ehDLnZkv2rXrN4A7S09pKQX/qCYCaKAJ5kH7P8JMdWg 1kDQPY+gylPY9yEzotIO72sZDGVrzmz/RSyQgV5R0LqBDSgpzvPicyLMl5vLkMgKpH00 DUThoKwrM7b6gftYHw5Dxqk0W6yWGw87cUwLuQSURqCGQcEQiKh2bLRaBDvQZjmMCsry gHlg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=/gN0rDgZKuKJrrbf3n4RLoMjUvAZk1xheVlZ7MQw3f8=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=o/2Jz/JUA+zAE91Gm4YksnEwHcuSbPOO4xPXGDsQUosq52D8a+s5uOXD6n8OCehM6k /ZiLWil/1+V7AC1W3PR9kDI0l05Is6urtRh5o25KJqOAsvgK6vBzkNgco/LTef6s5cCS XdBJ0TH4GRy3ZSYlTujw1wewIeKZohf1EWuubKfWMyPaPg7Y+oravGdIjxZr77I9EmL4 +oQqYsD24yf2F8CK//+TgPDpeCWACWJK3qyilTRDt+xZSfuWaYD1gWBOeakZrK88JD2u anYL+xl6HlRzXAEo3UgOpbQeCRDJ1Mne3Hq4KsQYveo12f3z+ADgQndgLB7xU02eCG96 u0rw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from vger.kernel.org (vger.kernel.org. 
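The last_hist()/add_hist() change above boils down to "coalesce consecutive history entries of the same class in the same context, keeping the newest". A throwaway user-space sketch of that bookkeeping (plain C, not kernel code; 'class' stands for the wait class, 'ctxt' for ctxt_id and 'wgen' for the wait generation):

#include <stdio.h>

#define RING_SIZE 8

struct rec { int class; int ctxt; int wgen; };

static struct rec ring[RING_SIZE];
static unsigned int pos;	/* next slot to write, like hist_pos_next() */
static unsigned int used;	/* slots consumed so far */

static void add_rec(int class, int ctxt, int wgen)
{
	unsigned int last = (pos + RING_SIZE - 1) % RING_SIZE;

	/*
	 * A consecutive wait of the same class in the same context only
	 * refreshes the latest record, as last_hist()/add_hist() do.
	 */
	if (used && ring[last].class == class && ring[last].ctxt == ctxt) {
		ring[last].wgen = wgen;
		return;
	}
	ring[pos] = (struct rec){ class, ctxt, wgen };
	pos = (pos + 1) % RING_SIZE;
	used++;
}

int main(void)
{
	add_rec(1, 100, 7);
	add_rec(1, 100, 8);	/* same class, same context: overwritten */
	add_rec(2, 100, 9);	/* different class: new slot */
	printf("slots used: %u\n", used);	/* prints 2 */
	return 0;
}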
[23.128.96.18]) by mx.google.com with ESMTP id i22-20020a056402055600b0051d9231dad2si1828304edx.483.2023.06.26.05.30.23; Mon, 26 Jun 2023 05:30:48 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231263AbjFZMYX (ORCPT + 99 others); Mon, 26 Jun 2023 08:24:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53698 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229754AbjFZMYF (ORCPT ); Mon, 26 Jun 2023 08:24:05 -0400 Received: from invmail4.hynix.com (exvmail4.skhynix.com [166.125.252.92]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E4B80213E; Mon, 26 Jun 2023 05:22:26 -0700 (PDT) X-AuditID: a67dfc5b-d85ff70000001748-28-64997d6edf5b From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v10 24/25] dept: Make Dept able to work with an external wgen Date: Mon, 26 Jun 2023 20:56:59 +0900 Message-Id: <20230626115700.13873-25-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230626115700.13873-1-byungchul@sk.com> References: <20230626115700.13873-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzXSbUxTZxQH8D3Pvfe5baXLTSXu+hK3dCFOFxE31ONLdItLvHExWeYXnTps 7A1UoXRFwTpNeKmLo6JoUt7VUkglUFGLMSgtqSBQZEI3CQKpMIibqxSJlSIF1LUavpz8ck7O /3w5EkrhZpZINNqjol6rSlUSGS0bj7Gu1p4qVSd4Rj6HC2cTIDR5hoaK63YC3vo6BPZbORj8 bTvg8VQAwezDHgqKzV4ElSNPKLjVPoTAVZNL4NHTj6E3NEGg02wikFd1ncCfY3MYfEUXMdQ5 dkFXoRWDO/yMhmI/gfLiPBwp/2EI22pZsGXHwWhNGQtzI2uhc6iPAdfgl1B62UfA6eqkob1x FMOjuxUEhuzvGOhq99DgvVDAwLUXVgJjUzYKbKEJFv5yWzDcMEaCns+5MPz26i0DHQXuiKpv YugdaELQfOZvDA57H4HWUABDg8NMwczVNgSj58ZZOH02zEJ5zjkEptNFNPS86WDA6FsHs9MV 5JvNQmtgghKMDVmCa8pCCw+svHCn7AkrGJsHWcHiOCY01KwSqpx+LFQGQ4zgqP2dCI7gRVbI H+/Fgq/PSYQX3d2s4CmZpX9Y9pNsi1pM1WSK+jVbD8pS7FVdjM657XhRoB5no5Kv85FUwnOJ fM/rGTzv2zOlKGrCreD7+8NU1LHcZ3xDwb9MPpJJKK5qAf/Mc5+NDhZyu/jOS9dI1DQXxw8H 
Gumo5dx63m1zUh9CP+XrbrjfWxrpN/1hfX9Awa3jc30tJBrKcyYpf2V4AH1YWMzfq+mnC5Hc gj6qRQqNNjNNpUlNjE8xaDXH4w+lpzlQ5K9sp+b2NaKgd3cL4iRIGSNPWF6iVjCqzAxDWgvi JZQyVr5oulitkKtVhhOiPj1JfyxVzGhBSyW08hP5V1NZagWXrDoqHhFFnaifn2KJdEk20v9Y 0hGz8eSG/VvI0tjb1ZOmL8KJ95NGVxou6VZfUZh/SVJ2HfDnfU/Uyz06dQEnfZ48nbwnlL/B Yb93c8Brkf3j//Xxz2T44UHDysPLdi767unktydqx156cnSmbQfwYLC6sjVdmri9/nzuRPwC aXnbiqw4Yu5GrYUbN6UE9xq9SjojRbV2FaXPUP0PuU4rIFMDAAA= X-Brightmail-Tracker: H4sIAAAAAAAAAzXSa0hTYRgH8N5zec9xtTgsqVNhl4FERWUX46mk+pQvQWEFBX3J4U651Cmb WhbG0tnF1Frh3UK3WqYzbVZYbTEvmdMulmYXlqVEZc2EcpK3aiv68vDj+cP/+fLwtMLMzuE1 2mRJp1XFK7GMkW3fkLlMm16sDjM18GDKCQPf8CkGymptGDqvVyOw3TxOwcCDSHg54kUw/vgp DYX5nQgq+t7ScLO1F4GzMgND14fp0O0bwuDOP4Mh01KL4dnXCQo8BecpqLZvg45zZgpco58Y KBzAUFqYSfnHZwpGrVUcWA2h0F9ZwsFE30pw9/aw0HzRzYLzzVIovuTB4HC6GWht6Keg624Z hl7bbxY6WtsY6DTlslDzzYzh64iVBqtviIPnrnIK6oz+ti8TTgpO/PjFwsNcl1+Xb1DQ/foe gvun3lNgt/VgaPZ5Kai359MwdvUBgv68QQ6yckY5KD2eh+BMVgEDTycfsmD0hMP4zzK8OYI0 e4doYqw/RJwj5QxpN4vkTslbjhjvv+FIuT2F1FcuIRbHAEUqvvtYYq86jYn9+3mOZA92U8TT 48Dk25MnHGkrGmeiQvbKItRSvCZV0q3YGC2LtVk62CTHpsMF3uuUARWtzkZBvCisEW+PFaOA sbBIfPVqlA44WFgg1ud+ZLORjKcFy1TxU1sLFwhmCNtE98UaHDAjhIrvvA1MwHJhreiyOuh/ pfPF6jrXXwf59/cemf8eUAjhYoanCZ9DsnI0pQoFa7SpCSpNfPhyfVxsmlZzeHlMYoId+T/H mj5hakDDXZFNSOCRcpo8bF6RWsGqUvVpCU1I5GllsHzmz0K1Qq5WpR2RdIn7dCnxkr4JzeUZ 5Sz51j1StEI4oEqW4iQpSdL9Tyk+aI4BRfVbFrfLrmyevNY8Vd12crYVmSMylr4OqWtdv5Ne +KJMt+tAKFt5bPZkSkdfyXBFdM3W2IW7ttzat6r4QzJ/KLfRnZYVJ2s/uiEmtdSANYPynCTj rBCiMDTOXGA6+CV9hqHXXJBy8lbLukRX5urdn6NWjO3fMXeTYeTC7rNHBSFfqWT0saqVS2id XvUHLnj9EDUDAAA= X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769768378890535166?= X-GMAIL-MSGID: =?utf-8?q?1769768378890535166?= There is a case where total maps for its wait/event is so large in size. For instance, struct page for PG_locked and PG_writeback is the case. The additional memory size for the maps would be 'the # of pages * sizeof(struct dept_map)' if each struct page keeps its map all the way, which might be too big to accept. It'd be better to keep the minimum data in the case, which is timestamp called 'wgen' that Dept makes use of. So made Dept able to work with an external wgen when needed. Signed-off-by: Byungchul Park --- include/linux/dept.h | 18 ++++++++++++++---- include/linux/dept_sdt.h | 4 ++-- kernel/dependency/dept.c | 30 +++++++++++++++++++++--------- 3 files changed, 37 insertions(+), 15 deletions(-) diff --git a/include/linux/dept.h b/include/linux/dept.h index 0aa8d90558a9..ad32ea7b57bb 100644 --- a/include/linux/dept.h +++ b/include/linux/dept.h @@ -487,6 +487,13 @@ struct dept_task { bool in_sched; }; +/* + * for subsystems that requires compact use of memory e.g. 
struct page + */ +struct dept_ext_wgen{ + unsigned int wgen; +}; + #define DEPT_TASK_INITIALIZER(t) \ { \ .wait_hist = { { .wait = NULL, } }, \ @@ -518,6 +525,7 @@ extern void dept_task_exit(struct task_struct *t); extern void dept_free_range(void *start, unsigned int sz); extern void dept_map_init(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); extern void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, const char *n); +extern void dept_ext_wgen_init(struct dept_ext_wgen *ewg); extern void dept_map_copy(struct dept_map *to, struct dept_map *from); extern void dept_wait(struct dept_map *m, unsigned long w_f, unsigned long ip, const char *w_fn, int sub_l, long timeout); @@ -527,8 +535,8 @@ extern void dept_clean_stage(void); extern void dept_stage_event(struct task_struct *t, unsigned long ip); extern void dept_ecxt_enter(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *c_fn, const char *e_fn, int sub_l); extern bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f); -extern void dept_request_event(struct dept_map *m); -extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn); +extern void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg); +extern void dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, struct dept_ext_wgen *ewg); extern void dept_ecxt_exit(struct dept_map *m, unsigned long e_f, unsigned long ip); extern void dept_sched_enter(void); extern void dept_sched_exit(void); @@ -559,6 +567,7 @@ extern void dept_hardirqs_off_ip(unsigned long ip); struct dept_key { }; struct dept_map { }; struct dept_task { }; +struct dept_ext_wgen { }; #define DEPT_MAP_INITIALIZER(n, k) { } #define DEPT_TASK_INITIALIZER(t) { } @@ -571,6 +580,7 @@ struct dept_task { }; #define dept_free_range(s, sz) do { } while (0) #define dept_map_init(m, k, su, n) do { (void)(n); (void)(k); } while (0) #define dept_map_reinit(m, k, su, n) do { (void)(n); (void)(k); } while (0) +#define dept_ext_wgen_init(wg) do { } while (0) #define dept_map_copy(t, f) do { } while (0) #define dept_wait(m, w_f, ip, w_fn, sl, t) do { (void)(w_fn); } while (0) @@ -580,8 +590,8 @@ struct dept_task { }; #define dept_stage_event(t, ip) do { } while (0) #define dept_ecxt_enter(m, e_f, ip, c_fn, e_fn, sl) do { (void)(c_fn); (void)(e_fn); } while (0) #define dept_ecxt_holding(m, e_f) false -#define dept_request_event(m) do { } while (0) -#define dept_event(m, e_f, ip, e_fn) do { (void)(e_fn); } while (0) +#define dept_request_event(m, wg) do { } while (0) +#define dept_event(m, e_f, ip, e_fn, wg) do { (void)(e_fn); } while (0) #define dept_ecxt_exit(m, e_f, ip) do { } while (0) #define dept_sched_enter() do { } while (0) #define dept_sched_exit() do { } while (0) diff --git a/include/linux/dept_sdt.h b/include/linux/dept_sdt.h index 21fce525f031..8cdac7982036 100644 --- a/include/linux/dept_sdt.h +++ b/include/linux/dept_sdt.h @@ -24,7 +24,7 @@ #define sdt_wait_timeout(m, t) \ do { \ - dept_request_event(m); \ + dept_request_event(m, NULL); \ dept_wait(m, 1UL, _THIS_IP_, __func__, 0, t); \ } while (0) #define sdt_wait(m) sdt_wait_timeout(m, -1L) @@ -49,7 +49,7 @@ #define sdt_might_sleep_end() dept_clean_stage() #define sdt_ecxt_enter(m) dept_ecxt_enter(m, 1UL, _THIS_IP_, "start", "event", 0) -#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__) +#define sdt_event(m) dept_event(m, 1UL, _THIS_IP_, __func__, NULL) #define sdt_ecxt_exit(m) dept_ecxt_exit(m, 1UL, _THIS_IP_) #else /* 
!CONFIG_DEPT */ #define sdt_map_init(m) do { } while (0) diff --git a/kernel/dependency/dept.c b/kernel/dependency/dept.c index cdfda4acff58..335e5f67bf55 100644 --- a/kernel/dependency/dept.c +++ b/kernel/dependency/dept.c @@ -2230,6 +2230,11 @@ void dept_map_reinit(struct dept_map *m, struct dept_key *k, int sub_u, } EXPORT_SYMBOL_GPL(dept_map_reinit); +void dept_ext_wgen_init(struct dept_ext_wgen *ewg) +{ + WRITE_ONCE(ewg->wgen, 0U); +} + void dept_map_copy(struct dept_map *to, struct dept_map *from) { if (unlikely(!dept_working())) { @@ -2415,7 +2420,7 @@ static void __dept_wait(struct dept_map *m, unsigned long w_f, */ static void __dept_event(struct dept_map *m, unsigned long e_f, unsigned long ip, const char *e_fn, - bool sched_map) + bool sched_map, unsigned int *wgp) { struct dept_class *c; struct dept_key *k; @@ -2437,14 +2442,14 @@ static void __dept_event(struct dept_map *m, unsigned long e_f, c = check_new_class(&m->map_key, k, sub_id(m, e), m->name, sched_map); if (c && add_ecxt(m, c, 0UL, NULL, e_fn, 0)) { - do_event(m, c, READ_ONCE(m->wgen), ip); + do_event(m, c, READ_ONCE(*wgp), ip); pop_ecxt(m, c); } exit: /* * Keep the map diabled until the next sleep. */ - WRITE_ONCE(m->wgen, 0U); + WRITE_ONCE(*wgp, 0U); } void dept_wait(struct dept_map *m, unsigned long w_f, @@ -2654,7 +2659,7 @@ void dept_stage_event(struct task_struct *t, unsigned long ip) if (!m.keys) goto exit; - __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map); + __dept_event(&m, 1UL, ip, "try_to_wake_up", sched_map, &m.wgen); exit: dept_exit(flags); } @@ -2833,10 +2838,11 @@ bool dept_ecxt_holding(struct dept_map *m, unsigned long e_f) } EXPORT_SYMBOL_GPL(dept_ecxt_holding); -void dept_request_event(struct dept_map *m) +void dept_request_event(struct dept_map *m, struct dept_ext_wgen *ewg) { unsigned long flags; unsigned int wg; + unsigned int *wgp; if (unlikely(!dept_working())) return; @@ -2849,32 +2855,38 @@ void dept_request_event(struct dept_map *m) */ flags = dept_enter_recursive(); + wgp = ewg ? &ewg->wgen : &m->wgen; + /* * Avoid zero wgen. */ wg = atomic_inc_return(&wgen) ?: atomic_inc_return(&wgen); - WRITE_ONCE(m->wgen, wg); + WRITE_ONCE(*wgp, wg); dept_exit_recursive(flags); } EXPORT_SYMBOL_GPL(dept_request_event); void dept_event(struct dept_map *m, unsigned long e_f, - unsigned long ip, const char *e_fn) + unsigned long ip, const char *e_fn, + struct dept_ext_wgen *ewg) { struct dept_task *dt = dept_task(); unsigned long flags; + unsigned int *wgp; if (unlikely(!dept_working())) return; + wgp = ewg ? &ewg->wgen : &m->wgen; + if (dt->recursive) { /* * Dept won't work with this even though an event * context has been asked. Don't make it confused at * handling the event. Disable it until the next. 
*/ - WRITE_ONCE(m->wgen, 0U); + WRITE_ONCE(*wgp, 0U); return; } @@ -2883,7 +2895,7 @@ void dept_event(struct dept_map *m, unsigned long e_f, flags = dept_enter(); - __dept_event(m, e_f, ip, e_fn, false); + __dept_event(m, e_f, ip, e_fn, false, wgp); dept_exit(flags); } From patchwork Mon Jun 26 11:57:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Byungchul Park X-Patchwork-Id: 112922 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7454268vqr; Mon, 26 Jun 2023 05:45:12 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4YfsEYUmDKW0+KI8jbM9Mg+PqLx4sbU8M/7Yk/jtnh1oJCkRrm4VLmh0i/npAK4fYIqXdU X-Received: by 2002:a17:907:7e96:b0:989:21e4:6c6d with SMTP id qb22-20020a1709077e9600b0098921e46c6dmr16332396ejc.28.1687783512119; Mon, 26 Jun 2023 05:45:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687783512; cv=none; d=google.com; s=arc-20160816; b=Cm8GWFz4nxHKNMdTgPPXnNjk7lWdQ3Pg0G0e90FOY29kgiikFQ8N4oTySAaDcmme4d GgtAK5pZkY775GaUWBQ48I7u6/37aajNGgK/JvAaTI508HQmXCqf1yjX3upi30L41nO5 K7eSghIMZr0ByokAqL19sTRZiiUw0uTOTZBpDRqVbLP0waG3Ci7nEIT0yU7oCm+ZT1sQ wTEv9ahnz4gNsamFsc9Xjymd0n1bEOKwVMcz3pDJMvQ4pboDS0PDvoROAm/kytZJZ5iQ +QjkY1AEqyLqlQhFjWQUOKvKsWUs3aSp2/WnFMQ1ZXc5TBV0HgSGELKaFdRX0Dj2kEYl P+og== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from; bh=6oehO8KJeDqxamO/ETp6uYHSNLX8bLRWqggez8bsjB8=; fh=XRjqVsBnH3A79ze0TZdn1GaMzocAY5TCjIrpk6ynN2Y=; b=Z3CNbxEp01dMSiJv1LWvK99Rjh8uV5uh8P4SF2HdO7vjT6SVtDD85z8Db0liEMXEZ1 5KEeh0/YR+i8kcczlgSZ39mhB3a6WVchE8aBg9lsAEGfiNM4cDXYHA9wtUJOO+P41HZi GB9cvn+8DIi8+5idEnz8+vkweK23iMd0gOXrcxuB7XBF/G5na4cB4pu+0ieRhaiAprY/ jvqpUf5PFetRk70mzWDelGyV5lCPzghWYfh4J00ydFC8F0tG6uPZ8TnedOWwDav42xQr I6vcgZyYj/3/tcKc+G+WdtGqaD1M9oDqr6YX/9n/8/mUUvjm5F52bXeQ8jEuuQih3RmX 3KUg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
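The external wgen interface above can be pictured with a rough sketch of a subsystem whose objects are too numerous to carry a full dept_map each. tiny_obj, busy_map and the tiny_obj_* helpers are made up; the dept_* calls use the signatures introduced by this patch:

#include <linux/kernel.h>
#include <linux/dept.h>

struct tiny_obj {
	unsigned long state;
	struct dept_ext_wgen busy_wgen;	/* one word of Dept state per object */
};

/* one shared map describes the class of all tiny_obj "busy" waits */
static struct dept_map busy_map = DEPT_MAP_INITIALIZER(busy_map, NULL);

static void tiny_obj_init(struct tiny_obj *o)
{
	o->state = 0;
	dept_ext_wgen_init(&o->busy_wgen);
}

static void tiny_obj_wait_idle(struct tiny_obj *o)
{
	/* annotate the (potential) wait; -1L is the series' "no timeout" */
	dept_wait(&busy_map, 1UL, _RET_IP_, __func__, 0, -1L);
	/* ... the actual sleep on o->state would go here ... */
}

static void tiny_obj_make_busy(struct tiny_obj *o)
{
	/* the per-object timestamp, not busy_map's own wgen, is armed */
	dept_request_event(&busy_map, &o->busy_wgen);
}

static void tiny_obj_make_idle(struct tiny_obj *o)
{
	/* the event consumes the per-object timestamp */
	dept_event(&busy_map, 1UL, _RET_IP_, __func__, &o->busy_wgen);
}

Passing NULL instead of &o->busy_wgen falls back to the map's own wgen, which is what the sdt_* wrappers keep doing after this patch.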
[2620:137:e000::1:20]) by mx.google.com with ESMTP id g10-20020aa7c58a000000b0051bed21a643si2650984edq.561.2023.06.26.05.44.47; Mon, 26 Jun 2023 05:45:12 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230372AbjFZMZI (ORCPT + 99 others); Mon, 26 Jun 2023 08:25:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53820 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230020AbjFZMYg (ORCPT ); Mon, 26 Jun 2023 08:24:36 -0400 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 754562974; Mon, 26 Jun 2023 05:22:52 -0700 (PDT) X-AuditID: a67dfc5b-d85ff70000001748-39-64997d6ef944 From: Byungchul Park To: linux-kernel@vger.kernel.org Cc: kernel_team@skhynix.com, torvalds@linux-foundation.org, damien.lemoal@opensource.wdc.com, linux-ide@vger.kernel.org, adilger.kernel@dilger.ca, linux-ext4@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org, tglx@linutronix.de, rostedt@goodmis.org, joel@joelfernandes.org, sashal@kernel.org, daniel.vetter@ffwll.ch, duyuyang@gmail.com, johannes.berg@intel.com, tj@kernel.org, tytso@mit.edu, willy@infradead.org, david@fromorbit.com, amir73il@gmail.com, gregkh@linuxfoundation.org, kernel-team@lge.com, linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@kernel.org, minchan@kernel.org, hannes@cmpxchg.org, vdavydov.dev@gmail.com, sj@kernel.org, jglisse@redhat.com, dennis@kernel.org, cl@linux.com, penberg@kernel.org, rientjes@google.com, vbabka@suse.cz, ngupta@vflare.org, linux-block@vger.kernel.org, paolo.valente@linaro.org, josef@toxicpanda.com, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, jack@suse.cz, jlayton@kernel.org, dan.j.williams@intel.com, hch@infradead.org, djwong@kernel.org, dri-devel@lists.freedesktop.org, rodrigosiqueiramelo@gmail.com, melissa.srw@gmail.com, hamohammed.sa@gmail.com, 42.hyeyoo@gmail.com, chris.p.wilson@intel.com, gwan-gyeong.mun@intel.com, max.byungchul.park@gmail.com, boqun.feng@gmail.com, longman@redhat.com, hdanton@sina.com, her0gyugyu@gmail.com Subject: [PATCH v10 25/25] dept: Track the potential waits of PG_{locked,writeback} Date: Mon, 26 Jun 2023 20:57:00 +0900 Message-Id: <20230626115700.13873-26-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230626115700.13873-1-byungchul@sk.com> References: <20230626115700.13873-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0yTZxiGfb8zHTVfKs7PQ8TUoE4yRAP4NBqD/vHVRENiXMwWo936bTSW Ai1yMDFDDopVDJBAQYmB4koD3cS2BhVKKkWOAzokiKagMDKGnDa2NiBs2mL88+TOdee+fj0c KWuhN3FqbZqo0yo1ckZCSWZDq7/UXq5QRRsskVB8Mxp8/xZQUHnfyoDnl3oEVscVAqaeHYUX /hkEy739JBhLPQiqx0ZIcLSPInBachh4PrEWBn3zDHSV3mAgt+Y+A79NrxDgLSshoN52AnqK TAS4liYpME4xcMeYSwTOnwQsmetYMGdHwLjlNgsrY3uha3SIBuerSKi462Wg2dlFQfujcQKe P6lkYNT6noae9k4KPMWFNPw8Z2Jg2m8mweybZ2HAVUVAQ15A9HbFScDVf/6noaPQFUj3HhAw +LIJQUvBGwJs1iEG3L4ZAuy2UhLe1T5DMH5rloX8m0ss3LlyC8GN/DIK+v/roCHPGwvLi5VM /AHsnpkncZ49Azv9VRTuNgn48e0RFue1vGJxle0itlt245rmKQJXL/hobKu7zmDbQgmLDbOD BPYONTN4rq+PxZ3ly1TClq8lB1WiRp0u6vYcOi9JzGnrIVOun8m0uK1kNurABhTCCXyMMDB3 
FxkQt5obHIogZvidwvDwEhnMYfw2wV74B21AEo7kaz4TJjvb2GCxjj8lPPZ2E8EtxUcIPznS gljKxwl93d3ooz5cqG9wrXpCArzpV9Mql/GxQo63lQk6Bd4YIpgbm9iPg43CU8swVYSkVWhN HZKptelJSrUmJioxS6vOjPouOcmGAl9lvrzyzSO04DnVingOyUOl0VvLVTJama7PSmpFAkfK w6SfLxpVMqlKmXVJ1CWf013UiPpWtJmj5Buk+/wZKhn/gzJNvCCKKaLuU0twIZuyUcSbq3/F G/f4e69pdnybevzYkZ5+ad341rb1tQ/d90YiE+Iuucue9k78npBw5qvpRsXL5rfF+0/Py6P9 r0dTlYrvh7dviPvC0bgr1f5atkOdq7Akn20SYtXtJ8Nd53o9Y2MxiwXx+fawotAXin10mL72 /d9pKWWm0ydG1h52/hjVdnBATukTlXt3kzq98gOx23jmUQMAAA== X-Brightmail-Tracker: H4sIAAAAAAAAAzWSa0yTZxiGfd/v2Lq6z0LGNxYP6UIwGAGTsTwqETcTfaORuGzJnH9GY7+M Cq2kRQ5GDUeDKERMoB7QFNDSQSf4lR8gVFk5lAJWNghDU4mQbcrkkDnKhKJba+KfJ1euO/f9 6+EptZWJ5vXGbMlk1GZqWCWtTN1VvM145qou8ecfBai6mAiBxTIaalscLIzcaUbgaCvEMNO3 H35bmkUQfPiIAkv1CIK6qacUtPVPInDZi1gY/X0djAUWWPBWX2ChuKGFhV9ermLw11zG0Cwf gqFL9Ri6l5/TYJlh4bqlGIfOCwzLtiYObAUxMG2/xsHq1HbwTo4z0HPDy4DryVa4etPPQpfL S0N/+zSG0Xu1LEw6/mNgqH+AhpGqCgZ+mq9n4eWSjQJbYIGDX7utGFpLQmt/rbownPvnLQOe iu4Q3bqLYexxJ4L7Zc8wyI5xFnoCsxiccjUFK419CKYr5zgovbjMwfXCSgQXSmtoePTGw0CJ PwmCr2vZPcmkZ3aBIiXOXOJastJksF4kHdeecqTk/hOOWOWTxGmPIw1dM5jUvQowRG46zxL5 1WWOlM+NYeIf72LJvM/HkYErQfrwhqPKZJ2Uqc+RTAm705TpRb1DVNb5I3n2HgdVgDykHPG8 KHwmtrbtKEcKnhVixYmJZSrMkcJm0VnxJ1OOlDwlNKwVnw/0cuEgQvha7PAP4nCXFmLE223Z Ya0SPhd9g4MozKKwSWxu7X63owj5zuH6d14tJIlFfjd7CSmtaE0TitQbcwxafWZSvDkjPd+o z4s/dsIgo9Df2M6sVrWjxdH9biTwSPOBKnHjFZ2a0eaY8w1uJPKUJlL10WuLTq3SafNPSaYT 35tOZkpmN/qEpzVRqgPfSmlq4QdttpQhSVmS6X2KeUV0AYp4tnlbr7/Td9qw9UE7xjHDUUG3 Je3fVoUnIWU8mCx6ovo23F358sW+4KmD0VVGQ9euuOidxqyOiQN/721IscsZ2R9q+o7WTA2t ST8rxY6dLU6VSzfVnfvD7vvqJu1sVDR+aqs8pFh5483tT/viwfoE8JV9nGGJeHx8y9sd34jf pWhoc7p2exxlMmv/BwGnJ2kzAwAA X-CFilter-Loop: Reflected X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,SPF_HELO_NONE, SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769769284167310160?= X-GMAIL-MSGID: =?utf-8?q?1769769284167310160?= Currently, Dept only tracks the real waits of PG_{locked,writeback} that actually happened having gone through __schedule() to avoid false positives. However, it ends in limited capacity for deadlock detection, because anyway there might be still way more potential dependencies by the waits that have yet to happen but may happen in the future so as to cause a deadlock. So let Dept assume that when PG_{locked,writeback} bit gets cleared, there might be waits on the bit to be woken up. Even though false positives may increase with the aggressive tracking, it's worth doing it because it's going to be useful in practice. 
See the following link for instance: https://lore.kernel.org/lkml/1674268856-31807-1-git-send-email-byungchul.park@lge.com/ Signed-off-by: Byungchul Park --- include/linux/mm_types.h | 3 + include/linux/page-flags.h | 112 +++++++++++++++++++++++++++++++++---- include/linux/pagemap.h | 7 ++- mm/filemap.c | 11 +++- mm/page_alloc.c | 3 + 5 files changed, 121 insertions(+), 15 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 3b8475007734..61d982eea8d1 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -19,6 +19,7 @@ #include #include #include +#include #include @@ -252,6 +253,8 @@ struct page { #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS int _last_cpupid; #endif + struct dept_ext_wgen PG_locked_wgen; + struct dept_ext_wgen PG_writeback_wgen; } _struct_page_alignment; /* diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 69e93a0c1277..d6ca1148d21d 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -202,6 +202,50 @@ enum pageflags { #ifndef __GENERATING_BOUNDS_H +#ifdef CONFIG_DEPT +#include +#include + +extern struct dept_map PG_locked_map; +extern struct dept_map PG_writeback_map; + +/* + * Place the following annotations in its suitable point in code: + * + * Annotate dept_page_set_bit() around firstly set_bit*() + * Annotate dept_page_clear_bit() around clear_bit*() + * Annotate dept_page_wait_on_bit() around wait_on_bit*() + */ + +static inline void dept_page_set_bit(struct page *p, int bit_nr) +{ + if (bit_nr == PG_locked) + dept_request_event(&PG_locked_map, &p->PG_locked_wgen); + else if (bit_nr == PG_writeback) + dept_request_event(&PG_writeback_map, &p->PG_writeback_wgen); +} + +static inline void dept_page_clear_bit(struct page *p, int bit_nr) +{ + if (bit_nr == PG_locked) + dept_event(&PG_locked_map, 1UL, _RET_IP_, __func__, &p->PG_locked_wgen); + else if (bit_nr == PG_writeback) + dept_event(&PG_writeback_map, 1UL, _RET_IP_, __func__, &p->PG_writeback_wgen); +} + +static inline void dept_page_wait_on_bit(struct page *p, int bit_nr) +{ + if (bit_nr == PG_locked) + dept_wait(&PG_locked_map, 1UL, _RET_IP_, __func__, 0, -1L); + else if (bit_nr == PG_writeback) + dept_wait(&PG_writeback_map, 1UL, _RET_IP_, __func__, 0, -1L); +} +#else +#define dept_page_set_bit(p, bit_nr) do { } while (0) +#define dept_page_clear_bit(p, bit_nr) do { } while (0) +#define dept_page_wait_on_bit(p, bit_nr) do { } while (0) +#endif + #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key); @@ -383,44 +427,88 @@ static __always_inline int Page##uname(struct page *page) \ #define SETPAGEFLAG(uname, lname, policy) \ static __always_inline \ void folio_set_##lname(struct folio *folio) \ -{ set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); \ + dept_page_set_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void SetPage##uname(struct page *page) \ -{ set_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + set_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_set_bit(page, PG_##lname); \ +} #define CLEARPAGEFLAG(uname, lname, policy) \ static __always_inline \ void folio_clear_##lname(struct folio *folio) \ -{ clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); \ + dept_page_clear_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void ClearPage##uname(struct page *page) \ -{ clear_bit(PG_##lname, 
&policy(page, 1)->flags); } +{ \ + clear_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_clear_bit(page, PG_##lname); \ +} #define __SETPAGEFLAG(uname, lname, policy) \ static __always_inline \ void __folio_set_##lname(struct folio *folio) \ -{ __set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + __set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); \ + dept_page_set_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void __SetPage##uname(struct page *page) \ -{ __set_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + __set_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_set_bit(page, PG_##lname); \ +} #define __CLEARPAGEFLAG(uname, lname, policy) \ static __always_inline \ void __folio_clear_##lname(struct folio *folio) \ -{ __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); \ + dept_page_clear_bit(&folio->page, PG_##lname); \ +} \ static __always_inline void __ClearPage##uname(struct page *page) \ -{ __clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + __clear_bit(PG_##lname, &policy(page, 1)->flags); \ + dept_page_clear_bit(page, PG_##lname); \ +} #define TESTSETFLAG(uname, lname, policy) \ static __always_inline \ bool folio_test_set_##lname(struct folio *folio) \ -{ return test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + bool ret = test_and_set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));\ + if (!ret) \ + dept_page_set_bit(&folio->page, PG_##lname); \ + return ret; \ +} \ static __always_inline int TestSetPage##uname(struct page *page) \ -{ return test_and_set_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + bool ret = test_and_set_bit(PG_##lname, &policy(page, 1)->flags);\ + if (!ret) \ + dept_page_set_bit(page, PG_##lname); \ + return ret; \ +} #define TESTCLEARFLAG(uname, lname, policy) \ static __always_inline \ bool folio_test_clear_##lname(struct folio *folio) \ -{ return test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \ +{ \ + bool ret = test_and_clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy));\ + if (ret) \ + dept_page_clear_bit(&folio->page, PG_##lname); \ + return ret; \ +} \ static __always_inline int TestClearPage##uname(struct page *page) \ -{ return test_and_clear_bit(PG_##lname, &policy(page, 1)->flags); } +{ \ + bool ret = test_and_clear_bit(PG_##lname, &policy(page, 1)->flags);\ + if (ret) \ + dept_page_clear_bit(page, PG_##lname); \ + return ret; \ +} #define PAGEFLAG(uname, lname, policy) \ TESTPAGEFLAG(uname, lname, policy) \ diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 29e1f9e76eb6..2843619264d3 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -898,7 +898,12 @@ void folio_unlock(struct folio *folio); */ static inline bool folio_trylock(struct folio *folio) { - return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0))); + bool ret = !test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)); + + if (ret) + dept_page_set_bit(&folio->page, PG_locked); + + return likely(ret); } /* diff --git a/mm/filemap.c b/mm/filemap.c index adc49cb59db6..b80c8e2bd5f2 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1097,6 +1097,7 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, if (flags & WQ_FLAG_CUSTOM) { if (test_and_set_bit(key->bit_nr, &key->folio->flags)) return -1; + dept_page_set_bit(&key->folio->page, key->bit_nr); flags |= WQ_FLAG_DONE; } } @@ -1206,6 +1207,7 @@ static inline bool 
folio_trylock_flag(struct folio *folio, int bit_nr, if (wait->flags & WQ_FLAG_EXCLUSIVE) { if (test_and_set_bit(bit_nr, &folio->flags)) return false; + dept_page_set_bit(&folio->page, bit_nr); } else if (test_bit(bit_nr, &folio->flags)) return false; @@ -1216,8 +1218,10 @@ static inline bool folio_trylock_flag(struct folio *folio, int bit_nr, /* How many times do we accept lock stealing from under a waiter? */ int sysctl_page_lock_unfairness = 5; -static struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL); -static struct dept_map __maybe_unused PG_writeback_map = DEPT_MAP_INITIALIZER(PG_writeback_map, NULL); +struct dept_map __maybe_unused PG_locked_map = DEPT_MAP_INITIALIZER(PG_locked_map, NULL); +struct dept_map __maybe_unused PG_writeback_map = DEPT_MAP_INITIALIZER(PG_writeback_map, NULL); +EXPORT_SYMBOL(PG_locked_map); +EXPORT_SYMBOL(PG_writeback_map); static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, int state, enum behavior behavior) @@ -1230,6 +1234,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, unsigned long pflags; bool in_thrashing; + dept_page_wait_on_bit(&folio->page, bit_nr); if (bit_nr == PG_locked) sdt_might_sleep_start(&PG_locked_map); else if (bit_nr == PG_writeback) @@ -1327,6 +1332,7 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr, wait->flags |= WQ_FLAG_DONE; break; } + dept_page_set_bit(&folio->page, bit_nr); /* * If a signal happened, this 'finish_wait()' may remove the last @@ -1534,6 +1540,7 @@ void folio_unlock(struct folio *folio) BUILD_BUG_ON(PG_waiters != 7); BUILD_BUG_ON(PG_locked > 7); VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); + dept_page_clear_bit(&folio->page, PG_locked); if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0))) folio_wake_bit(folio, PG_locked); } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 0745aedebb37..57d6c82eb8fd 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -76,6 +76,7 @@ #include #include #include +#include #include #include #include @@ -1626,6 +1627,8 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn, page_mapcount_reset(page); page_cpupid_reset_last(page); page_kasan_tag_reset(page); + dept_ext_wgen_init(&page->PG_locked_wgen); + dept_ext_wgen_init(&page->PG_writeback_wgen); INIT_LIST_HEAD(&page->lru); #ifdef WANT_PAGE_VIRTUAL