From patchwork Wed Mar 22 04:00:07 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73176
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland, Jason Baron,
    Steven Rostedt, Ard Biesheuvel, Christophe Leroy, Paolo Bonzini,
    Sean Christopherson, Sami Tolvanen, Nick Desaulniers, Will McVicker,
    Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 01/11] static_call: Improve key type abstraction
Date: Tue, 21 Mar 2023 21:00:07 -0700
X-Mailing-List: linux-kernel@vger.kernel.org

Make the static_call_key union less fragile by abstracting all knowledge
about the type bit into helper functions.
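The tagged-pointer scheme those helpers encapsulate can be sketched outside the kernel. The block below is an illustrative stand-in, not the kernel's code: the `mod_list`/`site_list` types and `key_*` names are made up for the example. Bit 0 of the shared machine word records which pointer the union currently holds, and only the helper functions know that:

```c
#include <assert.h>
#include <stddef.h>

struct mod_list  { int mod_id; };
struct site_list { int site_id; };

/* Same layout trick as struct static_call_key: one word, low bit is a tag. */
struct key {
    union {
        unsigned long type;        /* bit 0 distinguishes the pointer kind */
        struct mod_list  *_mods;
        struct site_list *_sites;
    };
};

/* All knowledge of the tag bit lives in these helpers. */
static int key_has_mods(struct key *k)
{
    return !!(k->type & 1);
}

static void key_set_mods(struct key *k, struct mod_list *mods)
{
    k->_mods = mods;
    k->type |= 1;                  /* tag: this word holds a mods pointer */
}

static struct mod_list *key_mods(struct key *k)
{
    if (!key_has_mods(k))
        return NULL;
    return (struct mod_list *)(k->type & ~1UL);  /* strip the tag bit */
}

static void key_set_sites(struct key *k, struct site_list *sites)
{
    k->_sites = sites;             /* bit 0 clear: stored untagged */
}

static struct site_list *key_sites(struct key *k)
{
    if (key_has_mods(k))
        return NULL;
    return k->_sites;
}
```

The trick is sound only because struct pointers are at least 2-byte aligned, leaving bit 0 free; centralizing the tag handling in accessors is what makes the union "less fragile" to callers.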
Signed-off-by: Josh Poimboeuf
---
 include/linux/static_call_types.h       |  4 +-
 kernel/static_call_inline.c             | 51 +++++++++++++++++--------
 tools/include/linux/static_call_types.h |  4 +-
 3 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 5a00b8b2cf9f..87c3598609e8 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -63,8 +63,8 @@ struct static_call_key {
 	union {
 		/* bit 0: 0 = mods, 1 = sites */
 		unsigned long type;
-		struct static_call_mod *mods;
-		struct static_call_site *sites;
+		struct static_call_mod *_mods;
+		struct static_call_site *_sites;
 	};
 };

diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
index 639397b5491c..41f6bda6773a 100644
--- a/kernel/static_call_inline.c
+++ b/kernel/static_call_inline.c
@@ -112,15 +112,21 @@ static inline void static_call_sort_entries(struct static_call_site *start,
 
 static inline bool static_call_key_has_mods(struct static_call_key *key)
 {
-	return !(key->type & 1);
+	return !!(key->type & 1);
 }
 
-static inline struct static_call_mod *static_call_key_next(struct static_call_key *key)
+static inline struct static_call_mod *static_call_key_mods(struct static_call_key *key)
 {
 	if (!static_call_key_has_mods(key))
 		return NULL;
 
-	return key->mods;
+	return (struct static_call_mod *)(key->type & ~1);
+}
+
+static inline void static_call_key_set_mods(struct static_call_key *key, struct static_call_mod *mods)
+{
+	key->_mods = mods;
+	key->type |= 1;
 }
 
 static inline struct static_call_site *static_call_key_sites(struct static_call_key *key)
@@ -128,7 +134,12 @@ static inline struct static_call_site *static_call_key_sites(struct static_call_
 	if (static_call_key_has_mods(key))
 		return NULL;
 
-	return (struct static_call_site *)(key->type & ~1);
+	return key->_sites;
+}
+
+static inline void static_call_key_set_sites(struct static_call_key *key, struct static_call_site *sites)
+{
+	key->_sites = sites;
 }
 
 void __static_call_update(struct static_call_key *key, void *tramp, void *func)
@@ -154,7 +165,7 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func)
 			goto done;
 
 		first = (struct static_call_mod){
-			.next = static_call_key_next(key),
+			.next = static_call_key_mods(key),
 			.mod = NULL,
 			.sites = static_call_key_sites(key),
 		};
@@ -250,8 +261,7 @@ static int __static_call_init(struct module *mod,
 		 * static_call_init() before memory allocation works.
 		 */
 		if (!mod) {
-			key->sites = site;
-			key->type |= 1;
+			static_call_key_set_sites(key, site);
 
 			goto do_transform;
 		}
@@ -266,10 +276,10 @@ static int __static_call_init(struct module *mod,
 		 */
 		if (static_call_key_sites(key)) {
 			site_mod->mod = NULL;
-			site_mod->next = NULL;
 			site_mod->sites = static_call_key_sites(key);
+			site_mod->next = NULL;
 
-			key->mods = site_mod;
+			static_call_key_set_mods(key, site_mod);
 
 			site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
 			if (!site_mod)
@@ -278,8 +288,9 @@ static int __static_call_init(struct module *mod,
 
 		site_mod->mod = mod;
 		site_mod->sites = site;
-		site_mod->next = static_call_key_next(key);
-		key->mods = site_mod;
+		site_mod->next = static_call_key_mods(key);
+
+		static_call_key_set_mods(key, site_mod);
 	}
 
 do_transform:
@@ -406,7 +417,7 @@ static void static_call_del_module(struct module *mod)
 	struct static_call_site *stop = mod->static_call_sites +
 					mod->num_static_call_sites;
 	struct static_call_key *key, *prev_key = NULL;
-	struct static_call_mod *site_mod, **prev;
+	struct static_call_mod *site_mod, *prev;
 	struct static_call_site *site;
 
 	for (site = start; site < stop; site++) {
@@ -416,15 +427,25 @@ static void static_call_del_module(struct module *mod)
 
 		prev_key = key;
 
-		for (prev = &key->mods, site_mod = key->mods;
+		site_mod = static_call_key_mods(key);
+		if (!site_mod)
+			continue;
+
+		if (site_mod->mod == mod) {
+			static_call_key_set_mods(key, site_mod->next);
+			kfree(site_mod);
+			continue;
+		}
+
+		for (prev = site_mod, site_mod = site_mod->next;
 		     site_mod && site_mod->mod != mod;
-		     prev = &site_mod->next, site_mod = site_mod->next)
+		     prev = site_mod, site_mod = site_mod->next)
 			;
 
 		if (!site_mod)
 			continue;
 
-		*prev = site_mod->next;
+		prev->next = site_mod->next;
 		kfree(site_mod);
 	}
 }

diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 5a00b8b2cf9f..87c3598609e8 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -63,8 +63,8 @@ struct static_call_key {
 	union {
 		/* bit 0: 0 = mods, 1 = sites */
 		unsigned long type;
-		struct static_call_mod *mods;
-		struct static_call_site *sites;
+		struct static_call_mod *_mods;
+		struct static_call_site *_sites;
 	};
 };

From patchwork Wed Mar 22 04:00:08 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73183
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland, Jason Baron,
    Steven Rostedt, Ard Biesheuvel, Christophe Leroy, Paolo Bonzini,
    Sean Christopherson, Sami Tolvanen, Nick Desaulniers, Will McVicker,
    Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 02/11] static_call: Flip key type union bit
Date: Tue, 21 Mar 2023 21:00:08 -0700

Flip the meaning of the key->type union field.  This will make it easier
to converge some of the DECLARE_STATIC_CALL() macros.

Signed-off-by: Josh Poimboeuf
---
 include/linux/static_call.h             | 3 ---
 include/linux/static_call_types.h       | 4 ++--
 tools/include/linux/static_call_types.h | 4 ++--
 3 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 141e6b176a1b..f984b8f6d974 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -186,7 +186,6 @@ extern long __static_call_return0(void);
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
 		.func = _func,						\
-		.type = 1,						\
 	};								\
 	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
 
@@ -194,7 +193,6 @@ extern long __static_call_return0(void);
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
 		.func = NULL,						\
-		.type = 1,						\
 	};								\
 	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
 
@@ -202,7 +200,6 @@ extern long __static_call_return0(void);
 	DECLARE_STATIC_CALL(name, _func);				\
 	struct static_call_key STATIC_CALL_KEY(name) = {		\
 		.func = __static_call_return0,				\
-		.type = 1,						\
 	};								\
 	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)

diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index 87c3598609e8..c4c4efb6f6fa 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -61,10 +61,10 @@ struct static_call_site {
 struct static_call_key {
 	void *func;
 	union {
-		/* bit 0: 0 = mods, 1 = sites */
+		/* bit 0: 0 = sites, 1 = mods */
 		unsigned long type;
-		struct static_call_mod *_mods;
 		struct static_call_site *_sites;
+		struct static_call_mod *_mods;
 	};
 };

diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index 87c3598609e8..c4c4efb6f6fa 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -61,10 +61,10 @@ struct static_call_site {
 struct static_call_key {
 	void *func;
 	union {
-		/* bit 0: 0 = mods, 1 = sites */
+		/* bit 0: 0 = sites, 1 = mods */
 		unsigned long type;
-		struct static_call_mod *_mods;
 		struct static_call_site *_sites;
+		struct static_call_mod *_mods;
 	};
 };

From patchwork Wed Mar 22 04:00:09 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73182
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland, Jason Baron,
    Steven Rostedt, Ard Biesheuvel, Christophe Leroy, Paolo Bonzini,
    Sean Christopherson, Sami Tolvanen, Nick Desaulniers, Will McVicker,
    Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 03/11] static_call: Remove static_call_mod_init() declaration
Date: Tue, 21 Mar 2023 21:00:09 -0700
Message-Id: <3b07f3830d7e4e967cc9714dbf54b7391f35cf8b.1679456900.git.jpoimboe@kernel.org>

This function doesn't exist (and never did).
Signed-off-by: Josh Poimboeuf
---
 include/linux/static_call.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index f984b8f6d974..890ddc0c3190 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -177,7 +177,6 @@ struct static_call_tramp_key {
 };
 
 extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
-extern int static_call_mod_init(struct module *mod);
 extern int static_call_text_reserved(void *start, void *end);
 
 extern long __static_call_return0(void);

From patchwork Wed Mar 22 04:00:10 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73180
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland, Jason Baron,
    Steven Rostedt, Ard Biesheuvel, Christophe Leroy, Paolo Bonzini,
    Sean Christopherson, Sami Tolvanen, Nick Desaulniers, Will McVicker,
    Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 04/11] static_call: Remove static_call.h dependency on cpu.h
Date: Tue, 21 Mar 2023 21:00:10 -0700
Message-Id: <252c12888b50482ee5bda8415a67cdc971285843.1679456900.git.jpoimboe@kernel.org>

Uninline __static_call_update() to remove static_call.h's dependency on
cpu.h.  This will make it much easier to include static_call.h in common
header files.

Signed-off-by: Josh Poimboeuf
---
 block/bio.c                 |  1 +
 include/linux/static_call.h | 10 +---------
 kernel/cgroup/cgroup.c      |  1 +
 kernel/static_call.c        | 12 ++++++++++++
 sound/soc/intel/avs/trace.c |  1 +
 5 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fd11614bba4d..a2ca0680fd18 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -19,6 +19,7 @@
 #include <…>
 #include <…>
 #include <…>
+#include <…>
 #include <…>
 
 #include "blk.h"

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 890ddc0c3190..abce40166039 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -132,7 +132,6 @@
  */
 
 #include <…>
-#include <…>
 #include <…>
 
 #ifdef CONFIG_HAVE_STATIC_CALL
@@ -246,14 +245,7 @@ static inline int static_call_init(void) { return 0; }
 
 #define static_call_cond(name)	(void)__static_call(name)
 
-static inline
-void __static_call_update(struct static_call_key *key, void *tramp, void *func)
-{
-	cpus_read_lock();
-	WRITE_ONCE(key->func, func);
-	arch_static_call_transform(NULL, tramp, func, false);
-	cpus_read_unlock();
-}
+extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
 
 static inline int static_call_text_reserved(void *start, void *end)
 {

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 935e8121b21e..4f29f509d9ce 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -58,6 +58,7 @@
 #include <…>
 #include <…>
 #include <…>
+#include <…>
 #include <…>
 
 #define CREATE_TRACE_POINTS

diff --git a/kernel/static_call.c b/kernel/static_call.c
index e9c3e69f3837..63486995fd82 100644
--- a/kernel/static_call.c
+++ b/kernel/static_call.c
@@ -1,8 +1,20 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <…>
+#include <…>
 
 long __static_call_return0(void)
 {
 	return 0;
 }
 EXPORT_SYMBOL_GPL(__static_call_return0);
+
+#ifndef CONFIG_HAVE_STATIC_CALL_INLINE
+void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+{
+	cpus_read_lock();
+	WRITE_ONCE(key->func, func);
+	arch_static_call_transform(NULL, tramp, func, false);
+	cpus_read_unlock();
+}
+EXPORT_SYMBOL_GPL(__static_call_update);
+#endif

diff --git a/sound/soc/intel/avs/trace.c b/sound/soc/intel/avs/trace.c
index c63eea909b5e..b033b560e6d2 100644
--- a/sound/soc/intel/avs/trace.c
+++ b/sound/soc/intel/avs/trace.c
@@ -7,6 +7,7 @@
 //
 
 #include <…>
+#include <…>
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"

From patchwork Wed Mar 22 04:00:11 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73185
TQBN4rJ1JPz0gWoDYKVqKxzLdaQMMHfx92+ha7jdXxUZ4sp7DJUAt/xB5/D5ygPCoppQ 7rpmPm3JFmB7PwPXklHAf7scMxdxJMxpY2HXOdmrTCNo5czT+7xLVZUT4xqH1g7p9KfV yBiMYrr65ZhWuFDri7pXQrPdWKMxrj4XnxPgT8uN61/yoZj+5+/vWrFX8AXzvJrDhNTr yALw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b="FZJk/OvU"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id a7-20020a170902ecc700b001a1c005bd39si10278137plh.97.2023.03.21.21.26.27; Tue, 21 Mar 2023 21:26:40 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b="FZJk/OvU"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230344AbjCVEAr (ORCPT + 99 others); Wed, 22 Mar 2023 00:00:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46852 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230274AbjCVEA3 (ORCPT ); Wed, 22 Mar 2023 00:00:29 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 32F64498BE for ; Tue, 21 Mar 2023 21:00:28 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
requested) by ams.source.kernel.org (Postfix) with ESMTPS id DD019B81B00 for ; Wed, 22 Mar 2023 04:00:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0CBB5C433A0; Wed, 22 Mar 2023 04:00:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1679457625; bh=sFv6ysPUwwSbqqWjEUzZkiH5GZXLdn6WWf8JJtLGU/c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FZJk/OvUzmqLG/R/sYpJ6Vr6cd1GnPQkyYrekaqKgoZ5rBsnjCby6AB+sj1A0BEdm VubwVNbQmJDhiUSbSRGqKopbYYaDbAXY6NgBSOYz3d7ATuiyJqChlBE/A0bMRlRJi+ sr0RIFPgITofS1+T43065v5YQs6L3DX4ahWtO/hg+z/CzJYIM0t7rNJhmLxmLIsdhO MbUka1rwx6N/txtdlG2qJTR6qTacHU3+q06f6H6XT5EhEFxYQNEDdAakAd5HUaLQa2 o8SdapoGOryqKsXD4VMhPRxOfvhPF+iS6oti4kMGhC70sHYv3B2ukSP9jZicqsK+w9 6JJlwBTXj3rUA== From: Josh Poimboeuf To: x86@kernel.org Cc: linux-kernel@vger.kernel.org, Peter Zijlstra , Mark Rutland , Jason Baron , Steven Rostedt , Ard Biesheuvel , Christophe Leroy , Paolo Bonzini , Sean Christopherson , Sami Tolvanen , Nick Desaulniers , Will McVicker , Kees Cook , linux-arm-kernel@lists.infradead.org Subject: [PATCH v2 05/11] static_call: Make ARCH_ADD_TRAMP_KEY() generic Date: Tue, 21 Mar 2023 21:00:11 -0700 Message-Id: <6a0d8889143580b3eac61ecabca783a5e8ad1bad.1679456900.git.jpoimboe@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: References: MIME-Version: 1.0 Content-type: text/plain X-Spam-Status: No, score=-2.5 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761040610263920843?= X-GMAIL-MSGID: =?utf-8?q?1761040610263920843?= There's nothing arch-specific about ARCH_ADD_TRAMP_KEY(). Move it to the generic static_call.h. 
Signed-off-by: Josh Poimboeuf
---
 arch/x86/include/asm/static_call.h |  6 ------
 include/linux/static_call.h        | 11 +++++++++--
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index 343b722ccaf2..52abbdfd6106 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -57,12 +57,6 @@
 #define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)			\
	ARCH_DEFINE_STATIC_CALL_TRAMP(name, __static_call_return0)

-#define ARCH_ADD_TRAMP_KEY(name)					\
-	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
-	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
-	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
-	    ".popsection					\n")
-
 extern bool __static_call_fixup(void *tramp, u8 op, void *dest);

 #endif /* _ASM_STATIC_CALL_H */

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index abce40166039..013022a8611d 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -213,10 +213,17 @@ extern long __static_call_return0(void);
 /* Leave the key unexported, so modules can't change static call targets: */
 #define EXPORT_STATIC_CALL_TRAMP(name)					\
	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
-	ARCH_ADD_TRAMP_KEY(name)
+	__STATIC_CALL_ADD_TRAMP_KEY(name)
 #define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
-	ARCH_ADD_TRAMP_KEY(name)
+	__STATIC_CALL_ADD_TRAMP_KEY(name)
+
+/* Unexported key lookup table */
+#define __STATIC_CALL_ADD_TRAMP_KEY(name)				\
+	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
+	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
+	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
+	    ".popsection					\n")

 #elif defined(CONFIG_HAVE_STATIC_CALL)

From patchwork Wed Mar 22 04:00:12 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73186
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland,
 Jason Baron, Steven Rostedt, Ard Biesheuvel, Christophe Leroy,
 Paolo Bonzini, Sean Christopherson, Sami Tolvanen, Nick Desaulniers,
 Will McVicker, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 06/11] static_call: "EXPORT_STATIC_CALL_TRAMP" ->
 "EXPORT_STATIC_CALL_RO"
Date: Tue, 21 Mar 2023 21:00:12 -0700
Message-Id: <00373cd98e299d6ab3c6c7417514acf0f0ead157.1679456900.git.jpoimboe@kernel.org>

EXPORT_STATIC_CALL_TRAMP() basically creates a read-only export of the
static call.  Make that clearer by renaming it to
EXPORT_STATIC_CALL_RO().
Signed-off-by: Josh Poimboeuf
---
 arch/x86/events/amd/brs.c               |  2 +-
 arch/x86/include/asm/perf_event.h       |  2 +-
 arch/x86/include/asm/preempt.h          |  4 ++--
 include/linux/kernel.h                  |  2 +-
 include/linux/sched.h                   |  2 +-
 include/linux/static_call.h             | 28 +++++++++++++++----------
 include/linux/static_call_types.h       |  8 +++----
 kernel/sched/core.c                     |  8 +++----
 tools/include/linux/static_call_types.h |  8 +++----
 9 files changed, 35 insertions(+), 29 deletions(-)

diff --git a/arch/x86/events/amd/brs.c b/arch/x86/events/amd/brs.c
index ed308719236c..961be770aa24 100644
--- a/arch/x86/events/amd/brs.c
+++ b/arch/x86/events/amd/brs.c
@@ -423,7 +423,7 @@ void noinstr perf_amd_brs_lopwr_cb(bool lopwr_in)
 }

 DEFINE_STATIC_CALL_NULL(perf_lopwr_cb, perf_amd_brs_lopwr_cb);
-EXPORT_STATIC_CALL_TRAMP_GPL(perf_lopwr_cb);
+EXPORT_STATIC_CALL_RO_GPL(perf_lopwr_cb);

 void __init amd_brs_lopwr_init(void)
 {

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8fc15ed5e60b..43eb95db4cc9 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -588,7 +588,7 @@ DECLARE_STATIC_CALL(perf_lopwr_cb, perf_amd_brs_lopwr_cb);

 static __always_inline void perf_lopwr_cb(bool lopwr_in)
 {
-	static_call_mod(perf_lopwr_cb)(lopwr_in);
+	static_call_ro(perf_lopwr_cb)(lopwr_in);
 }

 #endif /* PERF_NEEDS_LOPWR_CB */

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 2d13f25b1bd8..65028c346709 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -124,7 +124,7 @@ DECLARE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);

 #define __preempt_schedule() \
 do { \
-	__STATIC_CALL_MOD_ADDRESSABLE(preempt_schedule); \
+	__STATIC_CALL_RO_ADDRESSABLE(preempt_schedule); \
	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule) : ASM_CALL_CONSTRAINT); \
 } while (0)

@@ -132,7 +132,7 @@ DECLARE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_e

 #define __preempt_schedule_notrace() \
 do { \
-	__STATIC_CALL_MOD_ADDRESSABLE(preempt_schedule_notrace); \
+	__STATIC_CALL_RO_ADDRESSABLE(preempt_schedule_notrace); \
	asm volatile ("call " STATIC_CALL_TRAMP_STR(preempt_schedule_notrace) : ASM_CALL_CONSTRAINT); \
 } while (0)

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 40bce7495af8..5c857c3acbc0 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -107,7 +107,7 @@ DECLARE_STATIC_CALL(might_resched, __cond_resched);

 static __always_inline void might_resched(void)
 {
-	static_call_mod(might_resched)();
+	static_call_ro(might_resched)();
 }

 #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 63d242164b1a..13b17ff4ad22 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2074,7 +2074,7 @@ DECLARE_STATIC_CALL(cond_resched, __cond_resched);

 static __always_inline int _cond_resched(void)
 {
-	return static_call_mod(cond_resched)();
+	return static_call_ro(cond_resched)();
 }

 #elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 013022a8611d..74f089a5955b 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -23,6 +23,7 @@
 *
 *   static_call(name)(args...);
 *   static_call_cond(name)(args...);
+ *   static_call_ro(name)(args...);
 *   static_call_update(name, func);
 *   static_call_query(name);
 *
@@ -123,12 +124,11 @@
 * Notably argument setup is unconditional.
 *
 *
- * EXPORT_STATIC_CALL() vs EXPORT_STATIC_CALL_TRAMP():
- *
- * The difference is that the _TRAMP variant tries to only export the
- * trampoline with the result that a module can use static_call{,_cond}() but
- * not static_call_update().
+ * EXPORT_STATIC_CALL() vs EXPORT_STATIC_CALL_RO():
 *
+ * The difference is the read-only variant exports the trampoline but not the
+ * key, so a module can call it via static_call_ro() but can't update the
+ * target via static_call_update().
 */

 #include
@@ -210,11 +210,14 @@ extern long __static_call_return0(void);
	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))

-/* Leave the key unexported, so modules can't change static call targets: */
-#define EXPORT_STATIC_CALL_TRAMP(name)					\
+/*
+ * Read-only exports: export the trampoline but not the key, so modules can't
+ * change call targets.
+ */
+#define EXPORT_STATIC_CALL_RO(name)					\
	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
	__STATIC_CALL_ADD_TRAMP_KEY(name)
-#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+#define EXPORT_STATIC_CALL_RO_GPL(name)					\
	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
	__STATIC_CALL_ADD_TRAMP_KEY(name)

@@ -268,10 +271,13 @@ extern long __static_call_return0(void);
	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))

-/* Leave the key unexported, so modules can't change static call targets: */
-#define EXPORT_STATIC_CALL_TRAMP(name)					\
+/*
+ * Read-only exports: export the trampoline but not the key, so modules can't
+ * change call targets.
+ */
+#define EXPORT_STATIC_CALL_RO(name)					\
	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-#define EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
+#define EXPORT_STATIC_CALL_RO_GPL(name)					\
	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))

 #else /* Generic implementation */

diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h
index c4c4efb6f6fa..06293067424f 100644
--- a/include/linux/static_call_types.h
+++ b/include/linux/static_call_types.h
@@ -80,11 +80,11 @@ struct static_call_key {
 #endif /* CONFIG_HAVE_STATIC_CALL_INLINE */

 #ifdef MODULE
-#define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name)	__raw_static_call(name)
+#define __STATIC_CALL_RO_ADDRESSABLE(name)
+#define static_call_ro(name)	__raw_static_call(name)
 #else
-#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name)	__static_call(name)
+#define __STATIC_CALL_RO_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_ro(name)	__static_call(name)
 #endif

 #define static_call(name)	__static_call(name)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index af017e038b48..a89de2a2d8f8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6824,7 +6824,7 @@ EXPORT_SYMBOL(preempt_schedule);
 #define preempt_schedule_dynamic_disabled	NULL
 #endif
 DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
-EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
+EXPORT_STATIC_CALL_RO(preempt_schedule);
 #elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
 static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
 void __sched notrace dynamic_preempt_schedule(void)
@@ -6897,7 +6897,7 @@ EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 #define preempt_schedule_notrace_dynamic_disabled	NULL
 #endif
 DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
-EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
+EXPORT_STATIC_CALL_RO(preempt_schedule_notrace);
 #elif
defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
 static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
 void __sched notrace dynamic_preempt_schedule_notrace(void)
@@ -8493,12 +8493,12 @@ EXPORT_SYMBOL(__cond_resched);
 #define cond_resched_dynamic_enabled	__cond_resched
 #define cond_resched_dynamic_disabled	((void *)&__static_call_return0)
 DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-EXPORT_STATIC_CALL_TRAMP(cond_resched);
+EXPORT_STATIC_CALL_RO(cond_resched);

 #define might_resched_dynamic_enabled	__cond_resched
 #define might_resched_dynamic_disabled	((void *)&__static_call_return0)
 DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-EXPORT_STATIC_CALL_TRAMP(might_resched);
+EXPORT_STATIC_CALL_RO(might_resched);
 #elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
 static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
 int __sched dynamic_cond_resched(void)

diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h
index c4c4efb6f6fa..06293067424f 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -80,11 +80,11 @@ struct static_call_key {
 #endif /* CONFIG_HAVE_STATIC_CALL_INLINE */

 #ifdef MODULE
-#define __STATIC_CALL_MOD_ADDRESSABLE(name)
-#define static_call_mod(name)	__raw_static_call(name)
+#define __STATIC_CALL_RO_ADDRESSABLE(name)
+#define static_call_ro(name)	__raw_static_call(name)
 #else
-#define __STATIC_CALL_MOD_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
-#define static_call_mod(name)	__static_call(name)
+#define __STATIC_CALL_RO_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define static_call_ro(name)	__static_call(name)
 #endif

 #define static_call(name)	__static_call(name)

From patchwork Wed Mar 22 04:00:13 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73177
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland,
 Jason Baron, Steven Rostedt, Ard Biesheuvel, Christophe Leroy,
 Paolo Bonzini, Sean Christopherson, Sami Tolvanen, Nick Desaulniers,
 Will McVicker, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 07/11] static_call: Reorganize static call headers
Date: Tue, 21 Mar 2023 21:00:13 -0700
Message-Id: <315c9c6959d53bcdfc05e64a90bfd465137aca95.1679456900.git.jpoimboe@kernel.org>

Move all the extra gunk out of static_call_types.h, which is for
sharing types with objtool.

While at it, de-spaghettify static_call.h, with user-visible interfaces
at the top, and implementation differences more clearly separated.
Signed-off-by: Josh Poimboeuf
---
 arch/arm/include/asm/paravirt.h         |   2 +-
 arch/arm64/include/asm/paravirt.h       |   2 +-
 arch/x86/include/asm/paravirt.h         |   2 +-
 arch/x86/include/asm/preempt.h          |   2 +-
 arch/x86/include/asm/static_call.h      |   3 +-
 arch/x86/kernel/paravirt.c              |   1 +
 include/linux/entry-common.h            |   2 +-
 include/linux/entry-kvm.h               |   2 +-
 include/linux/kernel.h                  |   2 +-
 include/linux/module.h                  |   2 +-
 include/linux/static_call.h             | 250 +++++++++++-------------
 include/linux/static_call_types.h       |  70 +------
 kernel/static_call.c                    |   1 +
 kernel/static_call_inline.c             |  13 ++
 tools/include/linux/static_call_types.h |  70 +------
 15 files changed, 148 insertions(+), 276 deletions(-)

diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
index 95d5b0d625cd..37a723b653f5 100644
--- a/arch/arm/include/asm/paravirt.h
+++ b/arch/arm/include/asm/paravirt.h
@@ -3,7 +3,7 @@
 #define _ASM_ARM_PARAVIRT_H

 #ifdef CONFIG_PARAVIRT
-#include
+#include

 struct static_key;
 extern struct static_key paravirt_steal_enabled;

diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 9aa193e0e8f2..f59cb310d8ef 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -3,7 +3,7 @@
 #define _ASM_ARM64_PARAVIRT_H

 #ifdef CONFIG_PARAVIRT
-#include
+#include

 struct static_key;
 extern struct static_key paravirt_steal_enabled;

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cf40e813b3d7..25d7696be801 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -15,7 +15,7 @@
 #include
 #include
 #include
-#include
+#include
 #include

 u64 dummy_steal_clock(int cpu);

diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 65028c346709..0879ec504b31 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -7,7 +7,7 @@
 #include
 #include
-#include
+#include

 /* We use the MSB mostly because its available */
 #define PREEMPT_NEED_RESCHED	0x80000000

diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
index 52abbdfd6106..14c6b1862e2e 100644
--- a/arch/x86/include/asm/static_call.h
+++ b/arch/x86/include/asm/static_call.h
@@ -2,7 +2,8 @@
 #ifndef _ASM_STATIC_CALL_H
 #define _ASM_STATIC_CALL_H

-#include
+#include
+#include

 /*
 * For CONFIG_HAVE_STATIC_CALL_INLINE, this is a temporary trampoline which

diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 42e182868873..378aaa2925ad 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include

 /*
 * nop stub, which must not clobber anything *including the stack* to

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index d95ab85f96ba..c89b08e6a029 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_ENTRYCOMMON_H
 #define __LINUX_ENTRYCOMMON_H

-#include
+#include
 #include
 #include
 #include

diff --git a/include/linux/entry-kvm.h b/include/linux/entry-kvm.h
index 6813171afccb..2f3e56062e3e 100644
--- a/include/linux/entry-kvm.h
+++ b/include/linux/entry-kvm.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_ENTRYKVM_H
 #define __LINUX_ENTRYKVM_H

-#include
+#include
 #include
 #include
 #include

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 5c857c3acbc0..90bc8932c4e3 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -28,7 +28,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include

diff --git a/include/linux/module.h b/include/linux/module.h
index 4435ad9439ab..a933ec51817d 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -26,7 +26,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 74f089a5955b..650bda9a3367 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -134,178 +134,141 @@

 #include
 #include

-#ifdef CONFIG_HAVE_STATIC_CALL
-#include
-
-/*
- * Either @site or @tramp can be NULL.
- */
-extern void arch_static_call_transform(void *site, void *tramp, void *func, bool tail);
-
-#define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)
-
-#else
-#define STATIC_CALL_TRAMP_ADDR(name) NULL
-#endif
-
-#define static_call_update(name, func)					\
-({									\
-	typeof(&STATIC_CALL_TRAMP(name)) __F = (func);			\
-	__static_call_update(&STATIC_CALL_KEY(name),			\
-			     STATIC_CALL_TRAMP_ADDR(name), __F);	\
-})
-
-#define static_call_query(name) (READ_ONCE(STATIC_CALL_KEY(name).func))
-
+struct static_call_mods;
+struct static_call_key {
+	void *func;
 #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
-
-extern int __init static_call_init(void);
-
-extern void static_call_force_reinit(void);
-
-struct static_call_mod {
-	struct static_call_mod *next;
-	struct module *mod; /* for vmlinux, mod == NULL */
-	struct static_call_site *sites;
+	union {
+		/* bit 0: 0 = sites, 1 = mods */
+		unsigned long type;
+		struct static_call_site *_sites;
+		struct static_call_mod *_mods;
+	};
+#endif
 };

-/* For finding the key associated with a trampoline */
-struct static_call_tramp_key {
-	s32 tramp;
-	s32 key;
-};
+#define DECLARE_STATIC_CALL(name, func)					\
+	extern struct static_call_key STATIC_CALL_KEY(name);		\
+	extern typeof(func) STATIC_CALL_TRAMP(name);

-extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
-extern int static_call_text_reserved(void *start, void *end);
-
-extern long __static_call_return0(void);
-
-#define DEFINE_STATIC_CALL(name, _func)					\
-	DECLARE_STATIC_CALL(name, _func);				\
+#define __DEFINE_STATIC_CALL(name, type, _func)				\
+	DECLARE_STATIC_CALL(name, type);				\
	struct static_call_key STATIC_CALL_KEY(name) = {		\
		.func = _func,						\
-	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+	}

-#define DEFINE_STATIC_CALL_NULL(name, _func)				\
-	DECLARE_STATIC_CALL(name, _func);				\
-	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = NULL,						\
-	};								\
-	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
+#define DEFINE_STATIC_CALL(name, func)					\
+	__DEFINE_STATIC_CALL(name, func, func);				\
+	__DEFINE_STATIC_CALL_TRAMP(name, func)

-#define DEFINE_STATIC_CALL_RET0(name, _func)				\
-	DECLARE_STATIC_CALL(name, _func);				\
-	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = __static_call_return0,				\
-	};								\
-	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
+#define DEFINE_STATIC_CALL_NULL(name, type)				\
+	__DEFINE_STATIC_CALL(name, type, NULL);				\
+	__DEFINE_STATIC_CALL_NULL_TRAMP(name)

-#define static_call_cond(name)	(void)__static_call(name)
+#define DEFINE_STATIC_CALL_RET0(name, type)				\
+	__DEFINE_STATIC_CALL(name, type, __static_call_return0);	\
+	__DEFINE_STATIC_CALL_RET0_TRAMP(name)

 #define EXPORT_STATIC_CALL(name)					\
	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
-	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
+	__EXPORT_STATIC_CALL_TRAMP(name)
 #define EXPORT_STATIC_CALL_GPL(name)					\
	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
-	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+	__EXPORT_STATIC_CALL_TRAMP_GPL(name)

 /*
 * Read-only exports: export the trampoline but not the key, so modules can't
 * change call targets.
+ *
+ * These are called via static_call_ro().
 */
 #define EXPORT_STATIC_CALL_RO(name)					\
-	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name));				\
+	__EXPORT_STATIC_CALL_TRAMP(name);				\
	__STATIC_CALL_ADD_TRAMP_KEY(name)
-#define EXPORT_STATIC_CALL_RO_GPL(name)					\
-	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name));			\
+#define EXPORT_STATIC_CALL_RO_GPL(name)					\
+	__EXPORT_STATIC_CALL_TRAMP_GPL(name);				\
	__STATIC_CALL_ADD_TRAMP_KEY(name)

-/* Unexported key lookup table */
-#define __STATIC_CALL_ADD_TRAMP_KEY(name)				\
-	asm(".pushsection .static_call_tramp_key, \"a\"		\n"	\
-	    ".long " STATIC_CALL_TRAMP_STR(name) " - .		\n"	\
-	    ".long " STATIC_CALL_KEY_STR(name) " - .		\n"	\
-	    ".popsection					\n")
+/*
+ * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from
+ * the symbol table so that objtool can reference it when it generates the
+ * .static_call_sites section.
+ */
+#define __STATIC_CALL_ADDRESSABLE(name) __ADDRESSABLE(STATIC_CALL_KEY(name))

-#elif defined(CONFIG_HAVE_STATIC_CALL)
+#define static_call(name)						\
+({									\
+	__STATIC_CALL_ADDRESSABLE(name);				\
+	__static_call(name);						\
+})

-static inline int static_call_init(void) { return 0; }
+#define static_call_cond(name)	(void)__static_call_cond(name)

-#define DEFINE_STATIC_CALL(name, _func)					\
-	DECLARE_STATIC_CALL(name, _func);				\
-	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func,						\
-	};								\
-	ARCH_DEFINE_STATIC_CALL_TRAMP(name, _func)
+/* Use static_call_ro() to call a read-only-exported static call. */
+#define static_call_ro(name) __static_call_ro(name)

-#define DEFINE_STATIC_CALL_NULL(name, _func)				\
-	DECLARE_STATIC_CALL(name, _func);				\
-	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = NULL,						\
-	};								\
-	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
+#if defined(MODULE) || !defined(CONFIG_HAVE_STATIC_CALL_INLINE)
+#define __STATIC_CALL_RO_ADDRESSABLE(name)
+#define __static_call_ro(name) __static_call(name)
+#else
+#define __STATIC_CALL_RO_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name)
+#define __static_call_ro(name) static_call(name)
+#endif

-#define DEFINE_STATIC_CALL_RET0(name, _func)				\
-	DECLARE_STATIC_CALL(name, _func);				\
-	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = __static_call_return0,				\
-	};								\
-	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
+#define static_call_update(name, func)					\
+({									\
+	typeof(&STATIC_CALL_TRAMP(name)) __F = (func);			\
+	__static_call_update(&STATIC_CALL_KEY(name),			\
+			     STATIC_CALL_TRAMP_ADDR(name), __F);	\
+})

-#define static_call_cond(name)	(void)__static_call(name)
+#define static_call_query(name) (READ_ONCE(STATIC_CALL_KEY(name).func))

-extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);
-static inline int static_call_text_reserved(void *start, void *end)
-{
-	return 0;
-}
+#ifdef CONFIG_HAVE_STATIC_CALL

-extern long __static_call_return0(void);
+#include

-#define EXPORT_STATIC_CALL(name)					\
-	EXPORT_SYMBOL(STATIC_CALL_KEY(name));				\
-	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-#define EXPORT_STATIC_CALL_GPL(name)					\
-	EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name));			\
-	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))
+#define __DEFINE_STATIC_CALL_TRAMP(name, func)				\
+	ARCH_DEFINE_STATIC_CALL_TRAMP(name, func)

-/*
- * Read-only exports: export the trampoline but not the key, so modules can't
- * change call targets.
- */
-#define EXPORT_STATIC_CALL_RO(name)					\
+#define __DEFINE_STATIC_CALL_NULL_TRAMP(name)				\
+	ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
+
+#define __DEFINE_STATIC_CALL_RET0_TRAMP(name)				\
+	ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
+
+#define __EXPORT_STATIC_CALL_TRAMP(name)				\
	EXPORT_SYMBOL(STATIC_CALL_TRAMP(name))
-#define EXPORT_STATIC_CALL_RO_GPL(name)					\
+
+#define __EXPORT_STATIC_CALL_TRAMP_GPL(name)				\
	EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name))

-#else /* Generic implementation */
+#define __static_call(name)	(&STATIC_CALL_TRAMP(name))
+#define __static_call_cond	__static_call

-static inline int static_call_init(void) { return 0; }
+#define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name)

-static inline long __static_call_return0(void)
-{
-	return 0;
-}
+extern long __static_call_return0(void);
+extern void __static_call_update(struct static_call_key *key, void *tramp, void *func);

-#define __DEFINE_STATIC_CALL(name, _func, _func_init)			\
-	DECLARE_STATIC_CALL(name, _func);				\
-	struct static_call_key STATIC_CALL_KEY(name) = {		\
-		.func = _func_init,					\
-	}
+/*
+ * Either @site or @tramp can be NULL.
+ */ +extern void arch_static_call_transform(void *site, void *tramp, void *func, bool tail); -#define DEFINE_STATIC_CALL(name, _func) \ - __DEFINE_STATIC_CALL(name, _func, _func) +#else /* !CONFIG_HAVE_STATIC_CALL */ -#define DEFINE_STATIC_CALL_NULL(name, _func) \ - __DEFINE_STATIC_CALL(name, _func, NULL) +#define __DEFINE_STATIC_CALL_TRAMP(name, func) +#define __DEFINE_STATIC_CALL_NULL_TRAMP(name) +#define __DEFINE_STATIC_CALL_RET0_TRAMP(name) +#define __EXPORT_STATIC_CALL_TRAMP(name) +#define __EXPORT_STATIC_CALL_TRAMP_GPL(name) -#define DEFINE_STATIC_CALL_RET0(name, _func) \ - __DEFINE_STATIC_CALL(name, _func, __static_call_return0) +#define __static_call(name) \ + ((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func)) static inline void __static_call_nop(void) { } - /* * This horrific hack takes care of two things: * @@ -326,7 +289,9 @@ static inline void __static_call_nop(void) { } (typeof(STATIC_CALL_TRAMP(name))*)func; \ }) -#define static_call_cond(name) (void)__static_call_cond(name) +#define STATIC_CALL_TRAMP_ADDR(name) NULL + +static inline long __static_call_return0(void) { return 0; } static inline void __static_call_update(struct static_call_key *key, void *tramp, void *func) @@ -334,14 +299,29 @@ void __static_call_update(struct static_call_key *key, void *tramp, void *func) WRITE_ONCE(key->func, func); } -static inline int static_call_text_reserved(void *start, void *end) -{ - return 0; -} +#endif /* CONFIG_HAVE_STATIC_CALL */ -#define EXPORT_STATIC_CALL(name) EXPORT_SYMBOL(STATIC_CALL_KEY(name)) -#define EXPORT_STATIC_CALL_GPL(name) EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name)) -#endif /* CONFIG_HAVE_STATIC_CALL */ +#ifdef CONFIG_HAVE_STATIC_CALL_INLINE + +/* Unexported key lookup table */ +#define __STATIC_CALL_ADD_TRAMP_KEY(name) \ + asm(".pushsection .static_call_tramp_key, \"a\" \n" \ + ".long " STATIC_CALL_TRAMP_STR(name) " - . \n" \ + ".long " STATIC_CALL_KEY_STR(name) " - . 
\n" \ + ".popsection \n") + +extern int static_call_init(void); +extern int static_call_text_reserved(void *start, void *end); +extern void static_call_force_reinit(void); + +#else /* !CONFIG_HAVE_STATIC_CALL_INLINE*/ + +#define __STATIC_CALL_ADD_TRAMP_KEY(name) +static inline int static_call_init(void) { return 0; } +static inline int static_call_text_reserved(void *start, void *end) { return 0; } +static inline void static_call_force_reinit(void) {} + +#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */ #endif /* _LINUX_STATIC_CALL_H */ diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h index 06293067424f..8b349fe39e45 100644 --- a/include/linux/static_call_types.h +++ b/include/linux/static_call_types.h @@ -2,6 +2,10 @@ #ifndef _STATIC_CALL_TYPES_H #define _STATIC_CALL_TYPES_H +/* + * Static call types for sharing with objtool + */ + #include #include #include @@ -34,70 +38,4 @@ struct static_call_site { s32 key; }; -#define DECLARE_STATIC_CALL(name, func) \ - extern struct static_call_key STATIC_CALL_KEY(name); \ - extern typeof(func) STATIC_CALL_TRAMP(name); - -#ifdef CONFIG_HAVE_STATIC_CALL - -#define __raw_static_call(name) (&STATIC_CALL_TRAMP(name)) - -#ifdef CONFIG_HAVE_STATIC_CALL_INLINE - -/* - * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from - * the symbol table so that objtool can reference it when it generates the - * .static_call_sites section. 
- */ -#define __STATIC_CALL_ADDRESSABLE(name) \ - __ADDRESSABLE(STATIC_CALL_KEY(name)) - -#define __static_call(name) \ -({ \ - __STATIC_CALL_ADDRESSABLE(name); \ - __raw_static_call(name); \ -}) - -struct static_call_key { - void *func; - union { - /* bit 0: 0 = sites, 1 = mods */ - unsigned long type; - struct static_call_site *_sites; - struct static_call_mod *_mods; - }; -}; - -#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */ - -#define __STATIC_CALL_ADDRESSABLE(name) -#define __static_call(name) __raw_static_call(name) - -struct static_call_key { - void *func; -}; - -#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */ - -#ifdef MODULE -#define __STATIC_CALL_RO_ADDRESSABLE(name) -#define static_call_ro(name) __raw_static_call(name) -#else -#define __STATIC_CALL_RO_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name) -#define static_call_ro(name) __static_call(name) -#endif - -#define static_call(name) __static_call(name) - -#else - -struct static_call_key { - void *func; -}; - -#define static_call(name) \ - ((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func)) - -#endif /* CONFIG_HAVE_STATIC_CALL */ - #endif /* _STATIC_CALL_TYPES_H */ diff --git a/kernel/static_call.c b/kernel/static_call.c index 63486995fd82..e5fc33d05015 100644 --- a/kernel/static_call.c +++ b/kernel/static_call.c @@ -1,4 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 +#include #include #include diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c index 41f6bda6773a..b4f4a9eaa6d8 100644 --- a/kernel/static_call_inline.c +++ b/kernel/static_call_inline.c @@ -9,6 +9,19 @@ #include #include #include +#include + +/* For finding the key associated with a trampoline */ +struct static_call_tramp_key { + s32 tramp; + s32 key; +}; + +struct static_call_mod { + struct static_call_mod *next; + struct module *mod; /* for vmlinux, mod == NULL */ + struct static_call_site *sites; +}; extern struct static_call_site __start_static_call_sites[], __stop_static_call_sites[]; diff --git 
a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h index 06293067424f..8b349fe39e45 100644 --- a/tools/include/linux/static_call_types.h +++ b/tools/include/linux/static_call_types.h @@ -2,6 +2,10 @@ #ifndef _STATIC_CALL_TYPES_H #define _STATIC_CALL_TYPES_H +/* + * Static call types for sharing with objtool + */ + #include #include #include @@ -34,70 +38,4 @@ struct static_call_site { s32 key; }; -#define DECLARE_STATIC_CALL(name, func) \ - extern struct static_call_key STATIC_CALL_KEY(name); \ - extern typeof(func) STATIC_CALL_TRAMP(name); - -#ifdef CONFIG_HAVE_STATIC_CALL - -#define __raw_static_call(name) (&STATIC_CALL_TRAMP(name)) - -#ifdef CONFIG_HAVE_STATIC_CALL_INLINE - -/* - * __ADDRESSABLE() is used to ensure the key symbol doesn't get stripped from - * the symbol table so that objtool can reference it when it generates the - * .static_call_sites section. - */ -#define __STATIC_CALL_ADDRESSABLE(name) \ - __ADDRESSABLE(STATIC_CALL_KEY(name)) - -#define __static_call(name) \ -({ \ - __STATIC_CALL_ADDRESSABLE(name); \ - __raw_static_call(name); \ -}) - -struct static_call_key { - void *func; - union { - /* bit 0: 0 = sites, 1 = mods */ - unsigned long type; - struct static_call_site *_sites; - struct static_call_mod *_mods; - }; -}; - -#else /* !CONFIG_HAVE_STATIC_CALL_INLINE */ - -#define __STATIC_CALL_ADDRESSABLE(name) -#define __static_call(name) __raw_static_call(name) - -struct static_call_key { - void *func; -}; - -#endif /* CONFIG_HAVE_STATIC_CALL_INLINE */ - -#ifdef MODULE -#define __STATIC_CALL_RO_ADDRESSABLE(name) -#define static_call_ro(name) __raw_static_call(name) -#else -#define __STATIC_CALL_RO_ADDRESSABLE(name) __STATIC_CALL_ADDRESSABLE(name) -#define static_call_ro(name) __static_call(name) -#endif - -#define static_call(name) __static_call(name) - -#else - -struct static_call_key { - void *func; -}; - -#define static_call(name) \ - ((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func)) - -#endif /* 
CONFIG_HAVE_STATIC_CALL */
-
 #endif /* _STATIC_CALL_TYPES_H */

From patchwork Wed Mar 22 04:00:14 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73187
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland,
 Jason Baron, Steven Rostedt, Ard Biesheuvel, Christophe Leroy,
 Paolo Bonzini, Sean Christopherson, Sami Tolvanen, Nick Desaulniers,
 Will McVicker, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 08/11] arm64/static_call: Fix static call CFI violations
Date: Tue, 21 Mar 2023 21:00:14 -0700
Message-Id: <3d8c9e67a7e29f3bed4e44429d953e1ac9c6d5be.1679456900.git.jpoimboe@kernel.org>

On arm64, with CONFIG_CFI_CLANG, it's trivial to trigger CFI violations
by running "perf record -e sched:sched_switch -a":

CFI failure at perf_misc_flags+0x34/0x70 (target: __static_call_return0+0x0/0xc; expected type: 0x837de525)
WARNING: CPU: 3 PID: 32 at perf_misc_flags+0x34/0x70
CPU: 3 PID: 32 Comm: ksoftirqd/3 Kdump: loaded Tainted: P 6.3.0-rc2 #8
Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
pstate: 904000c5 (NzcV daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : perf_misc_flags+0x34/0x70
lr :
perf_event_output_forward+0x74/0xf0
sp : ffff80000a98b970
x29: ffff80000a98b970 x28: ffff00077bd34d00 x27: ffff8000097d2d00
x26: fffffbffeff6a360 x25: ffff800009835a30 x24: ffff0000c2e8dca0
x23: 0000000000000000 x22: 0000000000000080 x21: ffff00077bd31610
x20: ffff0000c2e8dca0 x19: ffff00077bd31610 x18: ffff800008cd52f0
x17: 00000000837de525 x16: 0000000072923c8f x15: 000000000000b67e
x14: 000000000178797d x13: 0000000000000004 x12: 0000000070b5b3a8
x11: 0000000000000015 x10: 0000000000000048 x9 : ffff80000829e2b4
x8 : ffff80000829c6f0 x7 : 0000000000000000 x6 : 0000000000000000
x5 : fffffbffeff6a340 x4 : ffff00077bd31610 x3 : ffff00077bd31610
x2 : ffff800009833400 x1 : 0000000000000000 x0 : ffff00077bd31610
Call trace:
 perf_misc_flags+0x34/0x70
 perf_event_output_forward+0x74/0xf0
 __perf_event_overflow+0x12c/0x1e8
 perf_swevent_event+0x98/0x1a0
 perf_tp_event+0x140/0x558
 perf_trace_run_bpf_submit+0x88/0xc8
 perf_trace_sched_switch+0x160/0x19c
 __schedule+0xabc/0x153c
 dynamic_cond_resched+0x48/0x68
 run_ksoftirqd+0x3c/0x138
 smpboot_thread_fn+0x26c/0x2f8
 kthread+0x108/0x1c4
 ret_from_fork+0x10/0x20

The problem is that the __perf_guest_state() static call does an
indirect branch to __static_call_return0(), which isn't CFI-compliant.

Fix that by generating custom CFI-compliant ret0 functions for each
defined static key.
Signed-off-by: Josh Poimboeuf Tested-by: Mark Rutland --- arch/Kconfig | 4 ++ arch/arm64/include/asm/static_call.h | 29 +++++++++++ include/linux/static_call.h | 64 +++++++++++++++++++++---- include/linux/static_call_types.h | 4 ++ kernel/Makefile | 2 +- kernel/static_call.c | 2 +- tools/include/linux/static_call_types.h | 4 ++ 7 files changed, 97 insertions(+), 12 deletions(-) create mode 100644 arch/arm64/include/asm/static_call.h diff --git a/arch/Kconfig b/arch/Kconfig index e3511afbb7f2..8800fe80a0f9 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -1348,6 +1348,10 @@ config HAVE_STATIC_CALL_INLINE depends on HAVE_STATIC_CALL select OBJTOOL +config CFI_WITHOUT_STATIC_CALL + def_bool y + depends on CFI_CLANG && !HAVE_STATIC_CALL + config HAVE_PREEMPT_DYNAMIC bool diff --git a/arch/arm64/include/asm/static_call.h b/arch/arm64/include/asm/static_call.h new file mode 100644 index 000000000000..b3489cac7742 --- /dev/null +++ b/arch/arm64/include/asm/static_call.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_ARM64_STATIC_CALL_H +#define _ASM_ARM64_STATIC_CALL_H + +/* + * Make a dummy reference to a function pointer in C to force the compiler to + * emit a __kcfi_typeid_ symbol for asm to use. + */ +#define GEN_CFI_SYM(func) \ + static typeof(func) __used __section(".discard.cfi") *__UNIQUE_ID(cfi) = func + + +/* Generate a CFI-compliant static call NOP function */ +#define __ARCH_DEFINE_STATIC_CALL_CFI(name, insns) \ + asm(".align 4 \n" \ + ".word __kcfi_typeid_" name " \n" \ + ".globl " name " \n" \ + name ": \n" \ + "bti c \n" \ + insns " \n" \ + "ret \n" \ + ".type " name ", @function \n" \ + ".size " name ", . 
- " name " \n") + +#define __ARCH_DEFINE_STATIC_CALL_RET0_CFI(name) \ + GEN_CFI_SYM(STATIC_CALL_RET0_CFI(name)); \ + __ARCH_DEFINE_STATIC_CALL_CFI(STATIC_CALL_RET0_CFI_STR(name), "mov x0, xzr") + +#endif /* _ASM_ARM64_STATIC_CALL_H */ diff --git a/include/linux/static_call.h b/include/linux/static_call.h index 650bda9a3367..50ad928afeb8 100644 --- a/include/linux/static_call.h +++ b/include/linux/static_call.h @@ -147,15 +147,19 @@ struct static_call_key { #endif }; +extern long __static_call_return0(void); + #define DECLARE_STATIC_CALL(name, func) \ extern struct static_call_key STATIC_CALL_KEY(name); \ - extern typeof(func) STATIC_CALL_TRAMP(name); + extern typeof(func) STATIC_CALL_TRAMP(name); \ + __DECLARE_STATIC_CALL_CFI(name, func) #define __DEFINE_STATIC_CALL(name, type, _func) \ DECLARE_STATIC_CALL(name, type); \ struct static_call_key STATIC_CALL_KEY(name) = { \ .func = _func, \ - } + }; \ + __DEFINE_STATIC_CALL_CFI(name) #define DEFINE_STATIC_CALL(name, func) \ __DEFINE_STATIC_CALL(name, func, func); \ @@ -166,15 +170,18 @@ struct static_call_key { __DEFINE_STATIC_CALL_NULL_TRAMP(name) #define DEFINE_STATIC_CALL_RET0(name, type) \ - __DEFINE_STATIC_CALL(name, type, __static_call_return0); \ + __DEFINE_STATIC_CALL(name, type, __STATIC_CALL_RET0(name)); \ __DEFINE_STATIC_CALL_RET0_TRAMP(name) #define EXPORT_STATIC_CALL(name) \ EXPORT_SYMBOL(STATIC_CALL_KEY(name)); \ - __EXPORT_STATIC_CALL_TRAMP(name) + __EXPORT_STATIC_CALL_TRAMP(name); \ + __EXPORT_STATIC_CALL_CFI(name) + #define EXPORT_STATIC_CALL_GPL(name) \ EXPORT_SYMBOL_GPL(STATIC_CALL_KEY(name)); \ - __EXPORT_STATIC_CALL_TRAMP_GPL(name) + __EXPORT_STATIC_CALL_TRAMP_GPL(name); \ + __EXPORT_STATIC_CALL_CFI_GPL(name) /* * Read-only exports: export the trampoline but not the key, so modules can't @@ -184,9 +191,12 @@ struct static_call_key { */ #define EXPORT_STATIC_CALL_RO(name) \ __EXPORT_STATIC_CALL_TRAMP(name); \ + __EXPORT_STATIC_CALL_CFI(name) \ __STATIC_CALL_ADD_TRAMP_KEY(name) + #define 
EXPORT_STATIC_CALL_RO_GPL(name) \ __EXPORT_STATIC_CALL_TRAMP_GPL(name); \ + __EXPORT_STATIC_CALL_CFI_GPL(name) \ __STATIC_CALL_ADD_TRAMP_KEY(name) /* @@ -218,12 +228,19 @@ struct static_call_key { #define static_call_update(name, func) \ ({ \ typeof(&STATIC_CALL_TRAMP(name)) __F = (func); \ + if (__F == (void *)__static_call_return0) \ + __F = __STATIC_CALL_RET0(name); \ __static_call_update(&STATIC_CALL_KEY(name), \ STATIC_CALL_TRAMP_ADDR(name), __F); \ }) -#define static_call_query(name) (READ_ONCE(STATIC_CALL_KEY(name).func)) - +#define static_call_query(name) \ +({ \ + void *__F = (READ_ONCE(STATIC_CALL_KEY(name).func)); \ + if (__F == __STATIC_CALL_RET0(name)) \ + __F = __static_call_return0; \ + __F; \ +}) #ifdef CONFIG_HAVE_STATIC_CALL @@ -249,7 +266,6 @@ struct static_call_key { #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name) -extern long __static_call_return0(void); extern void __static_call_update(struct static_call_key *key, void *tramp, void *func); /* @@ -291,8 +307,6 @@ static inline void __static_call_nop(void) { } #define STATIC_CALL_TRAMP_ADDR(name) NULL -static inline long __static_call_return0(void) { return 0; } - static inline void __static_call_update(struct static_call_key *key, void *tramp, void *func) { @@ -324,4 +338,34 @@ static inline void static_call_force_reinit(void) {} #endif /* CONFIG_HAVE_STATIC_CALL_INLINE */ + +#ifdef CONFIG_CFI_WITHOUT_STATIC_CALL + +#include + +#define __STATIC_CALL_RET0(name) STATIC_CALL_RET0_CFI(name) + +#define __DECLARE_STATIC_CALL_CFI(name, func) \ + extern typeof(func) STATIC_CALL_RET0_CFI(name) + +#define __DEFINE_STATIC_CALL_CFI(name) \ + __ARCH_DEFINE_STATIC_CALL_RET0_CFI(name) + +#define __EXPORT_STATIC_CALL_CFI(name) \ + EXPORT_SYMBOL(STATIC_CALL_RET0_CFI(name)) + +#define __EXPORT_STATIC_CALL_CFI_GPL(name) \ + EXPORT_SYMBOL_GPL(STATIC_CALL_RET0_CFI(name)) + +#else /* ! 
CONFIG_CFI_WITHOUT_STATIC_CALL */ + +#define __STATIC_CALL_RET0(name) (void *)__static_call_return0 + +#define __DECLARE_STATIC_CALL_CFI(name, func) +#define __DEFINE_STATIC_CALL_CFI(name) +#define __EXPORT_STATIC_CALL_CFI(name) +#define __EXPORT_STATIC_CALL_CFI_GPL(name) + +#endif /* CONFIG_CFI_WITHOUT_STATIC_CALL */ + #endif /* _LINUX_STATIC_CALL_H */ diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h index 8b349fe39e45..72732af51cba 100644 --- a/include/linux/static_call_types.h +++ b/include/linux/static_call_types.h @@ -22,6 +22,10 @@ #define STATIC_CALL_TRAMP(name) __PASTE(STATIC_CALL_TRAMP_PREFIX, name) #define STATIC_CALL_TRAMP_STR(name) __stringify(STATIC_CALL_TRAMP(name)) +#define STATIC_CALL_RET0_CFI_PREFIX __SCR__ +#define STATIC_CALL_RET0_CFI(name) __PASTE(STATIC_CALL_RET0_CFI_PREFIX, name) +#define STATIC_CALL_RET0_CFI_STR(name) __stringify(STATIC_CALL_RET0_CFI(name)) + /* * Flags in the low bits of static_call_site::key. */ diff --git a/kernel/Makefile b/kernel/Makefile index 10ef068f598d..59b062b1c8f7 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -110,7 +110,7 @@ obj-$(CONFIG_CPU_PM) += cpu_pm.o obj-$(CONFIG_BPF) += bpf/ obj-$(CONFIG_KCSAN) += kcsan/ obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o -obj-$(CONFIG_HAVE_STATIC_CALL) += static_call.o +obj-y += static_call.o obj-$(CONFIG_HAVE_STATIC_CALL_INLINE) += static_call_inline.o obj-$(CONFIG_CFI_CLANG) += cfi.o diff --git a/kernel/static_call.c b/kernel/static_call.c index e5fc33d05015..090ecf5d34b4 100644 --- a/kernel/static_call.c +++ b/kernel/static_call.c @@ -9,7 +9,7 @@ long __static_call_return0(void) } EXPORT_SYMBOL_GPL(__static_call_return0); -#ifndef CONFIG_HAVE_STATIC_CALL_INLINE +#if defined(CONFIG_HAVE_STATIC_CALL) && !defined(CONFIG_HAVE_STATIC_CALL_INLINE) void __static_call_update(struct static_call_key *key, void *tramp, void *func) { cpus_read_lock(); diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h 
index 8b349fe39e45..72732af51cba 100644
--- a/tools/include/linux/static_call_types.h
+++ b/tools/include/linux/static_call_types.h
@@ -22,6 +22,10 @@
 #define STATIC_CALL_TRAMP(name) __PASTE(STATIC_CALL_TRAMP_PREFIX, name)
 #define STATIC_CALL_TRAMP_STR(name) __stringify(STATIC_CALL_TRAMP(name))
 
+#define STATIC_CALL_RET0_CFI_PREFIX __SCR__
+#define STATIC_CALL_RET0_CFI(name) __PASTE(STATIC_CALL_RET0_CFI_PREFIX, name)
+#define STATIC_CALL_RET0_CFI_STR(name) __stringify(STATIC_CALL_RET0_CFI(name))
+
 /*
  * Flags in the low bits of static_call_site::key.
  */

From patchwork Wed Mar 22 04:00:15 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73178
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland,
 Jason Baron, Steven Rostedt, Ard Biesheuvel, Christophe Leroy,
 Paolo Bonzini, Sean Christopherson, Sami Tolvanen, Nick Desaulniers,
 Will McVicker, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 09/11] static_call: Make NULL static calls consistent
Date: Tue, 21 Mar 2023 21:00:15 -0700
Message-Id: <7638861ae89606b1277ad4235654bba2b880f313.1679456900.git.jpoimboe@kernel.org>

NULL static calls have
inconsistent behavior.  With HAVE_STATIC_CALL=y they're a NOP, but with
HAVE_STATIC_CALL=n they go boom.  That's guaranteed to cause subtle
bugs.

Make the behavior consistent by making NULL static calls a NOP with
HAVE_STATIC_CALL=n.  This is probably easier than doing the reverse
(making NULL static calls panic with HAVE_STATIC_CALL=y).  And it seems
to match the current use cases better: there are several call sites
which rely on the NOP behavior, whereas no call sites rely on the
crashing behavior.

Signed-off-by: Josh Poimboeuf
---
 arch/arm64/include/asm/static_call.h    |  4 ++
 arch/powerpc/include/asm/static_call.h  |  2 +-
 arch/powerpc/kernel/static_call.c       |  5 +-
 arch/x86/include/asm/static_call.h      |  4 +-
 arch/x86/kernel/static_call.c           | 14 +++--
 include/linux/static_call.h             | 78 +++++++++----------------
 include/linux/static_call_types.h       |  4 ++
 kernel/static_call.c                    |  5 ++
 tools/include/linux/static_call_types.h |  4 ++
 9 files changed, 57 insertions(+), 63 deletions(-)

diff --git a/arch/arm64/include/asm/static_call.h b/arch/arm64/include/asm/static_call.h
index b3489cac7742..02693b404afc 100644
--- a/arch/arm64/include/asm/static_call.h
+++ b/arch/arm64/include/asm/static_call.h
@@ -22,6 +22,10 @@
 	".type " name ", @function \n" \
 	".size " name ", .
- " name " \n") +#define __ARCH_DEFINE_STATIC_CALL_NOP_CFI(name) \ + GEN_CFI_SYM(STATIC_CALL_NOP_CFI(name)); \ + __ARCH_DEFINE_STATIC_CALL_CFI(STATIC_CALL_NOP_CFI_STR(name), "") + #define __ARCH_DEFINE_STATIC_CALL_RET0_CFI(name) \ GEN_CFI_SYM(STATIC_CALL_RET0_CFI(name)); \ __ARCH_DEFINE_STATIC_CALL_CFI(STATIC_CALL_RET0_CFI_STR(name), "mov x0, xzr") diff --git a/arch/powerpc/include/asm/static_call.h b/arch/powerpc/include/asm/static_call.h index de1018cc522b..744435127574 100644 --- a/arch/powerpc/include/asm/static_call.h +++ b/arch/powerpc/include/asm/static_call.h @@ -23,7 +23,7 @@ #define PPC_SCT_DATA 28 /* Offset of label 2 */ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) __PPC_SCT(name, "b " #func) -#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) __PPC_SCT(name, "blr") +#define ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) __PPC_SCT(name, "blr") #define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) __PPC_SCT(name, "b .+20") #endif /* _ASM_POWERPC_STATIC_CALL_H */ diff --git a/arch/powerpc/kernel/static_call.c b/arch/powerpc/kernel/static_call.c index 863a7aa24650..8bfe46654e01 100644 --- a/arch/powerpc/kernel/static_call.c +++ b/arch/powerpc/kernel/static_call.c @@ -8,6 +8,7 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) { int err; bool is_ret0 = (func == __static_call_return0); + bool is_nop = (func == __static_call_nop); unsigned long target = (unsigned long)(is_ret0 ? 
tramp + PPC_SCT_RET0 : func); bool is_short = is_offset_in_branch_range((long)target - (long)tramp); @@ -16,13 +17,13 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) mutex_lock(&text_mutex); - if (func && !is_short) { + if (!is_nop && !is_short) { err = patch_instruction(tramp + PPC_SCT_DATA, ppc_inst(target)); if (err) goto out; } - if (!func) + if (is_nop) err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR())); else if (is_short) err = patch_branch(tramp, target, 0); diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h index 14c6b1862e2e..afea6ceeed23 100644 --- a/arch/x86/include/asm/static_call.h +++ b/arch/x86/include/asm/static_call.h @@ -48,10 +48,10 @@ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)") #ifdef CONFIG_RETHUNK -#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \ +#define ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) \ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk") #else -#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \ +#define ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) \ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; int3; nop; nop; nop") #endif diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c index b70670a98597..27c095c7fc96 100644 --- a/arch/x86/kernel/static_call.c +++ b/arch/x86/kernel/static_call.c @@ -89,7 +89,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, case JCC: if (!func) { - func = __static_call_return; + func = __static_call_return; //FIXME use __static_call_nop()? 
if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) func = x86_return_thunk; } @@ -139,33 +139,35 @@ static void __static_call_validate(u8 *insn, bool tail, bool tramp) BUG(); } -static inline enum insn_type __sc_insn(bool null, bool tail) +static inline enum insn_type __sc_insn(bool nop, bool tail) { /* * Encode the following table without branches: * - * tail null insn + * tail nop insn * -----+-------+------ * 0 | 0 | CALL * 0 | 1 | NOP * 1 | 0 | JMP * 1 | 1 | RET */ - return 2*tail + null; + return 2*tail + nop; } void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) { + bool nop = (func == __static_call_nop); + mutex_lock(&text_mutex); if (tramp) { __static_call_validate(tramp, true, true); - __static_call_transform(tramp, __sc_insn(!func, true), func, false); + __static_call_transform(tramp, __sc_insn(nop, true), func, false); } if (IS_ENABLED(CONFIG_HAVE_STATIC_CALL_INLINE) && site) { __static_call_validate(site, tail, false); - __static_call_transform(site, __sc_insn(!func, tail), func, false); + __static_call_transform(site, __sc_insn(nop, tail), func, false); } mutex_unlock(&text_mutex); diff --git a/include/linux/static_call.h b/include/linux/static_call.h index 50ad928afeb8..65ac01179993 100644 --- a/include/linux/static_call.h +++ b/include/linux/static_call.h @@ -66,24 +66,16 @@ * * Notes on NULL function pointers: * - * Static_call()s support NULL functions, with many of the caveats that - * regular function pointers have. + * A static_call() to a NULL function pointer is a NOP. * - * Clearly calling a NULL function pointer is 'BAD', so too for - * static_call()s (although when HAVE_STATIC_CALL it might not be immediately - * fatal). 
A NULL static_call can be the result of: + * A NULL static call can be the result of: * * DECLARE_STATIC_CALL_NULL(my_static_call, void (*)(int)); * - * which is equivalent to declaring a NULL function pointer with just a - * typename: - * - * void (*my_func_ptr)(int arg1) = NULL; - * - * or using static_call_update() with a NULL function. In both cases the - * HAVE_STATIC_CALL implementation will patch the trampoline with a RET - * instruction, instead of an immediate tail-call JMP. HAVE_STATIC_CALL_INLINE - * architectures can patch the trampoline call to a NOP. + * or using static_call_update() with a NULL function pointer. In both cases + * the HAVE_STATIC_CALL implementation will patch the trampoline with a RET +* instruction, instead of an immediate tail-call JMP. HAVE_STATIC_CALL_INLINE +* architectures can patch the trampoline call to a NOP. * * In all cases, any argument evaluation is unconditional. Unlike a regular * conditional function pointer call: @@ -91,14 +83,7 @@ * if (my_func_ptr) * my_func_ptr(arg1) * - * where the argument evaludation also depends on the pointer value. - * - * When calling a static_call that can be NULL, use: - * - * static_call_cond(name)(arg1); - * - * which will include the required value tests to avoid NULL-pointer - * dereferences. + * where the argument evaluation also depends on the pointer value. 
* * To query which function is currently set to be called, use: * @@ -147,6 +132,7 @@ struct static_call_key { #endif }; +extern void __static_call_nop(void); extern long __static_call_return0(void); #define DECLARE_STATIC_CALL(name, func) \ @@ -166,8 +152,8 @@ extern long __static_call_return0(void); __DEFINE_STATIC_CALL_TRAMP(name, func) #define DEFINE_STATIC_CALL_NULL(name, type) \ - __DEFINE_STATIC_CALL(name, type, NULL); \ - __DEFINE_STATIC_CALL_NULL_TRAMP(name) + __DEFINE_STATIC_CALL(name, type, __STATIC_CALL_NOP(name)); \ + __DEFINE_STATIC_CALL_NOP_TRAMP(name) #define DEFINE_STATIC_CALL_RET0(name, type) \ __DEFINE_STATIC_CALL(name, type, __STATIC_CALL_RET0(name)); \ @@ -212,7 +198,7 @@ extern long __static_call_return0(void); __static_call(name); \ }) -#define static_call_cond(name) (void)__static_call_cond(name) +#define static_call_cond(name) (void)static_call(name) /* Use static_call_ro() to call a read-only-exported static call. */ #define static_call_ro(name) __static_call_ro(name) @@ -228,7 +214,9 @@ extern long __static_call_return0(void); #define static_call_update(name, func) \ ({ \ typeof(&STATIC_CALL_TRAMP(name)) __F = (func); \ - if (__F == (void *)__static_call_return0) \ + if (!__F) \ + __F = __STATIC_CALL_NOP(name); \ + else if (__F == (void *)__static_call_return0) \ __F = __STATIC_CALL_RET0(name); \ __static_call_update(&STATIC_CALL_KEY(name), \ STATIC_CALL_TRAMP_ADDR(name), __F); \ @@ -237,7 +225,9 @@ extern long __static_call_return0(void); #define static_call_query(name) \ ({ \ void *__F = (READ_ONCE(STATIC_CALL_KEY(name).func)); \ - if (__F == __STATIC_CALL_RET0(name)) \ + if (__F == __STATIC_CALL_NOP(name)) \ + __F = NULL; \ + else if (__F == __STATIC_CALL_RET0(name)) \ __F = __static_call_return0; \ __F; \ }) @@ -249,8 +239,8 @@ extern long __static_call_return0(void); #define __DEFINE_STATIC_CALL_TRAMP(name, func) \ ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) -#define __DEFINE_STATIC_CALL_NULL_TRAMP(name) \ - 
ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) +#define __DEFINE_STATIC_CALL_NOP_TRAMP(name) \ + ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) #define __DEFINE_STATIC_CALL_RET0_TRAMP(name) \ ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) @@ -262,7 +252,6 @@ extern long __static_call_return0(void); EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name)) #define __static_call(name) (&STATIC_CALL_TRAMP(name)) -#define __static_call_cond __static_call #define STATIC_CALL_TRAMP_ADDR(name) &STATIC_CALL_TRAMP(name) @@ -276,7 +265,7 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool #else /* !CONFIG_HAVE_STATIC_CALL */ #define __DEFINE_STATIC_CALL_TRAMP(name, func) -#define __DEFINE_STATIC_CALL_NULL_TRAMP(name) +#define __DEFINE_STATIC_CALL_NOP_TRAMP(name) #define __DEFINE_STATIC_CALL_RET0_TRAMP(name) #define __EXPORT_STATIC_CALL_TRAMP(name) #define __EXPORT_STATIC_CALL_TRAMP_GPL(name) @@ -284,27 +273,6 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool #define __static_call(name) \ ((typeof(STATIC_CALL_TRAMP(name))*)(STATIC_CALL_KEY(name).func)) -static inline void __static_call_nop(void) { } -/* - * This horrific hack takes care of two things: - * - * - it ensures the compiler will only load the function pointer ONCE, - * which avoids a reload race. - * - * - it ensures the argument evaluation is unconditional, similar - * to the HAVE_STATIC_CALL variant. 
- * - * Sadly current GCC/Clang (10 for both) do not optimize this properly - * and will emit an indirect call for the NULL case :-( - */ -#define __static_call_cond(name) \ -({ \ - void *func = READ_ONCE(STATIC_CALL_KEY(name).func); \ - if (!func) \ - func = &__static_call_nop; \ - (typeof(STATIC_CALL_TRAMP(name))*)func; \ -}) - #define STATIC_CALL_TRAMP_ADDR(name) NULL static inline @@ -343,22 +311,28 @@ static inline void static_call_force_reinit(void) {} #include +#define __STATIC_CALL_NOP(name) STATIC_CALL_NOP_CFI(name) #define __STATIC_CALL_RET0(name) STATIC_CALL_RET0_CFI(name) #define __DECLARE_STATIC_CALL_CFI(name, func) \ + extern typeof(func) STATIC_CALL_NOP_CFI(name); \ extern typeof(func) STATIC_CALL_RET0_CFI(name) #define __DEFINE_STATIC_CALL_CFI(name) \ + __ARCH_DEFINE_STATIC_CALL_NOP_CFI(name); \ __ARCH_DEFINE_STATIC_CALL_RET0_CFI(name) #define __EXPORT_STATIC_CALL_CFI(name) \ + EXPORT_SYMBOL(STATIC_CALL_NOP_CFI(name)); \ EXPORT_SYMBOL(STATIC_CALL_RET0_CFI(name)) #define __EXPORT_STATIC_CALL_CFI_GPL(name) \ + EXPORT_SYMBOL_GPL(STATIC_CALL_NOP_CFI(name)); \ EXPORT_SYMBOL_GPL(STATIC_CALL_RET0_CFI(name)) #else /* ! 
CONFIG_CFI_WITHOUT_STATIC_CALL */ +#define __STATIC_CALL_NOP(name) (void *)__static_call_nop #define __STATIC_CALL_RET0(name) (void *)__static_call_return0 #define __DECLARE_STATIC_CALL_CFI(name, func) diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h index 72732af51cba..2e2481c3f54e 100644 --- a/include/linux/static_call_types.h +++ b/include/linux/static_call_types.h @@ -22,6 +22,10 @@ #define STATIC_CALL_TRAMP(name) __PASTE(STATIC_CALL_TRAMP_PREFIX, name) #define STATIC_CALL_TRAMP_STR(name) __stringify(STATIC_CALL_TRAMP(name)) +#define STATIC_CALL_NOP_CFI_PREFIX __SCN__ +#define STATIC_CALL_NOP_CFI(name) __PASTE(STATIC_CALL_NOP_CFI_PREFIX, name) +#define STATIC_CALL_NOP_CFI_STR(name) __stringify(STATIC_CALL_NOP_CFI(name)) + #define STATIC_CALL_RET0_CFI_PREFIX __SCR__ #define STATIC_CALL_RET0_CFI(name) __PASTE(STATIC_CALL_RET0_CFI_PREFIX, name) #define STATIC_CALL_RET0_CFI_STR(name) __stringify(STATIC_CALL_RET0_CFI(name)) diff --git a/kernel/static_call.c b/kernel/static_call.c index 090ecf5d34b4..20bf34bc3e2a 100644 --- a/kernel/static_call.c +++ b/kernel/static_call.c @@ -9,6 +9,11 @@ long __static_call_return0(void) } EXPORT_SYMBOL_GPL(__static_call_return0); +void __static_call_nop(void) +{ +} +EXPORT_SYMBOL_GPL(__static_call_nop); + #if defined(CONFIG_HAVE_STATIC_CALL) && !defined(CONFIG_HAVE_STATIC_CALL_INLINE) void __static_call_update(struct static_call_key *key, void *tramp, void *func) { diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h index 72732af51cba..2e2481c3f54e 100644 --- a/tools/include/linux/static_call_types.h +++ b/tools/include/linux/static_call_types.h @@ -22,6 +22,10 @@ #define STATIC_CALL_TRAMP(name) __PASTE(STATIC_CALL_TRAMP_PREFIX, name) #define STATIC_CALL_TRAMP_STR(name) __stringify(STATIC_CALL_TRAMP(name)) +#define STATIC_CALL_NOP_CFI_PREFIX __SCN__ +#define STATIC_CALL_NOP_CFI(name) __PASTE(STATIC_CALL_NOP_CFI_PREFIX, name) +#define 
STATIC_CALL_NOP_CFI_STR(name) __stringify(STATIC_CALL_NOP_CFI(name)) + #define STATIC_CALL_RET0_CFI_PREFIX __SCR__ #define STATIC_CALL_RET0_CFI(name) __PASTE(STATIC_CALL_RET0_CFI_PREFIX, name) #define STATIC_CALL_RET0_CFI_STR(name) __stringify(STATIC_CALL_RET0_CFI(name))

From patchwork Wed Mar 22 04:00:16 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73184
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland, Jason Baron, Steven Rostedt, Ard Biesheuvel, Christophe Leroy, Paolo Bonzini, Sean Christopherson, Sami Tolvanen, Nick Desaulniers, Will McVicker, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 10/11] static_call: Remove static_call_cond()
Date: Tue, 21 Mar 2023 21:00:16 -0700
Message-Id: <3916caa1dcd114301a49beafa5030eca396745c1.1679456900.git.jpoimboe@kernel.org>

A static call to a NULL pointer is now a NOP for all configs.  There's no
longer a need for an explicit static_call_cond().  Remove it and convert its
usages to plain static_call().
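[Editorial note] The semantics this removal relies on can be sketched in plain userspace C. This is a hypothetical model, not the kernel implementation: the names sc_slot, sc_update, sc_query, and sc_nop are illustration-only stand-ins for the static call key, static_call_update(), static_call_query(), and __static_call_nop(). The idea: updates translate NULL into a do-nothing thunk, so every call site may call unconditionally, and queries translate the thunk back to NULL.

```c
#include <assert.h>
#include <stddef.h>

/* A "static call" slot modeled as a function pointer that is never NULL. */
typedef void (*sc_func_t)(int *counter);

/* The NOP thunk: calling it is safe and does nothing. */
static void sc_nop(int *counter) { (void)counter; }

/* The slot starts out as a NOP, mirroring DECLARE_STATIC_CALL_NULL(). */
static sc_func_t sc_slot = sc_nop;

/* Mirror of static_call_update(): NULL is mapped to the NOP thunk,
 * so callers never have to test the pointer before calling. */
static void sc_update(sc_func_t func)
{
	sc_slot = func ? func : sc_nop;
}

/* Mirror of static_call_query(): the NOP thunk reads back as NULL. */
static sc_func_t sc_query(void)
{
	return sc_slot == sc_nop ? NULL : sc_slot;
}

/* An example target function. */
static void sc_increment(int *counter) { (*counter)++; }
```

With this shape, a call through the slot is always valid, which is exactly why an explicit conditional wrapper at the call site becomes redundant.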
Signed-off-by: Josh Poimboeuf
---
 arch/x86/events/core.c                    | 24 +++++++++++------------
 arch/x86/include/asm/kvm-x86-ops.h        |  3 +--
 arch/x86/include/asm/kvm-x86-pmu-ops.h    |  3 +--
 arch/x86/include/asm/kvm_host.h           |  4 ++--
 arch/x86/kvm/irq.c                        |  2 +-
 arch/x86/kvm/lapic.c                      | 22 ++++++++++-----------
 arch/x86/kvm/pmu.c                        |  4 ++--
 arch/x86/kvm/x86.c                        | 24 +++++++++++------------
 include/linux/static_call.h               |  5 +----
 security/keys/trusted-keys/trusted_core.c |  2 +-
 10 files changed, 44 insertions(+), 49 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index d096b04bf80e..c94537501091 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -995,7 +995,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign) if (cpuc->txn_flags & PERF_PMU_TXN_ADD) n0 -= cpuc->n_txn; - static_call_cond(x86_pmu_start_scheduling)(cpuc); + static_call(x86_pmu_start_scheduling)(cpuc); for (i = 0, wmin = X86_PMC_IDX_MAX, wmax = 0; i < n; i++) { c = cpuc->event_constraint[i]; @@ -1094,7 +1094,7 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign) */ if (!unsched && assign) { for (i = 0; i < n; i++) - static_call_cond(x86_pmu_commit_scheduling)(cpuc, i, assign[i]); + static_call(x86_pmu_commit_scheduling)(cpuc, i, assign[i]); } else { for (i = n0; i < n; i++) { e = cpuc->event_list[i]; @@ -1102,13 +1102,13 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign) /* * release events that failed scheduling */ - static_call_cond(x86_pmu_put_event_constraints)(cpuc, e); + static_call(x86_pmu_put_event_constraints)(cpuc, e); cpuc->event_constraint[i] = NULL; } } - static_call_cond(x86_pmu_stop_scheduling)(cpuc); + static_call(x86_pmu_stop_scheduling)(cpuc); return unsched ?
-EINVAL : 0; } @@ -1221,7 +1221,7 @@ static inline void x86_assign_hw_event(struct perf_event *event, hwc->last_cpu = smp_processor_id(); hwc->last_tag = ++cpuc->tags[i]; - static_call_cond(x86_pmu_assign)(event, idx); + static_call(x86_pmu_assign)(event, idx); switch (hwc->idx) { case INTEL_PMC_IDX_FIXED_BTS: @@ -1399,7 +1399,7 @@ int x86_perf_event_set_period(struct perf_event *event) if (left > x86_pmu.max_period) left = x86_pmu.max_period; - static_call_cond(x86_pmu_limit_period)(event, &left); + static_call(x86_pmu_limit_period)(event, &left); this_cpu_write(pmc_prev_left[idx], left); @@ -1487,7 +1487,7 @@ static int x86_pmu_add(struct perf_event *event, int flags) * This is before x86_pmu_enable() will call x86_pmu_start(), * so we enable LBRs before an event needs them etc.. */ - static_call_cond(x86_pmu_add)(event); + static_call(x86_pmu_add)(event); ret = 0; out: @@ -1640,7 +1640,7 @@ static void x86_pmu_del(struct perf_event *event, int flags) if (i >= cpuc->n_events - cpuc->n_added) --cpuc->n_added; - static_call_cond(x86_pmu_put_event_constraints)(cpuc, event); + static_call(x86_pmu_put_event_constraints)(cpuc, event); /* Delete the array entry. */ while (++i < cpuc->n_events) { @@ -1660,7 +1660,7 @@ static void x86_pmu_del(struct perf_event *event, int flags) * This is after x86_pmu_stop(); so we disable LBRs after any * event can need them etc.. 
*/ - static_call_cond(x86_pmu_del)(event); + static_call(x86_pmu_del)(event); } int x86_pmu_handle_irq(struct pt_regs *regs) @@ -2627,13 +2627,13 @@ static const struct attribute_group *x86_pmu_attr_groups[] = { static void x86_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in) { - static_call_cond(x86_pmu_sched_task)(pmu_ctx, sched_in); + static_call(x86_pmu_sched_task)(pmu_ctx, sched_in); } static void x86_pmu_swap_task_ctx(struct perf_event_pmu_context *prev_epc, struct perf_event_pmu_context *next_epc) { - static_call_cond(x86_pmu_swap_task_ctx)(prev_epc, next_epc); + static_call(x86_pmu_swap_task_ctx)(prev_epc, next_epc); } void perf_check_microcode(void) @@ -2672,7 +2672,7 @@ static bool x86_pmu_filter(struct pmu *pmu, int cpu) { bool ret = false; - static_call_cond(x86_pmu_filter)(pmu, cpu, &ret); + static_call(x86_pmu_filter)(pmu, cpu, &ret); return ret; } diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index 8dc345cc6318..2f0bfd910637 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -9,8 +9,7 @@ BUILD_BUG_ON(1) * "static_call_update()" calls. * * KVM_X86_OP_OPTIONAL() can be used for those functions that can have - * a NULL definition, for example if "static_call_cond()" will be used - * at the call sites. KVM_X86_OP_OPTIONAL_RET0() can be used likewise + * a NULL definition. KVM_X86_OP_OPTIONAL_RET0() can be used likewise * to make a definition optional, but in this case the default will * be __static_call_return0. */ diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h index c17e3e96fc1d..6815319c4ff3 100644 --- a/arch/x86/include/asm/kvm-x86-pmu-ops.h +++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h @@ -9,8 +9,7 @@ BUILD_BUG_ON(1) * "static_call_update()" calls. 
* * KVM_X86_PMU_OP_OPTIONAL() can be used for those functions that can have - * a NULL definition, for example if "static_call_cond()" will be used - * at the call sites. + * a NULL definition. */ KVM_X86_PMU_OP(hw_event_available) KVM_X86_PMU_OP(pmc_is_enabled) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 808c292ad3f4..1dfba499d3e5 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2172,12 +2172,12 @@ static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq) static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) { - static_call_cond(kvm_x86_vcpu_blocking)(vcpu); + static_call(kvm_x86_vcpu_blocking)(vcpu); } static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) { - static_call_cond(kvm_x86_vcpu_unblocking)(vcpu); + static_call(kvm_x86_vcpu_unblocking)(vcpu); } static inline int kvm_cpu_get_apicid(int mps_cpu) diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c index b2c397dd2bc6..4f9e090c9d42 100644 --- a/arch/x86/kvm/irq.c +++ b/arch/x86/kvm/irq.c @@ -155,7 +155,7 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu) { __kvm_migrate_apic_timer(vcpu); __kvm_migrate_pit_timer(vcpu); - static_call_cond(kvm_x86_migrate_timers)(vcpu); + static_call(kvm_x86_migrate_timers)(vcpu); } bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args) diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index e542cf285b51..d5f7e829d975 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -681,8 +681,8 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic) if (unlikely(apic->apicv_active)) { /* need to update RVI */ kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR); - static_call_cond(kvm_x86_hwapic_irr_update)(apic->vcpu, - apic_find_highest_irr(apic)); + static_call(kvm_x86_hwapic_irr_update)(apic->vcpu, + apic_find_highest_irr(apic)); } else { apic->irr_pending = false; kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR); @@ -708,7 
+708,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic) * just set SVI. */ if (unlikely(apic->apicv_active)) - static_call_cond(kvm_x86_hwapic_isr_update)(vec); + static_call(kvm_x86_hwapic_isr_update)(vec); else { ++apic->isr_count; BUG_ON(apic->isr_count > MAX_APIC_VECTOR); @@ -753,7 +753,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic) * and must be left alone. */ if (unlikely(apic->apicv_active)) - static_call_cond(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic)); + static_call(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic)); else { --apic->isr_count; BUG_ON(apic->isr_count < 0); @@ -2519,7 +2519,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value) if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE)) { kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu); - static_call_cond(kvm_x86_set_virtual_apic_mode)(vcpu); + static_call(kvm_x86_set_virtual_apic_mode)(vcpu); } apic->base_address = apic->vcpu->arch.apic_base & @@ -2682,9 +2682,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event) vcpu->arch.pv_eoi.msr_val = 0; apic_update_ppr(apic); if (apic->apicv_active) { - static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu); - static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, -1); - static_call_cond(kvm_x86_hwapic_isr_update)(-1); + static_call(kvm_x86_apicv_post_state_restore)(vcpu); + static_call(kvm_x86_hwapic_irr_update)(vcpu, -1); + static_call(kvm_x86_hwapic_isr_update)(-1); } vcpu->arch.apic_arb_prio = 0; @@ -2961,9 +2961,9 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s) kvm_lapic_set_reg(apic, APIC_TMCCT, 0); kvm_apic_update_apicv(vcpu); if (apic->apicv_active) { - static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu); - static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic)); - static_call_cond(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic)); + static_call(kvm_x86_apicv_post_state_restore)(vcpu); + 
static_call(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic)); + static_call(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic)); } kvm_make_request(KVM_REQ_EVENT, vcpu); if (ioapic_in_kernel(vcpu->kvm)) diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c index 612e6c70ce2e..6accb46295a3 100644 --- a/arch/x86/kvm/pmu.c +++ b/arch/x86/kvm/pmu.c @@ -552,7 +552,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data) void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu) { if (lapic_in_kernel(vcpu)) { - static_call_cond(kvm_x86_pmu_deliver_pmi)(vcpu); + static_call(kvm_x86_pmu_deliver_pmi)(vcpu); kvm_apic_local_deliver(vcpu->arch.apic, APIC_LVTPC); } } @@ -632,7 +632,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu) pmc_stop_counter(pmc); } - static_call_cond(kvm_x86_pmu_cleanup)(vcpu); + static_call(kvm_x86_pmu_cleanup)(vcpu); bitmap_zero(pmu->pmc_in_use, X86_PMC_IDX_MAX); } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 7713420abab0..fcf845fc5770 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4845,7 +4845,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s) { - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + static_call(kvm_x86_sync_pir_to_irr)(vcpu); return kvm_apic_get_state(vcpu, s); } @@ -8948,7 +8948,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, kvm_rip_write(vcpu, ctxt->eip); if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP))) r = kvm_vcpu_do_singlestep(vcpu); - static_call_cond(kvm_x86_update_emulated_instruction)(vcpu); + static_call(kvm_x86_update_emulated_instruction)(vcpu); __kvm_set_rflags(vcpu, ctxt->eflags); } @@ -10307,7 +10307,7 @@ static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu) if (irqchip_split(vcpu->kvm)) kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors); else { - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + 
static_call(kvm_x86_sync_pir_to_irr)(vcpu); if (ioapic_in_kernel(vcpu->kvm)) kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); } @@ -10329,11 +10329,11 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu) bitmap_or((ulong *)eoi_exit_bitmap, vcpu->arch.ioapic_handled_vectors, to_hv_synic(vcpu)->vec_bitmap, 256); - static_call_cond(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap); + static_call(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap); return; } - static_call_cond(kvm_x86_load_eoi_exitmap)( + static_call(kvm_x86_load_eoi_exitmap)( vcpu, (u64 *)vcpu->arch.ioapic_handled_vectors); } @@ -10353,7 +10353,7 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, void kvm_arch_guest_memory_reclaimed(struct kvm *kvm) { - static_call_cond(kvm_x86_guest_memory_reclaimed)(kvm); + static_call(kvm_x86_guest_memory_reclaimed)(kvm); } static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) @@ -10361,7 +10361,7 @@ static void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) if (!lapic_in_kernel(vcpu)) return; - static_call_cond(kvm_x86_set_apic_access_page_addr)(vcpu); + static_call(kvm_x86_set_apic_access_page_addr)(vcpu); } void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu) @@ -10603,7 +10603,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) * i.e. they can post interrupts even if APICv is temporarily disabled. 
*/ if (kvm_lapic_enabled(vcpu)) - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + static_call(kvm_x86_sync_pir_to_irr)(vcpu); if (kvm_vcpu_exit_request(vcpu)) { vcpu->mode = OUTSIDE_GUEST_MODE; @@ -10654,7 +10654,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) break; if (kvm_lapic_enabled(vcpu)) - static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu); + static_call(kvm_x86_sync_pir_to_irr)(vcpu); if (unlikely(kvm_vcpu_exit_request(vcpu))) { exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED; @@ -11392,7 +11392,7 @@ static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs, *mmu_reset_needed |= kvm_read_cr3(vcpu) != sregs->cr3; vcpu->arch.cr3 = sregs->cr3; kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3); - static_call_cond(kvm_x86_post_set_cr3)(vcpu, sregs->cr3); + static_call(kvm_x86_post_set_cr3)(vcpu, sregs->cr3); kvm_set_cr8(vcpu, sregs->cr8); @@ -12361,7 +12361,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm) mutex_unlock(&kvm->slots_lock); } kvm_unload_vcpu_mmus(kvm); - static_call_cond(kvm_x86_vm_destroy)(kvm); + static_call(kvm_x86_vm_destroy)(kvm); kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1)); kvm_pic_destroy(kvm); kvm_ioapic_destroy(kvm); @@ -13049,7 +13049,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu) void kvm_arch_start_assignment(struct kvm *kvm) { if (atomic_inc_return(&kvm->arch.assigned_device_count) == 1) - static_call_cond(kvm_x86_pi_start_assignment)(kvm); + static_call(kvm_x86_pi_start_assignment)(kvm); } EXPORT_SYMBOL_GPL(kvm_arch_start_assignment); diff --git a/include/linux/static_call.h b/include/linux/static_call.h index 65ac01179993..d5254107ccf4 100644 --- a/include/linux/static_call.h +++ b/include/linux/static_call.h @@ -22,7 +22,6 @@ * __static_call_return0; * * static_call(name)(args...); - * static_call_cond(name)(args...); * static_call_ro(name)(args...); * static_call_update(name, func); * static_call_query(name); @@ -92,7 +91,7 @@ * * DEFINE_STATIC_CALL_RET0 
/ __static_call_return0: * - * Just like how DEFINE_STATIC_CALL_NULL() / static_call_cond() optimize the + * Just like how DEFINE_STATIC_CALL_NULL() optimizes the * conditional void function call, DEFINE_STATIC_CALL_RET0 / * __static_call_return0 optimize the do nothing return 0 function. * @@ -198,8 +197,6 @@ extern long __static_call_return0(void); __static_call(name); \ }) -#define static_call_cond(name) (void)static_call(name) - /* Use static_call_ro() to call a read-only-exported static call. */ #define static_call_ro(name) __static_call_ro(name) diff --git a/security/keys/trusted-keys/trusted_core.c b/security/keys/trusted-keys/trusted_core.c index c6fc50d67214..b7920482ebcb 100644 --- a/security/keys/trusted-keys/trusted_core.c +++ b/security/keys/trusted-keys/trusted_core.c @@ -388,7 +388,7 @@ static int __init init_trusted(void) static void __exit cleanup_trusted(void) { - static_call_cond(trusted_key_exit)(); + static_call(trusted_key_exit)(); } late_initcall(init_trusted);

From patchwork Wed Mar 22 04:00:17 2023
X-Patchwork-Submitter: Josh Poimboeuf
X-Patchwork-Id: 73188
From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Mark Rutland, Jason Baron, Steven Rostedt, Ard Biesheuvel, Christophe Leroy, Paolo Bonzini, Sean Christopherson, Sami Tolvanen, Nick Desaulniers, Will McVicker, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 11/11] static_call: Remove DEFINE_STATIC_CALL_RET0()
Date: Tue, 21 Mar 2023 21:00:17 -0700
Message-Id: <8aab02492c2bf512c7ffe458e41acc1b930ed2dc.1679456900.git.jpoimboe@kernel.org>

NULL and RET0 static calls are both slightly different ways of nopping a static call. A not-insignificant amount of code and complexity is spent maintaining them separately. It's also somewhat tricky for the user who has to try to remember to use the correct one for the given function type.

Simplify things all around by just combining them, such that NULL static calls always return 0. While it doesn't necessarily make sense for void-return functions to return 0, it's pretty much harmless. The return value register is already callee-clobbered, and an extra "xor %eax, %eax" shouldn't affect performance (knock on wood).

This "do nothing return 0" default should work for the vast majority of NULL cases.
Otherwise it can be easily overridden with a user-specified function which panics or returns 0xdeadbeef or does whatever one wants. This simplifies the static call code and also tends to help simplify users' code as well. Signed-off-by: Josh Poimboeuf --- arch/arm64/include/asm/static_call.h | 4 -- arch/powerpc/include/asm/static_call.h | 1 - arch/powerpc/kernel/irq.c | 2 +- arch/powerpc/kernel/static_call.c | 7 +- arch/x86/events/amd/core.c | 2 +- arch/x86/events/core.c | 5 +- arch/x86/include/asm/kvm-x86-ops.h | 3 +- arch/x86/include/asm/static_call.h | 13 +--- arch/x86/kernel/alternative.c | 6 -- arch/x86/kernel/static_call.c | 89 ++----------------------- arch/x86/kvm/x86.c | 4 +- include/linux/static_call.h | 65 +++++------------- include/linux/static_call_types.h | 4 -- kernel/events/core.c | 15 ++--- kernel/sched/core.c | 10 +-- kernel/static_call.c | 5 -- tools/include/linux/static_call_types.h | 4 -- 17 files changed, 39 insertions(+), 200 deletions(-) diff --git a/arch/arm64/include/asm/static_call.h b/arch/arm64/include/asm/static_call.h index 02693b404afc..b3489cac7742 100644 --- a/arch/arm64/include/asm/static_call.h +++ b/arch/arm64/include/asm/static_call.h @@ -22,10 +22,6 @@ ".type " name ", @function \n" \ ".size " name ", . 
- " name " \n") -#define __ARCH_DEFINE_STATIC_CALL_NOP_CFI(name) \ - GEN_CFI_SYM(STATIC_CALL_NOP_CFI(name)); \ - __ARCH_DEFINE_STATIC_CALL_CFI(STATIC_CALL_NOP_CFI_STR(name), "") - #define __ARCH_DEFINE_STATIC_CALL_RET0_CFI(name) \ GEN_CFI_SYM(STATIC_CALL_RET0_CFI(name)); \ __ARCH_DEFINE_STATIC_CALL_CFI(STATIC_CALL_RET0_CFI_STR(name), "mov x0, xzr") diff --git a/arch/powerpc/include/asm/static_call.h b/arch/powerpc/include/asm/static_call.h index 744435127574..0b17fc551157 100644 --- a/arch/powerpc/include/asm/static_call.h +++ b/arch/powerpc/include/asm/static_call.h @@ -23,7 +23,6 @@ #define PPC_SCT_DATA 28 /* Offset of label 2 */ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) __PPC_SCT(name, "b " #func) -#define ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) __PPC_SCT(name, "blr") #define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) __PPC_SCT(name, "b .+20") #endif /* _ASM_POWERPC_STATIC_CALL_H */ diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c index c9535f2760b5..320e1a41abd6 100644 --- a/arch/powerpc/kernel/irq.c +++ b/arch/powerpc/kernel/irq.c @@ -220,7 +220,7 @@ static __always_inline void call_do_softirq(const void *sp) } #endif -DEFINE_STATIC_CALL_RET0(ppc_get_irq, *ppc_md.get_irq); +DEFINE_STATIC_CALL_NULL(ppc_get_irq, *ppc_md.get_irq); static void __do_irq(struct pt_regs *regs, unsigned long oldsp) { diff --git a/arch/powerpc/kernel/static_call.c b/arch/powerpc/kernel/static_call.c index 8bfe46654e01..db3116b2d8a8 100644 --- a/arch/powerpc/kernel/static_call.c +++ b/arch/powerpc/kernel/static_call.c @@ -8,7 +8,6 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) { int err; bool is_ret0 = (func == __static_call_return0); - bool is_nop = (func == __static_call_nop); unsigned long target = (unsigned long)(is_ret0 ? 
tramp + PPC_SCT_RET0 : func); bool is_short = is_offset_in_branch_range((long)target - (long)tramp); @@ -17,15 +16,13 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) mutex_lock(&text_mutex); - if (!is_nop && !is_short) { + if (!is_short) { err = patch_instruction(tramp + PPC_SCT_DATA, ppc_inst(target)); if (err) goto out; } - if (is_nop) - err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR())); - else if (is_short) + if (is_short) err = patch_branch(tramp, target, 0); else err = patch_instruction(tramp, ppc_inst(PPC_RAW_NOP())); diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c index 8c45b198b62f..3c545595bfeb 100644 --- a/arch/x86/events/amd/core.c +++ b/arch/x86/events/amd/core.c @@ -330,7 +330,7 @@ static inline bool amd_is_pair_event_code(struct hw_perf_event *hwc) } } -DEFINE_STATIC_CALL_RET0(amd_pmu_branch_hw_config, *x86_pmu.hw_config); +DEFINE_STATIC_CALL_NULL(amd_pmu_branch_hw_config, *x86_pmu.hw_config); static int amd_core_hw_config(struct perf_event *event) { diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index c94537501091..dfeaeee34acf 100644 --- a/arch/x86/events/core.c +++ b/arch/x86/events/core.c @@ -96,7 +96,7 @@ DEFINE_STATIC_CALL_NULL(x86_pmu_filter, *x86_pmu.filter); * This one is magic, it will get called even when PMU init fails (because * there is no PMU), in which case it should simply return NULL. 
*/ -DEFINE_STATIC_CALL_RET0(x86_pmu_guest_get_msrs, *x86_pmu.guest_get_msrs); +DEFINE_STATIC_CALL_NULL(x86_pmu_guest_get_msrs, *x86_pmu.guest_get_msrs); u64 __read_mostly hw_cache_event_ids [PERF_COUNT_HW_CACHE_MAX] @@ -2125,9 +2125,6 @@ static int __init init_hw_perf_events(void) if (!x86_pmu.read) x86_pmu.read = _x86_pmu_read; - if (!x86_pmu.guest_get_msrs) - x86_pmu.guest_get_msrs = (void *)&__static_call_return0; - if (!x86_pmu.set_period) x86_pmu.set_period = x86_perf_event_set_period; diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index 2f0bfd910637..6e1259ed1014 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -10,8 +10,7 @@ BUILD_BUG_ON(1) * * KVM_X86_OP_OPTIONAL() can be used for those functions that can have * a NULL definition. KVM_X86_OP_OPTIONAL_RET0() can be used likewise - * to make a definition optional, but in this case the default will - * be __static_call_return0. + * to make a definition optional. */ KVM_X86_OP(check_processor_compatibility) KVM_X86_OP(hardware_enable) diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h index afea6ceeed23..21ad48988f6e 100644 --- a/arch/x86/include/asm/static_call.h +++ b/arch/x86/include/asm/static_call.h @@ -29,8 +29,7 @@ * ud1 %esp, %ecx * * That trailing #UD provides both a speculation stop and serves as a unique - * 3 byte signature identifying static call trampolines. Also see tramp_ud[] - * and __static_call_fixup(). + * 3 byte signature identifying static call trampolines. Also see tramp_ud[]. */ #define __ARCH_DEFINE_STATIC_CALL_TRAMP(name, insns) \ asm(".pushsection .static_call.text, \"ax\" \n" \ @@ -47,17 +46,7 @@ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. 
+ 4)") -#ifdef CONFIG_RETHUNK -#define ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) \ - __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk") -#else -#define ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) \ - __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; int3; nop; nop; nop") -#endif - #define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) \ ARCH_DEFINE_STATIC_CALL_TRAMP(name, __static_call_return0) -extern bool __static_call_fixup(void *tramp, u8 op, void *dest); - #endif /* _ASM_STATIC_CALL_H */ diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index f615e0cb6d93..4388dc9942ca 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -624,12 +624,6 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end) if (op == JMP32_INSN_OPCODE) dest = addr + insn.length + insn.immediate.value; - if (__static_call_fixup(addr, op, dest) || - WARN_ONCE(dest != &__x86_return_thunk, - "missing return thunk: %pS-%pS: %*ph", - addr, dest, 5, addr)) - continue; - DPRINTK("return thunk at: %pS (%px) len: %d to: %pS", addr, addr, insn.length, addr + insn.length + insn.immediate.value); diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c index 27c095c7fc96..d914167fbb4e 100644 --- a/arch/x86/kernel/static_call.c +++ b/arch/x86/kernel/static_call.c @@ -6,10 +6,8 @@ enum insn_type { CALL = 0, /* site call */ - NOP = 1, /* site cond-call */ - JMP = 2, /* tramp / site tail-call */ - RET = 3, /* tramp / site cond-tail-call */ - JCC = 4, + JMP = 1, /* tramp / site tail-call */ + JCC = 2, }; /* @@ -24,8 +22,6 @@ static const u8 tramp_ud[] = { 0x0f, 0xb9, 0xcc }; */ static const u8 xor5rax[] = { 0x2e, 0x2e, 0x2e, 0x31, 0xc0 }; -static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc }; - static u8 __is_Jcc(u8 *insn) /* Jcc.d32 */ { u8 ret = 0; @@ -39,17 +35,6 @@ static u8 __is_Jcc(u8 *insn) /* Jcc.d32 */ return ret; } -extern void __static_call_return(void); - -asm (".global __static_call_return\n\t" - 
".type __static_call_return, @function\n\t" - ASM_FUNC_ALIGN "\n\t" - "__static_call_return:\n\t" - ANNOTATE_NOENDBR - ANNOTATE_RETPOLINE_SAFE - "ret; int3\n\t" - ".size __static_call_return, . - __static_call_return \n\t"); - static void __ref __static_call_transform(void *insn, enum insn_type type, void *func, bool modinit) { @@ -58,7 +43,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, const void *code; u8 op, buf[6]; - if ((type == JMP || type == RET) && (op = __is_Jcc(insn))) + if (type == JMP && (op = __is_Jcc(insn))) type = JCC; switch (type) { @@ -72,28 +57,11 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, break; - case NOP: - code = x86_nops[5]; - break; - case JMP: code = text_gen_insn(JMP32_INSN_OPCODE, insn, func); break; - case RET: - if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) - code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk); - else - code = &retinsn; - break; - case JCC: - if (!func) { - func = __static_call_return; //FIXME use __static_call_nop()? 
- if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) - func = x86_return_thunk; - } - buf[0] = 0x0f; __text_gen_insn(buf+1, op, insn+1, func, 5); code = buf; @@ -122,12 +90,10 @@ static void __static_call_validate(u8 *insn, bool tail, bool tramp) if (tail) { if (opcode == JMP32_INSN_OPCODE || - opcode == RET_INSN_OPCODE || __is_Jcc(insn)) return; } else { if (opcode == CALL_INSN_OPCODE || - !memcmp(insn, x86_nops[5], 5) || !memcmp(insn, xor5rax, 5)) return; } @@ -139,65 +105,22 @@ static void __static_call_validate(u8 *insn, bool tail, bool tramp) BUG(); } -static inline enum insn_type __sc_insn(bool nop, bool tail) -{ - /* - * Encode the following table without branches: - * - * tail nop insn - * -----+-------+------ - * 0 | 0 | CALL - * 0 | 1 | NOP - * 1 | 0 | JMP - * 1 | 1 | RET - */ - return 2*tail + nop; -} - void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) { - bool nop = (func == __static_call_nop); + enum insn_type insn = tail ? JMP : CALL; mutex_lock(&text_mutex); if (tramp) { __static_call_validate(tramp, true, true); - __static_call_transform(tramp, __sc_insn(nop, true), func, false); + __static_call_transform(tramp, insn, func, false); } if (IS_ENABLED(CONFIG_HAVE_STATIC_CALL_INLINE) && site) { __static_call_validate(site, tail, false); - __static_call_transform(site, __sc_insn(nop, tail), func, false); + __static_call_transform(site, insn, func, false); } mutex_unlock(&text_mutex); } EXPORT_SYMBOL_GPL(arch_static_call_transform); - -#ifdef CONFIG_RETHUNK -/* - * This is called by apply_returns() to fix up static call trampolines, - * specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as - * having a return trampoline. - * - * The problem is that static_call() is available before determining - * X86_FEATURE_RETHUNK and, by implication, running alternatives. - * - * This means that __static_call_transform() above can have overwritten the - * return trampoline and we now need to fix things up to be consistent. 
- */ -bool __static_call_fixup(void *tramp, u8 op, void *dest) -{ - if (memcmp(tramp+5, tramp_ud, 3)) { - /* Not a trampoline site, not our problem. */ - return false; - } - - mutex_lock(&text_mutex); - if (op == RET_INSN_OPCODE || dest == &__x86_return_thunk) - __static_call_transform(tramp, RET, NULL, true); - mutex_unlock(&text_mutex); - - return true; -} -#endif diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index fcf845fc5770..324676d738c0 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -9321,9 +9321,7 @@ static inline void kvm_ops_update(struct kvm_x86_init_ops *ops) #define KVM_X86_OP(func) \ WARN_ON(!kvm_x86_ops.func); __KVM_X86_OP(func) #define KVM_X86_OP_OPTIONAL __KVM_X86_OP -#define KVM_X86_OP_OPTIONAL_RET0(func) \ - static_call_update(kvm_x86_##func, (void *)kvm_x86_ops.func ? : \ - (void *)__static_call_return0); +#define KVM_X86_OP_OPTIONAL_RET0(func) __KVM_X86_OP #include #undef __KVM_X86_OP diff --git a/include/linux/static_call.h b/include/linux/static_call.h index d5254107ccf4..625b3217480f 100644 --- a/include/linux/static_call.h +++ b/include/linux/static_call.h @@ -17,9 +17,6 @@ * DECLARE_STATIC_CALL(name, func); * DEFINE_STATIC_CALL(name, func); * DEFINE_STATIC_CALL_NULL(name, typename); - * DEFINE_STATIC_CALL_RET0(name, typename); - * - * __static_call_return0; * * static_call(name)(args...); * static_call_ro(name)(args...); @@ -65,19 +62,26 @@ * * Notes on NULL function pointers: * - * A static_call() to a NULL function pointer is a NOP. + * A static_call() to a NULL function pointer is equivalent to a call to a + * "do nothing return 0" function. * * A NULL static call can be the result of: * * DECLARE_STATIC_CALL_NULL(my_static_call, void (*)(int)); * - * or using static_call_update() with a NULL function pointer. In both cases - * the HAVE_STATIC_CALL implementation will patch the trampoline with a RET -* instruction, instead of an immediate tail-call JMP. 
HAVE_STATIC_CALL_INLINE -* architectures can patch the trampoline call to a NOP. + * or using static_call_update() with a NULL function pointer. + * + * The "return 0" feature is strictly UB per the C standard (since it casts a + * function pointer to a different signature) and relies on the architecture + * ABI to make things work. In particular it relies on the return value + * register being callee-clobbered for all function calls. + * + * Notably, the x86_64 implementation of HAVE_STATIC_CALL_INLINE + * replaces the 5 byte CALL instruction at the callsite with a 5 byte clear + * of the RAX register, completely eliding any function call overhead. * - * In all cases, any argument evaluation is unconditional. Unlike a regular - * conditional function pointer call: + * Any argument evaluation is unconditional. Unlike a regular conditional + * function pointer call: * * if (my_func_ptr) * my_func_ptr(arg1) @@ -88,26 +92,6 @@ * * func = static_call_query(name); * - * - * DEFINE_STATIC_CALL_RET0 / __static_call_return0: - * - * Just like how DEFINE_STATIC_CALL_NULL() optimizes the - * conditional void function call, DEFINE_STATIC_CALL_RET0 / - * __static_call_return0 optimize the do nothing return 0 function. - * - * This feature is strictly UB per the C standard (since it casts a function - * pointer to a different signature) and relies on the architecture ABI to - * make things work. In particular it relies on Caller Stack-cleanup and the - * whole return register being clobbered for short return values. All normal - * CDECL style ABIs conform. - * - * In particular the x86_64 implementation replaces the 5 byte CALL - * instruction at the callsite with a 5 byte clear of the RAX register, - * completely eliding any function call overhead. - * - * Notably argument setup is unconditional.
- * - * * EXPORT_STATIC_CALL() vs EXPORT_STATIC_CALL_RO(): * * The difference is the read-only variant exports the trampoline but not the @@ -131,7 +115,6 @@ struct static_call_key { #endif }; -extern void __static_call_nop(void); extern long __static_call_return0(void); #define DECLARE_STATIC_CALL(name, func) \ @@ -151,10 +134,6 @@ extern long __static_call_return0(void); __DEFINE_STATIC_CALL_TRAMP(name, func) #define DEFINE_STATIC_CALL_NULL(name, type) \ - __DEFINE_STATIC_CALL(name, type, __STATIC_CALL_NOP(name)); \ - __DEFINE_STATIC_CALL_NOP_TRAMP(name) - -#define DEFINE_STATIC_CALL_RET0(name, type) \ __DEFINE_STATIC_CALL(name, type, __STATIC_CALL_RET0(name)); \ __DEFINE_STATIC_CALL_RET0_TRAMP(name) @@ -212,8 +191,6 @@ extern long __static_call_return0(void); ({ \ typeof(&STATIC_CALL_TRAMP(name)) __F = (func); \ if (!__F) \ - __F = __STATIC_CALL_NOP(name); \ - else if (__F == (void *)__static_call_return0) \ __F = __STATIC_CALL_RET0(name); \ __static_call_update(&STATIC_CALL_KEY(name), \ STATIC_CALL_TRAMP_ADDR(name), __F); \ @@ -222,10 +199,8 @@ extern long __static_call_return0(void); #define static_call_query(name) \ ({ \ void *__F = (READ_ONCE(STATIC_CALL_KEY(name).func)); \ - if (__F == __STATIC_CALL_NOP(name)) \ + if (__F == __STATIC_CALL_RET0(name)) \ __F = NULL; \ - else if (__F == __STATIC_CALL_RET0(name)) \ - __F = __static_call_return0; \ __F; \ }) @@ -236,9 +211,6 @@ extern long __static_call_return0(void); #define __DEFINE_STATIC_CALL_TRAMP(name, func) \ ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) -#define __DEFINE_STATIC_CALL_NOP_TRAMP(name) \ - ARCH_DEFINE_STATIC_CALL_NOP_TRAMP(name) - #define __DEFINE_STATIC_CALL_RET0_TRAMP(name) \ ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) @@ -262,7 +234,6 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool #else /* !CONFIG_HAVE_STATIC_CALL */ #define __DEFINE_STATIC_CALL_TRAMP(name, func) -#define __DEFINE_STATIC_CALL_NOP_TRAMP(name) #define __DEFINE_STATIC_CALL_RET0_TRAMP(name) 
#define __EXPORT_STATIC_CALL_TRAMP(name) #define __EXPORT_STATIC_CALL_TRAMP_GPL(name) @@ -308,28 +279,22 @@ static inline void static_call_force_reinit(void) {} #include -#define __STATIC_CALL_NOP(name) STATIC_CALL_NOP_CFI(name) #define __STATIC_CALL_RET0(name) STATIC_CALL_RET0_CFI(name) #define __DECLARE_STATIC_CALL_CFI(name, func) \ - extern typeof(func) STATIC_CALL_NOP_CFI(name); \ extern typeof(func) STATIC_CALL_RET0_CFI(name) #define __DEFINE_STATIC_CALL_CFI(name) \ - __ARCH_DEFINE_STATIC_CALL_NOP_CFI(name); \ __ARCH_DEFINE_STATIC_CALL_RET0_CFI(name) #define __EXPORT_STATIC_CALL_CFI(name) \ - EXPORT_SYMBOL(STATIC_CALL_NOP_CFI(name)); \ EXPORT_SYMBOL(STATIC_CALL_RET0_CFI(name)) #define __EXPORT_STATIC_CALL_CFI_GPL(name) \ - EXPORT_SYMBOL_GPL(STATIC_CALL_NOP_CFI(name)); \ EXPORT_SYMBOL_GPL(STATIC_CALL_RET0_CFI(name)) #else /* ! CONFIG_CFI_WITHOUT_STATIC_CALL */ -#define __STATIC_CALL_NOP(name) (void *)__static_call_nop #define __STATIC_CALL_RET0(name) (void *)__static_call_return0 #define __DECLARE_STATIC_CALL_CFI(name, func) diff --git a/include/linux/static_call_types.h b/include/linux/static_call_types.h index 2e2481c3f54e..72732af51cba 100644 --- a/include/linux/static_call_types.h +++ b/include/linux/static_call_types.h @@ -22,10 +22,6 @@ #define STATIC_CALL_TRAMP(name) __PASTE(STATIC_CALL_TRAMP_PREFIX, name) #define STATIC_CALL_TRAMP_STR(name) __stringify(STATIC_CALL_TRAMP(name)) -#define STATIC_CALL_NOP_CFI_PREFIX __SCN__ -#define STATIC_CALL_NOP_CFI(name) __PASTE(STATIC_CALL_NOP_CFI_PREFIX, name) -#define STATIC_CALL_NOP_CFI_STR(name) __stringify(STATIC_CALL_NOP_CFI(name)) - #define STATIC_CALL_RET0_CFI_PREFIX __SCR__ #define STATIC_CALL_RET0_CFI(name) __PASTE(STATIC_CALL_RET0_CFI_PREFIX, name) #define STATIC_CALL_RET0_CFI_STR(name) __stringify(STATIC_CALL_RET0_CFI(name)) diff --git a/kernel/events/core.c b/kernel/events/core.c index f79fd8b87f75..52f1edb8128c 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -6757,9 +6757,9 @@ static void 
perf_pending_task(struct callback_head *head) #ifdef CONFIG_GUEST_PERF_EVENTS struct perf_guest_info_callbacks __rcu *perf_guest_cbs; -DEFINE_STATIC_CALL_RET0(__perf_guest_state, *perf_guest_cbs->state); -DEFINE_STATIC_CALL_RET0(__perf_guest_get_ip, *perf_guest_cbs->get_ip); -DEFINE_STATIC_CALL_RET0(__perf_guest_handle_intel_pt_intr, *perf_guest_cbs->handle_intel_pt_intr); +DEFINE_STATIC_CALL_NULL(__perf_guest_state, *perf_guest_cbs->state); +DEFINE_STATIC_CALL_NULL(__perf_guest_get_ip, *perf_guest_cbs->get_ip); +DEFINE_STATIC_CALL_NULL(__perf_guest_handle_intel_pt_intr, *perf_guest_cbs->handle_intel_pt_intr); void perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs) { @@ -6783,10 +6783,9 @@ void perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs) return; rcu_assign_pointer(perf_guest_cbs, NULL); - static_call_update(__perf_guest_state, (void *)&__static_call_return0); - static_call_update(__perf_guest_get_ip, (void *)&__static_call_return0); - static_call_update(__perf_guest_handle_intel_pt_intr, - (void *)&__static_call_return0); + static_call_update(__perf_guest_state, NULL); + static_call_update(__perf_guest_get_ip, NULL); + static_call_update(__perf_guest_handle_intel_pt_intr, NULL); synchronize_rcu(); } EXPORT_SYMBOL_GPL(perf_unregister_guest_info_callbacks); @@ -13766,4 +13765,4 @@ struct cgroup_subsys perf_event_cgrp_subsys = { }; #endif /* CONFIG_CGROUP_PERF */ -DEFINE_STATIC_CALL_RET0(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t); +DEFINE_STATIC_CALL_NULL(perf_snapshot_branch_stack, perf_snapshot_branch_stack_t); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index a89de2a2d8f8..e69543a8b098 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6821,7 +6821,6 @@ EXPORT_SYMBOL(preempt_schedule); #if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL) #ifndef preempt_schedule_dynamic_enabled #define preempt_schedule_dynamic_enabled preempt_schedule -#define preempt_schedule_dynamic_disabled 
NULL #endif DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled); EXPORT_STATIC_CALL_RO(preempt_schedule); @@ -6894,7 +6893,6 @@ EXPORT_SYMBOL_GPL(preempt_schedule_notrace); #if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL) #ifndef preempt_schedule_notrace_dynamic_enabled #define preempt_schedule_notrace_dynamic_enabled preempt_schedule_notrace -#define preempt_schedule_notrace_dynamic_disabled NULL #endif DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled); EXPORT_STATIC_CALL_RO(preempt_schedule_notrace); @@ -8491,13 +8489,11 @@ EXPORT_SYMBOL(__cond_resched); #ifdef CONFIG_PREEMPT_DYNAMIC #if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL) #define cond_resched_dynamic_enabled __cond_resched -#define cond_resched_dynamic_disabled ((void *)&__static_call_return0) -DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched); +DEFINE_STATIC_CALL_NULL(cond_resched, __cond_resched); EXPORT_STATIC_CALL_RO(cond_resched); #define might_resched_dynamic_enabled __cond_resched -#define might_resched_dynamic_disabled ((void *)&__static_call_return0) -DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched); +DEFINE_STATIC_CALL_NULL(might_resched, __cond_resched); EXPORT_STATIC_CALL_RO(might_resched); #elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY) static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched); @@ -8643,7 +8639,7 @@ int sched_dynamic_mode(const char *str) #if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL) #define preempt_dynamic_enable(f) static_call_update(f, f##_dynamic_enabled) -#define preempt_dynamic_disable(f) static_call_update(f, f##_dynamic_disabled) +#define preempt_dynamic_disable(f) static_call_update(f, NULL) #elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY) #define preempt_dynamic_enable(f) static_key_enable(&sk_dynamic_##f.key) #define preempt_dynamic_disable(f) static_key_disable(&sk_dynamic_##f.key) diff --git a/kernel/static_call.c b/kernel/static_call.c index 20bf34bc3e2a..090ecf5d34b4 100644 --- a/kernel/static_call.c +++ 
b/kernel/static_call.c @@ -9,11 +9,6 @@ long __static_call_return0(void) } EXPORT_SYMBOL_GPL(__static_call_return0); -void __static_call_nop(void) -{ -} -EXPORT_SYMBOL_GPL(__static_call_nop); - #if defined(CONFIG_HAVE_STATIC_CALL) && !defined(CONFIG_HAVE_STATIC_CALL_INLINE) void __static_call_update(struct static_call_key *key, void *tramp, void *func) { diff --git a/tools/include/linux/static_call_types.h b/tools/include/linux/static_call_types.h index 2e2481c3f54e..72732af51cba 100644 --- a/tools/include/linux/static_call_types.h +++ b/tools/include/linux/static_call_types.h @@ -22,10 +22,6 @@ #define STATIC_CALL_TRAMP(name) __PASTE(STATIC_CALL_TRAMP_PREFIX, name) #define STATIC_CALL_TRAMP_STR(name) __stringify(STATIC_CALL_TRAMP(name)) -#define STATIC_CALL_NOP_CFI_PREFIX __SCN__ -#define STATIC_CALL_NOP_CFI(name) __PASTE(STATIC_CALL_NOP_CFI_PREFIX, name) -#define STATIC_CALL_NOP_CFI_STR(name) __stringify(STATIC_CALL_NOP_CFI(name)) - #define STATIC_CALL_RET0_CFI_PREFIX __SCR__ #define STATIC_CALL_RET0_CFI(name) __PASTE(STATIC_CALL_RET0_CFI_PREFIX, name) #define STATIC_CALL_RET0_CFI_STR(name) __stringify(STATIC_CALL_RET0_CFI(name))