From patchwork Fri Jan 20 19:25:15 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46592
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@meta.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH bpf-next v2 1/9] bpf: Enable annotating trusted nested pointers
Date: Fri, 20 Jan 2023 13:25:15 -0600
Message-Id: <20230120192523.3650503-2-void@manifault.com>
In-Reply-To: <20230120192523.3650503-1-void@manifault.com>
References: <20230120192523.3650503-1-void@manifault.com>
In kfuncs, a "trusted" pointer is a pointer that the kfunc can assume is
safe, and which the verifier will allow to be passed to a KF_TRUSTED_ARGS
kfunc. Currently, a KF_TRUSTED_ARGS kfunc disallows any pointer to be passed
at a nonzero offset, but sometimes this is in fact safe if the "nested"
pointer's lifetime is inherited from its parent. For example, the const
cpumask_t *cpus_ptr field in a struct task_struct will remain valid until
the task itself is destroyed, and thus would also be safe to pass to a
KF_TRUSTED_ARGS kfunc.

While it would be conceptually simple to enable this by using BTF tags, gcc
unfortunately does not yet support this. In the interim, this patch enables
support for this by using a type-naming convention. A new
BTF_TYPE_SAFE_NESTED macro is defined in verifier.c which allows a developer
to specify the nested fields of a type which are considered trusted if its
parent is also trusted. The verifier is also updated to account for this.

A patch with selftests will be added in a follow-on change, along with
documentation for this feature.

Signed-off-by: David Vernet
---
 include/linux/bpf.h   |  4 +++
 kernel/bpf/btf.c      | 61 +++++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c | 32 ++++++++++++++++++++---
 3 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index ae7771c7d750..283e96e5b228 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2186,6 +2186,10 @@ struct bpf_core_ctx {
         const struct btf *btf;
 };
 
+bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
+                                const struct bpf_reg_state *reg,
+                                int off);
+
 int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
                    int relo_idx, void *insn);
 
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4ba749fcce9d..dd05b5f2c1d8 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -8227,3 +8227,64 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
         }
         return err;
 }
+
+bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
+                                const struct bpf_reg_state *reg,
+                                int off)
+{
+        struct btf *btf = reg->btf;
+        const struct btf_type *walk_type, *safe_type;
+        const char *tname;
+        char safe_tname[64];
+        long ret, safe_id;
+        const struct btf_member *member, *m_walk = NULL;
+        u32 i;
+        const char *walk_name;
+
+        walk_type = btf_type_by_id(btf, reg->btf_id);
+        if (!walk_type)
+                return false;
+
+        tname = btf_name_by_offset(btf, walk_type->name_off);
+
+        ret = snprintf(safe_tname, sizeof(safe_tname), "%s__safe_fields", tname);
+        if (ret < 0)
+                return false;
+
+        safe_id = btf_find_by_name_kind(btf, safe_tname, BTF_INFO_KIND(walk_type->info));
+        if (safe_id < 0)
+                return false;
+
+        safe_type = btf_type_by_id(btf, safe_id);
+        if (!safe_type)
+                return false;
+
+        for_each_member(i, walk_type, member) {
+                u32 moff;
+
+                /* We're looking for the PTR_TO_BTF_ID member in the struct
+                 * type we're walking which matches the specified offset.
+                 * Below, we'll iterate over the fields in the safe variant of
+                 * the struct and see if any of them has a matching type /
+                 * name.
+                 */
+                moff = __btf_member_bit_offset(walk_type, member) / 8;
+                if (off == moff) {
+                        m_walk = member;
+                        break;
+                }
+        }
+        if (m_walk == NULL)
+                return false;
+
+        walk_name = __btf_name_by_offset(btf, m_walk->name_off);
+        for_each_member(i, safe_type, member) {
+                const char *m_name = __btf_name_by_offset(btf, member->name_off);
+
+                /* If we match on both type and name, the field is considered trusted. */
+                if (m_walk->type == member->type && !strcmp(walk_name, m_name))
+                        return true;
+        }
+
+        return false;
+}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ca7db2ce70b9..7f973847b58e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4755,6 +4755,25 @@ static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val)
         return 0;
 }
 
+#define BTF_TYPE_SAFE_NESTED(__type)  __PASTE(__type, __safe_fields)
+
+BTF_TYPE_SAFE_NESTED(struct task_struct) {
+        const cpumask_t *cpus_ptr;
+};
+
+static bool nested_ptr_is_trusted(struct bpf_verifier_env *env,
+                                  struct bpf_reg_state *reg,
+                                  int off)
+{
+        /* If its parent is not trusted, it can't regain its trusted status. */
+        if (!is_trusted_reg(reg))
+                return false;
+
+        BTF_TYPE_EMIT(BTF_TYPE_SAFE_NESTED(struct task_struct));
+
+        return btf_nested_type_is_trusted(&env->log, reg, off);
+}
+
 static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
                                    struct bpf_reg_state *regs,
                                    int regno, int off, int size,
@@ -4843,10 +4862,17 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
         if (type_flag(reg->type) & PTR_UNTRUSTED)
                 flag |= PTR_UNTRUSTED;
 
-        /* By default any pointer obtained from walking a trusted pointer is
-         * no longer trusted except the rcu case below.
+        /* By default any pointer obtained from walking a trusted pointer is no
+         * longer trusted, unless the field being accessed has explicitly been
+         * marked as inheriting its parent's state of trust.
+         *
+         * An RCU-protected pointer can also be deemed trusted if we are in an
+         * RCU read region. This case is handled below.
          */
-        flag &= ~PTR_TRUSTED;
+        if (nested_ptr_is_trusted(env, reg, off))
+                flag |= PTR_TRUSTED;
+        else
+                flag &= ~PTR_TRUSTED;
 
         if (flag & MEM_RCU) {
                 /* Mark value register as MEM_RCU only if it is protected by
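For context, here is a rough sketch (not part of this patch) of the kind of
BPF program this change is meant to allow. It assumes the usual vmlinux.h /
bpf_tracing.h / bpf_helpers.h includes, a hypothetical program name, and a
KF_TRUSTED_ARGS kfunc such as the bpf_cpumask_first() kfunc added later in
this series:

  u32 bpf_cpumask_first(const struct cpumask *cpumask) __ksym;

  SEC("tp_btf/task_newtask")
  int BPF_PROG(read_nested_cpumask, struct task_struct *task, u64 clone_flags)
  {
          /* task is a trusted arg, and cpus_ptr is listed in
           * BTF_TYPE_SAFE_NESTED(struct task_struct), so the pointer obtained
           * by walking task keeps PTR_TRUSTED and may be passed to a
           * KF_TRUSTED_ARGS kfunc.
           */
          bpf_cpumask_first(task->cpus_ptr);
          return 0;
  }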
From patchwork Fri Jan 20 19:25:16 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46595
From: David Vernet
To: bpf@vger.kernel.org
Subject: [PATCH bpf-next v2 2/9] bpf: Allow trusted args to walk struct when checking BTF IDs
Date: Fri, 20 Jan 2023 13:25:16 -0600
Message-Id: <20230120192523.3650503-3-void@manifault.com>
In-Reply-To: <20230120192523.3650503-1-void@manifault.com>
References: <20230120192523.3650503-1-void@manifault.com>
When validating BTF types for KF_TRUSTED_ARGS kfuncs, the verifier currently
enforces that the top-level type must match when calling the kfunc. In other
words, the verifier does not allow the BPF program to pass a bitwise
equivalent struct, despite it being allowed according to the C standard. For
example, if you have the following type:

struct nf_conn___init {
        struct nf_conn ct;
};

The C standard stipulates that it would be safe to pass a struct
nf_conn___init to a kfunc expecting a struct nf_conn. The verifier currently
disallows this, however, as semantically kfuncs may want to enforce that
structs that have equivalent types according to the C standard, but have
different BTF IDs, are not able to be passed to kfuncs expecting one or the
other. For example, struct nf_conn___init may not be queried / looked up, as
it is allocated but may not yet be fully initialized.

On the other hand, being able to pass types that are equivalent according to
the C standard will be useful for other types of kfunc / kptrs enabled by
BPF. For example, in a follow-on patch, a series of kfuncs will be added
which allow programs to do bitwise queries on cpumasks that are either
allocated by the program (in which case they'll be a 'struct bpf_cpumask'
type that wraps a cpumask_t as its first element), or a cpumask that was
allocated by the main kernel (in which case it will just be a straight
cpumask_t, as in task->cpus_ptr).

Having the two types of cpumasks allows us to distinguish between the two
for when a cpumask is read-only vs. mutable. A struct bpf_cpumask can be
mutated by e.g. bpf_cpumask_clear(), whereas a regular cpumask_t cannot be.
On the other hand, a struct bpf_cpumask can of course be queried in the
exact same manner as a cpumask_t, with e.g. bpf_cpumask_test_cpu().

If we were to enforce that top level types match, then a user that's passing
a struct bpf_cpumask to a read-only cpumask_t argument would have to cast
with something like bpf_cast_to_kern_ctx() (which itself would need to be
updated to expect the alias, and currently it only accommodates a single
alias per prog type). Additionally, not specifying KF_TRUSTED_ARGS is not an
option, as some kfuncs take one argument as a struct bpf_cpumask *, and
another as a struct cpumask * (i.e. cpumask_t).

In order to enable this, this patch relaxes the constraint that a
KF_TRUSTED_ARGS kfunc must have strict type matching, and instead only
enforces strict type matching if a type is observed to be a "no-cast alias"
(i.e., that the type names are equivalent, but one is suffixed with
___init).

Additionally, in order to try and be conservative and match existing
behavior / expectations, this patch also enforces strict type checking for
acquire kfuncs. We were already enforcing it for release kfuncs, so this
should also improve the consistency of the semantics for kfuncs.
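To make the cpumask case concrete, the following fragment is an illustrative
sketch (not part of this patch, and the helper query() is hypothetical) of
what the relaxed matching is meant to support, using a kfunc declaration
from later in this series:

  bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) __ksym;

  static void query(struct bpf_cpumask *bpf_mask, struct task_struct *task)
  {
          /* The C-level cast only satisfies the compiler. At the BTF level,
           * the verifier accepts the struct bpf_cpumask * register because
           * its first member is a cpumask_t, which btf_struct_ids_match()
           * can resolve by walking the struct at offset 0.
           */
          bpf_cpumask_test_cpu(0, (const struct cpumask *)bpf_mask);

          /* A kernel-provided cpumask such as task->cpus_ptr also works. */
          bpf_cpumask_test_cpu(0, task->cpus_ptr);
  }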
Signed-off-by: David Vernet
---
 include/linux/bpf.h   |  4 +++
 kernel/bpf/btf.c      | 61 +++++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c | 30 ++++++++++++++++++++-
 3 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 283e96e5b228..d01d99127b7b 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2190,6 +2190,10 @@ bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
                                 const struct bpf_reg_state *reg,
                                 int off);
 
+bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log,
+                               const struct btf *reg_btf, u32 reg_id,
+                               const struct btf *arg_btf, u32 arg_id);
+
 int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
                    int relo_idx, void *insn);
 
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index dd05b5f2c1d8..47b8cb96f2c2 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -336,6 +336,12 @@ const char *btf_type_str(const struct btf_type *t)
 /* Type name size */
 #define BTF_SHOW_NAME_SIZE 80
 
+/*
+ * The suffix of a type that indicates it cannot alias another type when
+ * comparing BTF IDs for kfunc invocations.
+ */
+#define NOCAST_ALIAS_SUFFIX "___init"
+
 /*
  * Common data to all BTF show operations. Private show functions can add
  * their own data to a structure containing a struct btf_show and consult it
@@ -8288,3 +8294,58 @@ bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
 
         return false;
 }
+
+bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log,
+                               const struct btf *reg_btf, u32 reg_id,
+                               const struct btf *arg_btf, u32 arg_id)
+{
+        const char *reg_name, *arg_name, *search_needle;
+        const struct btf_type *reg_type, *arg_type;
+        int reg_len, arg_len, cmp_len;
+        size_t pattern_len = sizeof(NOCAST_ALIAS_SUFFIX) - sizeof(char);
+
+        reg_type = btf_type_by_id(reg_btf, reg_id);
+        if (!reg_type)
+                return false;
+
+        arg_type = btf_type_by_id(arg_btf, arg_id);
+        if (!arg_type)
+                return false;
+
+        reg_name = btf_name_by_offset(reg_btf, reg_type->name_off);
+        arg_name = btf_name_by_offset(arg_btf, arg_type->name_off);
+
+        reg_len = strlen(reg_name);
+        arg_len = strlen(arg_name);
+
+        /* Exactly one of the two type names may be suffixed with ___init, so
+         * if the strings are the same size, they can't possibly be no-cast
+         * aliases of one another. If you have two of the same type names, e.g.
+         * they're both nf_conn___init, it would be improper to return true
+         * because they are _not_ no-cast aliases, they are the same type.
+         */
+        if (reg_len == arg_len)
+                return false;
+
+        /* Either of the two names must be the other name, suffixed with ___init. */
+        if ((reg_len != arg_len + pattern_len) &&
+            (arg_len != reg_len + pattern_len))
+                return false;
+
+        if (reg_len < arg_len) {
+                search_needle = strstr(arg_name, NOCAST_ALIAS_SUFFIX);
+                cmp_len = reg_len;
+        } else {
+                search_needle = strstr(reg_name, NOCAST_ALIAS_SUFFIX);
+                cmp_len = arg_len;
+        }
+
+        if (!search_needle)
+                return false;
+
+        /* ___init suffix must come at the end of the name */
+        if (*(search_needle + pattern_len) != '\0')
+                return false;
+
+        return !strncmp(reg_name, arg_name, cmp_len);
+}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7f973847b58e..ca5d601fb3cf 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8563,9 +8563,37 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
                 reg_ref_id = *reg2btf_ids[base_type(reg->type)];
         }
 
-        if (is_kfunc_trusted_args(meta) || (is_kfunc_release(meta) && reg->ref_obj_id))
+        /* Enforce strict type matching for calls to kfuncs that are acquiring
+         * or releasing a reference, or are no-cast aliases. We do _not_
+         * enforce strict matching for plain KF_TRUSTED_ARGS kfuncs by default,
+         * as we want to enable BPF programs to pass types that are bitwise
+         * equivalent without forcing them to explicitly cast with something
+         * like bpf_cast_to_kern_ctx().
+         *
+         * For example, say we had a type like the following:
+         *
+         * struct bpf_cpumask {
+         *        cpumask_t cpumask;
+         *        refcount_t usage;
+         * };
+         *
+         * Note that as specified in <linux/cpumask.h>, cpumask_t is typedef'ed
+         * to a struct cpumask, so it would be safe to pass a struct
+         * bpf_cpumask * to a kfunc expecting a struct cpumask *.
+         *
+         * The philosophy here is similar to how we allow scalars of different
+         * types to be passed to kfuncs as long as the size is the same. The
+         * only difference here is that we're simply allowing
+         * btf_struct_ids_match() to walk the struct at the 0th offset, and
+         * resolve types.
+         */
+        if (is_kfunc_acquire(meta) ||
+            (is_kfunc_release(meta) && reg->ref_obj_id) ||
+            btf_type_ids_nocast_alias(&env->log, reg_btf, reg_ref_id, meta->btf, ref_id))
                 strict_type_match = true;
 
+        WARN_ON_ONCE(is_kfunc_trusted_args(meta) && reg->off);
+
         reg_ref_t = btf_type_skip_modifiers(reg_btf, reg_ref_id, &reg_ref_id);
         reg_ref_tname = btf_name_by_offset(reg_btf, reg_ref_t->name_off);
         if (!btf_struct_ids_match(&env->log, reg_btf, reg_ref_id, reg->off, meta->btf, ref_id, strict_type_match)) {
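The name-matching rule can also be illustrated in isolation. The following
is a small standalone userspace sketch (not part of the patch; the function
names_nocast_alias() is hypothetical) of just the string comparison
performed by btf_type_ids_nocast_alias(), without any BTF lookups:

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  #define NOCAST_ALIAS_SUFFIX "___init"

  /* Userspace sketch of the name check only. */
  static bool names_nocast_alias(const char *reg_name, const char *arg_name)
  {
          size_t pattern_len = sizeof(NOCAST_ALIAS_SUFFIX) - 1;
          size_t reg_len = strlen(reg_name), arg_len = strlen(arg_name);
          const char *needle;
          size_t cmp_len;

          /* Same length means same name or unrelated names, never an alias. */
          if (reg_len == arg_len)
                  return false;
          if (reg_len != arg_len + pattern_len && arg_len != reg_len + pattern_len)
                  return false;

          if (reg_len < arg_len) {
                  needle = strstr(arg_name, NOCAST_ALIAS_SUFFIX);
                  cmp_len = reg_len;
          } else {
                  needle = strstr(reg_name, NOCAST_ALIAS_SUFFIX);
                  cmp_len = arg_len;
          }
          /* ___init must be present and must terminate the longer name. */
          if (!needle || needle[pattern_len] != '\0')
                  return false;

          return !strncmp(reg_name, arg_name, cmp_len);
  }

  int main(void)
  {
          printf("%d\n", names_nocast_alias("nf_conn___init", "nf_conn")); /* 1 */
          printf("%d\n", names_nocast_alias("nf_conn", "nf_conn"));        /* 0 */
          printf("%d\n", names_nocast_alias("nf_conn___foo", "nf_conn"));  /* 0 */
          return 0;
  }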
From patchwork Fri Jan 20 19:25:17 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46594
From: David Vernet
To: bpf@vger.kernel.org
Subject: [PATCH bpf-next v2 3/9] bpf: Disallow NULLable pointers for trusted kfuncs
Date: Fri, 20 Jan 2023 13:25:17 -0600
Message-Id: <20230120192523.3650503-4-void@manifault.com>
In-Reply-To: <20230120192523.3650503-1-void@manifault.com>
References: <20230120192523.3650503-1-void@manifault.com>
KF_TRUSTED_ARGS kfuncs currently have a subtle and insidious bug in
validating pointers to scalars. Say that you have a kfunc like the
following, which takes an array as the first argument:

bool bpf_cpumask_empty(const struct cpumask *cpumask)
{
        return cpumask_empty(cpumask);
}

...
BTF_ID_FLAGS(func, bpf_cpumask_empty, KF_TRUSTED_ARGS)
...

If a BPF program were to invoke the kfunc with a NULL argument, it would
crash the kernel. The reason is that struct cpumask is defined as a bitmap,
which is itself defined as an array, and is accessed as a memory address by
bitmap operations. So when the verifier analyzes the register, it interprets
it as a pointer to a scalar struct, which is an array of size 8.
check_mem_reg() then sees that the register is NULL, and returns 0, and the
kfunc crashes when it passes it down to the cpumask wrappers.

To fix this, this patch adds a check for KF_ARG_PTR_TO_MEM which verifies
that the register doesn't contain a possibly-NULL pointer if the kfunc is
KF_TRUSTED_ARGS.

Signed-off-by: David Vernet
---
 kernel/bpf/verifier.c                               |  6 ++++++
 tools/testing/selftests/bpf/prog_tests/cgrp_kfunc.c |  4 ++--
 tools/testing/selftests/bpf/prog_tests/task_kfunc.c |  4 ++--
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ca5d601fb3cf..a466887f5334 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8937,6 +8937,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
                         return -EINVAL;
                 }
 
+                if (is_kfunc_trusted_args(meta) &&
+                    (register_is_null(reg) || type_may_be_null(reg->type))) {
+                        verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
+                        return -EACCES;
+                }
+
                 if (reg->ref_obj_id) {
                         if (is_kfunc_release(meta) && meta->ref_obj_id) {
                                 verbose(env, "verifier internal error: more than one arg with ref_obj_id R%d %u %u\n",
diff --git a/tools/testing/selftests/bpf/prog_tests/cgrp_kfunc.c b/tools/testing/selftests/bpf/prog_tests/cgrp_kfunc.c
index 973f0c5af965..f3bb0e16e088 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgrp_kfunc.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgrp_kfunc.c
@@ -93,11 +93,11 @@ static struct {
         const char *prog_name;
         const char *expected_err_msg;
 } failure_tests[] = {
-        {"cgrp_kfunc_acquire_untrusted", "R1 must be referenced or trusted"},
+        {"cgrp_kfunc_acquire_untrusted", "Possibly NULL pointer passed to trusted arg0"},
         {"cgrp_kfunc_acquire_fp", "arg#0 pointer type STRUCT cgroup must point"},
         {"cgrp_kfunc_acquire_unsafe_kretprobe", "reg type unsupported for arg#0 function"},
         {"cgrp_kfunc_acquire_trusted_walked", "R1 must be referenced or trusted"},
-        {"cgrp_kfunc_acquire_null", "arg#0 pointer type STRUCT cgroup must point"},
+        {"cgrp_kfunc_acquire_null", "Possibly NULL pointer passed to trusted arg0"},
         {"cgrp_kfunc_acquire_unreleased", "Unreleased reference"},
         {"cgrp_kfunc_get_non_kptr_param", "arg#0 expected pointer to map value"},
         {"cgrp_kfunc_get_non_kptr_acquired", "arg#0 expected pointer to map value"},
diff --git a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
index 18848c31e36f..a4f49e8dc7e8 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c
@@ -87,11 +87,11 @@ static struct {
         const char *prog_name;
         const char *expected_err_msg;
 } failure_tests[] = {
-        {"task_kfunc_acquire_untrusted", "R1 must be referenced or trusted"},
+        {"task_kfunc_acquire_untrusted", "Possibly NULL pointer passed to trusted arg0"},
         {"task_kfunc_acquire_fp", "arg#0 pointer type STRUCT task_struct must point"},
         {"task_kfunc_acquire_unsafe_kretprobe", "reg type unsupported for arg#0 function"},
         {"task_kfunc_acquire_trusted_walked", "R1 must be referenced or trusted"},
-        {"task_kfunc_acquire_null", "arg#0 pointer type STRUCT task_struct must point"},
+        {"task_kfunc_acquire_null", "Possibly NULL pointer passed to trusted arg0"},
         {"task_kfunc_acquire_unreleased", "Unreleased reference"},
         {"task_kfunc_get_non_kptr_param", "arg#0 expected pointer to map value"},
         {"task_kfunc_get_non_kptr_acquired", "arg#0 expected pointer to map value"},
From patchwork Fri Jan 20 19:25:18 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46597
From: David Vernet
To: bpf@vger.kernel.org
Subject: [PATCH bpf-next v2 4/9] bpf: Enable cpumasks to be queried and used as kptrs
Date: Fri, 20 Jan 2023 13:25:18 -0600
Message-Id: <20230120192523.3650503-5-void@manifault.com>
In-Reply-To: <20230120192523.3650503-1-void@manifault.com>
References: <20230120192523.3650503-1-void@manifault.com>
Certain programs may wish to be able to query cpumasks. For example, a
program that is tracing percpu operations may wish to track which tasks end
up running on which CPUs, and it could be useful to associate that with the
tasks' cpumasks. Similarly, a program tracking NUMA allocations, CPU
scheduling domains, etc, would potentially benefit from being able to see
which CPUs a task could be migrated to.

This patch enables such cases by introducing a series of bpf_cpumask_*
kfuncs. Amongst these kfuncs, there are two separate "classes" of
operations:

1. kfuncs which allow the caller to allocate and mutate their own cpumasks
   in the form of a struct bpf_cpumask * object. Such kfuncs include e.g.
   bpf_cpumask_create() to allocate the cpumask, and bpf_cpumask_or() to
   mutate it. "Regular" cpumasks such as p->cpus_ptr may not be passed to
   these kfuncs, and the verifier will ensure this is the case by comparing
   BTF IDs.

2. Read-only operations which operate on const struct cpumask * arguments.
   For example, bpf_cpumask_test_cpu(), which tests whether a CPU is set in
   the cpumask. Any trusted struct cpumask * or struct bpf_cpumask * may be
   passed to these kfuncs. The verifier allows struct bpf_cpumask * even
   though the kfunc is defined with struct cpumask * because the first
   element of a struct bpf_cpumask is a cpumask_t, so it is safe to cast.

A follow-on patch will add selftests which validate these kfuncs, and
another will document them.

Note that some of the kfuncs that were added would benefit from additional
verification logic. For example, a kfunc taking a CPU argument would benefit
from the verifier rejecting a CPU that exceeds the number of CPUs on the
system. For now, we silently check for and ignore these cases at runtime.
When we have e.g. per-argument kfunc flags, it might be helpful to add
another KF_CPU-type flag that specifies that the verifier should validate
that it's a valid CPU.

Signed-off-by: David Vernet
---
 kernel/bpf/Makefile  |   1 +
 kernel/bpf/cpumask.c | 269 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 270 insertions(+)
 create mode 100644 kernel/bpf/cpumask.c

diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 3a12e6b400a2..02242614dcc7 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -36,6 +36,7 @@ obj-$(CONFIG_DEBUG_INFO_BTF) += sysfs_btf.o
 endif
 ifeq ($(CONFIG_BPF_JIT),y)
 obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
+obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
 obj-${CONFIG_BPF_LSM} += bpf_lsm.o
 endif
 obj-$(CONFIG_BPF_PRELOAD) += preload/
diff --git a/kernel/bpf/cpumask.c b/kernel/bpf/cpumask.c
new file mode 100644
index 000000000000..92eedc84dbfc
--- /dev/null
+++ b/kernel/bpf/cpumask.c
@@ -0,0 +1,269 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2023 Meta, Inc
+ */
+#include <linux/bpf.h>
+#include <linux/bpf_mem_alloc.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
+#include <linux/cpumask.h>
+
+/**
+ * struct bpf_cpumask - refcounted BPF cpumask wrapper structure
+ * @cpumask: The actual cpumask embedded in the struct.
+ * @usage:   Object reference counter. When the refcount goes to 0, the
+ *           memory is released back to the BPF allocator, which provides
+ *           RCU safety.
+ *
+ * Note that we explicitly embed a cpumask_t rather than a cpumask_var_t. This
+ * is done to avoid confusing the verifier due to the typedef of cpumask_var_t
+ * changing depending on whether CONFIG_CPUMASK_OFFSTACK is defined or not. See
+ * the details in <linux/cpumask.h>. The consequence is that this structure is
+ * likely a bit larger than it needs to be when CONFIG_CPUMASK_OFFSTACK is
+ * defined due to embedding the whole NR_CPUS-size bitmap, but the extra memory
+ * overhead is minimal. For the more typical case of CONFIG_CPUMASK_OFFSTACK
+ * not being defined, the structure is the same size regardless.
+ */
+struct bpf_cpumask {
+        cpumask_t cpumask;
+        refcount_t usage;
+};
+
+static struct bpf_mem_alloc bpf_cpumask_ma;
+
+static bool cpu_valid(u32 cpu)
+{
+        return cpu < nr_cpu_ids;
+}
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+                  "Global kfuncs as their definitions will be in BTF");
+
+struct bpf_cpumask *bpf_cpumask_create(void)
+{
+        struct bpf_cpumask *cpumask;
+
+        cpumask = bpf_mem_alloc(&bpf_cpumask_ma, sizeof(*cpumask));
+        if (!cpumask)
+                return NULL;
+
+        memset(cpumask, 0, sizeof(*cpumask));
+        refcount_set(&cpumask->usage, 1);
+
+        return cpumask;
+}
+
+struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask)
+{
+        refcount_inc(&cpumask->usage);
+        return cpumask;
+}
+
+struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumaskp)
+{
+        struct bpf_cpumask *cpumask;
+
+        /* The BPF memory allocator frees memory backing its caches in an RCU
+         * callback. Thus, we can safely use RCU to ensure that the cpumask is
+         * safe to read.
+         */
+        rcu_read_lock();
+
+        cpumask = READ_ONCE(*cpumaskp);
+        if (cpumask && !refcount_inc_not_zero(&cpumask->usage))
+                cpumask = NULL;
+
+        rcu_read_unlock();
+        return cpumask;
+}
+
+void bpf_cpumask_release(struct bpf_cpumask *cpumask)
+{
+        if (!cpumask)
+                return;
+
+        if (refcount_dec_and_test(&cpumask->usage)) {
+                migrate_disable();
+                bpf_mem_free(&bpf_cpumask_ma, cpumask);
+                migrate_enable();
+        }
+}
+
+u32 bpf_cpumask_first(const struct cpumask *cpumask)
+{
+        return cpumask_first(cpumask);
+}
+
+u32 bpf_cpumask_first_zero(const struct cpumask *cpumask)
+{
+        return cpumask_first_zero(cpumask);
+}
+
+void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask)
+{
+        if (!cpu_valid(cpu))
+                return;
+
+        cpumask_set_cpu(cpu, (struct cpumask *)cpumask);
+}
+
+void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask)
+{
+        if (!cpu_valid(cpu))
+                return;
+
+        cpumask_clear_cpu(cpu, (struct cpumask *)cpumask);
+}
+
+bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask)
+{
+        if (!cpu_valid(cpu))
+                return false;
+
+        return cpumask_test_cpu(cpu, (struct cpumask *)cpumask);
+}
+
+bool bpf_cpumask_test_and_set_cpu(u32 cpu, struct bpf_cpumask *cpumask)
+{
+        if (!cpu_valid(cpu))
+                return false;
+
+        return cpumask_test_and_set_cpu(cpu, (struct cpumask *)cpumask);
+}
+
+bool bpf_cpumask_test_and_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask)
+{
+        if (!cpu_valid(cpu))
+                return false;
+
+        return cpumask_test_and_clear_cpu(cpu, (struct cpumask *)cpumask);
+}
+
+void bpf_cpumask_setall(struct bpf_cpumask *cpumask)
+{
+        cpumask_setall((struct cpumask *)cpumask);
+}
+
+void bpf_cpumask_clear(struct bpf_cpumask *cpumask)
+{
+        cpumask_clear((struct cpumask *)cpumask);
+}
+
+bool bpf_cpumask_and(struct bpf_cpumask *dst,
+                     const struct cpumask *src1,
+                     const struct cpumask *src2)
+{
+        return cpumask_and((struct cpumask *)dst, src1, src2);
+}
+
+void bpf_cpumask_or(struct bpf_cpumask *dst,
+                    const struct cpumask *src1,
+                    const struct cpumask *src2)
+{
+        cpumask_or((struct cpumask *)dst, src1, src2);
+}
+
+void bpf_cpumask_xor(struct bpf_cpumask *dst,
+                     const struct cpumask *src1,
+                     const struct cpumask *src2)
+{
+        cpumask_xor((struct cpumask *)dst, src1, src2);
+}
+
+bool bpf_cpumask_equal(const struct cpumask *src1, const struct cpumask *src2)
+{
+        return cpumask_equal(src1, src2);
+}
+
+bool bpf_cpumask_intersects(const struct cpumask *src1, const struct cpumask *src2)
+{
+        return cpumask_intersects(src1, src2);
+}
+
+bool bpf_cpumask_subset(const struct cpumask *src1, const struct cpumask *src2)
+{
+        return cpumask_subset(src1, src2);
+}
+
+bool bpf_cpumask_empty(const struct cpumask *cpumask)
+{
+        return cpumask_empty(cpumask);
+}
+
+bool bpf_cpumask_full(const struct cpumask *cpumask)
+{
+        return cpumask_full(cpumask);
+}
+
+void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src)
+{
+        cpumask_copy((struct cpumask *)dst, src);
+}
+
+u32 bpf_cpumask_any(const struct cpumask *cpumask)
+{
+        return cpumask_any(cpumask);
+}
+
+u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2)
+{
+        return cpumask_any_and(src1, src2);
+}
+
+__diag_pop();
+
+BTF_SET8_START(cpumask_kfunc_btf_ids)
+BTF_ID_FLAGS(func, bpf_cpumask_create, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_cpumask_release, KF_RELEASE | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_cpumask_first, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_first_zero, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_set_cpu, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_clear_cpu, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_test_cpu, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_test_and_set_cpu, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_test_and_clear_cpu, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_setall, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_clear, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_and, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_or, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_xor, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_equal, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_intersects, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_subset, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_empty, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_full, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_copy, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_any, KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_cpumask_any_and, KF_TRUSTED_ARGS)
+BTF_SET8_END(cpumask_kfunc_btf_ids)
+
+static const struct btf_kfunc_id_set cpumask_kfunc_set = {
+        .owner = THIS_MODULE,
+        .set   = &cpumask_kfunc_btf_ids,
+};
+
+BTF_ID_LIST(cpumask_dtor_ids)
+BTF_ID(struct, bpf_cpumask)
+BTF_ID(func, bpf_cpumask_release)
+
+static int __init cpumask_kfunc_init(void)
+{
+        int ret;
+        const struct btf_id_dtor_kfunc cpumask_dtors[] = {
+                {
+                        .btf_id       = cpumask_dtor_ids[0],
+                        .kfunc_btf_id = cpumask_dtor_ids[1]
+                },
+        };
+
+        ret = bpf_mem_alloc_init(&bpf_cpumask_ma, 0, false);
+        ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &cpumask_kfunc_set);
+        ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &cpumask_kfunc_set);
+        return ret ?: register_btf_id_dtor_kfuncs(cpumask_dtors,
+                                                  ARRAY_SIZE(cpumask_dtors),
+                                                  THIS_MODULE);
+}
+
+late_initcall(cpumask_kfunc_init);
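As a usage sketch (not part of this patch; the program name is hypothetical
and the kfunc declarations would normally come from a shared ksym header), a
tracing program could allocate, mutate, and query cpumasks roughly like so:

  struct bpf_cpumask *bpf_cpumask_create(void) __ksym;
  void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym;
  void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym;
  u32 bpf_cpumask_any(const struct cpumask *cpumask) __ksym;

  SEC("tp_btf/task_newtask")
  int BPF_PROG(cpumask_usage, struct task_struct *task, u64 clone_flags)
  {
          struct bpf_cpumask *mask;

          /* Class 1: allocate and mutate a program-owned bpf_cpumask. */
          mask = bpf_cpumask_create();
          if (!mask)
                  return 0;
          bpf_cpumask_set_cpu(0, mask);

          /* Class 2: read-only queries accept both a bpf_cpumask and a
           * kernel cpumask such as task->cpus_ptr.
           */
          bpf_cpumask_any((const struct cpumask *)mask);
          bpf_cpumask_any(task->cpus_ptr);

          bpf_cpumask_release(mask);
          return 0;
  }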
From patchwork Fri Jan 20 19:25:19 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46596
From: David Vernet
To: bpf@vger.kernel.org
Subject: [PATCH bpf-next v2 5/9] selftests/bpf: Add nested trust selftests suite
Date: Fri, 20 Jan 2023 13:25:19 -0600
Message-Id: <20230120192523.3650503-6-void@manifault.com>
In-Reply-To: <20230120192523.3650503-1-void@manifault.com>
References: <20230120192523.3650503-1-void@manifault.com>

Now that defining trusted fields in a struct is supported, we should add
selftests to verify the behavior. This patch adds a few such testcases.

Signed-off-by: David Vernet
---
 tools/testing/selftests/bpf/DENYLIST.s390x      |  1 +
 .../selftests/bpf/prog_tests/nested_trust.c     | 12 +++++++
 .../selftests/bpf/progs/nested_trust_common.h   | 12 +++++++
 .../bpf/progs/nested_trust_failure.c            | 33 +++++++++++++++++++
 .../bpf/progs/nested_trust_success.c            | 31 +++++++++++++++++
 5 files changed, 89 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/nested_trust.c
 create mode 100644 tools/testing/selftests/bpf/progs/nested_trust_common.h
 create mode 100644 tools/testing/selftests/bpf/progs/nested_trust_failure.c
 create mode 100644 tools/testing/selftests/bpf/progs/nested_trust_success.c

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index 96e8371f5c2a..1cf5b94cda30 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -44,6 +44,7 @@ map_kptr        # failed to open_and_load program: -524
 modify_return   # modify_return attach failed: -524 (trampoline)
 module_attach   # skel_attach skeleton attach failed: -524 (trampoline)
 mptcp
+nested_trust    # JIT does not support calling kernel function
 netcnt          # failed to load BPF skeleton 'netcnt_prog': -7 (?)
 probe_user      # check_kprobe_res wrong kprobe res from probe read (?)
 rcu_read_lock   # failed to find kernel BTF type ID of '__x64_sys_getpgid': -3 (?)
diff --git a/tools/testing/selftests/bpf/prog_tests/nested_trust.c b/tools/testing/selftests/bpf/prog_tests/nested_trust.c
new file mode 100644
index 000000000000..39886f58924e
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/nested_trust.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include "nested_trust_failure.skel.h"
+#include "nested_trust_success.skel.h"
+
+void test_nested_trust(void)
+{
+        RUN_TESTS(nested_trust_success);
+        RUN_TESTS(nested_trust_failure);
+}
diff --git a/tools/testing/selftests/bpf/progs/nested_trust_common.h b/tools/testing/selftests/bpf/progs/nested_trust_common.h
new file mode 100644
index 000000000000..83d33931136e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/nested_trust_common.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#ifndef _NESTED_TRUST_COMMON_H
+#define _NESTED_TRUST_COMMON_H
+
+#include <stdbool.h>
+
+bool bpf_cpumask_test_cpu(unsigned int cpu, const struct cpumask *cpumask) __ksym;
+bool bpf_cpumask_first_zero(const struct cpumask *cpumask) __ksym;
+
+#endif /* _NESTED_TRUST_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/nested_trust_failure.c b/tools/testing/selftests/bpf/progs/nested_trust_failure.c
new file mode 100644
index 000000000000..14aff7676436
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/nested_trust_failure.c
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#include "nested_trust_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* Prototype for all of the program trace events below:
+ *
+ * TRACE_EVENT(task_newtask,
+ *         TP_PROTO(struct task_struct *p, u64 clone_flags)
+ */
+
+SEC("tp_btf/task_newtask")
+__failure __msg("R2 must be referenced or trusted")
+int BPF_PROG(test_invalid_nested_user_cpus, struct task_struct *task, u64 clone_flags)
+{
+        bpf_cpumask_test_cpu(0, task->user_cpus_ptr);
+        return 0;
+}
+
+SEC("tp_btf/task_newtask")
+__failure __msg("R1 must have zero offset when passed to release func or trusted arg to kfunc")
+int BPF_PROG(test_invalid_nested_offset, struct task_struct *task, u64 clone_flags)
+{
+        bpf_cpumask_first_zero(&task->cpus_mask);
+        return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/nested_trust_success.c b/tools/testing/selftests/bpf/progs/nested_trust_success.c
new file mode 100644
index 000000000000..398098d24987
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/nested_trust_success.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#include "nested_trust_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int pid, err;
+
+static bool is_test_task(void)
+{
+        int cur_pid = bpf_get_current_pid_tgid() >> 32;
+
+        return pid == cur_pid;
+}
+
+SEC("tp_btf/task_newtask")
+__success
+int BPF_PROG(test_read_cpumask, struct task_struct *task, u64 clone_flags)
+{
+        if (!is_test_task())
+                return 0;
+
+        bpf_cpumask_test_cpu(0, task->cpus_ptr);
+        return 0;
+}

From patchwork Fri Jan 20 19:25:20 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46598
[2620:137:e000::1:20]) by mx.google.com with ESMTP id hp23-20020a1709073e1700b00870e0c1f5b6si19775799ejc.370.2023.01.20.11.40.24; Fri, 20 Jan 2023 11:40:48 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230360AbjATTZw (ORCPT + 99 others); Fri, 20 Jan 2023 14:25:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57628 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230037AbjATTZr (ORCPT ); Fri, 20 Jan 2023 14:25:47 -0500 Received: from mail-qv1-f42.google.com (mail-qv1-f42.google.com [209.85.219.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D0AACFD26; Fri, 20 Jan 2023 11:25:36 -0800 (PST) Received: by mail-qv1-f42.google.com with SMTP id h10so4448355qvq.7; Fri, 20 Jan 2023 11:25:36 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZoVMmJF8Zsjaz+7RCAKJwmMurWbPmCAcZuKJOCBBD4U=; b=UGu9AMhsLY3JBMoaWLBoCtbzxnMBza13dh09ajXTXIsD0/Arlc0L59HKEAiiG8ddmu jYSSWyTEFJRyVTxmuGF59nTznL79EcBYGP12YV28VzhHdGZAHzy2dkWc7rTeKRRzwDgu ExEIkimq+UglKWQ2+NvZSe4C198+TKnKV10wXWnXvDfPX7LCwBsl77d46nbSDTpkj+hz 9/G1/ZzvI6N+rQeYQfBYUQsHaggDef8L9s5+0MbSv5VZeKEBo/V37hJEaipsYKybCh5E LPGTG9qHyg5WJV76RMajoRqSIk7IIWSIJuQQE3p0igoKAeiaplGvoQ3b4Pv0NjVRHKRa WdGw== X-Gm-Message-State: AFqh2krmV9ZB8qf5zeesR/4d95582BHgcG6gPkMe8Yje51MPGoWT8VYz QYWRue1kgY9Q3bobL7NthkDGgDR/W8v5sIH0 X-Received: by 2002:a05:6214:3483:b0:532:3bcd:2c84 with SMTP id mr3-20020a056214348300b005323bcd2c84mr24988084qvb.32.1674242735069; Fri, 20 Jan 2023 11:25:35 -0800 (PST) Received: from localhost ([2620:10d:c091:480::1:2fc9]) by smtp.gmail.com with ESMTPSA id w6-20020a05620a424600b00705be892191sm22185382qko.56.2023.01.20.11.25.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 11:25:34 -0800 (PST) From: David Vernet To: bpf@vger.kernel.org Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@meta.com, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org, kernel-team@meta.com, tj@kernel.org, memxor@gmail.com Subject: [PATCH bpf-next v2 6/9] selftests/bpf: Add selftest suite for cpumask kfuncs Date: Fri, 20 Jan 2023 13:25:20 -0600 Message-Id: <20230120192523.3650503-7-void@manifault.com> X-Mailer: git-send-email 2.39.0 In-Reply-To: <20230120192523.3650503-1-void@manifault.com> References: <20230120192523.3650503-1-void@manifault.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.4 required=5.0 tests=BAYES_00, FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_NONE,RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS autolearn=no autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: 
=?utf-8?q?1755571707842366828?= X-GMAIL-MSGID: =?utf-8?q?1755571707842366828?= A recent patch added a new set of kfuncs for allocating, freeing, manipulating, and querying cpumasks. This patch adds a new 'cpumask' selftest suite which verifies their behavior. Signed-off-by: David Vernet --- tools/testing/selftests/bpf/DENYLIST.s390x | 1 + .../selftests/bpf/prog_tests/cpumask.c | 74 +++ .../selftests/bpf/progs/cpumask_common.h | 114 +++++ .../selftests/bpf/progs/cpumask_failure.c | 125 +++++ .../selftests/bpf/progs/cpumask_success.c | 426 ++++++++++++++++++ 5 files changed, 740 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/cpumask.c create mode 100644 tools/testing/selftests/bpf/progs/cpumask_common.h create mode 100644 tools/testing/selftests/bpf/progs/cpumask_failure.c create mode 100644 tools/testing/selftests/bpf/progs/cpumask_success.c diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index 1cf5b94cda30..4c2c58e9c4e5 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -13,6 +13,7 @@ cgroup_hierarchical_stats # JIT does not support calling kernel f cgrp_kfunc # JIT does not support calling kernel function cgrp_local_storage # prog_attach unexpected error: -524 (trampoline) core_read_macros # unknown func bpf_probe_read#4 (overlapping) +cpumask # JIT does not support calling kernel function d_path # failed to auto-attach program 'prog_stat': -524 (trampoline) decap_sanity # JIT does not support calling kernel function (kfunc) deny_namespace # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) diff --git a/tools/testing/selftests/bpf/prog_tests/cpumask.c b/tools/testing/selftests/bpf/prog_tests/cpumask.c new file mode 100644 index 000000000000..5fbe457c4ebe --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/cpumask.c @@ -0,0 +1,74 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include +#include "cpumask_failure.skel.h" +#include "cpumask_success.skel.h" + +static const char * const cpumask_success_testcases[] = { + "test_alloc_free_cpumask", + "test_set_clear_cpu", + "test_setall_clear_cpu", + "test_first_firstzero_cpu", + "test_test_and_set_clear", + "test_and_or_xor", + "test_intersects_subset", + "test_copy_any_anyand", + "test_insert_leave", + "test_insert_remove_release", + "test_insert_kptr_get_release", +}; + +static void verify_success(const char *prog_name) +{ + struct cpumask_success *skel; + struct bpf_program *prog; + struct bpf_link *link = NULL; + pid_t child_pid; + int status; + + skel = cpumask_success__open(); + if (!ASSERT_OK_PTR(skel, "cpumask_success__open")) + return; + + skel->bss->pid = getpid(); + skel->bss->nr_cpus = libbpf_num_possible_cpus(); + + cpumask_success__load(skel); + if (!ASSERT_OK_PTR(skel, "cpumask_success__load")) + goto cleanup; + + prog = bpf_object__find_program_by_name(skel->obj, prog_name); + if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name")) + goto cleanup; + + link = bpf_program__attach(prog); + if (!ASSERT_OK_PTR(link, "bpf_program__attach")) + goto cleanup; + + child_pid = fork(); + if (!ASSERT_GT(child_pid, -1, "child_pid")) + goto cleanup; + if (child_pid == 0) + _exit(0); + waitpid(child_pid, &status, 0); + ASSERT_OK(skel->bss->err, "post_wait_err"); + +cleanup: + bpf_link__destroy(link); + cpumask_success__destroy(skel); +} + +void test_cpumask(void) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(cpumask_success_testcases); i++) { + if (!test__start_subtest(cpumask_success_testcases[i])) + continue; + + verify_success(cpumask_success_testcases[i]); + } + + RUN_TESTS(cpumask_failure); +} diff --git a/tools/testing/selftests/bpf/progs/cpumask_common.h b/tools/testing/selftests/bpf/progs/cpumask_common.h new file mode 100644 index 000000000000..ad34f3b602be --- /dev/null +++ b/tools/testing/selftests/bpf/progs/cpumask_common.h @@ -0,0 +1,114 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#ifndef _CPUMASK_COMMON_H +#define _CPUMASK_COMMON_H + +#include "errno.h" +#include + +int err; + +struct __cpumask_map_value { + struct bpf_cpumask __kptr_ref * cpumask; +}; + +struct array_map { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, struct __cpumask_map_value); + __uint(max_entries, 1); +} __cpumask_map SEC(".maps"); + +struct bpf_cpumask *bpf_cpumask_create(void) __ksym; +void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym; +struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) __ksym; +struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumask) __ksym; +u32 bpf_cpumask_first(const struct cpumask *cpumask) __ksym; +u32 bpf_cpumask_first_zero(const struct cpumask *cpumask) __ksym; +void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym; +void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym; +bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) __ksym; +bool bpf_cpumask_test_and_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym; +bool bpf_cpumask_test_and_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym; +void bpf_cpumask_setall(struct bpf_cpumask *cpumask) __ksym; +void bpf_cpumask_clear(struct bpf_cpumask *cpumask) __ksym; +bool bpf_cpumask_and(struct bpf_cpumask *cpumask, + const struct cpumask *src1, + const struct cpumask *src2) __ksym; +void bpf_cpumask_or(struct bpf_cpumask *cpumask, + const struct cpumask *src1, + const struct cpumask *src2) __ksym; +void bpf_cpumask_xor(struct bpf_cpumask *cpumask, + const struct cpumask *src1, + const struct cpumask *src2) __ksym; +bool bpf_cpumask_equal(const struct cpumask *src1, const struct cpumask *src2) __ksym; +bool bpf_cpumask_intersects(const struct cpumask *src1, const struct cpumask *src2) __ksym; +bool bpf_cpumask_subset(const struct cpumask *src1, const struct cpumask *src2) __ksym; +bool bpf_cpumask_empty(const struct cpumask *cpumask) __ksym; +bool bpf_cpumask_full(const struct cpumask *cpumask) __ksym; +void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src) __ksym; +u32 bpf_cpumask_any(const struct cpumask *src) __ksym; +u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2) __ksym; + +static inline const struct cpumask *cast(struct bpf_cpumask *cpumask) +{ + return (const struct cpumask *)cpumask; +} + +static inline struct bpf_cpumask *create_cpumask(void) +{ + struct bpf_cpumask *cpumask; + + cpumask = bpf_cpumask_create(); + if (!cpumask) { + err = 1; + return NULL; + } + + if (!bpf_cpumask_empty(cast(cpumask))) { + err = 2; + bpf_cpumask_release(cpumask); + return NULL; + } + + return cpumask; +} + +static inline struct __cpumask_map_value *cpumask_map_value_lookup(void) +{ + u32 key = 0; + + return bpf_map_lookup_elem(&__cpumask_map, &key); +} + +static inline int cpumask_map_insert(struct bpf_cpumask *mask) +{ + struct __cpumask_map_value local, *v; + long status; + struct bpf_cpumask *old; + u32 key = 0; + + local.cpumask = NULL; + status = bpf_map_update_elem(&__cpumask_map, &key, &local, 0); + if (status) { + bpf_cpumask_release(mask); + return status; + } + + v = bpf_map_lookup_elem(&__cpumask_map, &key); + if (!v) { + bpf_cpumask_release(mask); + return -ENOENT; + } + + old = bpf_kptr_xchg(&v->cpumask, mask); + if (old) { + bpf_cpumask_release(old); + return -EEXIST; + } + + return 0; +} + +#endif /* _CPUMASK_COMMON_H */ diff --git a/tools/testing/selftests/bpf/progs/cpumask_failure.c b/tools/testing/selftests/bpf/progs/cpumask_failure.c new 
file mode 100644 index 000000000000..8a6ac7a91e92 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/cpumask_failure.c @@ -0,0 +1,125 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ + +#include +#include +#include +#include "bpf_misc.h" + +#include "cpumask_common.h" + +char _license[] SEC("license") = "GPL"; + +/* Prototype for all of the program trace events below: + * + * TRACE_EVENT(task_newtask, + * TP_PROTO(struct task_struct *p, u64 clone_flags) + */ + +SEC("tp_btf/task_newtask") +__failure __msg("Unreleased reference") +int BPF_PROG(test_alloc_no_release, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + cpumask = create_cpumask(); + + /* cpumask is never released. */ + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("NULL pointer passed to trusted arg0") +int BPF_PROG(test_alloc_double_release, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + cpumask = create_cpumask(); + + /* cpumask is released twice. */ + bpf_cpumask_release(cpumask); + bpf_cpumask_release(cpumask); + + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("bpf_cpumask_acquire args#0 expected pointer to STRUCT bpf_cpumask") +int BPF_PROG(test_acquire_wrong_cpumask, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + /* Can't acquire a non-struct bpf_cpumask. */ + cpumask = bpf_cpumask_acquire((struct bpf_cpumask *)task->cpus_ptr); + + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("bpf_cpumask_set_cpu args#1 expected pointer to STRUCT bpf_cpumask") +int BPF_PROG(test_mutate_cpumask, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + /* Can't set the CPU of a non-struct bpf_cpumask. */ + bpf_cpumask_set_cpu(0, (struct bpf_cpumask *)task->cpus_ptr); + + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("Unreleased reference") +int BPF_PROG(test_insert_remove_no_release, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + struct __cpumask_map_value *v; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + if (cpumask_map_insert(cpumask)) + return 0; + + v = cpumask_map_value_lookup(); + if (!v) + return 0; + + cpumask = bpf_kptr_xchg(&v->cpumask, NULL); + + /* cpumask is never released. */ + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("Unreleased reference") +int BPF_PROG(test_kptr_get_no_release, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + struct __cpumask_map_value *v; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + if (cpumask_map_insert(cpumask)) + return 0; + + v = cpumask_map_value_lookup(); + if (!v) + return 0; + + cpumask = bpf_cpumask_kptr_get(&v->cpumask); + + /* cpumask is never released. */ + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("NULL pointer passed to trusted arg0") +int BPF_PROG(test_cpumask_null, struct task_struct *task, u64 clone_flags) +{ + bpf_cpumask_empty(NULL); + + return 0; +} diff --git a/tools/testing/selftests/bpf/progs/cpumask_success.c b/tools/testing/selftests/bpf/progs/cpumask_success.c new file mode 100644 index 000000000000..1d38bc65d4b0 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/cpumask_success.c @@ -0,0 +1,426 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include +#include +#include + +#include "cpumask_common.h" + +char _license[] SEC("license") = "GPL"; + +int pid, nr_cpus; + +static bool is_test_task(void) +{ + int cur_pid = bpf_get_current_pid_tgid() >> 32; + + return pid == cur_pid; +} + +static bool create_cpumask_set(struct bpf_cpumask **out1, + struct bpf_cpumask **out2, + struct bpf_cpumask **out3, + struct bpf_cpumask **out4) +{ + struct bpf_cpumask *mask1, *mask2, *mask3, *mask4; + + mask1 = create_cpumask(); + if (!mask1) + return false; + + mask2 = create_cpumask(); + if (!mask2) { + bpf_cpumask_release(mask1); + err = 3; + return false; + } + + mask3 = create_cpumask(); + if (!mask3) { + bpf_cpumask_release(mask1); + bpf_cpumask_release(mask2); + err = 4; + return false; + } + + mask4 = create_cpumask(); + if (!mask4) { + bpf_cpumask_release(mask1); + bpf_cpumask_release(mask2); + bpf_cpumask_release(mask3); + err = 5; + return false; + } + + *out1 = mask1; + *out2 = mask2; + *out3 = mask3; + *out4 = mask4; + + return true; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_alloc_free_cpumask, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + if (!is_test_task()) + return 0; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + bpf_cpumask_release(cpumask); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_set_clear_cpu, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + if (!is_test_task()) + return 0; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + bpf_cpumask_set_cpu(0, cpumask); + if (!bpf_cpumask_test_cpu(0, cast(cpumask))) { + err = 3; + goto release_exit; + } + + bpf_cpumask_clear_cpu(0, cpumask); + if (bpf_cpumask_test_cpu(0, cast(cpumask))) { + err = 4; + goto release_exit; + } + +release_exit: + bpf_cpumask_release(cpumask); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_setall_clear_cpu, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + if (!is_test_task()) + return 0; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + bpf_cpumask_setall(cpumask); + if (!bpf_cpumask_full(cast(cpumask))) { + err = 3; + goto release_exit; + } + + bpf_cpumask_clear(cpumask); + if (!bpf_cpumask_empty(cast(cpumask))) { + err = 4; + goto release_exit; + } + +release_exit: + bpf_cpumask_release(cpumask); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_first_firstzero_cpu, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + if (!is_test_task()) + return 0; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + if (bpf_cpumask_first(cast(cpumask)) < nr_cpus) { + err = 3; + goto release_exit; + } + + if (bpf_cpumask_first_zero(cast(cpumask)) != 0) { + bpf_printk("first zero: %d", bpf_cpumask_first_zero(cast(cpumask))); + err = 4; + goto release_exit; + } + + bpf_cpumask_set_cpu(0, cpumask); + if (bpf_cpumask_first(cast(cpumask)) != 0) { + err = 5; + goto release_exit; + } + + if (bpf_cpumask_first_zero(cast(cpumask)) != 1) { + err = 6; + goto release_exit; + } + +release_exit: + bpf_cpumask_release(cpumask); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_test_and_set_clear, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + + if (!is_test_task()) + return 0; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + if (bpf_cpumask_test_and_set_cpu(0, cpumask)) { + err = 3; + goto release_exit; + } + + if (!bpf_cpumask_test_and_set_cpu(0, cpumask)) { + err = 4; + goto 
release_exit; + } + + if (!bpf_cpumask_test_and_clear_cpu(0, cpumask)) { + err = 5; + goto release_exit; + } + +release_exit: + bpf_cpumask_release(cpumask); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_and_or_xor, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *mask1, *mask2, *dst1, *dst2; + + if (!is_test_task()) + return 0; + + if (!create_cpumask_set(&mask1, &mask2, &dst1, &dst2)) + return 0; + + bpf_cpumask_set_cpu(0, mask1); + bpf_cpumask_set_cpu(1, mask2); + + if (bpf_cpumask_and(dst1, cast(mask1), cast(mask2))) { + err = 6; + goto release_exit; + } + if (!bpf_cpumask_empty(cast(dst1))) { + err = 7; + goto release_exit; + } + + bpf_cpumask_or(dst1, cast(mask1), cast(mask2)); + if (!bpf_cpumask_test_cpu(0, cast(dst1))) { + err = 8; + goto release_exit; + } + if (!bpf_cpumask_test_cpu(1, cast(dst1))) { + err = 9; + goto release_exit; + } + + bpf_cpumask_xor(dst2, cast(mask1), cast(mask2)); + if (!bpf_cpumask_equal(cast(dst1), cast(dst2))) { + err = 10; + goto release_exit; + } + +release_exit: + bpf_cpumask_release(mask1); + bpf_cpumask_release(mask2); + bpf_cpumask_release(dst1); + bpf_cpumask_release(dst2); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_intersects_subset, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *mask1, *mask2, *dst1, *dst2; + + if (!is_test_task()) + return 0; + + if (!create_cpumask_set(&mask1, &mask2, &dst1, &dst2)) + return 0; + + bpf_cpumask_set_cpu(0, mask1); + bpf_cpumask_set_cpu(1, mask2); + if (bpf_cpumask_intersects(cast(mask1), cast(mask2))) { + err = 6; + goto release_exit; + } + + bpf_cpumask_or(dst1, cast(mask1), cast(mask2)); + if (!bpf_cpumask_subset(cast(mask1), cast(dst1))) { + err = 7; + goto release_exit; + } + + if (!bpf_cpumask_subset(cast(mask2), cast(dst1))) { + err = 8; + goto release_exit; + } + + if (bpf_cpumask_subset(cast(dst1), cast(mask1))) { + err = 9; + goto release_exit; + } + +release_exit: + bpf_cpumask_release(mask1); + bpf_cpumask_release(mask2); + bpf_cpumask_release(dst1); + bpf_cpumask_release(dst2); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_copy_any_anyand, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *mask1, *mask2, *dst1, *dst2; + u32 cpu; + + if (!is_test_task()) + return 0; + + if (!create_cpumask_set(&mask1, &mask2, &dst1, &dst2)) + return 0; + + bpf_cpumask_set_cpu(0, mask1); + bpf_cpumask_set_cpu(1, mask2); + bpf_cpumask_or(dst1, cast(mask1), cast(mask2)); + + cpu = bpf_cpumask_any(cast(mask1)); + if (cpu != 0) { + err = 6; + goto release_exit; + } + + cpu = bpf_cpumask_any(cast(dst2)); + if (cpu < nr_cpus) { + err = 7; + goto release_exit; + } + + bpf_cpumask_copy(dst2, cast(dst1)); + if (!bpf_cpumask_equal(cast(dst1), cast(dst2))) { + err = 8; + goto release_exit; + } + + cpu = bpf_cpumask_any(cast(dst2)); + if (cpu > 1) { + err = 9; + goto release_exit; + } + + cpu = bpf_cpumask_any_and(cast(mask1), cast(mask2)); + if (cpu < nr_cpus) { + err = 10; + goto release_exit; + } + +release_exit: + bpf_cpumask_release(mask1); + bpf_cpumask_release(mask2); + bpf_cpumask_release(dst1); + bpf_cpumask_release(dst2); + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_insert_leave, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + struct __cpumask_map_value *v; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + if (cpumask_map_insert(cpumask)) + err = 3; + + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_insert_remove_release, struct 
task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + struct __cpumask_map_value *v; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + if (cpumask_map_insert(cpumask)) { + err = 3; + return 0; + } + + v = cpumask_map_value_lookup(); + if (!v) { + err = 4; + return 0; + } + + cpumask = bpf_kptr_xchg(&v->cpumask, NULL); + if (cpumask) + bpf_cpumask_release(cpumask); + else + err = 5; + + return 0; +} + +SEC("tp_btf/task_newtask") +int BPF_PROG(test_insert_kptr_get_release, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *cpumask; + struct __cpumask_map_value *v; + + cpumask = create_cpumask(); + if (!cpumask) + return 0; + + if (cpumask_map_insert(cpumask)) { + err = 3; + return 0; + } + + v = cpumask_map_value_lookup(); + if (!v) { + err = 4; + return 0; + } + + cpumask = bpf_cpumask_kptr_get(&v->cpumask); + if (cpumask) + bpf_cpumask_release(cpumask); + else + err = 5; + + return 0; +} From patchwork Fri Jan 20 19:25:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Vernet X-Patchwork-Id: 46599 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:eb09:0:0:0:0:0 with SMTP id s9csp389899wrn; Fri, 20 Jan 2023 11:40:56 -0800 (PST) X-Google-Smtp-Source: AMrXdXvfsXUrIJvHPkhpI5lIVXskykFOagG11Vr3AbAGrAd8GlmK6kPP4d1AHsHFIi4r5bXg8UAm X-Received: by 2002:a17:906:ca16:b0:877:6a03:9ad4 with SMTP id jt22-20020a170906ca1600b008776a039ad4mr8807153ejb.56.1674243656160; Fri, 20 Jan 2023 11:40:56 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1674243656; cv=none; d=google.com; s=arc-20160816; b=EauvZEMHDI4idhVB065Gy/7iZAEHVvryacJAduM4QpwUARSHqtONIB58PmlgP5nYO9 hQh3/nes5uMUlBIBppl3ipkdSZPjlIgP3xiCt35zsOdk9vGnQ8RiO+zygB6VN9vRFOaf PW4kuX+UzTLdiaIq0XiXTrSHheScFmcyK6pDCusV+HU32ivJHbP9wAUgihZaPelOYZ3z wyLSKPlMM947Sc6nDapbhdzPvs5VFuaS6dzA5jzJ0y25HyyadA+3laPRkRAS1p/MfyDY d4l+ZBt+WJjAlfsQ7duJunFzYnvgoXQTEXAAdWBQdakd2aE/nAeLr2yRLZMy6nwKEUyR rM4A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=uNXs/IYxg4C2+q71Gm83rCQ9Ahfz73ASehQmBY0Ouu8=; b=GEFsckJMfLYhqi8CvbWbOpuiRK5bAM9bYOuKsto6DV5p8CSP/FbpEX6SS489LN3hAQ bKJ3KGWQ3n4C4N7AxuaG5AwM/7Q9dCzB8O2LkEhoxd5cn6CrOeJ1rKiNv7Qzj4sme0z7 og8cBIOcRzbZsw8fsu+rw6e2HdBlQbjljBbr+rPT0m7+PbIlTjyaExYCo7n0L+ZIsmm6 dLhDOo0tqbXmjyd+HiZn+Qfu2gJDcekvkakZBZS1yiOdV9LZIrf598ovwUSqndTFkv0M 9Y0L5sMOk4MADyfp9JEMOrPDd4PbMuqtRUAoEKQM2rDPV+stGFAlsGH0CCF2NRvr52en DaPg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id gt34-20020a1709072da200b00874403624c7si13126230ejc.192.2023.01.20.11.40.32; Fri, 20 Jan 2023 11:40:56 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230037AbjATT0C (ORCPT + 99 others); Fri, 20 Jan 2023 14:26:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58026 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229954AbjATTZv (ORCPT ); Fri, 20 Jan 2023 14:25:51 -0500 Received: from mail-qv1-f50.google.com (mail-qv1-f50.google.com [209.85.219.50]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83648D0DAE; Fri, 20 Jan 2023 11:25:39 -0800 (PST) Received: by mail-qv1-f50.google.com with SMTP id q10so4433177qvt.10; Fri, 20 Jan 2023 11:25:39 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=uNXs/IYxg4C2+q71Gm83rCQ9Ahfz73ASehQmBY0Ouu8=; b=jBlf0WEf9r8RWXCuTXF7F3X+NvDPB5V6JodBep++r9QNba+Ay/PnOQzllypmB9vrQQ 2wzUH5rTqAUVp0iDKEJLlpusgEVlA993xY7iOVXra3cUGnQiJCQ8A85EFFZk/4vjYSt1 JEDpnzUL0ShbYZjWBrF50zU+59Rgf8owhL8dbDC0v3wWr7CfCDgFmBwDxLOyZql+uYfQ gh8gHKHIjZ54sPyMZV0lMnwwjWuoNqX1TY1M7P+v3n8sC11KEEUzQTH6ZJWQgbUMox0a YIGN0wmTlwtRuz9BIdNuYOH+/7bEnt1975VjYJaTNgo1BAKSHlhMjXfWI8GddAPM+muC s/Zg== X-Gm-Message-State: AFqh2koFm2Z88weRkqHY99ZmP919+o4lVUhHVU+mDNLmClUJJPICQqNK +DSkF79ReWXti9wle9Zv0o7dELdCo5tvUNEz X-Received: by 2002:a05:6214:14ed:b0:535:567b:d5e9 with SMTP id k13-20020a05621414ed00b00535567bd5e9mr8572135qvw.7.1674242737621; Fri, 20 Jan 2023 11:25:37 -0800 (PST) Received: from localhost ([2620:10d:c091:480::1:2fc9]) by smtp.gmail.com with ESMTPSA id s15-20020a05620a254f00b007056237b41bsm26713700qko.67.2023.01.20.11.25.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 20 Jan 2023 11:25:37 -0800 (PST) From: David Vernet To: bpf@vger.kernel.org Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yhs@meta.com, john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org, kernel-team@meta.com, tj@kernel.org, memxor@gmail.com Subject: [PATCH bpf-next v2 7/9] bpf/docs: Document cpumask kfuncs in a new file Date: Fri, 20 Jan 2023 13:25:21 -0600 Message-Id: <20230120192523.3650503-8-void@manifault.com> X-Mailer: git-send-email 2.39.0 In-Reply-To: <20230120192523.3650503-1-void@manifault.com> References: <20230120192523.3650503-1-void@manifault.com> MIME-Version: 1.0 X-Spam-Status: No, score=-1.4 required=5.0 tests=BAYES_00, FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_NONE,RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS autolearn=no autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: 
=?utf-8?q?1755571715684886807?= X-GMAIL-MSGID: =?utf-8?q?1755571715684886807?= Now that we've added a series of new cpumask kfuncs, we should document them so users can easily use them. This patch adds a new cpumasks.rst file to document them. Signed-off-by: David Vernet --- Documentation/bpf/cpumasks.rst | 393 +++++++++++++++++++++++++++++++++ Documentation/bpf/index.rst | 1 + Documentation/bpf/kfuncs.rst | 11 + kernel/bpf/cpumask.c | 208 +++++++++++++++++ 4 files changed, 613 insertions(+) create mode 100644 Documentation/bpf/cpumasks.rst diff --git a/Documentation/bpf/cpumasks.rst b/Documentation/bpf/cpumasks.rst new file mode 100644 index 000000000000..50be4688b1ec --- /dev/null +++ b/Documentation/bpf/cpumasks.rst @@ -0,0 +1,393 @@ +.. SPDX-License-Identifier: GPL-2.0 + +.. _cpumasks-header-label: + +================== +BPF cpumask kfuncs +================== + +1. Introduction +=============== + +``struct cpumask`` is a bitmap data structure in the kernel whose indices +reflect the CPUs on the system. Commonly, cpumasks are used to track which CPUs +a task is affinitized to, but they can also be used to e.g. track which cores +are associated with a scheduling domain, which cores on a machine are idle, +etc. + +BPF provides programs with a set of :ref:`kfuncs-header-label` that can be +used to allocate, mutate, query, and free cpumasks. + +2. BPF cpumask objects +====================== + +There are two different types of cpumasks that can be used by BPF programs. + +2.1 ``struct bpf_cpumask *`` +---------------------------- + +``struct bpf_cpumask *`` is a cpumask that is allocated by BPF, on behalf of a +BPF program, and whose lifecycle is entirely controlled by BPF. These cpumasks +are RCU-protected, can be mutated, can be used as kptrs, and can be safely cast +to a ``struct cpumask *``. + +2.1.1 ``struct bpf_cpumask *`` lifecycle +---------------------------------------- + +A ``struct bpf_cpumask *`` is allocated, acquired, and released, using the +following functions: + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_create + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_acquire + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_release + +For example: + +.. code-block:: c + + struct cpumask_map_value { + struct bpf_cpumask __kptr_ref * cpumask; + }; + + struct array_map { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, struct cpumask_map_value); + __uint(max_entries, 65536); + } cpumask_map SEC(".maps"); + + static int cpumask_map_insert(struct bpf_cpumask *mask, u32 pid) + { + struct cpumask_map_value local, *v; + long status; + struct bpf_cpumask *old; + u32 key = pid; + + local.cpumask = NULL; + status = bpf_map_update_elem(&cpumask_map, &key, &local, 0); + if (status) { + bpf_cpumask_release(mask); + return status; + } + + v = bpf_map_lookup_elem(&cpumask_map, &key); + if (!v) { + bpf_cpumask_release(mask); + return -ENOENT; + } + + old = bpf_kptr_xchg(&v->cpumask, mask); + if (old) + bpf_cpumask_release(old); + + return 0; + } + + /** + * A sample tracepoint showing how a task's cpumask can be queried and + * recorded as a kptr. 
+ */ + SEC("tp_btf/task_newtask") + int BPF_PROG(record_task_cpumask, struct task_struct *task, u64 clone_flags) + { + struct bpf_cpumask *cpumask; + int ret; + + cpumask = bpf_cpumask_create(); + if (!cpumask) + return -ENOMEM; + + if (!bpf_cpumask_full(task->cpus_ptr)) + bpf_printk("task %s has CPU affinity", task->comm); + + bpf_cpumask_copy(cpumask, task->cpus_ptr); + return cpumask_map_insert(cpumask, task->pid); + } + +---- + +2.1.1 ``struct bpf_cpumask *`` as kptrs +--------------------------------------- + +As mentioned and illustrated above, these ``struct bpf_cpumask *`` objects can +also be stored in a map and used as kptrs. If a ``struct bpf_cpumask *`` is in +a map, the reference can be removed from the map with bpf_kptr_xchg(), or +opportunistically acquired with bpf_cpumask_kptr_get(): + + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_kptr_get + +Here is an example of a ``struct bpf_cpumask *`` being retrieved from a map: + +.. code-block:: c + + /* struct containing the struct bpf_cpumask kptr which is actually stored in the map. */ + struct cpumasks_kfunc_map_value { + struct bpf_cpumask __kptr_ref * bpf_cpumask; + }; + + /* The map containing struct cpumasks_kfunc_map_value entries. */ + struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, struct cpumasks_kfunc_map_value); + __uint(max_entries, 1); + } cpumasks_kfunc_map SEC(".maps"); + + /* ... */ + + /** + * A simple example tracepoint program showing how a + * struct bpf_cpumask * kptr that is stored in a map can + * be acquired using the bpf_cpumask_kptr_get() kfunc. + */ + SEC("tp_btf/cgroup_mkdir") + int BPF_PROG(cgrp_ancestor_example, struct cgroup *cgrp, const char *path) + { + struct bpf_cpumask *kptr; + struct cpumasks_kfunc_map_value *v; + u32 key = 0; + + /* Assume a bpf_cpumask * kptr was previously stored in the map. */ + v = bpf_map_lookup_elem(&cpumasks_kfunc_map, &key); + if (!v) + return -ENOENT; + + /* Acquire a reference to the bpf_cpumask * kptr that's already stored in the map. */ + kptr = bpf_cpumask_kptr_get(&v->cpumask); + if (!kptr) + /* If no bpf_cpumask was present in the map, it's because + * we're racing with another CPU that removed it with + * bpf_kptr_xchg() between the bpf_map_lookup_elem() + * above, and our call to bpf_cpumask_kptr_get(). + * bpf_cpumask_kptr_get() internally safely handles this + * race, and will return NULL if the cpumask is no longer + * present in the map by the time we invoke the kfunc. + */ + return -EBUSY; + + /* Free the reference we just took above. Note that the + * original struct bpf_cpumask * kptr is still in the map. It will + * be freed either at a later time if another context deletes + * it from the map, or automatically by the BPF subsystem if + * it's still present when the map is destroyed. + */ + bpf_cpumask_release(kptr); + + return 0; + } + +---- + +2.2 ``struct cpumask`` +---------------------- + +``struct cpumask`` is the object that actually contains the cpumask bitmap +being queried, mutated, etc. A ``struct bpf_cpumask`` wraps a ``struct +cpumask``, which is why it's safe to cast it as such (note however that it is +**not** safe to cast a ``struct cpumask *`` to a ``struct bpf_cpumask *``, and +the verifier will reject any program that tries to do so). + +As we'll see below, any kfunc that mutates its cpumask argument will take a +``struct bpf_cpumask *`` as that argument. Any argument that simply queries the +cpumask will instead take a ``struct cpumask *``. + +3. 
cpumask kfuncs +================= + +Above, we described the kfuncs that can be used to allocate, acquire, release, +etc a ``struct bpf_cpumask *``. This section of the document will describe the +kfuncs for mutating and querying cpumasks. + +3.1 Mutating cpumasks +--------------------- + +Some cpumask kfuncs are "read-only" in that they don't mutate any of their +arguments, whereas others mutate at least one argument (which means that the +argument must be a ``struct bpf_cpumask *``, as described above). + +This section will describe all of the cpumask kfuncs which mutate at least one +argument. :ref:`cpumasks-querying-label` below describes the read-only kfuncs. + +3.1.1 Setting and clearing CPUs +------------------------------- + +bpf_cpumask_set_cpu() and bpf_cpumask_clear_cpu() can be used to set and clear +a CPU in a ``struct bpf_cpumask`` respectively: + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_set_cpu bpf_cpumask_clear_cpu + +These kfuncs are pretty straightforward, and can be used, for example, as +follows: + +.. code-block:: c + + /** + * A sample tracepoint showing how a cpumask can be queried. + */ + SEC("tp_btf/task_newtask") + int BPF_PROG(test_set_clear_cpu, struct task_struct *task, u64 clone_flags) + { + struct bpf_cpumask *cpumask; + + cpumask = bpf_cpumask_create(); + if (!cpumask) + return -ENOMEM; + + bpf_cpumask_set_cpu(0, cpumask); + if (!bpf_cpumask_test_cpu(0, cast(cpumask))) + /* Should never happen. */ + goto release_exit; + + bpf_cpumask_clear_cpu(0, cpumask); + if (bpf_cpumask_test_cpu(0, cast(cpumask))) + /* Should never happen. */ + goto release_exit; + + /* struct cpumask * pointers such as task->cpus_ptr can also be queried. */ + if (bpf_cpumask_test_cpu(0, task->cpus_ptr)) + bpf_printk("task %s can use CPU %d", task->comm, 0); + + release_exit: + bpf_cpumask_release(cpumask); + return 0; + } + +---- + +bpf_cpumask_test_and_set_cpu() and bpf_cpumask_test_and_clear_cpu() are +analogous kfuncs that allow callers to atomically test and set (or clear) CPUs: + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_test_and_set_cpu bpf_cpumask_test_and_clear_cpu + +---- + +We can also set and clear entire ``struct bpf_cpumask *`` objects in one +operation using bpf_cpumask_setall() and bpf_cpumask_clear(): + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_setall bpf_cpumask_clear + +3.1.2 Operations between cpumasks +--------------------------------- + +In addition to setting and clearing individual CPUs in a single cpumask, +callers can also perform bitwise operations between multiple cpumasks using +bpf_cpumask_and(), bpf_cpumask_or(), and bpf_cpumask_xor(): + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_and bpf_cpumask_or bpf_cpumask_xor + +The following is an example of how they may be used. Note that some of the +kfuncs shown in this example will be covered in more detail below. + +.. code-block:: c + + /** + * A sample tracepoint showing how a cpumask can be mutated using + bitwise operators (and queried). + */ + SEC("tp_btf/task_newtask") + int BPF_PROG(test_and_or_xor, struct task_struct *task, u64 clone_flags) + { + struct bpf_cpumask *mask1, *mask2, *dst1, *dst2; + + mask1 = bpf_cpumask_create(); + if (!mask1) + return -ENOMEM; + + mask2 = bpf_cpumask_create(); + if (!mask2) { + bpf_cpumask_release(mask1); + return -ENOMEM; + } + + // ...Safely create the other two masks... 
*/ + + bpf_cpumask_set_cpu(0, mask1); + bpf_cpumask_set_cpu(1, mask2); + bpf_cpumask_and(dst1, (const struct cpumask *)mask1, (const struct cpumask *)mask2); + if (!bpf_cpumask_empty((const struct cpumask *)dst1)) + /* Should never happen. */ + goto release_exit; + + bpf_cpumask_or(dst1, (const struct cpumask *)mask1, (const struct cpumask *)mask2); + if (!bpf_cpumask_test_cpu(0, (const struct cpumask *)dst1)) + /* Should never happen. */ + goto release_exit; + + if (!bpf_cpumask_test_cpu(1, (const struct cpumask *)dst1)) + /* Should never happen. */ + goto release_exit; + + bpf_cpumask_xor(dst2, (const struct cpumask *)mask1, (const struct cpumask *)mask2); + if (!bpf_cpumask_equal((const struct cpumask *)dst1, + (const struct cpumask *)dst2)) + /* Should never happen. */ + goto release_exit; + + release_exit: + bpf_cpumask_release(mask1); + bpf_cpumask_release(mask2); + bpf_cpumask_release(dst1); + bpf_cpumask_release(dst2); + return 0; + } + +---- + +The contents of an entire cpumask may be copied to another using +bpf_cpumask_copy(): + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_copy + +---- + +.. _cpumasks-querying-label: + +3.2 Querying cpumasks +--------------------- + +In addition to the above kfuncs, there is also a set of read-only kfuncs that +can be used to query the contents of cpumasks. + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_first bpf_cpumask_first_zero bpf_cpumask_test_cpu + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_equal bpf_cpumask_intersects bpf_cpumask_subset + bpf_cpumask_empty bpf_cpumask_full + +.. kernel-doc:: kernel/bpf/cpumask.c + :identifiers: bpf_cpumask_any bpf_cpumask_any_and + +---- + +Some example usages of these querying kfuncs were shown above. We will not +replicate those exmaples here. Note, however, that all of the aforementioned +kfuncs are tested in `tools/testing/selftests/bpf/progs/cpumask_success.c`_, so +please take a look there if you're looking for more examples of how they can be +used. + +.. _tools/testing/selftests/bpf/progs/cpumask_success.c: + https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/tools/testing/selftests/bpf/progs/cpumask_success.c + + +4. Adding BPF cpumask kfuncs +============================ + +The set of supported BPF cpumask kfuncs are not (yet) a 1-1 match with the +cpumask operations in include/linux/cpumask.h. Any of those cpumask operations +could easily be encapsulated in a new kfunc if and when required. If you'd like +to support a new cpumask operation, please feel free to submit a patch. If you +do add a new cpumask kfunc, please document it here, and add any relevant +selftest testcases to the cpumask selftest suite. diff --git a/Documentation/bpf/index.rst b/Documentation/bpf/index.rst index b81533d8b061..dbb39e8f9889 100644 --- a/Documentation/bpf/index.rst +++ b/Documentation/bpf/index.rst @@ -20,6 +20,7 @@ that goes into great technical depth about the BPF Architecture. syscall_api helpers kfuncs + cpumasks programs maps bpf_prog_run diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst index 9fd7fb539f85..a74f9e74087b 100644 --- a/Documentation/bpf/kfuncs.rst +++ b/Documentation/bpf/kfuncs.rst @@ -1,3 +1,7 @@ +.. SPDX-License-Identifier: GPL-2.0 + +.. _kfuncs-header-label: + ============================= BPF Kernel Functions (kfuncs) ============================= @@ -420,3 +424,10 @@ the verifier. 
bpf_cgroup_ancestor() can be used as follows: bpf_cgroup_release(parent); return 0; } + +3.3 struct cpumask * kfuncs +--------------------------- + +BPF provides a set of kfuncs that can be used to query, allocate, mutate, and +destroy struct cpumask * objects. Please refer to :ref:`cpumasks-header-label` +for more details. diff --git a/kernel/bpf/cpumask.c b/kernel/bpf/cpumask.c index 92eedc84dbfc..985bfb6f5c81 100644 --- a/kernel/bpf/cpumask.c +++ b/kernel/bpf/cpumask.c @@ -39,6 +39,16 @@ __diag_push(); __diag_ignore_all("-Wmissing-prototypes", "Global kfuncs as their definitions will be in BTF"); +/** + * bpf_cpumask_create() - Create a mutable BPF cpumask. + * + * Allocates a cpumask that can be queried, mutated, acquired, and released by + * a BPF program. The cpumask returned by this function must either be embedded + * in a map as a kptr, or freed with bpf_cpumask_release(). + * + * bpf_cpumask_create() allocates memory using the BPF memory allocator, and + * will not block. It may return NULL if no memory is available. + */ struct bpf_cpumask *bpf_cpumask_create(void) { struct bpf_cpumask *cpumask; @@ -53,12 +63,31 @@ struct bpf_cpumask *bpf_cpumask_create(void) return cpumask; } +/** + * bpf_cpumask_acquire() - Acquire a reference to a BPF cpumask. + * @cpumask: The BPF cpumask being acquired. The cpumask must be a trusted + * pointer. + * + * Acquires a reference to a BPF cpumask. The cpumask returned by this function + * must either be embedded in a map as a kptr, or freed with + * bpf_cpumask_release(). + */ struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) { refcount_inc(&cpumask->usage); return cpumask; } +/** + * bpf_cpumask_kptr_get() - Attempt to acquire a reference to a BPF cpumask + * stored in a map. + * @cpumaskp: A pointer to a BPF cpumask map value. + * + * Attempts to acquire a reference to a BPF cpumask stored in a map value. The + * cpumask returned by this function must either be embedded in a map as a + * kptr, or freed with bpf_cpumask_release(). This function may return NULL if + * no BPF cpumask was found in the specified map value. + */ struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumaskp) { struct bpf_cpumask *cpumask; @@ -77,6 +106,14 @@ struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumaskp) return cpumask; } +/** + * bpf_cpumask_release() - Release a previously acquired BPF cpumask. + * @cpumask: The cpumask being released. + * + * Releases a previously acquired reference to a BPF cpumask. When the final + * reference of the BPF cpumask has been released, it is subsequently freed in + * an RCU callback in the BPF memory allocator. + */ void bpf_cpumask_release(struct bpf_cpumask *cpumask) { if (!cpumask) @@ -89,16 +126,36 @@ void bpf_cpumask_release(struct bpf_cpumask *cpumask) } } +/** + * bpf_cpumask_first() - Get the index of the first nonzero bit in the cpumask. + * @cpumask: The cpumask being queried. + * + * Find the index of the first nonzero bit of the cpumask. A struct bpf_cpumask + * pointer may be safely passed to this function. + */ u32 bpf_cpumask_first(const struct cpumask *cpumask) { return cpumask_first(cpumask); } +/** + * bpf_cpumask_first_zero() - Get the index of the first unset bit in the + * cpumask. + * @cpumask: The cpumask being queried. + * + * Find the index of the first unset bit of the cpumask. A struct bpf_cpumask + * pointer may be safely passed to this function. 
+ */
 u32 bpf_cpumask_first_zero(const struct cpumask *cpumask)
 {
 	return cpumask_first_zero(cpumask);
 }
 
+/**
+ * bpf_cpumask_set_cpu() - Set a bit for a CPU in a BPF cpumask.
+ * @cpu: The CPU to be set in the cpumask.
+ * @cpumask: The BPF cpumask in which a bit is being set.
+ */
 void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 {
 	if (!cpu_valid(cpu))
@@ -107,6 +164,11 @@ void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 	cpumask_set_cpu(cpu, (struct cpumask *)cpumask);
 }
 
+/**
+ * bpf_cpumask_clear_cpu() - Clear a bit for a CPU in a BPF cpumask.
+ * @cpu: The CPU to be cleared from the cpumask.
+ * @cpumask: The BPF cpumask in which a bit is being cleared.
+ */
 void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 {
 	if (!cpu_valid(cpu))
@@ -115,6 +177,15 @@ void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 	cpumask_clear_cpu(cpu, (struct cpumask *)cpumask);
 }
 
+/**
+ * bpf_cpumask_test_cpu() - Test whether a CPU is set in a cpumask.
+ * @cpu: The CPU being queried for.
+ * @cpumask: The cpumask being queried for containing a CPU.
+ *
+ * Return:
+ * * true - @cpu is set in the cpumask
+ * * false - @cpu was not set in the cpumask, or @cpu is an invalid cpu.
+ */
 bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask)
 {
 	if (!cpu_valid(cpu))
@@ -123,6 +194,15 @@ bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask)
 	return cpumask_test_cpu(cpu, (struct cpumask *)cpumask);
 }
 
+/**
+ * bpf_cpumask_test_and_set_cpu() - Atomically test and set a CPU in a BPF cpumask.
+ * @cpu: The CPU being set and queried for.
+ * @cpumask: The BPF cpumask being set and queried for containing a CPU.
+ *
+ * Return:
+ * * true - @cpu is set in the cpumask
+ * * false - @cpu was not set in the cpumask, or @cpu is invalid.
+ */
 bool bpf_cpumask_test_and_set_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 {
 	if (!cpu_valid(cpu))
@@ -131,6 +211,16 @@ bool bpf_cpumask_test_and_set_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 	return cpumask_test_and_set_cpu(cpu, (struct cpumask *)cpumask);
 }
 
+/**
+ * bpf_cpumask_test_and_clear_cpu() - Atomically test and clear a CPU in a BPF
+ *                                    cpumask.
+ * @cpu: The CPU being cleared and queried for.
+ * @cpumask: The BPF cpumask being cleared and queried for containing a CPU.
+ *
+ * Return:
+ * * true - @cpu is set in the cpumask
+ * * false - @cpu was not set in the cpumask, or @cpu is invalid.
+ */
 bool bpf_cpumask_test_and_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 {
 	if (!cpu_valid(cpu))
@@ -139,16 +229,36 @@ bool bpf_cpumask_test_and_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask)
 	return cpumask_test_and_clear_cpu(cpu, (struct cpumask *)cpumask);
 }
 
+/**
+ * bpf_cpumask_setall() - Set all of the bits in a BPF cpumask.
+ * @cpumask: The BPF cpumask having all of its bits set.
+ */
 void bpf_cpumask_setall(struct bpf_cpumask *cpumask)
 {
 	cpumask_setall((struct cpumask *)cpumask);
 }
 
+/**
+ * bpf_cpumask_clear() - Clear all of the bits in a BPF cpumask.
+ * @cpumask: The BPF cpumask being cleared.
+ */
 void bpf_cpumask_clear(struct bpf_cpumask *cpumask)
 {
 	cpumask_clear((struct cpumask *)cpumask);
 }
 
+/**
+ * bpf_cpumask_and() - AND two cpumasks and store the result.
+ * @dst: The BPF cpumask where the result is being stored.
+ * @src1: The first input.
+ * @src2: The second input.
+ *
+ * Return:
+ * * true - @dst has at least one bit set following the operation
+ * * false - @dst is empty following the operation
+ *
+ * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ */
 bool bpf_cpumask_and(struct bpf_cpumask *dst,
 		     const struct cpumask *src1,
 		     const struct cpumask *src2)
@@ -156,6 +266,14 @@ bool bpf_cpumask_and(struct bpf_cpumask *dst,
 	return cpumask_and((struct cpumask *)dst, src1, src2);
 }
 
+/**
+ * bpf_cpumask_or() - OR two cpumasks and store the result.
+ * @dst: The BPF cpumask where the result is being stored.
+ * @src1: The first input.
+ * @src2: The second input.
+ *
+ * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ */
 void bpf_cpumask_or(struct bpf_cpumask *dst,
 		    const struct cpumask *src1,
 		    const struct cpumask *src2)
@@ -163,6 +281,14 @@ void bpf_cpumask_or(struct bpf_cpumask *dst,
 	cpumask_or((struct cpumask *)dst, src1, src2);
 }
 
+/**
+ * bpf_cpumask_xor() - XOR two cpumasks and store the result.
+ * @dst: The BPF cpumask where the result is being stored.
+ * @src1: The first input.
+ * @src2: The second input.
+ *
+ * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ */
 void bpf_cpumask_xor(struct bpf_cpumask *dst,
 		     const struct cpumask *src1,
 		     const struct cpumask *src2)
@@ -170,41 +296,123 @@ void bpf_cpumask_xor(struct bpf_cpumask *dst,
 	cpumask_xor((struct cpumask *)dst, src1, src2);
 }
 
+/**
+ * bpf_cpumask_equal() - Check two cpumasks for equality.
+ * @src1: The first input.
+ * @src2: The second input.
+ *
+ * Return:
+ * * true - @src1 and @src2 have the same bits set.
+ * * false - @src1 and @src2 differ in at least one bit.
+ *
+ * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ */
 bool bpf_cpumask_equal(const struct cpumask *src1, const struct cpumask *src2)
 {
 	return cpumask_equal(src1, src2);
 }
 
+/**
+ * bpf_cpumask_intersects() - Check two cpumasks for overlap.
+ * @src1: The first input.
+ * @src2: The second input.
+ *
+ * Return:
+ * * true - @src1 and @src2 have at least one of the same bits set.
+ * * false - @src1 and @src2 don't have any of the same bits set.
+ *
+ * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ */
 bool bpf_cpumask_intersects(const struct cpumask *src1, const struct cpumask *src2)
 {
 	return cpumask_intersects(src1, src2);
 }
 
+/**
+ * bpf_cpumask_subset() - Check if a cpumask is a subset of another.
+ * @src1: The first cpumask being checked as a subset.
+ * @src2: The second cpumask being checked as a superset.
+ *
+ * Return:
+ * * true - All of the bits of @src1 are set in @src2.
+ * * false - At least one bit in @src1 is not set in @src2.
+ *
+ * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ */
 bool bpf_cpumask_subset(const struct cpumask *src1, const struct cpumask *src2)
 {
 	return cpumask_subset(src1, src2);
 }
 
+/**
+ * bpf_cpumask_empty() - Check if a cpumask is empty.
+ * @cpumask: The cpumask being checked.
+ *
+ * Return:
+ * * true - None of the bits in @cpumask are set.
+ * * false - At least one bit in @cpumask is set.
+ *
+ * A struct bpf_cpumask pointer may be safely passed to @cpumask.
+ */
 bool bpf_cpumask_empty(const struct cpumask *cpumask)
 {
 	return cpumask_empty(cpumask);
 }
 
+/**
+ * bpf_cpumask_full() - Check if a cpumask has all bits set.
+ * @cpumask: The cpumask being checked.
+ *
+ * Return:
+ * * true - All of the bits in @cpumask are set.
+ * * false - At least one bit in @cpumask is cleared.
+ *
+ * A struct bpf_cpumask pointer may be safely passed to @cpumask.
+ */
 bool bpf_cpumask_full(const struct cpumask *cpumask)
 {
 	return cpumask_full(cpumask);
 }
 
+/**
+ * bpf_cpumask_copy() - Copy the contents of a cpumask into a BPF cpumask.
+ * @dst: The BPF cpumask being copied into.
+ * @src: The cpumask being copied.
+ *
+ * A struct bpf_cpumask pointer may be safely passed to @src.
+ */
 void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src)
 {
 	cpumask_copy((struct cpumask *)dst, src);
 }
 
+/**
+ * bpf_cpumask_any() - Return a random set CPU from a cpumask.
+ * @cpumask: The cpumask being queried.
+ *
+ * Return:
+ * * A random set bit within [0, num_cpus) if at least one bit is set.
+ * * >= num_cpus if no bit is set.
+ *
+ * A struct bpf_cpumask pointer may be safely passed to @cpumask.
+ */
 u32 bpf_cpumask_any(const struct cpumask *cpumask)
 {
 	return cpumask_any(cpumask);
 }
 
+/**
+ * bpf_cpumask_any_and() - Return a random set CPU from the AND of two
+ * cpumasks.
+ * @src1: The first cpumask.
+ * @src2: The second cpumask.
+ *
+ * Return:
+ * * A random set bit within [0, num_cpus) if at least one bit is set.
+ * * >= num_cpus if no bit is set.
+ *
+ * struct bpf_cpumask pointers may be safely passed to @src1 and @src2.
+ */
 u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2)
 {
 	return cpumask_any_and(src1, src2);
 }
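As a usage illustration only (not part of the patch above), here is a minimal sketch of how a BPF program might call a couple of these kfuncs. The bpf_cpumask_set_cpu() and bpf_cpumask_test_cpu() signatures match the kernel-doc above; the bpf_cpumask_create() and bpf_cpumask_release() acquire/release kfuncs are assumed from elsewhere in this series, and the tp_btf/task_newtask attach point and vmlinux.h/libbpf scaffolding are chosen purely for illustration.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch: exercise a few of the cpumask kfuncs documented above. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc declarations, assumed to match kernel/bpf/cpumask.c from this series. */
struct bpf_cpumask *bpf_cpumask_create(void) __ksym;
void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym;
void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym;
bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(cpumask_usage_example, struct task_struct *task, u64 clone_flags)
{
	struct bpf_cpumask *mask;

	/* Acquire a referenced struct bpf_cpumask object. */
	mask = bpf_cpumask_create();
	if (!mask)
		return 0;

	/* Set a bit, then read it back through the read-only cpumask view.
	 * Passing a struct bpf_cpumask * where a const struct cpumask * is
	 * expected relies on the type-aliasing behavior documented later in
	 * this series.
	 */
	bpf_cpumask_set_cpu(0, mask);
	if (bpf_cpumask_test_cpu(0, (const struct cpumask *)mask))
		bpf_printk("CPU 0 is set in the new cpumask");

	/* The object is refcounted; it must be released before returning. */
	bpf_cpumask_release(mask);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";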
From patchwork Fri Jan 20 19:25:22 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46600
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@meta.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH bpf-next v2 8/9] bpf/docs: Document how nested trusted fields may be defined
Date: Fri, 20 Jan 2023 13:25:22 -0600
Message-Id: <20230120192523.3650503-9-void@manifault.com>
In-Reply-To: <20230120192523.3650503-1-void@manifault.com>
References: <20230120192523.3650503-1-void@manifault.com>
A prior change defined a new BTF_TYPE_SAFE_NESTED macro in the verifier
which allows developers to specify when a pointee field in a struct type
should inherit its parent pointer's trusted status. This patch updates the
kfuncs documentation to describe this macro and how it can be used.

Signed-off-by: David Vernet
---
 Documentation/bpf/kfuncs.rst | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst
index a74f9e74087b..560f4ede3a9f 100644
--- a/Documentation/bpf/kfuncs.rst
+++ b/Documentation/bpf/kfuncs.rst
@@ -167,7 +167,8 @@ KF_ACQUIRE and KF_RET_NULL flags.
 The KF_TRUSTED_ARGS flag is used for kfuncs taking pointer arguments. It
 indicates that all pointer arguments are valid, and that all pointers to
 BTF objects have been passed in their unmodified form (that is, at a zero
-offset, and without having been obtained from walking another pointer).
+offset, and without having been obtained from walking another pointer, with one
+exception described below).
 
 There are two types of pointers to kernel objects which are considered "valid":
 
@@ -180,6 +181,25 @@ KF_TRUSTED_ARGS kfuncs, and may have a non-zero offset.
 
 The definition of "valid" pointers is subject to change at any time, and has
 absolutely no ABI stability guarantees.
 
+As mentioned above, a nested pointer obtained from walking a trusted pointer is
+no longer trusted, with one exception. If a struct type has a field that is
+guaranteed to be valid as long as its parent pointer is trusted, the
+``BTF_TYPE_SAFE_NESTED`` macro can be used to express that to the verifier as
+follows:
+
+.. code-block:: c
+
+	BTF_TYPE_SAFE_NESTED(struct task_struct) {
+		const cpumask_t *cpus_ptr;
+	};
+
+In other words, you must:
+
+1. Wrap the trusted pointer type in the ``BTF_TYPE_SAFE_NESTED`` macro.
+
+2. Specify the type and name of the trusted nested field. This field must match
+   the field in the original type definition exactly.
+
 2.4.6 KF_SLEEPABLE flag
 -----------------------
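For illustration (not part of the patch), the following minimal BPF program sketch shows what the annotation above enables. It assumes the tp_btf/task_newtask attach point, whose task argument is a trusted pointer, and the bpf_cpumask_test_cpu() kfunc added earlier in this series; with BTF_TYPE_SAFE_NESTED(struct task_struct) covering cpus_ptr, the pointer read from task->cpus_ptr inherits the parent's trusted status.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch: passing a nested trusted field to a kfunc. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc declaration, assumed to match kernel/bpf/cpumask.c from this series. */
bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) __ksym;

SEC("tp_btf/task_newtask")
int BPF_PROG(nested_trust_example, struct task_struct *task, u64 clone_flags)
{
	/* task is a trusted pointer here. Because task_struct::cpus_ptr is
	 * listed in BTF_TYPE_SAFE_NESTED(struct task_struct), the pointer
	 * loaded below is also trusted and may be passed to kfuncs that
	 * require trusted pointer arguments. Without the annotation, the
	 * load would yield an untrusted pointer and the call would be
	 * rejected.
	 */
	if (bpf_cpumask_test_cpu(0, task->cpus_ptr))
		bpf_printk("new task may run on CPU 0");

	return 0;
}

char LICENSE[] SEC("license") = "GPL";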
From patchwork Fri Jan 20 19:25:23 2023
X-Patchwork-Submitter: David Vernet
X-Patchwork-Id: 46601
From: David Vernet
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
    song@kernel.org, yhs@meta.com, john.fastabend@gmail.com, kpsingh@kernel.org,
    sdf@google.com, haoluo@google.com, jolsa@kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com, tj@kernel.org, memxor@gmail.com
Subject: [PATCH bpf-next v2 9/9] bpf/docs: Document the nocast aliasing behavior of ___init
Date: Fri, 20 Jan 2023 13:25:23 -0600
Message-Id: <20230120192523.3650503-10-void@manifault.com>
In-Reply-To: <20230120192523.3650503-1-void@manifault.com>
References: <20230120192523.3650503-1-void@manifault.com>
When comparing BTF IDs for pointers being passed to kfunc arguments, the
verifier will allow pointer types that are equivalent according to the C
standard. For example, for:

	struct bpf_cpumask {
		cpumask_t cpumask;
		refcount_t usage;
	};

The verifier will allow a struct bpf_cpumask * to be passed to a kfunc that
takes a const struct cpumask * (cpumask_t is a typedef of struct cpumask).

The exception to this rule is if a type is suffixed with ___init, such as:

	struct nf_conn___init {
		struct nf_conn ct;
	};

The verifier will _not_ allow a struct nf_conn___init * to be passed to a
kfunc that expects a struct nf_conn *. This patch documents this behavior in
the kfuncs documentation page.

Signed-off-by: David Vernet
---
 Documentation/bpf/kfuncs.rst | 43 ++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst
index 560f4ede3a9f..7bdce4955a1b 100644
--- a/Documentation/bpf/kfuncs.rst
+++ b/Documentation/bpf/kfuncs.rst
@@ -247,6 +247,49 @@ type. An example is shown below::
 
 	}
 	late_initcall(init_subsystem);
 
+2.6 Specifying no-cast aliases with ___init
+-------------------------------------------
+
+The verifier will always enforce that the BTF type of a pointer passed to a
+kfunc by a BPF program matches the type of pointer specified in the kfunc
+definition. The verifier does, however, allow types that are equivalent
+according to the C standard to be passed to the same kfunc arg, even if their
+BTF_IDs differ.
+
+For example, for the following type definition:
+
+.. code-block:: c
+
+	struct bpf_cpumask {
+		cpumask_t cpumask;
+		refcount_t usage;
+	};
+
+The verifier would allow a ``struct bpf_cpumask *`` to be passed to a kfunc
+taking a ``cpumask_t *`` (``cpumask_t`` is a typedef of ``struct cpumask``). For
+instance, both ``struct cpumask *`` and ``struct bpf_cpumask *`` can be passed
+to bpf_cpumask_test_cpu().
+
+In some cases, this type-aliasing behavior is not desired. ``struct
+nf_conn___init`` is one such example:
+
+.. code-block:: c
+
+	struct nf_conn___init {
+		struct nf_conn ct;
+	};
+
+The C standard would consider these types to be equivalent, but it would not
+always be safe to pass either type to a trusted kfunc. ``struct
+nf_conn___init`` represents an allocated ``struct nf_conn`` object that has
+*not yet been initialized*, so it would therefore be unsafe to pass a ``struct
+nf_conn___init *`` to a kfunc that's expecting a fully initialized ``struct
+nf_conn *`` (e.g. ``bpf_ct_change_timeout()``).
+
+In order to accommodate such requirements, the verifier will enforce strict
+PTR_TO_BTF_ID type matching if two types have the exact same name, with one
+being suffixed with ``___init``.
+
 3. Core kfuncs
 ==============
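To make the convention documented above concrete, here is a purely hypothetical sketch of how a subsystem could apply the ___init naming pattern. The type and function names below are invented for illustration and do not exist in the kernel; they only mirror the nf_conn/nf_conn___init relationship described in the patch.

/* Hypothetical example of the ___init no-cast aliasing convention. */
struct foo {
	int cookie;
};

/* Same layout as struct foo, but a distinct BTF type. Because its name is
 * the base name suffixed with ___init, the verifier will not treat a
 * struct foo___init * as interchangeable with a struct foo *, even though
 * the C standard considers the two types compatible.
 */
struct foo___init {
	struct foo f;
};

/* Acquire kfunc: returns an allocated-but-not-yet-initialized object. */
struct foo___init *bpf_foo_alloc(void);

/* Consumes the uninitialized object and returns the initialized type that
 * other kfuncs (say, a hypothetical bpf_foo_set_cookie(struct foo *f, int c))
 * are willing to accept.
 */
struct foo *bpf_foo_init(struct foo___init *f);

The design point is the same as for nf_conn: keeping the "uninitialized" state in a separately named BTF type lets the verifier, rather than run-time checks, guarantee that only fully initialized objects ever reach kfuncs that assume initialization.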