From patchwork Sat Jul 1 13:42:49 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 115039
From: Haibo Xu
Subject: [PATCH v5 01/13] KVM: arm64: selftests: Replace str_with_index with strdup_printf
Date: Sat, 1 Jul 2023 21:42:49 +0800
Message-Id: <6bf03faa291e39fc220cb503cb3ce2556a9c54ca.1688010022.git.haibo1.xu@intel.com>

From: Andrew Jones

The original author of aarch64/get-reg-list.c (me) was wearing tunnel
vision goggles when implementing str_with_index(). There's no reason to
have such a special-case string function. Instead, take inspiration from
glib and implement strdup_printf. The implementation builds on
vasprintf(), which requires _GNU_SOURCE, but we require _GNU_SOURCE in
most files already.

Signed-off-by: Andrew Jones
Signed-off-by: Haibo Xu
---
 .../selftests/kvm/aarch64/get-reg-list.c      | 23 ++++---------------
 .../testing/selftests/kvm/include/test_util.h |  2 ++
 tools/testing/selftests/kvm/lib/test_util.c   | 15 ++++++++++++
 3 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
index d4e1f4af29d6..c152523a5ed4 100644
--- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c
+++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
@@ -132,19 +132,6 @@ static bool find_reg(__u64 regs[], __u64 nr_regs, __u64 reg)
 	return false;
 }
 
-static const char *str_with_index(const char *template, __u64 index)
-{
-	char *str, *p;
-	int n;
-
-	str = strdup(template);
-	p = strstr(str, "##");
-	n = sprintf(p, "%lld", index);
-	strcat(p + n, strstr(template, "##") + 2);
-
-	return (const char *)str;
-}
-
 #define REG_MASK (KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_COPROC_MASK)
 
 #define CORE_REGS_XX_NR_WORDS	2
@@ -163,7 +150,7 @@ static const char *core_id_to_str(struct vcpu_config *c, __u64 id)
 	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
 		idx = (core_off - KVM_REG_ARM_CORE_REG(regs.regs[0])) / CORE_REGS_XX_NR_WORDS;
 		TEST_ASSERT(idx < 31, "%s: Unexpected regs.regs index: %lld", config_name(c), idx);
-		return str_with_index("KVM_REG_ARM_CORE_REG(regs.regs[##])", idx);
+		return strdup_printf("KVM_REG_ARM_CORE_REG(regs.regs[%lld])", idx);
 	case KVM_REG_ARM_CORE_REG(regs.sp):
 		return "KVM_REG_ARM_CORE_REG(regs.sp)";
 	case KVM_REG_ARM_CORE_REG(regs.pc):
@@ -178,12 +165,12 @@ static const char *core_id_to_str(struct vcpu_config *c, __u64 id)
 	     KVM_REG_ARM_CORE_REG(spsr[KVM_NR_SPSR - 1]):
 		idx = (core_off - KVM_REG_ARM_CORE_REG(spsr[0])) / CORE_SPSR_XX_NR_WORDS;
 		TEST_ASSERT(idx < KVM_NR_SPSR, "%s: Unexpected spsr index: %lld", config_name(c), idx);
-		return str_with_index("KVM_REG_ARM_CORE_REG(spsr[##])", idx);
+		return strdup_printf("KVM_REG_ARM_CORE_REG(spsr[%lld])", idx);
 	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
 		idx = (core_off - KVM_REG_ARM_CORE_REG(fp_regs.vregs[0])) / CORE_FPREGS_XX_NR_WORDS;
 		TEST_ASSERT(idx < 32, "%s: Unexpected fp_regs.vregs index: %lld", config_name(c), idx);
-		return str_with_index("KVM_REG_ARM_CORE_REG(fp_regs.vregs[##])", idx);
+		return strdup_printf("KVM_REG_ARM_CORE_REG(fp_regs.vregs[%lld])", idx);
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
 		return "KVM_REG_ARM_CORE_REG(fp_regs.fpsr)";
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
@@ -212,13 +199,13 @@ static const char *sve_id_to_str(struct vcpu_config *c, __u64 id)
 		n = (id >> 5) & (KVM_ARM64_SVE_NUM_ZREGS - 1);
 		TEST_ASSERT(id == KVM_REG_ARM64_SVE_ZREG(n, 0),
 			    "%s: Unexpected bits set in SVE ZREG id: 0x%llx", config_name(c), id);
-		return str_with_index("KVM_REG_ARM64_SVE_ZREG(##, 0)", n);
+		return strdup_printf("KVM_REG_ARM64_SVE_ZREG(%lld, 0)", n);
 	case KVM_REG_ARM64_SVE_PREG_BASE ...
 	     KVM_REG_ARM64_SVE_PREG_BASE + (1ULL << 5) * KVM_ARM64_SVE_NUM_PREGS - 1:
 		n = (id >> 5) & (KVM_ARM64_SVE_NUM_PREGS - 1);
 		TEST_ASSERT(id == KVM_REG_ARM64_SVE_PREG(n, 0),
 			    "%s: Unexpected bits set in SVE PREG id: 0x%llx", config_name(c), id);
-		return str_with_index("KVM_REG_ARM64_SVE_PREG(##, 0)", n);
+		return strdup_printf("KVM_REG_ARM64_SVE_PREG(%lld, 0)", n);
 	case KVM_REG_ARM64_SVE_FFR_BASE:
 		TEST_ASSERT(id == KVM_REG_ARM64_SVE_FFR(0),
 			    "%s: Unexpected bits set in SVE FFR id: 0x%llx", config_name(c), id);
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index a6e9f215ce70..7e0182f837b5 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -186,4 +186,6 @@ static inline uint32_t atoi_non_negative(const char *name, const char *num_str)
 	return num;
 }
 
+char *strdup_printf(const char *fmt, ...) __attribute__((format(printf, 1, 2), nonnull(1)));
+
 #endif /* SELFTEST_KVM_TEST_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index b772193f6c18..3e36019eeb4a 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -5,6 +5,9 @@
  * Copyright (C) 2020, Google LLC.
  */
 
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdarg.h>
 #include
 #include
 #include
@@ -377,3 +380,15 @@ int atoi_paranoid(const char *num_str)
 
 	return num;
 }
+
+char *strdup_printf(const char *fmt, ...)
+{
+	va_list ap;
+	char *str;
+
+	va_start(ap, fmt);
+	vasprintf(&str, fmt, ap);
+	va_end(ap);
+
+	return str;
+}
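
A minimal sketch related to the new helper (illustration only, not part of the patch): vasprintf() returns -1 and leaves the output pointer undefined on allocation failure, so a caller that wants to fail loudly could wrap the same pattern with the selftests' TEST_ASSERT() from test_util.h. The strdup_printf_checked() name below is hypothetical.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdarg.h>

#include "test_util.h"

/* Same pattern as strdup_printf() above, but asserting on failure. */
char *strdup_printf_checked(const char *fmt, ...)
{
	va_list ap;
	char *str;
	int rv;

	va_start(ap, fmt);
	rv = vasprintf(&str, fmt, ap);
	va_end(ap);

	/* vasprintf() returns the number of bytes printed, or -1 on error. */
	TEST_ASSERT(rv >= 0, "vasprintf() failed");

	return str;
}

Whether such a check belongs in strdup_printf() itself is a style choice; the patch keeps the helper minimal.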

From patchwork Sat Jul 1 13:42:50 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 115032
From: Haibo Xu
Subject: [PATCH v5 02/13] KVM: arm64: selftests: Drop SVE cap check in print_reg
Date: Sat, 1 Jul 2023 21:42:50 +0800
Message-Id: <66676d5e56b6e23e380e6182ab89b33d7c4bb18f.1688010022.git.haibo1.xu@intel.com>

From: Andrew Jones

The check doesn't prove much anyway, as the reg lists could be messed up
too. Just drop the check to simplify making print_reg more independent.

Signed-off-by: Andrew Jones
Signed-off-by: Haibo Xu
---
 .../testing/selftests/kvm/aarch64/get-reg-list.c | 15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
index c152523a5ed4..915272c342f9 100644
--- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c
+++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
@@ -100,16 +100,6 @@ static const char *config_name(struct vcpu_config *c)
 	return c->name;
 }
 
-static bool has_cap(struct vcpu_config *c, long capability)
-{
-	struct reg_sublist *s;
-
-	for_each_sublist(c, s)
-		if (s->capability == capability)
-			return true;
-	return false;
-}
-
 static bool filter_reg(__u64 reg)
 {
 	/*
@@ -287,10 +277,7 @@ static void print_reg(struct vcpu_config *c, __u64 id)
 		printf("\tKVM_REG_ARM_FW_FEAT_BMAP_REG(%lld),\n", id & 0xffff);
 		break;
 	case KVM_REG_ARM64_SVE:
-		if (has_cap(c, KVM_CAP_ARM_SVE))
-			printf("\t%s,\n", sve_id_to_str(c, id));
-		else
-			TEST_FAIL("%s: KVM_REG_ARM64_SVE is an unexpected coproc type in reg id: 0x%llx", config_name(c), id);
+		printf("\t%s,\n", sve_id_to_str(c, id));
 		break;
 	default:
 		TEST_FAIL("%s: Unexpected coproc type: 0x%llx in reg id: 0x%llx",

From patchwork Sat Jul 1 13:42:51 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 115033
From: Haibo Xu
Subject: [PATCH v5 03/13] KVM: arm64: selftests: Remove print_reg's dependency on vcpu_config
Date: Sat, 1 Jul 2023 21:42:51 +0800

From: Andrew Jones

print_reg() and its helpers only use the vcpu_config pointer for
config_name(). So just pass the config name in instead, which is used as
a prefix in asserts. print_reg() can now be compiled independently of
config_name().

Signed-off-by: Andrew Jones
Signed-off-by: Haibo Xu
---
 .../selftests/kvm/aarch64/get-reg-list.c      | 52 +++++++++----------
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
index 915272c342f9..424285d39965 100644
--- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c
+++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
@@ -128,7 +128,7 @@ static bool find_reg(__u64 regs[], __u64 nr_regs, __u64 reg)
 #define CORE_SPSR_XX_NR_WORDS	2
 #define CORE_FPREGS_XX_NR_WORDS	4
 
-static const char *core_id_to_str(struct vcpu_config *c, __u64 id)
+static const char *core_id_to_str(const char *prefix, __u64 id)
 {
 	__u64 core_off = id & ~REG_MASK, idx;
 
@@ -139,7 +139,7 @@ static const char *core_id_to_str(struct vcpu_config *c, __u64 id)
 	case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
 	     KVM_REG_ARM_CORE_REG(regs.regs[30]):
 		idx = (core_off - KVM_REG_ARM_CORE_REG(regs.regs[0])) / CORE_REGS_XX_NR_WORDS;
-		TEST_ASSERT(idx < 31, "%s: Unexpected regs.regs index: %lld", config_name(c), idx);
+		TEST_ASSERT(idx < 31, "%s: Unexpected regs.regs index: %lld", prefix, idx);
 		return strdup_printf("KVM_REG_ARM_CORE_REG(regs.regs[%lld])", idx);
 	case KVM_REG_ARM_CORE_REG(regs.sp):
 		return "KVM_REG_ARM_CORE_REG(regs.sp)";
@@ -154,12 +154,12 @@ static const char *core_id_to_str(struct vcpu_config *c, __u64 id)
 	case KVM_REG_ARM_CORE_REG(spsr[0]) ...
 	     KVM_REG_ARM_CORE_REG(spsr[KVM_NR_SPSR - 1]):
 		idx = (core_off - KVM_REG_ARM_CORE_REG(spsr[0])) / CORE_SPSR_XX_NR_WORDS;
-		TEST_ASSERT(idx < KVM_NR_SPSR, "%s: Unexpected spsr index: %lld", config_name(c), idx);
+		TEST_ASSERT(idx < KVM_NR_SPSR, "%s: Unexpected spsr index: %lld", prefix, idx);
 		return strdup_printf("KVM_REG_ARM_CORE_REG(spsr[%lld])", idx);
 	case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
	     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
 		idx = (core_off - KVM_REG_ARM_CORE_REG(fp_regs.vregs[0])) / CORE_FPREGS_XX_NR_WORDS;
-		TEST_ASSERT(idx < 32, "%s: Unexpected fp_regs.vregs index: %lld", config_name(c), idx);
+		TEST_ASSERT(idx < 32, "%s: Unexpected fp_regs.vregs index: %lld", prefix, idx);
 		return strdup_printf("KVM_REG_ARM_CORE_REG(fp_regs.vregs[%lld])", idx);
 	case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
 		return "KVM_REG_ARM_CORE_REG(fp_regs.fpsr)";
@@ -167,11 +167,11 @@ static const char *core_id_to_str(struct vcpu_config *c, __u64 id)
 		return "KVM_REG_ARM_CORE_REG(fp_regs.fpcr)";
 	}
 
-	TEST_FAIL("%s: Unknown core reg id: 0x%llx", config_name(c), id);
+	TEST_FAIL("%s: Unknown core reg id: 0x%llx", prefix, id);
 	return NULL;
 }
 
-static const char *sve_id_to_str(struct vcpu_config *c, __u64 id)
+static const char *sve_id_to_str(const char *prefix, __u64 id)
 {
 	__u64 sve_off, n, i;
 
@@ -181,37 +181,37 @@ static const char *sve_id_to_str(struct vcpu_config *c, __u64 id)
 	sve_off = id & ~(REG_MASK | ((1ULL << 5) - 1));
 	i = id & (KVM_ARM64_SVE_MAX_SLICES - 1);
 
-	TEST_ASSERT(i == 0, "%s: Currently we don't expect slice > 0, reg id 0x%llx", config_name(c), id);
+	TEST_ASSERT(i == 0, "%s: Currently we don't expect slice > 0, reg id 0x%llx", prefix, id);
 
 	switch (sve_off) {
 	case KVM_REG_ARM64_SVE_ZREG_BASE ...
 	     KVM_REG_ARM64_SVE_ZREG_BASE + (1ULL << 5) * KVM_ARM64_SVE_NUM_ZREGS - 1:
 		n = (id >> 5) & (KVM_ARM64_SVE_NUM_ZREGS - 1);
 		TEST_ASSERT(id == KVM_REG_ARM64_SVE_ZREG(n, 0),
-			    "%s: Unexpected bits set in SVE ZREG id: 0x%llx", config_name(c), id);
+			    "%s: Unexpected bits set in SVE ZREG id: 0x%llx", prefix, id);
 		return strdup_printf("KVM_REG_ARM64_SVE_ZREG(%lld, 0)", n);
 	case KVM_REG_ARM64_SVE_PREG_BASE ...
 	     KVM_REG_ARM64_SVE_PREG_BASE + (1ULL << 5) * KVM_ARM64_SVE_NUM_PREGS - 1:
 		n = (id >> 5) & (KVM_ARM64_SVE_NUM_PREGS - 1);
 		TEST_ASSERT(id == KVM_REG_ARM64_SVE_PREG(n, 0),
-			    "%s: Unexpected bits set in SVE PREG id: 0x%llx", config_name(c), id);
+			    "%s: Unexpected bits set in SVE PREG id: 0x%llx", prefix, id);
 		return strdup_printf("KVM_REG_ARM64_SVE_PREG(%lld, 0)", n);
 	case KVM_REG_ARM64_SVE_FFR_BASE:
 		TEST_ASSERT(id == KVM_REG_ARM64_SVE_FFR(0),
-			    "%s: Unexpected bits set in SVE FFR id: 0x%llx", config_name(c), id);
+			    "%s: Unexpected bits set in SVE FFR id: 0x%llx", prefix, id);
 		return "KVM_REG_ARM64_SVE_FFR(0)";
 	}
 
 	return NULL;
 }
 
-static void print_reg(struct vcpu_config *c, __u64 id)
+static void print_reg(const char *prefix, __u64 id)
 {
 	unsigned op0, op1, crn, crm, op2;
 	const char *reg_size = NULL;
 
 	TEST_ASSERT((id & KVM_REG_ARCH_MASK) == KVM_REG_ARM64,
-		    "%s: KVM_REG_ARM64 missing in reg id: 0x%llx", config_name(c), id);
+		    "%s: KVM_REG_ARM64 missing in reg id: 0x%llx", prefix, id);
 
 	switch (id & KVM_REG_SIZE_MASK) {
 	case KVM_REG_SIZE_U8:
@@ -243,16 +243,16 @@ static void print_reg(struct vcpu_config *c, __u64 id)
 		break;
 	default:
 		TEST_FAIL("%s: Unexpected reg size: 0x%llx in reg id: 0x%llx",
-			  config_name(c), (id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT, id);
+			  prefix, (id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT, id);
 	}
 
 	switch (id & KVM_REG_ARM_COPROC_MASK) {
 	case KVM_REG_ARM_CORE:
-		printf("\tKVM_REG_ARM64 | %s | KVM_REG_ARM_CORE | %s,\n", reg_size, core_id_to_str(c, id));
+		printf("\tKVM_REG_ARM64 | %s | KVM_REG_ARM_CORE | %s,\n", reg_size, core_id_to_str(prefix, id));
 		break;
 	case KVM_REG_ARM_DEMUX:
 		TEST_ASSERT(!(id & ~(REG_MASK | KVM_REG_ARM_DEMUX_ID_MASK | KVM_REG_ARM_DEMUX_VAL_MASK)),
-			    "%s: Unexpected bits set in DEMUX reg id: 0x%llx", config_name(c), id);
+			    "%s: Unexpected bits set in DEMUX reg id: 0x%llx", prefix, id);
 		printf("\tKVM_REG_ARM64 | %s | KVM_REG_ARM_DEMUX | KVM_REG_ARM_DEMUX_ID_CCSIDR | %lld,\n",
 		       reg_size, id & KVM_REG_ARM_DEMUX_VAL_MASK);
 		break;
@@ -263,25 +263,25 @@ static void print_reg(struct vcpu_config *c, __u64 id)
 		crm = (id & KVM_REG_ARM64_SYSREG_CRM_MASK) >> KVM_REG_ARM64_SYSREG_CRM_SHIFT;
 		op2 = (id & KVM_REG_ARM64_SYSREG_OP2_MASK) >> KVM_REG_ARM64_SYSREG_OP2_SHIFT;
 		TEST_ASSERT(id == ARM64_SYS_REG(op0, op1, crn, crm, op2),
-			    "%s: Unexpected bits set in SYSREG reg id: 0x%llx", config_name(c), id);
+			    "%s: Unexpected bits set in SYSREG reg id: 0x%llx", prefix, id);
 		printf("\tARM64_SYS_REG(%d, %d, %d, %d, %d),\n", op0, op1, crn, crm, op2);
 		break;
 	case KVM_REG_ARM_FW:
 		TEST_ASSERT(id == KVM_REG_ARM_FW_REG(id & 0xffff),
-			    "%s: Unexpected bits set in FW reg id: 0x%llx", config_name(c), id);
+			    "%s: Unexpected bits set in FW reg id: 0x%llx", prefix, id);
 		printf("\tKVM_REG_ARM_FW_REG(%lld),\n", id & 0xffff);
 		break;
 	case KVM_REG_ARM_FW_FEAT_BMAP:
 		TEST_ASSERT(id == KVM_REG_ARM_FW_FEAT_BMAP_REG(id & 0xffff),
-			    "%s: Unexpected bits set in the bitmap feature FW reg id: 0x%llx", config_name(c), id);
+			    "%s: Unexpected bits set in the bitmap feature FW reg id: 0x%llx", prefix, id);
 		printf("\tKVM_REG_ARM_FW_FEAT_BMAP_REG(%lld),\n", id & 0xffff);
 		break;
 	case KVM_REG_ARM64_SVE:
-		printf("\t%s,\n", sve_id_to_str(c, id));
+		printf("\t%s,\n", sve_id_to_str(prefix, id));
 		break;
 	default:
 		TEST_FAIL("%s: Unexpected coproc type: 0x%llx in reg id: 0x%llx",
-			  config_name(c), (id & KVM_REG_ARM_COPROC_MASK) >> KVM_REG_ARM_COPROC_SHIFT, id);
+			  prefix, (id & KVM_REG_ARM_COPROC_MASK) >> KVM_REG_ARM_COPROC_SHIFT, id);
 	}
 }
 
@@ -410,7 +410,7 @@ static void run_test(struct vcpu_config *c)
 			__u64 id = reg_list->reg[i];
 			if ((print_list && !filter_reg(id)) || (print_filtered && filter_reg(id)))
-				print_reg(c, id);
+				print_reg(config_name(c), id);
 		}
 		putchar('\n');
 		return;
@@ -438,7 +438,7 @@ static void run_test(struct vcpu_config *c)
 		ret = __vcpu_get_reg(vcpu, reg_list->reg[i], &addr);
 		if (ret) {
 			printf("%s: Failed to get ", config_name(c));
-			print_reg(c, reg.id);
+			print_reg(config_name(c), reg.id);
 			putchar('\n');
 			++failed_get;
 		}
@@ -450,7 +450,7 @@ static void run_test(struct vcpu_config *c)
 			ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
 			if (ret != -1 || errno != EPERM) {
 				printf("%s: Failed to reject (ret=%d, errno=%d) ", config_name(c), ret, errno);
-				print_reg(c, reg.id);
+				print_reg(config_name(c), reg.id);
 				putchar('\n');
 				++failed_reject;
 			}
@@ -462,7 +462,7 @@ static void run_test(struct vcpu_config *c)
 		ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
 		if (ret) {
 			printf("%s: Failed to set ", config_name(c));
-			print_reg(c, reg.id);
+			print_reg(config_name(c), reg.id);
 			putchar('\n');
 			++failed_set;
 		}
@@ -500,7 +500,7 @@ static void run_test(struct vcpu_config *c)
 		       "Consider adding them to the blessed reg "
 		       "list with the following lines:\n\n", config_name(c), new_regs);
 		for_each_new_reg(i)
-			print_reg(c, reg_list->reg[i]);
+			print_reg(config_name(c), reg_list->reg[i]);
 		putchar('\n');
 	}
 
@@ -508,7 +508,7 @@ static void run_test(struct vcpu_config *c)
 		printf("\n%s: There are %d missing registers.\n"
 		       "The following lines are missing registers:\n\n", config_name(c), missing_regs);
 		for_each_missing_reg(i)
-			print_reg(c, blessed_reg[i]);
+			print_reg(config_name(c), blessed_reg[i]);
 		putchar('\n');
 	}

From patchwork Sat Jul 1 13:42:52 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 115034
From: Haibo Xu
Subject: [PATCH v5 04/13] KVM: arm64: selftests: Rename vcpu_config and add to kvm_util.h
Date: Sat, 1 Jul 2023 21:42:52 +0800

From: Andrew Jones

Rename vcpu_config to vcpu_reg_list to be more specific and add it to
kvm_util.h. While it may not get used outside get-reg-list tests,
exporting it doesn't hurt, as long as it has a unique enough name. This
is a step in the direction of sharing most of the get-reg-list test code
between architectures.

Signed-off-by: Andrew Jones
Signed-off-by: Haibo Xu
---
 .../selftests/kvm/aarch64/get-reg-list.c      | 60 +++++++------------
 .../selftests/kvm/include/kvm_util_base.h     | 16 +++++
 2 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
index 424285d39965..aae2056379f7 100644
--- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c
+++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
@@ -37,23 +37,7 @@
 static struct kvm_reg_list *reg_list;
 static __u64 *blessed_reg, blessed_n;
 
-struct reg_sublist {
-	const char *name;
-	long capability;
-	int feature;
-	bool finalize;
-	__u64 *regs;
-	__u64 regs_n;
-	__u64 *rejects_set;
-	__u64 rejects_set_n;
-};
-
-struct vcpu_config {
-	char *name;
-	struct reg_sublist sublists[];
-};
-
-static struct vcpu_config *vcpu_configs[];
+static struct vcpu_reg_list *vcpu_configs[];
 static int vcpu_configs_n;
 
 #define for_each_sublist(c, s)						\
@@ -74,9 +58,9 @@ static int vcpu_configs_n;
 	for_each_reg_filtered(i)					\
 		if (!find_reg(blessed_reg, blessed_n, reg_list->reg[i]))
 
-static const char *config_name(struct vcpu_config *c)
+static const char *config_name(struct vcpu_reg_list *c)
 {
-	struct reg_sublist *s;
+	struct vcpu_reg_sublist *s;
 	int len = 0;
 
 	if (c->name)
@@ -342,18 +326,18 @@
 	reg_list = tmp;
 }
 
-static void prepare_vcpu_init(struct vcpu_config *c, struct kvm_vcpu_init *init)
+static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *init)
 {
-	struct reg_sublist *s;
+	struct vcpu_reg_sublist *s;
 
 	for_each_sublist(c, s)
 		if (s->capability)
 			init->features[s->feature / 32] |= 1 << (s->feature % 32);
 }
 
-static void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_config *c)
+static void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c)
 {
-	struct reg_sublist *s;
+	struct vcpu_reg_sublist *s;
 	int feature;
 
 	for_each_sublist(c, s) {
@@ -364,9 +348,9 @@ static void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_config *c)
 	}
 }
 
-static void check_supported(struct vcpu_config *c)
+static void check_supported(struct vcpu_reg_list *c)
 {
-	struct reg_sublist *s;
+	struct vcpu_reg_sublist *s;
 
 	for_each_sublist(c, s) {
 		if (!s->capability)
@@ -382,14 +366,14 @@ static bool print_list;
 static bool print_filtered;
 static bool fixup_core_regs;
 
-static void run_test(struct vcpu_config *c)
+static void run_test(struct vcpu_reg_list *c)
 {
 	struct kvm_vcpu_init init = { .target = -1, };
 	int new_regs = 0, missing_regs = 0, i, n;
 	int failed_get = 0, failed_set = 0, failed_reject = 0;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
-	struct reg_sublist *s;
+	struct vcpu_reg_sublist *s;
 
 	check_supported(c);
 
@@ -526,7 +510,7 @@ static void run_test(struct vcpu_reg_list *c)
 
 static void help(void)
 {
-	struct vcpu_config *c;
+	struct vcpu_reg_list *c;
 	int i;
 
 	printf(
@@ -550,9 +534,9 @@ static void help(void)
 	);
 }
 
-static struct vcpu_config *parse_config(const char *config)
+static struct vcpu_reg_list *parse_config(const char *config)
 {
-	struct vcpu_config *c;
+	struct vcpu_reg_list *c;
 	int i;
 
 	if (config[8] != '=')
@@ -572,7 +556,7 @@
 
 int main(int ac, char **av)
 {
-	struct vcpu_config *c, *sel = NULL;
+	struct vcpu_reg_list *c, *sel = NULL;
 	int i, ret = 0;
 	pid_t pid;
 
@@ -1053,14 +1037,14 @@ static __u64 pauth_generic_regs[] = {
 	.regs_n = ARRAY_SIZE(pauth_generic_regs),	\
 }
 
-static struct vcpu_config vregs_config = {
+static struct vcpu_reg_list vregs_config = {
 	.sublists = {
 	BASE_SUBLIST,
 	VREGS_SUBLIST,
 	{0},
 	},
 };
-static struct vcpu_config vregs_pmu_config = {
+static struct vcpu_reg_list vregs_pmu_config = {
 	.sublists = {
 	BASE_SUBLIST,
 	VREGS_SUBLIST,
@@ -1068,14 +1052,14 @@ static struct vcpu_config vregs_pmu_config = {
 	{0},
 	},
 };
-static struct vcpu_config sve_config = {
+static struct vcpu_reg_list sve_config = {
 	.sublists = {
 	BASE_SUBLIST,
 	SVE_SUBLIST,
 	{0},
 	},
 };
-static struct vcpu_config sve_pmu_config = {
+static struct vcpu_reg_list sve_pmu_config = {
 	.sublists = {
 	BASE_SUBLIST,
 	SVE_SUBLIST,
@@ -1083,7 +1067,7 @@ static struct vcpu_config sve_pmu_config = {
 	{0},
 	},
 };
-static struct vcpu_config pauth_config = {
+static struct vcpu_reg_list pauth_config = {
 	.sublists = {
 	BASE_SUBLIST,
 	VREGS_SUBLIST,
@@ -1091,7 +1075,7 @@ static struct vcpu_config pauth_config = {
 	{0},
 	},
 };
-static struct vcpu_config pauth_pmu_config = {
+static struct vcpu_reg_list pauth_pmu_config = {
 	.sublists = {
 	BASE_SUBLIST,
 	VREGS_SUBLIST,
@@ -1101,7 +1085,7 @@
 	},
 };
 
-static struct vcpu_config *vcpu_configs[] = {
+static struct vcpu_reg_list *vcpu_configs[] = {
 	&vregs_config,
 	&vregs_pmu_config,
 	&sve_config,
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index a089c356f354..ac4aaa21deee 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -15,6 +15,7 @@
 #include
 #include
 #include "linux/rbtree.h"
+#include
 
 #include
 
@@ -124,6 +125,21 @@ struct kvm_vm {
 	uint32_t memslots[NR_MEM_REGIONS];
 };
 
+struct vcpu_reg_sublist {
+	const char *name;
+	long capability;
+	int feature;
+	bool finalize;
+	__u64 *regs;
+	__u64 regs_n;
+	__u64 *rejects_set;
+	__u64 rejects_set_n;
+};
+
+struct vcpu_reg_list {
+	char *name;
+	struct vcpu_reg_sublist sublists[];
+};
 
 #define kvm_for_each_vcpu(vm, i, vcpu) \
 	for ((i) = 0; (i) <= (vm)->last_vcpu_id; (i)++) \
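
A minimal sketch of what exporting these structures allows (assumed usage, not part of this series): with vcpu_reg_sublist and vcpu_reg_list visible from kvm_util_base.h, an architecture's get-reg-list code could describe a vCPU configuration roughly as below. The example_* identifiers are hypothetical, and the register arrays are placeholders standing in for real blessed lists.

#include "kvm_util.h"

static __u64 example_base_regs[] = {
	/* placeholder for a real blessed list */
	KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(regs.sp),
};

static __u64 example_sve_regs[] = {
	/* placeholder for a real blessed list */
	KVM_REG_ARM64_SVE_VLS,
};

static struct vcpu_reg_list example_sve_config = {
	.sublists = {
	{
		/* always-present registers, no capability required */
		.name = "base",
		.regs = example_base_regs,
		.regs_n = ARRAY_SIZE(example_base_regs),
	},
	{
		/* SVE registers, gated on the capability and finalized feature */
		.name = "sve",
		.capability = KVM_CAP_ARM_SVE,
		.feature = KVM_ARM_VCPU_SVE,
		.finalize = true,
		.regs = example_sve_regs,
		.regs_n = ARRAY_SIZE(example_sve_regs),
	},
	{0},
	},
};

config_name() then builds the printable configuration name from the sublist names, as the aarch64 test already does.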

From patchwork Sat Jul 1 13:42:53 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 115035
From: Haibo Xu
Subject: [PATCH v5 05/13] KVM: arm64: selftests: Delete core_reg_fixup
Date: Sat, 1 Jul 2023 21:42:53 +0800

From: Andrew Jones

core_reg_fixup() complicates sharing the get-reg-list test with other
architectures. Rather than work at keeping it, with plenty of
#ifdeffery, just delete it, as it's unlikely anyone will test a kernel
based on anything older than v5.2 with the get-reg-list test, which is a
test meant to check for regressions in new kernels. (And, an older
version of the test can still be used for older kernels if necessary.)

Signed-off-by: Andrew Jones
Signed-off-by: Haibo Xu
---
 .../selftests/kvm/aarch64/get-reg-list.c      | 83 +++----------------
 1 file changed, 10 insertions(+), 73 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
index aae2056379f7..c8b44389d2ee 100644
--- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c
+++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c
@@ -17,12 +17,10 @@
  * by running the test with the --list command line argument.
  *
  * Note, the blessed list should be created from the oldest possible
- * kernel. We can't go older than v4.15, though, because that's the first
- * release to expose the ID system registers in KVM_GET_REG_LIST, see
- * commit 93390c0a1b20 ("arm64: KVM: Hide unsupported AArch64 CPU features
- * from guests"). Also, one must use the --core-reg-fixup command line
- * option when running on an older kernel that doesn't include df205b5c6328
- * ("KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST")
+ * kernel. We can't go older than v5.2, though, because that's the first
+ * release which includes df205b5c6328 ("KVM: arm64: Filter out invalid
+ * core register IDs in KVM_GET_REG_LIST"). Without that commit the core
+ * registers won't match expectations.
  */
 #include
 #include
@@ -269,63 +267,6 @@ static void print_reg(const char *prefix, __u64 id)
 	}
 }
 
-/*
- * Older kernels listed each 32-bit word of CORE registers separately.
- * For 64 and 128-bit registers we need to ignore the extra words. We
- * also need to fixup the sizes, because the older kernels stated all
- * registers were 64-bit, even when they weren't.
- */
-static void core_reg_fixup(void)
-{
-	struct kvm_reg_list *tmp;
-	__u64 id, core_off;
-	int i;
-
-	tmp = calloc(1, sizeof(*tmp) + reg_list->n * sizeof(__u64));
-
-	for (i = 0; i < reg_list->n; ++i) {
-		id = reg_list->reg[i];
-
-		if ((id & KVM_REG_ARM_COPROC_MASK) != KVM_REG_ARM_CORE) {
-			tmp->reg[tmp->n++] = id;
-			continue;
-		}
-
-		core_off = id & ~REG_MASK;
-
-		switch (core_off) {
-		case 0x52: case 0xd2: case 0xd6:
-			/*
-			 * These offsets are pointing at padding.
-			 * We need to ignore them too.
-			 */
-			continue;
-		case KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]) ...
-		     KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]):
-			if (core_off & 3)
-				continue;
-			id &= ~KVM_REG_SIZE_MASK;
-			id |= KVM_REG_SIZE_U128;
-			tmp->reg[tmp->n++] = id;
-			continue;
-		case KVM_REG_ARM_CORE_REG(fp_regs.fpsr):
-		case KVM_REG_ARM_CORE_REG(fp_regs.fpcr):
-			id &= ~KVM_REG_SIZE_MASK;
-			id |= KVM_REG_SIZE_U32;
-			tmp->reg[tmp->n++] = id;
-			continue;
-		default:
-			if (core_off & 1)
-				continue;
-			tmp->reg[tmp->n++] = id;
-			break;
-		}
-	}
-
-	free(reg_list);
-	reg_list = tmp;
-}
-
 static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *init)
 {
 	struct vcpu_reg_sublist *s;
@@ -364,7 +305,6 @@ static void check_supported(struct vcpu_reg_list *c)
 
 static bool print_list;
 static bool print_filtered;
-static bool fixup_core_regs;
 
 static void run_test(struct vcpu_reg_list *c)
 {
@@ -385,9 +325,6 @@ static void run_test(struct vcpu_reg_list *c)
 
 	reg_list = vcpu_get_reg_list(vcpu);
 
-	if (fixup_core_regs)
-		core_reg_fixup();
-
 	if (print_list || print_filtered) {
 		putchar('\n');
 		for_each_reg(i) {
@@ -515,7 +452,7 @@ static void help(void)
 
 	printf(
 	"\n"
-	"usage: get-reg-list [--config=] [--list] [--list-filtered] [--core-reg-fixup]\n\n"
+	"usage: get-reg-list [--config=] [--list] [--list-filtered]\n\n"
 	" --config=        Used to select a specific vcpu configuration for the test/listing\n"
 	"                  '' may be\n");
 
@@ -529,7 +466,6 @@ static void help(void)
 	"\n"
 	" --list           Print the register list rather than test it (requires --config)\n"
 	" --list-filtered  Print registers that would normally be filtered out (requires --config)\n"
-	" --core-reg-fixup Needed when running on old kernels with broken core reg listings\n"
 	"\n"
 	);
 }
@@ -561,9 +497,7 @@ int main(int ac, char **av)
 	pid_t pid;
 
 	for (i = 1; i < ac; ++i) {
-		if (strcmp(av[i], "--core-reg-fixup") == 0)
-			fixup_core_regs = true;
-		else if (strncmp(av[i], "--config", 8) == 0)
+		if (strncmp(av[i], "--config", 8) == 0)
 			sel = parse_config(av[i]);
 		else if (strcmp(av[i], "--list") == 0)
 			print_list = true;
@@ -606,8 +540,11 @@ int main(int ac, char **av)
 	}
 
 /*
- * The current blessed list was primed with the output of kernel version
+ * The original blessed list was primed with the output of kernel version
  * v4.15 with --core-reg-fixup and then later updated with new registers.
+ * (The --core-reg-fixup option and its fixup function have been removed
+ * from the test, as it's unlikely this test will be used on a kernel
+ * older than v5.2.)
  *
  * The blessed list is up to date with kernel version v6.4 (or so we hope)
  */

From patchwork Sat Jul 1 13:42:54 2023
X-Patchwork-Submitter: Haibo Xu
X-Patchwork-Id: 115036
[2620:137:e000::1:20]) by mx.google.com with ESMTP id q11-20020a635c0b000000b00553ebb05d13si14913893pgb.65.2023.07.01.06.39.44; Sat, 01 Jul 2023 06:39:58 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=mOx6kLWu; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230282AbjGANiu (ORCPT + 99 others); Sat, 1 Jul 2023 09:38:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49892 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230292AbjGANir (ORCPT ); Sat, 1 Jul 2023 09:38:47 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98A284213; Sat, 1 Jul 2023 06:38:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1688218710; x=1719754710; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=SkgRiEJV0A4sONxaKTrnMR8QmVnfMrm0hUNtPETEcYk=; b=mOx6kLWusibi7ph2oHBWzDcDrJ6haZ2ACfopZd8USYKEdqB4Su7Fdgno dJ69/TjiynAlIdwqYzyhm7PrClCuu1MrOHf1Nlt3P7Q7tuYWhca6MHcK8 3WVl7XZGQjcgJDFsckh9W/p9X+/TrAia17MprgudILr2RXz9Il5IXLrY7 PvxutTp+jY0L40HstK/xngLRdCW7s3dDhrvZGA0lyIMbLhB96qFg/U40x y/L9v1mIWfHejA/A/dN7ccgFoxEC1N0AW0QxkJUwSlapjGUy1xZ+4O04v NkFZfJwOl4hwNYJOlXfvqka8t2LCRxzDp88/qUin1W/jdsHTVno+U9KO+ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="342926174" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="342926174" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:38:29 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="747693938" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="747693938" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:38:21 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , Like Xu , Vipin Sharma , David Matlack , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 06/13] KVM: arm64: selftests: Split get-reg-list test code Date: Sat, 1 Jul 2023 21:42:54 +0800 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 
(2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770225715410842620?= X-GMAIL-MSGID: =?utf-8?q?1770225715410842620?= From: Andrew Jones Split the arch-neutral test code out of aarch64/get-reg-list.c into get-reg-list.c. To do this we invent a new make variable $(SPLIT_TESTS) which expects common parts to be in the KVM selftests root and the counterparts to have the same name, but be in $(ARCH_DIR). There's still some work to be done to de-aarch64 the common get-reg-list.c, but we leave that to the next patch to avoid modifying too much code while moving it. Signed-off-by: Andrew Jones Signed-off-by: Haibo Xu --- tools/testing/selftests/kvm/Makefile | 10 +- .../selftests/kvm/aarch64/get-reg-list.c | 361 +---------------- tools/testing/selftests/kvm/get-reg-list.c | 371 ++++++++++++++++++ 3 files changed, 385 insertions(+), 357 deletions(-) create mode 100644 tools/testing/selftests/kvm/get-reg-list.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 4761b768b773..d90cad19c9ee 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -139,7 +139,6 @@ TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test TEST_GEN_PROGS_aarch64 += aarch64/aarch32_id_regs TEST_GEN_PROGS_aarch64 += aarch64/arch_timer TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions -TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list TEST_GEN_PROGS_aarch64 += aarch64/hypercalls TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test TEST_GEN_PROGS_aarch64 += aarch64/psci_test @@ -151,6 +150,7 @@ TEST_GEN_PROGS_aarch64 += access_tracking_perf_test TEST_GEN_PROGS_aarch64 += demand_paging_test TEST_GEN_PROGS_aarch64 += dirty_log_test TEST_GEN_PROGS_aarch64 += dirty_log_perf_test +TEST_GEN_PROGS_aarch64 += get-reg-list TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus TEST_GEN_PROGS_aarch64 += kvm_page_table_test TEST_GEN_PROGS_aarch64 += memslot_modification_stress_test @@ -179,6 +179,8 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test TEST_GEN_PROGS_riscv += set_memory_region_test TEST_GEN_PROGS_riscv += kvm_binary_stats_test +SPLIT_TESTS += get-reg-list + TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR)) TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR)) TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(ARCH_DIR)) @@ -224,8 +226,12 @@ LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C)) LIBKVM_S_OBJ := $(patsubst %.S, $(OUTPUT)/%.o, $(LIBKVM_S)) LIBKVM_STRING_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_STRING)) LIBKVM_OBJS = $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ) $(LIBKVM_STRING_OBJ) +SPLIT_TESTS_TARGETS := $(patsubst %, $(OUTPUT)/%, $(SPLIT_TESTS)) +SPLIT_TESTS_OBJS := $(patsubst %, $(ARCH_DIR)/%.o, $(SPLIT_TESTS)) + +EXTRA_CLEAN += $(LIBKVM_OBJS) $(SPLIT_TESTS_OBJS) cscope.* -EXTRA_CLEAN += $(LIBKVM_OBJS) cscope.* +$(SPLIT_TESTS_TARGETS): $(SPLIT_TESTS_OBJS) x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)))) $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c index c8b44389d2ee..aaf035c969ec 100644 --- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c +++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c @@ -4,85 +4,18 @@ * * Copyright (C) 2020, Red Hat, Inc. 
* - * When attempting to migrate from a host with an older kernel to a host - * with a newer kernel we allow the newer kernel on the destination to - * list new registers with get-reg-list. We assume they'll be unused, at - * least until the guest reboots, and so they're relatively harmless. - * However, if the destination host with the newer kernel is missing - * registers which the source host with the older kernel has, then that's - * a regression in get-reg-list. This test checks for that regression by - * checking the current list against a blessed list. We should never have - * missing registers, but if new ones appear then they can probably be - * added to the blessed list. A completely new blessed list can be created - * by running the test with the --list command line argument. - * - * Note, the blessed list should be created from the oldest possible - * kernel. We can't go older than v5.2, though, because that's the first + * While the blessed list should be created from the oldest possible + * kernel, we can't go older than v5.2, though, because that's the first * release which includes df205b5c6328 ("KVM: arm64: Filter out invalid * core register IDs in KVM_GET_REG_LIST"). Without that commit the core * registers won't match expectations. */ #include -#include -#include -#include -#include -#include #include "kvm_util.h" #include "test_util.h" #include "processor.h" -static struct kvm_reg_list *reg_list; -static __u64 *blessed_reg, blessed_n; - -static struct vcpu_reg_list *vcpu_configs[]; -static int vcpu_configs_n; - -#define for_each_sublist(c, s) \ - for ((s) = &(c)->sublists[0]; (s)->regs; ++(s)) - -#define for_each_reg(i) \ - for ((i) = 0; (i) < reg_list->n; ++(i)) - -#define for_each_reg_filtered(i) \ - for_each_reg(i) \ - if (!filter_reg(reg_list->reg[i])) - -#define for_each_missing_reg(i) \ - for ((i) = 0; (i) < blessed_n; ++(i)) \ - if (!find_reg(reg_list->reg, reg_list->n, blessed_reg[i])) - -#define for_each_new_reg(i) \ - for_each_reg_filtered(i) \ - if (!find_reg(blessed_reg, blessed_n, reg_list->reg[i])) - -static const char *config_name(struct vcpu_reg_list *c) -{ - struct vcpu_reg_sublist *s; - int len = 0; - - if (c->name) - return c->name; - - for_each_sublist(c, s) - len += strlen(s->name) + 1; - - c->name = malloc(len); - - len = 0; - for_each_sublist(c, s) { - if (!strcmp(s->name, "base")) - continue; - strcat(c->name + len, s->name); - len += strlen(s->name) + 1; - c->name[len - 1] = '+'; - } - c->name[len - 1] = '\0'; - - return c->name; -} - -static bool filter_reg(__u64 reg) +bool filter_reg(__u64 reg) { /* * DEMUX register presence depends on the host's CLIDR_EL1. 
@@ -94,16 +27,6 @@ static bool filter_reg(__u64 reg) return false; } -static bool find_reg(__u64 regs[], __u64 nr_regs, __u64 reg) -{ - int i; - - for (i = 0; i < nr_regs; ++i) - if (reg == regs[i]) - return true; - return false; -} - #define REG_MASK (KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_COPROC_MASK) #define CORE_REGS_XX_NR_WORDS 2 @@ -187,7 +110,7 @@ static const char *sve_id_to_str(const char *prefix, __u64 id) return NULL; } -static void print_reg(const char *prefix, __u64 id) +void print_reg(const char *prefix, __u64 id) { unsigned op0, op1, crn, crm, op2; const char *reg_size = NULL; @@ -267,278 +190,6 @@ static void print_reg(const char *prefix, __u64 id) } } -static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *init) -{ - struct vcpu_reg_sublist *s; - - for_each_sublist(c, s) - if (s->capability) - init->features[s->feature / 32] |= 1 << (s->feature % 32); -} - -static void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) -{ - struct vcpu_reg_sublist *s; - int feature; - - for_each_sublist(c, s) { - if (s->finalize) { - feature = s->feature; - vcpu_ioctl(vcpu, KVM_ARM_VCPU_FINALIZE, &feature); - } - } -} - -static void check_supported(struct vcpu_reg_list *c) -{ - struct vcpu_reg_sublist *s; - - for_each_sublist(c, s) { - if (!s->capability) - continue; - - __TEST_REQUIRE(kvm_has_cap(s->capability), - "%s: %s not available, skipping tests\n", - config_name(c), s->name); - } -} - -static bool print_list; -static bool print_filtered; - -static void run_test(struct vcpu_reg_list *c) -{ - struct kvm_vcpu_init init = { .target = -1, }; - int new_regs = 0, missing_regs = 0, i, n; - int failed_get = 0, failed_set = 0, failed_reject = 0; - struct kvm_vcpu *vcpu; - struct kvm_vm *vm; - struct vcpu_reg_sublist *s; - - check_supported(c); - - vm = vm_create_barebones(); - prepare_vcpu_init(c, &init); - vcpu = __vm_vcpu_add(vm, 0); - aarch64_vcpu_setup(vcpu, &init); - finalize_vcpu(vcpu, c); - - reg_list = vcpu_get_reg_list(vcpu); - - if (print_list || print_filtered) { - putchar('\n'); - for_each_reg(i) { - __u64 id = reg_list->reg[i]; - if ((print_list && !filter_reg(id)) || - (print_filtered && filter_reg(id))) - print_reg(config_name(c), id); - } - putchar('\n'); - return; - } - - /* - * We only test that we can get the register and then write back the - * same value. Some registers may allow other values to be written - * back, but others only allow some bits to be changed, and at least - * for ID registers set will fail if the value does not exactly match - * what was returned by get. If registers that allow other values to - * be written need to have the other values tested, then we should - * create a new set of tests for those in a new independent test - * executable. 
- */ - for_each_reg(i) { - uint8_t addr[2048 / 8]; - struct kvm_one_reg reg = { - .id = reg_list->reg[i], - .addr = (__u64)&addr, - }; - bool reject_reg = false; - int ret; - - ret = __vcpu_get_reg(vcpu, reg_list->reg[i], &addr); - if (ret) { - printf("%s: Failed to get ", config_name(c)); - print_reg(config_name(c), reg.id); - putchar('\n'); - ++failed_get; - } - - /* rejects_set registers are rejected after KVM_ARM_VCPU_FINALIZE */ - for_each_sublist(c, s) { - if (s->rejects_set && find_reg(s->rejects_set, s->rejects_set_n, reg.id)) { - reject_reg = true; - ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®); - if (ret != -1 || errno != EPERM) { - printf("%s: Failed to reject (ret=%d, errno=%d) ", config_name(c), ret, errno); - print_reg(config_name(c), reg.id); - putchar('\n'); - ++failed_reject; - } - break; - } - } - - if (!reject_reg) { - ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®); - if (ret) { - printf("%s: Failed to set ", config_name(c)); - print_reg(config_name(c), reg.id); - putchar('\n'); - ++failed_set; - } - } - } - - for_each_sublist(c, s) - blessed_n += s->regs_n; - blessed_reg = calloc(blessed_n, sizeof(__u64)); - - n = 0; - for_each_sublist(c, s) { - for (i = 0; i < s->regs_n; ++i) - blessed_reg[n++] = s->regs[i]; - } - - for_each_new_reg(i) - ++new_regs; - - for_each_missing_reg(i) - ++missing_regs; - - if (new_regs || missing_regs) { - n = 0; - for_each_reg_filtered(i) - ++n; - - printf("%s: Number blessed registers: %5lld\n", config_name(c), blessed_n); - printf("%s: Number registers: %5lld (includes %lld filtered registers)\n", - config_name(c), reg_list->n, reg_list->n - n); - } - - if (new_regs) { - printf("\n%s: There are %d new registers.\n" - "Consider adding them to the blessed reg " - "list with the following lines:\n\n", config_name(c), new_regs); - for_each_new_reg(i) - print_reg(config_name(c), reg_list->reg[i]); - putchar('\n'); - } - - if (missing_regs) { - printf("\n%s: There are %d missing registers.\n" - "The following lines are missing registers:\n\n", config_name(c), missing_regs); - for_each_missing_reg(i) - print_reg(config_name(c), blessed_reg[i]); - putchar('\n'); - } - - TEST_ASSERT(!missing_regs && !failed_get && !failed_set && !failed_reject, - "%s: There are %d missing registers; " - "%d registers failed get; %d registers failed set; %d registers failed reject", - config_name(c), missing_regs, failed_get, failed_set, failed_reject); - - pr_info("%s: PASS\n", config_name(c)); - blessed_n = 0; - free(blessed_reg); - free(reg_list); - kvm_vm_free(vm); -} - -static void help(void) -{ - struct vcpu_reg_list *c; - int i; - - printf( - "\n" - "usage: get-reg-list [--config=] [--list] [--list-filtered]\n\n" - " --config= Used to select a specific vcpu configuration for the test/listing\n" - " '' may be\n"); - - for (i = 0; i < vcpu_configs_n; ++i) { - c = vcpu_configs[i]; - printf( - " '%s'\n", config_name(c)); - } - - printf( - "\n" - " --list Print the register list rather than test it (requires --config)\n" - " --list-filtered Print registers that would normally be filtered out (requires --config)\n" - "\n" - ); -} - -static struct vcpu_reg_list *parse_config(const char *config) -{ - struct vcpu_reg_list *c; - int i; - - if (config[8] != '=') - help(), exit(1); - - for (i = 0; i < vcpu_configs_n; ++i) { - c = vcpu_configs[i]; - if (strcmp(config_name(c), &config[9]) == 0) - break; - } - - if (i == vcpu_configs_n) - help(), exit(1); - - return c; -} - -int main(int ac, char **av) -{ - struct vcpu_reg_list *c, *sel = NULL; - int i, ret = 0; - pid_t pid; - 
- for (i = 1; i < ac; ++i) { - if (strncmp(av[i], "--config", 8) == 0) - sel = parse_config(av[i]); - else if (strcmp(av[i], "--list") == 0) - print_list = true; - else if (strcmp(av[i], "--list-filtered") == 0) - print_filtered = true; - else if (strcmp(av[i], "--help") == 0 || strcmp(av[1], "-h") == 0) - help(), exit(0); - else - help(), exit(1); - } - - if (print_list || print_filtered) { - /* - * We only want to print the register list of a single config. - */ - if (!sel) - help(), exit(1); - } - - for (i = 0; i < vcpu_configs_n; ++i) { - c = vcpu_configs[i]; - if (sel && c != sel) - continue; - - pid = fork(); - - if (!pid) { - run_test(c); - exit(0); - } else { - int wstatus; - pid_t wpid = wait(&wstatus); - TEST_ASSERT(wpid == pid && WIFEXITED(wstatus), "wait: Unexpected return"); - if (WEXITSTATUS(wstatus) && WEXITSTATUS(wstatus) != KSFT_SKIP) - ret = KSFT_FAIL; - } - } - - return ret; -} - /* * The original blessed list was primed with the output of kernel version * v4.15 with --core-reg-fixup and then later updated with new registers. @@ -1022,7 +673,7 @@ static struct vcpu_reg_list pauth_pmu_config = { }, }; -static struct vcpu_reg_list *vcpu_configs[] = { +struct vcpu_reg_list *vcpu_configs[] = { &vregs_config, &vregs_pmu_config, &sve_config, @@ -1030,4 +681,4 @@ static struct vcpu_reg_list *vcpu_configs[] = { &pauth_config, &pauth_pmu_config, }; -static int vcpu_configs_n = ARRAY_SIZE(vcpu_configs); +int vcpu_configs_n = ARRAY_SIZE(vcpu_configs); diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c new file mode 100644 index 000000000000..69bb91087081 --- /dev/null +++ b/tools/testing/selftests/kvm/get-reg-list.c @@ -0,0 +1,371 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Check for KVM_GET_REG_LIST regressions. + * + * Copyright (C) 2020, Red Hat, Inc. + * + * When attempting to migrate from a host with an older kernel to a host + * with a newer kernel we allow the newer kernel on the destination to + * list new registers with get-reg-list. We assume they'll be unused, at + * least until the guest reboots, and so they're relatively harmless. + * However, if the destination host with the newer kernel is missing + * registers which the source host with the older kernel has, then that's + * a regression in get-reg-list. This test checks for that regression by + * checking the current list against a blessed list. We should never have + * missing registers, but if new ones appear then they can probably be + * added to the blessed list. A completely new blessed list can be created + * by running the test with the --list command line argument. + * + * The blessed list should be created from the oldest possible kernel. 
+ */ +#include +#include +#include +#include +#include +#include +#include "kvm_util.h" +#include "test_util.h" +#include "processor.h" + +static struct kvm_reg_list *reg_list; +static __u64 *blessed_reg, blessed_n; + +extern struct vcpu_reg_list *vcpu_configs[]; +extern int vcpu_configs_n; + +#define for_each_sublist(c, s) \ + for ((s) = &(c)->sublists[0]; (s)->regs; ++(s)) + +#define for_each_reg(i) \ + for ((i) = 0; (i) < reg_list->n; ++(i)) + +#define for_each_reg_filtered(i) \ + for_each_reg(i) \ + if (!filter_reg(reg_list->reg[i])) + +#define for_each_missing_reg(i) \ + for ((i) = 0; (i) < blessed_n; ++(i)) \ + if (!find_reg(reg_list->reg, reg_list->n, blessed_reg[i])) + +#define for_each_new_reg(i) \ + for_each_reg_filtered(i) \ + if (!find_reg(blessed_reg, blessed_n, reg_list->reg[i])) + +static const char *config_name(struct vcpu_reg_list *c) +{ + struct vcpu_reg_sublist *s; + int len = 0; + + if (c->name) + return c->name; + + for_each_sublist(c, s) + len += strlen(s->name) + 1; + + c->name = malloc(len); + + len = 0; + for_each_sublist(c, s) { + if (!strcmp(s->name, "base")) + continue; + strcat(c->name + len, s->name); + len += strlen(s->name) + 1; + c->name[len - 1] = '+'; + } + c->name[len - 1] = '\0'; + + return c->name; +} + +bool __weak filter_reg(__u64 reg) +{ + return false; +} + +static bool find_reg(__u64 regs[], __u64 nr_regs, __u64 reg) +{ + int i; + + for (i = 0; i < nr_regs; ++i) + if (reg == regs[i]) + return true; + return false; +} + +void __weak print_reg(const char *prefix, __u64 id) +{ + printf("\t0x%llx,\n", id); +} + +static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *init) +{ + struct vcpu_reg_sublist *s; + + for_each_sublist(c, s) + if (s->capability) + init->features[s->feature / 32] |= 1 << (s->feature % 32); +} + +static void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) +{ + struct vcpu_reg_sublist *s; + int feature; + + for_each_sublist(c, s) { + if (s->finalize) { + feature = s->feature; + vcpu_ioctl(vcpu, KVM_ARM_VCPU_FINALIZE, &feature); + } + } +} + +static void check_supported(struct vcpu_reg_list *c) +{ + struct vcpu_reg_sublist *s; + + for_each_sublist(c, s) { + if (!s->capability) + continue; + + __TEST_REQUIRE(kvm_has_cap(s->capability), + "%s: %s not available, skipping tests\n", + config_name(c), s->name); + } +} + +static bool print_list; +static bool print_filtered; + +static void run_test(struct vcpu_reg_list *c) +{ + struct kvm_vcpu_init init = { .target = -1, }; + int new_regs = 0, missing_regs = 0, i, n; + int failed_get = 0, failed_set = 0, failed_reject = 0; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + struct vcpu_reg_sublist *s; + + check_supported(c); + + vm = vm_create_barebones(); + prepare_vcpu_init(c, &init); + vcpu = __vm_vcpu_add(vm, 0); + aarch64_vcpu_setup(vcpu, &init); + finalize_vcpu(vcpu, c); + + reg_list = vcpu_get_reg_list(vcpu); + + if (print_list || print_filtered) { + putchar('\n'); + for_each_reg(i) { + __u64 id = reg_list->reg[i]; + if ((print_list && !filter_reg(id)) || + (print_filtered && filter_reg(id))) + print_reg(config_name(c), id); + } + putchar('\n'); + return; + } + + /* + * We only test that we can get the register and then write back the + * same value. Some registers may allow other values to be written + * back, but others only allow some bits to be changed, and at least + * for ID registers set will fail if the value does not exactly match + * what was returned by get. 
If registers that allow other values to + * be written need to have the other values tested, then we should + * create a new set of tests for those in a new independent test + * executable. + */ + for_each_reg(i) { + uint8_t addr[2048 / 8]; + struct kvm_one_reg reg = { + .id = reg_list->reg[i], + .addr = (__u64)&addr, + }; + bool reject_reg = false; + int ret; + + ret = __vcpu_get_reg(vcpu, reg_list->reg[i], &addr); + if (ret) { + printf("%s: Failed to get ", config_name(c)); + print_reg(config_name(c), reg.id); + putchar('\n'); + ++failed_get; + } + + /* rejects_set registers are rejected after KVM_ARM_VCPU_FINALIZE */ + for_each_sublist(c, s) { + if (s->rejects_set && find_reg(s->rejects_set, s->rejects_set_n, reg.id)) { + reject_reg = true; + ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®); + if (ret != -1 || errno != EPERM) { + printf("%s: Failed to reject (ret=%d, errno=%d) ", config_name(c), ret, errno); + print_reg(config_name(c), reg.id); + putchar('\n'); + ++failed_reject; + } + break; + } + } + + if (!reject_reg) { + ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®); + if (ret) { + printf("%s: Failed to set ", config_name(c)); + print_reg(config_name(c), reg.id); + putchar('\n'); + ++failed_set; + } + } + } + + for_each_sublist(c, s) + blessed_n += s->regs_n; + blessed_reg = calloc(blessed_n, sizeof(__u64)); + + n = 0; + for_each_sublist(c, s) { + for (i = 0; i < s->regs_n; ++i) + blessed_reg[n++] = s->regs[i]; + } + + for_each_new_reg(i) + ++new_regs; + + for_each_missing_reg(i) + ++missing_regs; + + if (new_regs || missing_regs) { + n = 0; + for_each_reg_filtered(i) + ++n; + + printf("%s: Number blessed registers: %5lld\n", config_name(c), blessed_n); + printf("%s: Number registers: %5lld (includes %lld filtered registers)\n", + config_name(c), reg_list->n, reg_list->n - n); + } + + if (new_regs) { + printf("\n%s: There are %d new registers.\n" + "Consider adding them to the blessed reg " + "list with the following lines:\n\n", config_name(c), new_regs); + for_each_new_reg(i) + print_reg(config_name(c), reg_list->reg[i]); + putchar('\n'); + } + + if (missing_regs) { + printf("\n%s: There are %d missing registers.\n" + "The following lines are missing registers:\n\n", config_name(c), missing_regs); + for_each_missing_reg(i) + print_reg(config_name(c), blessed_reg[i]); + putchar('\n'); + } + + TEST_ASSERT(!missing_regs && !failed_get && !failed_set && !failed_reject, + "%s: There are %d missing registers; " + "%d registers failed get; %d registers failed set; %d registers failed reject", + config_name(c), missing_regs, failed_get, failed_set, failed_reject); + + pr_info("%s: PASS\n", config_name(c)); + blessed_n = 0; + free(blessed_reg); + free(reg_list); + kvm_vm_free(vm); +} + +static void help(void) +{ + struct vcpu_reg_list *c; + int i; + + printf( + "\n" + "usage: get-reg-list [--config=] [--list] [--list-filtered]\n\n" + " --config= Used to select a specific vcpu configuration for the test/listing\n" + " '' may be\n"); + + for (i = 0; i < vcpu_configs_n; ++i) { + c = vcpu_configs[i]; + printf( + " '%s'\n", config_name(c)); + } + + printf( + "\n" + " --list Print the register list rather than test it (requires --config)\n" + " --list-filtered Print registers that would normally be filtered out (requires --config)\n" + "\n" + ); +} + +static struct vcpu_reg_list *parse_config(const char *config) +{ + struct vcpu_reg_list *c = NULL; + int i; + + if (config[8] != '=') + help(), exit(1); + + for (i = 0; i < vcpu_configs_n; ++i) { + c = vcpu_configs[i]; + if (strcmp(config_name(c), 
&config[9]) == 0) + break; + } + + if (i == vcpu_configs_n) + help(), exit(1); + + return c; +} + +int main(int ac, char **av) +{ + struct vcpu_reg_list *c, *sel = NULL; + int i, ret = 0; + pid_t pid; + + for (i = 1; i < ac; ++i) { + if (strncmp(av[i], "--config", 8) == 0) + sel = parse_config(av[i]); + else if (strcmp(av[i], "--list") == 0) + print_list = true; + else if (strcmp(av[i], "--list-filtered") == 0) + print_filtered = true; + else if (strcmp(av[i], "--help") == 0 || strcmp(av[1], "-h") == 0) + help(), exit(0); + else + help(), exit(1); + } + + if (print_list || print_filtered) { + /* + * We only want to print the register list of a single config. + */ + if (!sel) + help(), exit(1); + } + + for (i = 0; i < vcpu_configs_n; ++i) { + c = vcpu_configs[i]; + if (sel && c != sel) + continue; + + pid = fork(); + + if (!pid) { + run_test(c); + exit(0); + } else { + int wstatus; + pid_t wpid = wait(&wstatus); + TEST_ASSERT(wpid == pid && WIFEXITED(wstatus), "wait: Unexpected return"); + if (WEXITSTATUS(wstatus) && WEXITSTATUS(wstatus) != KSFT_SKIP) + ret = KSFT_FAIL; + } + } + + return ret; +} From patchwork Sat Jul 1 13:42:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 115037 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp11045584vqr; Sat, 1 Jul 2023 06:40:11 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5pVGiSrG67x9j0CAo+QeyqSp5yfQGHoArIN399bSWh3Vlr8GdLVYXikdwywb78dN5kcDyT X-Received: by 2002:a5e:8901:0:b0:783:6455:cc9c with SMTP id k1-20020a5e8901000000b007836455cc9cmr5997477ioj.3.1688218811482; Sat, 01 Jul 2023 06:40:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688218811; cv=none; d=google.com; s=arc-20160816; b=d+ge5vV2yp9VlAITfGHOCr7DC75OxJlqkOfH4lGdnxXsJ2d7Bbvns2erhO+8k7+JRu A4I2+gyxZig5E8m1jJILjnL4N6bl8yLBq+KTPWHvu8NQtCVpJYVNjb/KtP2jBMj0cZTy q4c2wfWVWkA32UNJRFZUOTm61mlJJnMMNfYqSvnLZhBJWQtMB1uL3Q28NV8zlKSbvIOo ix8EDeb7dpiRnFNPnJH7bDz/AWTOo2DeaAJq+plZ9QPxkC3H92HSEq3/sjJg9v6rbHBU sW2pwEFzDFhZsX7x17DrFGB4cGerMjPp5GCK2/fCiiM3NjChRA+OpY4psjP+YzvNpRNY 7r7g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:to:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:from :dkim-signature; bh=8iQCFUkdMVcDHUs3T92Z5mYKFBVT++z4+lSbnH0chs4=; fh=7EPU0yFgKq6DK54DdRhAOTNBSTrU9EP8Vi5A8HZNYZw=; b=0P8lQvfc3Ipxnth8OHM7HaoJjI/8KyOflCmBbT1PQGX/m4DDeGg+2scVHV6HL3OFON KiwesZe7oTctvwjTWNXrZQqWGN3fC7lWQfkVJT9iaD/MJhaIEb6dglOkZhkw5uDGzDlc JgWrpHtHMuuXihzjU9xYYulWd7rpGtVWhpGLF+EyabzjXGrhsQWUCUNPkxVgrE7wS/lB J0wjludVVKrYxCSMwzlVV7+dn6Bz8ogCR8Ct26EMenIv6m9zxfy2n1wjlwUdBDuDaS/j dGc2j7+6JqPtNg+DVN0Ib0erWJwIXQzgLkL0toyW7BIi3CWrkBtlu8CB4iNpot5BQnp1 4ZKw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=RCAR2Nkz; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id m24-20020a63ed58000000b0055b0dcca8b7si8592111pgk.749.2023.07.01.06.39.57; Sat, 01 Jul 2023 06:40:11 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=RCAR2Nkz; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230322AbjGANjK (ORCPT + 99 others); Sat, 1 Jul 2023 09:39:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229597AbjGANjI (ORCPT ); Sat, 1 Jul 2023 09:39:08 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9ECB244BF; Sat, 1 Jul 2023 06:38:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1688218724; x=1719754724; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EXZQIePHrnC+4oyrdo52RNlpn5u8QhJ9eHGJ6BBZ17s=; b=RCAR2Nkz6gecoqoUoox0iT0DdMGgjZlaRr8oTBJELp1RO03MPmW7mFCZ /qjdqWcmdzsrwlEV7U6+6osEOn7am0DNOg2EC2822r1aq7MXR08FQmsM5 /AM0dgkUaercf4D3Bw2nznC+WyjRe01JBpjCJNQ8DKpmUrtPKbQuzG7Wi F9oLIcR9NPR3JOT99k9usufQQv8juv7icMboxwiNpgdShUEWtFVH6P32+ goKmDvV2QTaR+V+DPAPcQZXLvJxCygqnNZ2sMQOoSMS+pefdO6uyw8Qyv hwTtwI8Gs4c9tbaLEMYc5GPCG2s9+R8W0+teRR80pqetVyJcDEKiZyiAs w==; X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="342926202" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="342926202" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:38:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="747694018" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="747694018" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:38:37 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , David Matlack , Vipin Sharma , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 07/13] KVM: arm64: selftests: Finish generalizing get-reg-list Date: Sat, 1 Jul 2023 21:42:55 +0800 Message-Id: <0d81728cf8b4fd931b495ac4c86d6a74e55a5230.1688010022.git.haibo1.xu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham 
autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770225728861658834?= X-GMAIL-MSGID: =?utf-8?q?1770225728861658834?= From: Andrew Jones Add some unfortunate #ifdeffery to ensure the common get-reg-list.c can be compiled and run with other architectures. The next architecture to support get-reg-list should now only need to provide $(ARCH_DIR)/get-reg-list.c where arch-specific print_reg() and vcpu_configs[] get defined. Signed-off-by: Andrew Jones Signed-off-by: Haibo Xu --- tools/testing/selftests/kvm/get-reg-list.c | 26 +++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c index 69bb91087081..f6ad7991a812 100644 --- a/tools/testing/selftests/kvm/get-reg-list.c +++ b/tools/testing/selftests/kvm/get-reg-list.c @@ -98,6 +98,7 @@ void __weak print_reg(const char *prefix, __u64 id) printf("\t0x%llx,\n", id); } +#ifdef __aarch64__ static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *init) { struct vcpu_reg_sublist *s; @@ -120,6 +121,25 @@ static void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) } } +static struct kvm_vcpu *vcpu_config_get_vcpu(struct vcpu_reg_list *c, struct kvm_vm *vm) +{ + struct kvm_vcpu_init init = { .target = -1, }; + struct kvm_vcpu *vcpu; + + prepare_vcpu_init(c, &init); + vcpu = __vm_vcpu_add(vm, 0); + aarch64_vcpu_setup(vcpu, &init); + finalize_vcpu(vcpu, c); + + return vcpu; +} +#else +static struct kvm_vcpu *vcpu_config_get_vcpu(struct vcpu_reg_list *c, struct kvm_vm *vm) +{ + return __vm_vcpu_add(vm, 0); +} +#endif + static void check_supported(struct vcpu_reg_list *c) { struct vcpu_reg_sublist *s; @@ -139,7 +159,6 @@ static bool print_filtered; static void run_test(struct vcpu_reg_list *c) { - struct kvm_vcpu_init init = { .target = -1, }; int new_regs = 0, missing_regs = 0, i, n; int failed_get = 0, failed_set = 0, failed_reject = 0; struct kvm_vcpu *vcpu; @@ -149,10 +168,7 @@ static void run_test(struct vcpu_reg_list *c) check_supported(c); vm = vm_create_barebones(); - prepare_vcpu_init(c, &init); - vcpu = __vm_vcpu_add(vm, 0); - aarch64_vcpu_setup(vcpu, &init); - finalize_vcpu(vcpu, c); + vcpu = vcpu_config_get_vcpu(c, vm); reg_list = vcpu_get_reg_list(vcpu); From patchwork Sat Jul 1 13:42:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 115043 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp11076061vqr; Sat, 1 Jul 2023 07:34:40 -0700 (PDT) X-Google-Smtp-Source: APBJJlEw3a9xXN9nKev3j7EsX5oE7E+IJ+czYElDBntHow64RUipcxnK6cMHusjZFQPmNz5U19dC X-Received: by 2002:a17:90a:5147:b0:25e:886b:c6b with SMTP id k7-20020a17090a514700b0025e886b0c6bmr4750738pjm.48.1688222080325; Sat, 01 Jul 2023 07:34:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688222080; cv=none; d=google.com; s=arc-20160816; b=CnVOiB92OKrabr6MWi0G0IsBTiVlw2ye3Y2/MnbAY+j7XWKxpCNy15NDT2rvUWyXwv 1p/esBybMv1Bqptglbkl+IehMgNugg69f1nvvy/2iIIzd0bfSRA2tCx2Ztbn+5xN3xYw O5+9bQa6SG0kGT84Cj72rF3Nm9KnxDiHGCYeNvrTiNYgxCa1yDmbuxnNgJBsYJnFtX+W OEtBZxj9hSkUpDH2GL+c/wR39mnioh3LVdDhSNKg6ff+pFr7edtnKQk/wxkwWm/4V+A0 
From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , David Matlack , Vipin Sharma , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 08/13] KVM: arm64: selftests: Move reject_set check logic to a function Date: Sat, 1 Jul 2023 21:42:56 +0800 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770229156171294018?= X-GMAIL-MSGID: =?utf-8?q?1770229156171294018?= No functional changes. Just move the reject_set check logic to a function so we can check for specific errno for specific register. This is a preparation for support reject_set in riscv. Suggested-by: Andrew Jones Signed-off-by: Haibo Xu Reviewed-by: Andrew Jones --- tools/testing/selftests/kvm/aarch64/get-reg-list.c | 5 +++++ tools/testing/selftests/kvm/get-reg-list.c | 7 ++++++- 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c index aaf035c969ec..4aa58f1aebe3 100644 --- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c +++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c @@ -27,6 +27,11 @@ bool filter_reg(__u64 reg) return false; } +bool check_reject_set(int err) +{ + return err == EPERM; +} + #define REG_MASK (KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_COPROC_MASK) #define CORE_REGS_XX_NR_WORDS 2 diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c index f6ad7991a812..79e198968860 100644 --- a/tools/testing/selftests/kvm/get-reg-list.c +++ b/tools/testing/selftests/kvm/get-reg-list.c @@ -98,6 +98,11 @@ void __weak print_reg(const char *prefix, __u64 id) printf("\t0x%llx,\n", id); } +bool __weak check_reject_set(int err) +{ + return true; +} + #ifdef __aarch64__ static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *init) { @@ -216,7 +221,7 @@ static void run_test(struct vcpu_reg_list *c) if (s->rejects_set && find_reg(s->rejects_set, s->rejects_set_n, reg.id)) { reject_reg = true; ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®); - if (ret != -1 || errno != EPERM) { + if (ret != -1 || !check_reject_set(errno)) { printf("%s: Failed to reject (ret=%d, errno=%d) ", config_name(c), ret, errno); print_reg(config_name(c), reg.id); putchar('\n'); From patchwork Sat Jul 1 13:42:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 115040 Return-Path: Delivered-To: ouuuleilei@gmail.com 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id x1-20020a17090a294100b00262dd23b209si12883751pjf.78.2023.07.01.06.46.17; Sat, 01 Jul 2023 06:46:30 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b="WGFTvQ/X"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230442AbjGANjm (ORCPT + 99 others); Sat, 1 Jul 2023 09:39:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50912 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230337AbjGANjk (ORCPT ); Sat, 1 Jul 2023 09:39:40 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7064944BC; Sat, 1 Jul 2023 06:39:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1688218753; x=1719754753; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=p+pM/1SW3Dz1j5w3JJ6iST8Otp6SsrAaa6Aco4Iz16w=; b=WGFTvQ/X8ZjK2sdRZOqXeFLAlED05TJ/iUlL2LfiL1Lv3GtRw5tGNnjQ T/2GYM3SQ6iiWOdh/QUS8Q9rk1gyYy3jpmKAQey5n3VYR6FhAMRy5MNIq txxM1UiK331IOm2nknEYqATc5Uv2pNKTxzk6LSQntR1OOlJcHWC/9o2Sg HRxmFDz4LgaTYD+NY73UmNmsDh7fJZ3omKBU+K2P2iKDcNjPIQ4hidmGH g7zS8CMZLE1pm2fYG5rt/jvjfdBmJ6jhRArP1ryVbcgIg1Zc3Nposn0be 4TFwsgTvwgyf71DGPtXyBDpLsu9Yco+FiABkQhNTJTvQZVpYTWYhbVcxF w==; X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="342926257" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="342926257" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:13 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="747694133" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="747694133" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:06 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , Vipin Sharma , David Matlack , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 09/13] KVM: arm64: selftests: Move finalize_vcpu back to run_test Date: Sat, 1 Jul 2023 21:42:57 +0800 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 
(2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770226125355464776?= X-GMAIL-MSGID: =?utf-8?q?1770226125355464776?= No functional changes. Just move the finalize_vcpu call back to run_test and do weak function trick to prepare for the opration in riscv. Suggested-by: Andrew Jones Signed-off-by: Haibo Xu Reviewed-by: Andrew Jones --- .../selftests/kvm/aarch64/get-reg-list.c | 13 +++++++++++ tools/testing/selftests/kvm/get-reg-list.c | 22 +++++-------------- .../selftests/kvm/include/kvm_util_base.h | 3 +++ 3 files changed, 21 insertions(+), 17 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c index 4aa58f1aebe3..6a7f9de21640 100644 --- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c +++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c @@ -32,6 +32,19 @@ bool check_reject_set(int err) return err == EPERM; } +void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) +{ + struct vcpu_reg_sublist *s; + int feature; + + for_each_sublist(c, s) { + if (s->finalize) { + feature = s->feature; + vcpu_ioctl(vcpu, KVM_ARM_VCPU_FINALIZE, &feature); + } + } +} + #define REG_MASK (KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_COPROC_MASK) #define CORE_REGS_XX_NR_WORDS 2 diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c index 79e198968860..c61090806007 100644 --- a/tools/testing/selftests/kvm/get-reg-list.c +++ b/tools/testing/selftests/kvm/get-reg-list.c @@ -34,9 +34,6 @@ static __u64 *blessed_reg, blessed_n; extern struct vcpu_reg_list *vcpu_configs[]; extern int vcpu_configs_n; -#define for_each_sublist(c, s) \ - for ((s) = &(c)->sublists[0]; (s)->regs; ++(s)) - #define for_each_reg(i) \ for ((i) = 0; (i) < reg_list->n; ++(i)) @@ -103,6 +100,10 @@ bool __weak check_reject_set(int err) return true; } +void __weak finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) +{ +} + #ifdef __aarch64__ static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *init) { @@ -113,19 +114,6 @@ static void prepare_vcpu_init(struct vcpu_reg_list *c, struct kvm_vcpu_init *ini init->features[s->feature / 32] |= 1 << (s->feature % 32); } -static void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) -{ - struct vcpu_reg_sublist *s; - int feature; - - for_each_sublist(c, s) { - if (s->finalize) { - feature = s->feature; - vcpu_ioctl(vcpu, KVM_ARM_VCPU_FINALIZE, &feature); - } - } -} - static struct kvm_vcpu *vcpu_config_get_vcpu(struct vcpu_reg_list *c, struct kvm_vm *vm) { struct kvm_vcpu_init init = { .target = -1, }; @@ -134,7 +122,6 @@ static struct kvm_vcpu *vcpu_config_get_vcpu(struct vcpu_reg_list *c, struct kvm prepare_vcpu_init(c, &init); vcpu = __vm_vcpu_add(vm, 0); aarch64_vcpu_setup(vcpu, &init); - finalize_vcpu(vcpu, c); return vcpu; } @@ -174,6 +161,7 @@ static void run_test(struct vcpu_reg_list *c) vm = vm_create_barebones(); vcpu = vcpu_config_get_vcpu(c, vm); + finalize_vcpu(vcpu, c); reg_list = vcpu_get_reg_list(vcpu); diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index ac4aaa21deee..e4480049000d 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -141,6 +141,9 @@ struct vcpu_reg_list 
{ struct vcpu_reg_sublist sublists[]; }; +#define for_each_sublist(c, s) \ + for ((s) = &(c)->sublists[0]; (s)->regs; ++(s)) + #define kvm_for_each_vcpu(vm, i, vcpu) \ for ((i) = 0; (i) <= (vm)->last_vcpu_id; (i)++) \ if (!((vcpu) = vm->vcpus[i])) \ From patchwork Sat Jul 1 13:42:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 115045 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp11078773vqr; Sat, 1 Jul 2023 07:40:30 -0700 (PDT) X-Google-Smtp-Source: APBJJlHo/foJyR+mOXzC0DxOxVbPlIXnTj9WlrrvTsyklf+fGSF9WBQqI+1OMsLHj5Dj8u9vgTRz X-Received: by 2002:a17:903:40c6:b0:1b6:6812:4ede with SMTP id t6-20020a17090340c600b001b668124edemr5094697pld.26.1688222430461; Sat, 01 Jul 2023 07:40:30 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688222430; cv=none; d=google.com; s=arc-20160816; b=YKcOc/KRaH/mKCOv6j4hY8lPrzQ4VCwzoUL2ffPT6ELMD9rdA575wdi5MpUbfm5VlF +zVUILAnCApeT/1qTRv8ooemgKAKLr+9CQc3+RFBxgCR6A8SJ4tfByGnw8zOonhns3JV Ytx+uLw4FU7hpTpH6Fmi8Vo1VrLgnfMOuMTw+xMgoA7kUjnkgPhlcE2pmc2qdr06A+sr HRtztRz0epJxxWYwuobkWdRGFNSk+YPqkJQnKy6w6A7cniby/wEzUbnX0fhcSFvSgWXj 0Hr1tQ4WPuhsy1XvTBUck8lsVP115CZFgwiRqmiIQuCOOY8tHo9iIKjDMSJGKKmaEiG+ S8mQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:to:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:from :dkim-signature; bh=eE836Fm4w+ggTLR70hkszVLixutbeZQ5Ewp4AFb4C3A=; fh=t5Ra4m2YxRKhU4WbIwyb+raTVIOmXdZcxHU0Z8Y+pgE=; b=wpBNMkoWFNh4S+Bl7YqKIJApAlk7bG++otv2G82oadgJlGZkjT32EX17tFXDedJz8h e6nhmgorxlWVbT3IphnKddEI69bkxMSY4ZLeG+c6LyMrjhGNOzZesOLBamvnL0eaAudW sWcWSBSqIL4CVPFxkFPNabAqDIHD5ImTL92Pim7bvThckUrHiaJjUuwLErfBDyMN80ON qc77GIYwpiqFKqhT1oPmcQlGEQx+lb+s1UUEcAHbR0KFl2jFZBY04bwblkFHDa2rPVzI uWcg+QULMgMhhL0Cd0vnf1ck+6Vijn+pecg8SKmrHWnc6rzChrvGBwzX1/KuPczslqoO b0+Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=NIoHOX+D; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id y23-20020a17090264d700b001b87df35cccsi1723181pli.363.2023.07.01.07.40.15; Sat, 01 Jul 2023 07:40:30 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=NIoHOX+D; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230480AbjGANkB (ORCPT + 99 others); Sat, 1 Jul 2023 09:40:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51220 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230337AbjGANj7 (ORCPT ); Sat, 1 Jul 2023 09:39:59 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 580424207; Sat, 1 Jul 2023 06:39:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1688218770; x=1719754770; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=I+7CmZbE8smFi2s/kidYxguksFY8+4Lb0ZPsMAWD3Cc=; b=NIoHOX+D2vpVQpMVNncDGkxHq703DmUo1o/QrzbXs38AIY6KbmKqnOaj M1WUNRkPQoGt0NjFvX0qNC6GiULv+R3/WS4kCAvel89KgE0RV18AOs3RA Sj1M+yRy9KjURDvWFYdP4o3DrZXxGIRbb58dKLJOFMaYE92Emj8C9B7c8 jkQ96fQukQ1s9GcayviKPJOMQ+kTCmDxfcxerw/dT5ay51I+pjG6EvHQ5 Ez2aWw1f58oML8AwfymRHCiKUnI9uT16CNGhoSltx5n8c9HFSEnWIKp2c E3nN+MGyOYaRXUJcUVrJm3J05p2wigJ9iCVt/8MIz1MJL3CViDLBTCK4j A==; X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="342926281" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="342926281" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="747694214" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="747694214" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:20 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , Vipin Sharma , David Matlack , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 10/13] KVM: selftests: Only do get/set tests on present blessed list Date: Sat, 1 Jul 2023 21:42:58 +0800 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 
(2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770229523389811311?= X-GMAIL-MSGID: =?utf-8?q?1770229523389811311?= Only do the get/set tests on present and blessed registers since we don't know the capabilities of any new ones. Suggested-by: Andrew Jones Signed-off-by: Haibo Xu Reviewed-by: Andrew Jones --- tools/testing/selftests/kvm/get-reg-list.c | 29 ++++++++++++++-------- 1 file changed, 18 insertions(+), 11 deletions(-) diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c index c61090806007..74fb6f6fdd09 100644 --- a/tools/testing/selftests/kvm/get-reg-list.c +++ b/tools/testing/selftests/kvm/get-reg-list.c @@ -49,6 +49,10 @@ extern int vcpu_configs_n; for_each_reg_filtered(i) \ if (!find_reg(blessed_reg, blessed_n, reg_list->reg[i])) +#define for_each_present_blessed_reg(i) \ + for ((i) = 0; (i) < blessed_n; ++(i)) \ + if (find_reg(reg_list->reg, reg_list->n, blessed_reg[i])) + static const char *config_name(struct vcpu_reg_list *c) { struct vcpu_reg_sublist *s; @@ -177,6 +181,16 @@ static void run_test(struct vcpu_reg_list *c) return; } + for_each_sublist(c, s) + blessed_n += s->regs_n; + blessed_reg = calloc(blessed_n, sizeof(__u64)); + + n = 0; + for_each_sublist(c, s) { + for (i = 0; i < s->regs_n; ++i) + blessed_reg[n++] = s->regs[i]; + } + /* * We only test that we can get the register and then write back the * same value. Some registers may allow other values to be written @@ -186,8 +200,11 @@ static void run_test(struct vcpu_reg_list *c) * be written need to have the other values tested, then we should * create a new set of tests for those in a new independent test * executable. + * + * Only do the get/set tests on present, blessed list registers, + * since we don't know the capabilities of any new registers. 
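+ * New registers, i.e. those not on the blessed list, are still counted via for_each_new_reg() below; they are simply not get/set tested here.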
*/ - for_each_reg(i) { + for_each_present_blessed_reg(i) { uint8_t addr[2048 / 8]; struct kvm_one_reg reg = { .id = reg_list->reg[i], @@ -230,16 +247,6 @@ static void run_test(struct vcpu_reg_list *c) } } - for_each_sublist(c, s) - blessed_n += s->regs_n; - blessed_reg = calloc(blessed_n, sizeof(__u64)); - - n = 0; - for_each_sublist(c, s) { - for (i = 0; i < s->regs_n; ++i) - blessed_reg[n++] = s->regs[i]; - } - for_each_new_reg(i) ++new_regs; From patchwork Sat Jul 1 13:42:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 115046 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp11093695vqr; Sat, 1 Jul 2023 08:09:11 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7AInY6wNKvjX7n59lOmkiTFNRaQMTG9y0z/dUyjEJTE0nYfHtoOKYajlpSz8uteJnPSnJY X-Received: by 2002:a05:6a20:1443:b0:12b:e9a0:f04a with SMTP id a3-20020a056a20144300b0012be9a0f04amr4676897pzi.22.1688224151399; Sat, 01 Jul 2023 08:09:11 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688224151; cv=none; d=google.com; s=arc-20160816; b=F0fzvDqFZAMm2MwRuV+KImgLAmsYf1l+vYzOQ+YzyPpU3ogKD5GY/38PuSXaJ5a4RO KLDOinuUZd3AE4VJwTX2thSwV03wOOFECzUN9SK9SoV5k1njbLaUhyqERj5rsYzi6/cd eGi4LAQF3JSw8p4dvweZNBXXZWbHj5EKbS4G5FGh010KUk5+PP/J8mnpqQG+EaNPcB9T g1n+h3kMO5uFgQmhL35UN0290r9yrO9cnoLsHnI83ZFfQZc8hPg/GeQxQdnkE0cTBn95 +40y3CUkSaQELH3r7EPOYDqdMgfW/pulcKgp7OMB4ttEZ0YcVhn0PID6g6o1e0D6nYui hHnw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:to:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:from :dkim-signature; bh=GL/dEoq06rwFfHdn3uK/sDGoFyuzTW14xp2Aqi64P1w=; fh=t5Ra4m2YxRKhU4WbIwyb+raTVIOmXdZcxHU0Z8Y+pgE=; b=AH/L50cVYkyAqZCzBYWe+NU95XC/ag4RZCTeArWsxKw17LegNv7t7HzfkvYPLx7YVW CPnVh3gygg6C7WspEfKQ5ODhXbl1d32V80t5maeq2IDTYnjtgzZirJGPxopd3hJX+uf/ +imO7BNWEzEL9XTSSDwL7S5KhDW5aDptVSISUSUfteAH9Cm0uQe7yyUkbQev8Mfnkk3M /cDvf4L50ZnY5b7JV3j5kNnE72aoauqi+UWw+4NaumDeWaidUBIYEtUX7bVy9yz7vboT WNXlQ9wWg47MdUfQB2EGgqrOQAE2Du8pvXxK47i5P3PfTPBw48QD55fLKduElGtTddRo OJwA== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=FWBmVC60; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id a11-20020a170902eccb00b001b69fe9a45asi5933952plh.575.2023.07.01.08.08.59; Sat, 01 Jul 2023 08:09:11 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=FWBmVC60; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230494AbjGANkO (ORCPT + 99 others); Sat, 1 Jul 2023 09:40:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51396 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230499AbjGANkM (ORCPT ); Sat, 1 Jul 2023 09:40:12 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7ADFE44B0; Sat, 1 Jul 2023 06:39:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1688218782; x=1719754782; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=tib4u7zV9bvH0+hgUwQFMOHqdNc+6PnMdDY7xbrMXoM=; b=FWBmVC60w/si5dyPOXZ2/9A4mMSKz9L8aV8NgFZ9Wgj0ZP0IeesLQWWS ZPq9uCInRJT1+xmGFSusD4NnsSdZ+jUPHigfiKdlT3RqeDdL4ZJYLYD0U ZVbR6Rc8PvlN8YC7/KXmj7FjdmaDoXcRGhFzL0Yeur9zH5c2aH2KrTZHc HppscNOunMGlxQLJE1KMrbTcWoxh2Za1uiTtMg5hlh/f7WvGikv5x4ccp 45PI2H2iQvJl/swpATmSQa8U6/HPzqsoyfQu26don5lQta/WNpxnB5mfP m64mCj+BQPFDsHLeUojOBZxJcqrleD96InVAfjK6TPLvfxmYgogFXR0Df Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="342926300" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="342926300" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="747694279" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="747694279" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:34 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , Vipin Sharma , David Matlack , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 11/13] KVM: selftests: Add skip_set facility to get_reg_list test Date: Sat, 1 Jul 2023 21:42:59 +0800 Message-Id: <0a418f26388e744b6ae2f17639bea08a05643549.1688010022.git.haibo1.xu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham 
autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770229431617755350?= X-GMAIL-MSGID: =?utf-8?q?1770231327678330811?= Add new skips_set members to vcpu_reg_sublist so as to skip set operation on some registers. Suggested-by: Andrew Jones Signed-off-by: Haibo Xu --- tools/testing/selftests/kvm/get-reg-list.c | 20 +++++++++++++------ .../selftests/kvm/include/kvm_util_base.h | 2 ++ 2 files changed, 16 insertions(+), 6 deletions(-) diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c index 74fb6f6fdd09..1a32a900aeea 100644 --- a/tools/testing/selftests/kvm/get-reg-list.c +++ b/tools/testing/selftests/kvm/get-reg-list.c @@ -157,6 +157,7 @@ static void run_test(struct vcpu_reg_list *c) { int new_regs = 0, missing_regs = 0, i, n; int failed_get = 0, failed_set = 0, failed_reject = 0; + int skipped_set = 0; struct kvm_vcpu *vcpu; struct kvm_vm *vm; struct vcpu_reg_sublist *s; @@ -210,7 +211,7 @@ static void run_test(struct vcpu_reg_list *c) .id = reg_list->reg[i], .addr = (__u64)&addr, }; - bool reject_reg = false; + bool reject_reg = false, skip_reg = false; int ret; ret = __vcpu_get_reg(vcpu, reg_list->reg[i], &addr); @@ -221,8 +222,8 @@ static void run_test(struct vcpu_reg_list *c) ++failed_get; } - /* rejects_set registers are rejected after KVM_ARM_VCPU_FINALIZE */ for_each_sublist(c, s) { + /* rejects_set registers are rejected for set operation */ if (s->rejects_set && find_reg(s->rejects_set, s->rejects_set_n, reg.id)) { reject_reg = true; ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®); @@ -234,9 +235,16 @@ static void run_test(struct vcpu_reg_list *c) } break; } + + /* skips_set registers are skipped for set operation */ + if (s->skips_set && find_reg(s->skips_set, s->skips_set_n, reg.id)) { + skip_reg = true; + ++skipped_set; + break; + } } - if (!reject_reg) { + if (!reject_reg && !skip_reg) { ret = __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, ®); if (ret) { printf("%s: Failed to set ", config_name(c)); @@ -281,9 +289,9 @@ static void run_test(struct vcpu_reg_list *c) } TEST_ASSERT(!missing_regs && !failed_get && !failed_set && !failed_reject, - "%s: There are %d missing registers; " - "%d registers failed get; %d registers failed set; %d registers failed reject", - config_name(c), missing_regs, failed_get, failed_set, failed_reject); + "%s: There are %d missing registers; %d registers failed get; " + "%d registers failed set; %d registers failed reject; %d registers skipped set", + config_name(c), missing_regs, failed_get, failed_set, failed_reject, skipped_set); pr_info("%s: PASS\n", config_name(c)); blessed_n = 0; diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index e4480049000d..67c031fe89a1 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -134,6 +134,8 @@ struct vcpu_reg_sublist { __u64 regs_n; __u64 *rejects_set; __u64 rejects_set_n; + __u64 *skips_set; + __u64 skips_set_n; }; struct vcpu_reg_list { From patchwork Sat Jul 1 13:43:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 115038 Return-Path: Delivered-To: ouuuleilei@gmail.com 
Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp11048199vqr; Sat, 1 Jul 2023 06:45:32 -0700 (PDT) X-Google-Smtp-Source: APBJJlHvtNEdm3p/6MQvkqZ8KWNkaUqQ+akUfXJMODOw0NjR9i3b/v06av9J8xu8kYRQsffZS5vH X-Received: by 2002:a17:902:efc1:b0:1b3:d8ac:8db5 with SMTP id ja1-20020a170902efc100b001b3d8ac8db5mr3438124plb.40.1688219132077; Sat, 01 Jul 2023 06:45:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688219132; cv=none; d=google.com; s=arc-20160816; b=Een5ZTozzmMN7m6x8dLcFpW8io3bHSbiTHcPGBnWJltPAe0MKmMLiUD3QERmVIwYCF Wnwi0SR+p7s6S41Yd+0KP3p9XI///qt3OaSKxW9p/XOuQYCGEI/1fQfD4fq0YpGNaQ3F 8ZKl01Pm8WtPBsCniTT83BHpLv/66dg/D5uJlOBAvKW33X38pwKmECx7oKdhtUt5SXlF UuG884KwTS8ItDlLiVgGMxoIZ2nDGV3ZwEWISWYnRBv8qvcy+1zDyjtOpgva2FmLLj3E 19Emt0v4h3D/f4+sElTKIwFP1OYSYaThoe5yPMBFVrIasB21zzNYX/5E7VeVbLIJdEws if3g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:to:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:from :dkim-signature; bh=5SBaCtTngRXsH9MP3ruWXAuhVxT2WWe5mKjVIzwhYo8=; fh=5QP9mxv2gpL+1vP/8gjX6ebdL+i0nfVCKaIG9Q+a6SY=; b=erNJgn3U6qkr/DsEUbisVtK57mKfUAIwgJNCYKpZvf3lnC50ZPxtgvdkTQtE4YORoN AuJfUxY1ZCKGzjmHDi/x54q6zGG8Z9BRmNN/vmt6o2FYCevfTgprvX8fJrEKsHH+Zxhb /mHpMmrnfLfrRv97ntUWXaH5aaMnRpUJtHiuSBBz9lgsbtecHEep1UVsFm921GOwnMRe vpquaw9e530yMbuE4s5LLd5IS6MVF/p2tY6baeJM/WOjPJi6DWGuswIg+AJ6uoag42h1 ylZrWFgbzLt+snN5iAIW+FHvOZ2Deu+D96EzHg3SEo+FVN7uR04MUAnODBlgfuRr2O6z LxbA== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=YvtHtuDk; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id lk13-20020a17090308cd00b001b815039794si9324459plb.222.2023.07.01.06.45.19; Sat, 01 Jul 2023 06:45:32 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=YvtHtuDk; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231238AbjGANka (ORCPT + 99 others); Sat, 1 Jul 2023 09:40:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51620 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230499AbjGANkY (ORCPT ); Sat, 1 Jul 2023 09:40:24 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7B069423E; Sat, 1 Jul 2023 06:39:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1688218799; x=1719754799; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=04wGA9n9pWRWu8KiiczvUSfLNUDBFz5lAJubM4bYVWk=; b=YvtHtuDkRm6lBMNnUVCLuS3JobOCzGXhg5ocZMqkCKzsG0Hx4OXaLiQC 14mktOz80dqJVO2RYTBpTmrKlIV1D+x/5DGL/muYcFHbhMJvNfKWiSSv9 7IMJ7EzNTVIBxi1qVNZsadJxnDAYRkZE540rRcppw5fVikdDXGctUfDsK +YSJ/OXWOoGwgnemERJB9KSFaAIIWbUOFamxwiH+WS+s3aFC8V9WfrmUJ I5cVeMcGcYRauedReFT3DXROza/eXI4EAhrQDekgt+QThlKOyxTbpmMcP ovNh5m31NO34D1ONYuyggN0khvCx8V5VYYdnT0ADfIR6Boi26kf7jJk/Z g==; X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="342926324" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="342926324" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="747694290" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="747694290" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:39:49 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , Vitaly Kuznetsov , Vipin Sharma , David Matlack , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 12/13] KVM: riscv: Add KVM_GET_REG_LIST API support Date: Sat, 1 Jul 2023 21:43:00 +0800 Message-Id: <1674ba5898e86766264df720602cf9a086206ad5.1688010022.git.haibo1.xu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham 
autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770226064834370592?= X-GMAIL-MSGID: =?utf-8?q?1770226064834370592?= KVM_GET_REG_LIST API will return all registers that are available to KVM_GET/SET_ONE_REG APIs. It's very useful to identify some platform regression issue during VM migration. Since this API was already supported on arm64, it is straightforward to enable it on riscv with similar code structure. Signed-off-by: Haibo Xu Reviewed-by: Andrew Jones --- Documentation/virt/kvm/api.rst | 2 +- arch/riscv/kvm/vcpu.c | 375 +++++++++++++++++++++++++++++++++ 2 files changed, 376 insertions(+), 1 deletion(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index add067793b90..280e89abd004 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -3499,7 +3499,7 @@ VCPU matching underlying host. --------------------- :Capability: basic -:Architectures: arm64, mips +:Architectures: arm64, mips, riscv :Type: vcpu ioctl :Parameters: struct kvm_reg_list (in/out) :Returns: 0 on success; -1 on error diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 8bd9f2a8a0b9..ad420b8676ab 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -657,6 +657,363 @@ static int kvm_riscv_vcpu_set_reg_isa_ext(struct kvm_vcpu *vcpu, return 0; } +static int copy_config_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + int n = 0; + + for (int i = 0; i < sizeof(struct kvm_riscv_config)/sizeof(unsigned long); + i++) { + u64 size; + u64 reg; + + /* + * Avoid reporting config reg if the corresponding extension + * was not available. + */ + if (i == KVM_REG_RISCV_CONFIG_REG(zicbom_block_size) && + !riscv_isa_extension_available(vcpu->arch.isa, ZICBOM)) + continue; + else if (i == KVM_REG_RISCV_CONFIG_REG(zicboz_block_size) && + !riscv_isa_extension_available(vcpu->arch.isa, ZICBOZ)) + continue; + + size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; + reg = KVM_REG_RISCV | size | KVM_REG_RISCV_CONFIG | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + + n++; + } + + return n; +} + +static unsigned long num_config_regs(const struct kvm_vcpu *vcpu) +{ + return copy_config_reg_indices(vcpu, NULL); +} + +static inline unsigned long num_core_regs(void) +{ + return sizeof(struct kvm_riscv_core) / sizeof(unsigned long); +} + +static int copy_core_reg_indices(u64 __user *uindices) +{ + int n = num_core_regs(); + + for (int i = 0; i < n; i++) { + u64 size = IS_ENABLED(CONFIG_32BIT) ? 
KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_CORE | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + + return n; +} + +static inline unsigned long num_csr_regs(const struct kvm_vcpu *vcpu) +{ + unsigned long n = sizeof(struct kvm_riscv_csr) / sizeof(unsigned long); + + if (riscv_isa_extension_available(vcpu->arch.isa, SSAIA)) + n += sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long); + + return n; +} + +static int copy_csr_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + int n1 = sizeof(struct kvm_riscv_csr) / sizeof(unsigned long); + int n2 = 0; + + /* copy general csr regs */ + for (int i = 0; i < n1; i++) { + u64 size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_CSR | + KVM_REG_RISCV_CSR_GENERAL | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + + /* copy AIA csr regs */ + if (riscv_isa_extension_available(vcpu->arch.isa, SSAIA)) { + n2 = sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long); + + for (int i = 0; i < n2; i++) { + u64 size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_CSR | + KVM_REG_RISCV_CSR_AIA | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + } + + return n1 + n2; +} + +static inline unsigned long num_timer_regs(void) +{ + return sizeof(struct kvm_riscv_timer) / sizeof(u64); +} + +static int copy_timer_reg_indices(u64 __user *uindices) +{ + int n = num_timer_regs(); + + for (int i = 0; i < n; i++) { + u64 reg = KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + + return n; +} + +static inline unsigned long num_fp_f_regs(const struct kvm_vcpu *vcpu) +{ + const struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; + + if (riscv_isa_extension_available(vcpu->arch.isa, f)) + return sizeof(cntx->fp.f) / sizeof(u32); + else + return 0; +} + +static int copy_fp_f_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + int n = num_fp_f_regs(vcpu); + + for (int i = 0; i < n; i++) { + u64 reg = KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + + return n; +} + +static inline unsigned long num_fp_d_regs(const struct kvm_vcpu *vcpu) +{ + const struct kvm_cpu_context *cntx = &vcpu->arch.guest_context; + + if (riscv_isa_extension_available(vcpu->arch.isa, d)) + return sizeof(cntx->fp.d.f) / sizeof(u64) + 1; + else + return 0; +} + +static int copy_fp_d_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + int i; + int n = num_fp_d_regs(vcpu); + u64 reg; + + /* copy fp.d.f indices */ + for (i = 0; i < n-1; i++) { + reg = KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + + /* copy fp.d.fcsr indices */ + reg = KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_D | i; + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + + return n; +} + +static int copy_isa_ext_reg_indices(const struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + unsigned int n = 0; + unsigned long isa_ext; + + for (int i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) { + u64 size = IS_ENABLED(CONFIG_32BIT) ? 
KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_ISA_EXT | i; + + isa_ext = kvm_isa_ext_arr[i]; + if (!__riscv_isa_extension_available(vcpu->arch.isa, isa_ext)) + continue; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + + n++; + } + + return n; +} + +static inline unsigned long num_isa_ext_regs(const struct kvm_vcpu *vcpu) +{ + return copy_isa_ext_reg_indices(vcpu, NULL);; +} + +static inline unsigned long num_sbi_ext_regs(void) +{ + /* + * number of KVM_REG_RISCV_SBI_SINGLE + + * 2 x (number of KVM_REG_RISCV_SBI_MULTI) + */ + return KVM_RISCV_SBI_EXT_MAX + 2*(KVM_REG_RISCV_SBI_MULTI_REG_LAST+1); +} + +static int copy_sbi_ext_reg_indices(u64 __user *uindices) +{ + int n; + + /* copy KVM_REG_RISCV_SBI_SINGLE */ + n = KVM_RISCV_SBI_EXT_MAX; + for (int i = 0; i < n; i++) { + u64 size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_SBI_EXT | + KVM_REG_RISCV_SBI_SINGLE | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + + /* copy KVM_REG_RISCV_SBI_MULTI */ + n = KVM_REG_RISCV_SBI_MULTI_REG_LAST + 1; + for (int i = 0; i < n; i++) { + u64 size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_SBI_EXT | + KVM_REG_RISCV_SBI_MULTI_EN | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + + reg = KVM_REG_RISCV | size | KVM_REG_RISCV_SBI_EXT | + KVM_REG_RISCV_SBI_MULTI_DIS | i; + + if (uindices) { + if (put_user(reg, uindices)) + return -EFAULT; + uindices++; + } + } + + return num_sbi_ext_regs(); +} + +/* + * kvm_riscv_vcpu_num_regs - how many registers do we present via KVM_GET/SET_ONE_REG + * + * This is for all registers. + */ +static unsigned long kvm_riscv_vcpu_num_regs(struct kvm_vcpu *vcpu) +{ + unsigned long res = 0; + + res += num_config_regs(vcpu); + res += num_core_regs(); + res += num_csr_regs(vcpu); + res += num_timer_regs(); + res += num_fp_f_regs(vcpu); + res += num_fp_d_regs(vcpu); + res += num_isa_ext_regs(vcpu); + res += num_sbi_ext_regs(); + + return res; +} + +/* + * kvm_riscv_vcpu_copy_reg_indices - get indices of all registers. 
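+ * + * The number of indices written out for each register group must match the count returned by kvm_riscv_vcpu_num_regs(), since userspace sizes its kvm_reg_list buffer from that count.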
+ */ +static int kvm_riscv_vcpu_copy_reg_indices(struct kvm_vcpu *vcpu, + u64 __user *uindices) +{ + int ret; + + ret = copy_config_reg_indices(vcpu, uindices); + if (ret < 0) + return ret; + uindices += ret; + + ret = copy_core_reg_indices(uindices); + if (ret < 0) + return ret; + uindices += ret; + + ret = copy_csr_reg_indices(vcpu, uindices); + if (ret < 0) + return ret; + uindices += ret; + + ret = copy_timer_reg_indices(uindices); + if (ret < 0) + return ret; + uindices += ret; + + ret = copy_fp_f_reg_indices(vcpu, uindices); + if (ret < 0) + return ret; + uindices += ret; + + ret = copy_fp_d_reg_indices(vcpu, uindices); + if (ret < 0) + return ret; + uindices += ret; + + ret = copy_isa_ext_reg_indices(vcpu, uindices); + if (ret < 0) + return ret; + uindices += ret; + + ret = copy_sbi_ext_reg_indices(uindices); + if (ret < 0) + return ret; + + return 0; +} + static int kvm_riscv_vcpu_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { @@ -758,6 +1115,24 @@ long kvm_arch_vcpu_ioctl(struct file *filp, r = kvm_riscv_vcpu_get_reg(vcpu, ®); break; } + case KVM_GET_REG_LIST: { + struct kvm_reg_list __user *user_list = argp; + struct kvm_reg_list reg_list; + unsigned int n; + + r = -EFAULT; + if (copy_from_user(®_list, user_list, sizeof(reg_list))) + break; + n = reg_list.n; + reg_list.n = kvm_riscv_vcpu_num_regs(vcpu); + if (copy_to_user(user_list, ®_list, sizeof(reg_list))) + break; + r = -E2BIG; + if (n < reg_list.n) + break; + r = kvm_riscv_vcpu_copy_reg_indices(vcpu, user_list->reg); + break; + } default: break; } From patchwork Sat Jul 1 13:43:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Haibo Xu X-Patchwork-Id: 115044 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp11078458vqr; Sat, 1 Jul 2023 07:39:47 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5F5UiUsNlc/XhEpgumlT574a2hWKJRC6SheABUZYKVWZdvRvFfTNXDLtKWxOHTpxJ+BuVJ X-Received: by 2002:a05:6a20:9383:b0:122:958:9fc7 with SMTP id x3-20020a056a20938300b0012209589fc7mr6855814pzh.40.1688222387090; Sat, 01 Jul 2023 07:39:47 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1688222387; cv=none; d=google.com; s=arc-20160816; b=L80vzT1Fn8Ij+32Hi1l4zIjEpAD+medHpZvIFrN55WtxtiZz51h9x0Vl4tKPjzmovs mUpfY5b2X0Cf1w8v1u/RSzwYoECHoIk2dN1gzUZ4WSPxHS3dfAmEzSTkq20yoFA1V1FV anFGWzIZL4a56mQsyjEuu6zkpl9dV8qftC+tam/nU9ZFUd/S3O8hHiA0GGzo5wmzPnQC L4D/X1biueikIruUwXpuaTTYM3AQjzLhKgp7UVZXnnNTxQGaA8YVglEq7YxQWV9rKDN4 PGOlwsInjTsBwa8nBrhmYKjMfExJ6Kk9nBkNw3eSYHGADyDe4LocC/WPJxamX/scW7Yz qxFw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:to:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:from :dkim-signature; bh=x1vujzXDM3Ue0pESbRF/6fapIw9Qi4EwJCcjkIhuQts=; fh=t5Ra4m2YxRKhU4WbIwyb+raTVIOmXdZcxHU0Z8Y+pgE=; b=c/vM96l9HFb2aIRUS98p+zC6qewJRZZ8NOXqFL2Q2SvBIOo68xb/3auYmu1XHArsW3 Vl/mQSY7API7Yeh6FpWw5pUtM8kPVy/4VqzvQ0Gn1U7kR4hQC56b/YMhZLIaBbEEXVKX bBNHGRfOOMEmgfm0Ah7ZF0NL9gaIdBHvOfQqEN2iAuO3z3zzHiMFmjKYcA70ix7plhGu GOG+VW/KT9psMjgfHzn/8Q21LT5XK8RTvKJGRFRzt21s3qvsXWgi99ThH9IRxlS1xL0f mI0gmZ2zM0iaOvXlvRacK+8Bhv+20abxjF3duWYPDtMJZer2mtIwjXg6nMij9g/J8F0x Llyw== ARC-Authentication-Results: i=1; mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=bivRVrnp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) 
smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id x19-20020a056a00189300b00681930af5a0si6636679pfh.23.2023.07.01.07.39.32; Sat, 01 Jul 2023 07:39:47 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=fail header.i=@intel.com header.s=Intel header.b=bivRVrnp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231258AbjGANkj (ORCPT + 99 others); Sat, 1 Jul 2023 09:40:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231173AbjGANkh (ORCPT ); Sat, 1 Jul 2023 09:40:37 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DDE8744A6; Sat, 1 Jul 2023 06:40:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1688218812; x=1719754812; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=7+bI8ZpCX/Q5v5nDarKQmjQsZW7sCDBKTlR8pLOjeM0=; b=bivRVrnpLil/6TTT35fnbFdzQWWU50avJSLTpXku8pL12mopZL/7wedM PNUc9UWTJwTIGbLj2owSE1NOnuMojZvnaKg+Y+cNsK+nmNfvXDPzezp4y hKPKo70/FUJRSseFljwKpdaoF6m84OweA8cbda/mRWPH5jtF+1lv0nB5G a2fGAJiRXURFO4E75KPtSdSSgYA5d1yy6Q5EGLUZ3eJqJH3bGJ81Td5U1 kEgf0kAhGohj5a+DcuJ2FhtPwKA+NoZP6IscBdpGPp1/HmpI0Eie5mJeN fifeGw8UAFe3Z6/ekXAdP97vb6LiFufg2gYJovD5nrTY4UV0NKLx9kWmI w==; X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="342926350" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="342926350" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:40:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10758"; a="747694337" X-IronPort-AV: E=Sophos;i="6.01,173,1684825200"; d="scan'208";a="747694337" Received: from haibo-optiplex-7090.sh.intel.com ([10.239.159.132]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Jul 2023 06:40:04 -0700 From: Haibo Xu Cc: xiaobo55x@gmail.com, haibo1.xu@intel.com, ajones@ventanamicro.com, maz@kernel.org, oliver.upton@linux.dev, seanjc@google.com, Paolo Bonzini , Jonathan Corbet , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Shuah Khan , James Morse , Suzuki K Poulose , Zenghui Yu , Ricardo Koller , Vishal Annapurve , Vipin Sharma , David Matlack , Colton Lewis , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev Subject: [PATCH v5 13/13] KVM: riscv: selftests: Add get-reg-list test Date: Sat, 1 Jul 2023 21:43:01 +0800 Message-Id: <125734468d58f4d9780ea43ab09ad5789d395369.1688010022.git.haibo1.xu@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, 
DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,UPPERCASE_50_75, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net To: unlisted-recipients:; (no To-header on input) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770229477348448382?= X-GMAIL-MSGID: =?utf-8?q?1770229477348448382?= The get-reg-list test is used to check for KVM register regressions during VM migration, which happen when the destination host kernel is missing registers that the source host kernel has. The blessed list of registers was created by running the test on a v6.4 kernel. Signed-off-by: Haibo Xu Reviewed-by: Andrew Jones --- tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/include/riscv/processor.h | 3 + .../selftests/kvm/riscv/get-reg-list.c | 780 ++++++++++++++++++ 3 files changed, 784 insertions(+) create mode 100644 tools/testing/selftests/kvm/riscv/get-reg-list.c diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index d90cad19c9ee..f7bcda903dd9 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -174,6 +174,7 @@ TEST_GEN_PROGS_s390x += kvm_binary_stats_test TEST_GEN_PROGS_riscv += demand_paging_test TEST_GEN_PROGS_riscv += dirty_log_test +TEST_GEN_PROGS_riscv += get-reg-list TEST_GEN_PROGS_riscv += kvm_create_max_vcpus TEST_GEN_PROGS_riscv += kvm_page_table_test TEST_GEN_PROGS_riscv += set_memory_region_test diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h index d00d213c3805..5b62a3d2aa9b 100644 --- a/tools/testing/selftests/kvm/include/riscv/processor.h +++ b/tools/testing/selftests/kvm/include/riscv/processor.h @@ -38,6 +38,9 @@ static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t idx, KVM_REG_RISCV_TIMER_REG(name), \ KVM_REG_SIZE_U64) +#define RISCV_ISA_EXT_REG(idx) __kvm_reg_id(KVM_REG_RISCV_ISA_EXT, \ + idx, KVM_REG_SIZE_ULONG) + /* L3 index Bit[47:39] */ #define PGTBL_L3_INDEX_MASK 0x0000FF8000000000ULL #define PGTBL_L3_INDEX_SHIFT 39 diff --git a/tools/testing/selftests/kvm/riscv/get-reg-list.c b/tools/testing/selftests/kvm/riscv/get-reg-list.c new file mode 100644 index 000000000000..dff24870e393 --- /dev/null +++ b/tools/testing/selftests/kvm/riscv/get-reg-list.c @@ -0,0 +1,780 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Check for KVM_GET_REG_LIST regressions. + * + * Copyright (c) 2023 Intel Corporation + * + */ +#include +#include "kvm_util.h" +#include "test_util.h" +#include "processor.h" + +#define REG_MASK (KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK) + +bool filter_reg(__u64 reg) +{ + /* + * Some ISA extensions are optional and not present on all hosts, + * but they can't be disabled through ISA_EXT registers when present. + * So, to make life easy, just filter out these kinds of registers.
+ */ + switch (reg & ~REG_MASK) { + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSTC: + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVINVAL: + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZIHINTPAUSE: + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBB: + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSAIA: + return true; + default: + break; + } + + return false; +} + +bool check_reject_set(int err) +{ + return err == EOPNOTSUPP; +} + +static inline bool vcpu_has_ext(struct kvm_vcpu *vcpu, int ext) +{ + int ret; + unsigned long value; + + ret = __vcpu_get_reg(vcpu, RISCV_ISA_EXT_REG(ext), &value); + if (ret) { + printf("Failed to get ext %d", ext); + return false; + } + + return !!value; +} + +void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) +{ + struct vcpu_reg_sublist *s; + + /* + * Disable all extensions which were enabled by default + * if they were available in the risc-v host. + */ + for (int i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) + __vcpu_set_reg(vcpu, RISCV_ISA_EXT_REG(i), 0); + + for_each_sublist(c, s) { + if (!s->feature) + continue; + + /* Try to enable the desired extension */ + __vcpu_set_reg(vcpu, RISCV_ISA_EXT_REG(s->feature), 1); + + /* Double check whether the desired extension was enabled */ + __TEST_REQUIRE(vcpu_has_ext(vcpu, s->feature), + "%s not available, skipping tests\n", s->name); + } +} + +static const char *config_id_to_str(__u64 id) +{ + /* reg_off is the offset into struct kvm_riscv_config */ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_CONFIG); + + switch (reg_off) { + case KVM_REG_RISCV_CONFIG_REG(isa): + return "KVM_REG_RISCV_CONFIG_REG(isa)"; + case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size): + return "KVM_REG_RISCV_CONFIG_REG(zicbom_block_size)"; + case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size): + return "KVM_REG_RISCV_CONFIG_REG(zicboz_block_size)"; + case KVM_REG_RISCV_CONFIG_REG(mvendorid): + return "KVM_REG_RISCV_CONFIG_REG(mvendorid)"; + case KVM_REG_RISCV_CONFIG_REG(marchid): + return "KVM_REG_RISCV_CONFIG_REG(marchid)"; + case KVM_REG_RISCV_CONFIG_REG(mimpid): + return "KVM_REG_RISCV_CONFIG_REG(mimpid)"; + } + + /* + * Config regs would grow regularly with new pseudo reg added, so + * just show raw id to indicate a new pseudo config reg. + */ + return strdup_printf("KVM_REG_RISCV_CONFIG_REG(%lld) /* UNKNOWN */", reg_off); +} + +static const char *core_id_to_str(const char *prefix, __u64 id) +{ + /* reg_off is the offset into struct kvm_riscv_core */ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_CORE); + + switch (reg_off) { + case KVM_REG_RISCV_CORE_REG(regs.pc): + return "KVM_REG_RISCV_CORE_REG(regs.pc)"; + case KVM_REG_RISCV_CORE_REG(regs.ra): + return "KVM_REG_RISCV_CORE_REG(regs.ra)"; + case KVM_REG_RISCV_CORE_REG(regs.sp): + return "KVM_REG_RISCV_CORE_REG(regs.sp)"; + case KVM_REG_RISCV_CORE_REG(regs.gp): + return "KVM_REG_RISCV_CORE_REG(regs.gp)"; + case KVM_REG_RISCV_CORE_REG(regs.tp): + return "KVM_REG_RISCV_CORE_REG(regs.tp)"; + case KVM_REG_RISCV_CORE_REG(regs.t0) ... KVM_REG_RISCV_CORE_REG(regs.t2): + return strdup_printf("KVM_REG_RISCV_CORE_REG(regs.t%lld)", + reg_off - KVM_REG_RISCV_CORE_REG(regs.t0)); + case KVM_REG_RISCV_CORE_REG(regs.s0) ... KVM_REG_RISCV_CORE_REG(regs.s1): + return strdup_printf("KVM_REG_RISCV_CORE_REG(regs.s%lld)", + reg_off - KVM_REG_RISCV_CORE_REG(regs.s0)); + case KVM_REG_RISCV_CORE_REG(regs.a0) ... 
KVM_REG_RISCV_CORE_REG(regs.a7): + return strdup_printf("KVM_REG_RISCV_CORE_REG(regs.a%lld)", + reg_off - KVM_REG_RISCV_CORE_REG(regs.a0)); + case KVM_REG_RISCV_CORE_REG(regs.s2) ... KVM_REG_RISCV_CORE_REG(regs.s11): + return strdup_printf("KVM_REG_RISCV_CORE_REG(regs.s%lld)", + reg_off - KVM_REG_RISCV_CORE_REG(regs.s2) + 2); + case KVM_REG_RISCV_CORE_REG(regs.t3) ... KVM_REG_RISCV_CORE_REG(regs.t6): + return strdup_printf("KVM_REG_RISCV_CORE_REG(regs.t%lld)", + reg_off - KVM_REG_RISCV_CORE_REG(regs.t3) + 3); + case KVM_REG_RISCV_CORE_REG(mode): + return "KVM_REG_RISCV_CORE_REG(mode)"; + } + + TEST_FAIL("%s: Unknown core reg id: 0x%llx", prefix, id); + return NULL; +} + +#define RISCV_CSR_GENERAL(csr) \ + "KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(" #csr ")" +#define RISCV_CSR_AIA(csr) \ + "KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_REG(" #csr ")" + +static const char *general_csr_id_to_str(__u64 reg_off) +{ + /* reg_off is the offset into struct kvm_riscv_csr */ + switch (reg_off) { + case KVM_REG_RISCV_CSR_REG(sstatus): + return RISCV_CSR_GENERAL(sstatus); + case KVM_REG_RISCV_CSR_REG(sie): + return RISCV_CSR_GENERAL(sie); + case KVM_REG_RISCV_CSR_REG(stvec): + return RISCV_CSR_GENERAL(stvec); + case KVM_REG_RISCV_CSR_REG(sscratch): + return RISCV_CSR_GENERAL(sscratch); + case KVM_REG_RISCV_CSR_REG(sepc): + return RISCV_CSR_GENERAL(sepc); + case KVM_REG_RISCV_CSR_REG(scause): + return RISCV_CSR_GENERAL(scause); + case KVM_REG_RISCV_CSR_REG(stval): + return RISCV_CSR_GENERAL(stval); + case KVM_REG_RISCV_CSR_REG(sip): + return RISCV_CSR_GENERAL(sip); + case KVM_REG_RISCV_CSR_REG(satp): + return RISCV_CSR_GENERAL(satp); + case KVM_REG_RISCV_CSR_REG(scounteren): + return RISCV_CSR_GENERAL(scounteren); + } + + TEST_FAIL("Unknown general csr reg: 0x%llx", reg_off); + return NULL; +} + +static const char *aia_csr_id_to_str(__u64 reg_off) +{ + /* reg_off is the offset into struct kvm_riscv_aia_csr */ + switch (reg_off) { + case KVM_REG_RISCV_CSR_AIA_REG(siselect): + return RISCV_CSR_AIA(siselect); + case KVM_REG_RISCV_CSR_AIA_REG(iprio1): + return RISCV_CSR_AIA(iprio1); + case KVM_REG_RISCV_CSR_AIA_REG(iprio2): + return RISCV_CSR_AIA(iprio2); + case KVM_REG_RISCV_CSR_AIA_REG(sieh): + return RISCV_CSR_AIA(sieh); + case KVM_REG_RISCV_CSR_AIA_REG(siph): + return RISCV_CSR_AIA(siph); + case KVM_REG_RISCV_CSR_AIA_REG(iprio1h): + return RISCV_CSR_AIA(iprio1h); + case KVM_REG_RISCV_CSR_AIA_REG(iprio2h): + return RISCV_CSR_AIA(iprio2h); + } + + TEST_FAIL("Unknown aia csr reg: 0x%llx", reg_off); + return NULL; +} + +static const char *csr_id_to_str(const char *prefix, __u64 id) +{ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_CSR); + __u64 reg_subtype = reg_off & KVM_REG_RISCV_SUBTYPE_MASK; + + reg_off &= ~KVM_REG_RISCV_SUBTYPE_MASK; + + switch (reg_subtype) { + case KVM_REG_RISCV_CSR_GENERAL: + return general_csr_id_to_str(reg_off); + case KVM_REG_RISCV_CSR_AIA: + return aia_csr_id_to_str(reg_off); + } + + TEST_FAIL("%s: Unknown csr subtype: 0x%llx", prefix, reg_subtype); + return NULL; +} + +static const char *timer_id_to_str(const char *prefix, __u64 id) +{ + /* reg_off is the offset into struct kvm_riscv_timer */ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_TIMER); + + switch (reg_off) { + case KVM_REG_RISCV_TIMER_REG(frequency): + return "KVM_REG_RISCV_TIMER_REG(frequency)"; + case KVM_REG_RISCV_TIMER_REG(time): + return "KVM_REG_RISCV_TIMER_REG(time)"; + case KVM_REG_RISCV_TIMER_REG(compare): + return "KVM_REG_RISCV_TIMER_REG(compare)"; + case KVM_REG_RISCV_TIMER_REG(state): + 
return "KVM_REG_RISCV_TIMER_REG(state)"; + } + + TEST_FAIL("%s: Unknown timer reg id: 0x%llx", prefix, id); + return NULL; +} + +static const char *fp_f_id_to_str(const char *prefix, __u64 id) +{ + /* reg_off is the offset into struct __riscv_f_ext_state */ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_FP_F); + + switch (reg_off) { + case KVM_REG_RISCV_FP_F_REG(f[0]) ... + KVM_REG_RISCV_FP_F_REG(f[31]): + return strdup_printf("KVM_REG_RISCV_FP_F_REG(f[%lld])", reg_off); + case KVM_REG_RISCV_FP_F_REG(fcsr): + return "KVM_REG_RISCV_FP_F_REG(fcsr)"; + } + + TEST_FAIL("%s: Unknown fp_f reg id: 0x%llx", prefix, id); + return NULL; +} + +static const char *fp_d_id_to_str(const char *prefix, __u64 id) +{ + /* reg_off is the offset into struct __riscv_d_ext_state */ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_FP_D); + + switch (reg_off) { + case KVM_REG_RISCV_FP_D_REG(f[0]) ... + KVM_REG_RISCV_FP_D_REG(f[31]): + return strdup_printf("KVM_REG_RISCV_FP_D_REG(f[%lld])", reg_off); + case KVM_REG_RISCV_FP_D_REG(fcsr): + return "KVM_REG_RISCV_FP_D_REG(fcsr)"; + } + + TEST_FAIL("%s: Unknown fp_d reg id: 0x%llx", prefix, id); + return NULL; +} + +static const char *isa_ext_id_to_str(__u64 id) +{ + /* reg_off is the offset into unsigned long kvm_isa_ext_arr[] */ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_ISA_EXT); + + static const char * const kvm_isa_ext_reg_name[] = { + "KVM_RISCV_ISA_EXT_A", + "KVM_RISCV_ISA_EXT_C", + "KVM_RISCV_ISA_EXT_D", + "KVM_RISCV_ISA_EXT_F", + "KVM_RISCV_ISA_EXT_H", + "KVM_RISCV_ISA_EXT_I", + "KVM_RISCV_ISA_EXT_M", + "KVM_RISCV_ISA_EXT_SVPBMT", + "KVM_RISCV_ISA_EXT_SSTC", + "KVM_RISCV_ISA_EXT_SVINVAL", + "KVM_RISCV_ISA_EXT_ZIHINTPAUSE", + "KVM_RISCV_ISA_EXT_ZICBOM", + "KVM_RISCV_ISA_EXT_ZICBOZ", + "KVM_RISCV_ISA_EXT_ZBB", + "KVM_RISCV_ISA_EXT_SSAIA", + }; + + if (reg_off >= ARRAY_SIZE(kvm_isa_ext_reg_name)) { + /* + * isa_ext regs would grow regularly with new isa extension added, so + * just show "reg" to indicate a new extension. + */ + return strdup_printf("%lld /* UNKNOWN */", reg_off); + } + + return kvm_isa_ext_reg_name[reg_off]; +} + +static const char *sbi_ext_single_id_to_str(__u64 reg_off) +{ + /* reg_off is KVM_RISCV_SBI_EXT_ID */ + static const char * const kvm_sbi_ext_reg_name[] = { + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_V01", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_TIME", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_IPI", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_RFENCE", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_SRST", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_HSM", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_PMU", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_EXPERIMENTAL", + "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_VENDOR", + }; + + if (reg_off >= ARRAY_SIZE(kvm_sbi_ext_reg_name)) { + /* + * sbi_ext regs would grow regularly with new sbi extension added, so + * just show "reg" to indicate a new extension. + */ + return strdup_printf("KVM_REG_RISCV_SBI_SINGLE | %lld /* UNKNOWN */", reg_off); + } + + return kvm_sbi_ext_reg_name[reg_off]; +} + +static const char *sbi_ext_multi_id_to_str(__u64 reg_subtype, __u64 reg_off) +{ + if (reg_off > KVM_REG_RISCV_SBI_MULTI_REG_LAST) { + /* + * sbi_ext regs would grow regularly with new sbi extension added, so + * just show "reg" to indicate a new extension. 
+ */ + return strdup_printf("%lld /* UNKNOWN */", reg_off); + } + + switch (reg_subtype) { + case KVM_REG_RISCV_SBI_MULTI_EN: + return strdup_printf("KVM_REG_RISCV_SBI_MULTI_EN | %lld", reg_off); + case KVM_REG_RISCV_SBI_MULTI_DIS: + return strdup_printf("KVM_REG_RISCV_SBI_MULTI_DIS | %lld", reg_off); + } + + return NULL; +} + +static const char *sbi_ext_id_to_str(const char *prefix, __u64 id) +{ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_SBI_EXT); + __u64 reg_subtype = reg_off & KVM_REG_RISCV_SUBTYPE_MASK; + + reg_off &= ~KVM_REG_RISCV_SUBTYPE_MASK; + + switch (reg_subtype) { + case KVM_REG_RISCV_SBI_SINGLE: + return sbi_ext_single_id_to_str(reg_off); + case KVM_REG_RISCV_SBI_MULTI_EN: + case KVM_REG_RISCV_SBI_MULTI_DIS: + return sbi_ext_multi_id_to_str(reg_subtype, reg_off); + } + + TEST_FAIL("%s: Unknown sbi ext subtype: 0x%llx", prefix, reg_subtype); + return NULL; +} + +void print_reg(const char *prefix, __u64 id) +{ + const char *reg_size = NULL; + + TEST_ASSERT((id & KVM_REG_ARCH_MASK) == KVM_REG_RISCV, + "%s: KVM_REG_RISCV missing in reg id: 0x%llx", prefix, id); + + switch (id & KVM_REG_SIZE_MASK) { + case KVM_REG_SIZE_U32: + reg_size = "KVM_REG_SIZE_U32"; + break; + case KVM_REG_SIZE_U64: + reg_size = "KVM_REG_SIZE_U64"; + break; + case KVM_REG_SIZE_U128: + reg_size = "KVM_REG_SIZE_U128"; + break; + default: + TEST_FAIL("%s: Unexpected reg size: 0x%llx in reg id: 0x%llx", + prefix, (id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT, id); + } + + switch (id & KVM_REG_RISCV_TYPE_MASK) { + case KVM_REG_RISCV_CONFIG: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_CONFIG | %s,\n", + reg_size, config_id_to_str(id)); + break; + case KVM_REG_RISCV_CORE: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_CORE | %s,\n", + reg_size, core_id_to_str(prefix, id)); + break; + case KVM_REG_RISCV_CSR: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_CSR | %s,\n", + reg_size, csr_id_to_str(prefix, id)); + break; + case KVM_REG_RISCV_TIMER: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_TIMER | %s,\n", + reg_size, timer_id_to_str(prefix, id)); + break; + case KVM_REG_RISCV_FP_F: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_FP_F | %s,\n", + reg_size, fp_f_id_to_str(prefix, id)); + break; + case KVM_REG_RISCV_FP_D: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_FP_D | %s,\n", + reg_size, fp_d_id_to_str(prefix, id)); + break; + case KVM_REG_RISCV_ISA_EXT: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_ISA_EXT | %s,\n", + reg_size, isa_ext_id_to_str(id)); + break; + case KVM_REG_RISCV_SBI_EXT: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_SBI_EXT | %s,\n", + reg_size, sbi_ext_id_to_str(prefix, id)); + break; + default: + TEST_FAIL("%s: Unexpected reg type: 0x%llx in reg id: 0x%llx", prefix, + (id & KVM_REG_RISCV_TYPE_MASK) >> KVM_REG_RISCV_TYPE_SHIFT, id); + } +} + +/* + * The current blessed list was primed with the output of kernel version + * v6.4 and then later updated with new registers. 
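+ * + * When KVM gains new registers, add them to the matching sublist below (base_regs or one of the per-extension *_regs arrays) so that they become part of the blessed list.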
+ */ +static __u64 base_regs[] = { + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(isa), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(mvendorid), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(marchid), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(mimpid), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.pc), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.ra), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.sp), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.gp), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.tp), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t0), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t1), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t2), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s0), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s1), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a0), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a1), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a2), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a3), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a4), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a5), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a6), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.a7), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s2), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s3), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s4), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s5), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s6), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s7), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s8), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s9), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s10), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.s11), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t3), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t4), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t5), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t6), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(mode), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | 
KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sstatus),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sie),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(stvec),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sscratch),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sepc),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(scause),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(stval),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sip),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(satp),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(scounteren),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(frequency),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(time),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(compare),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(state),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_A,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_C,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_I,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_M,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_V01,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_TIME,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_IPI,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_RFENCE,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_SRST,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_HSM,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_PMU,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_EXPERIMENTAL,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_VENDOR,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_MULTI_EN | 0,
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_MULTI_DIS | 0,
+};
+
+static __u64 base_rejects_set[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(frequency),
+};
+
+/*
+ * The skips_set lists registers that should skip the set test.
+ *  - KVM_REG_RISCV_TIMER_REG(state): set would fail if it was not initialized properly.
+ */
+static __u64 base_skips_set[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(state),
+};
+
+static __u64 h_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_H,
+};
+
+static __u64 zicbom_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicbom_block_size),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICBOM,
+};
+
+static __u64 zicboz_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicboz_block_size),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICBOZ,
+};
+
+static __u64 zicbom_rejects_set[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicbom_block_size),
+};
+
+static __u64 zicboz_rejects_set[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_REG(zicboz_block_size),
+};
+
+static __u64 svpbmt_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVPBMT,
+};
+
+static __u64 sstc_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSTC,
+};
+
+static __u64 svinval_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVINVAL,
+};
+
+static __u64 zihintpause_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZIHINTPAUSE,
+};
+
+static __u64 zbb_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBB,
+};
+
+static __u64 aia_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(siselect),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(iprio1),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(iprio2),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(sieh),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(siph),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(iprio1h),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(iprio2h),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSAIA,
+};
+
+static __u64 fp_f_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[0]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[1]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[2]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[3]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[4]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[5]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[6]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[7]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[8]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[9]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[10]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[11]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[12]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[13]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[14]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[15]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[16]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[17]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[18]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[19]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[20]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[21]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[22]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[23]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[24]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[25]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[26]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[27]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[28]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[29]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[30]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[31]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(fcsr),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_F,
+};
+
+static __u64 fp_d_regs[] = {
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[0]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[1]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[2]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[3]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[4]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[5]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[6]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[7]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[8]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[9]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[10]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[11]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[12]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[13]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[14]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[15]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[16]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[17]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[18]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[19]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[20]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[21]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[22]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[23]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[24]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[25]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[26]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[27]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[28]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[29]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[30]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(f[31]),
+	KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_D | KVM_REG_RISCV_FP_D_REG(fcsr),
+	KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_D,
+};
+
+#define BASE_SUBLIST \
+	{"base", .regs = base_regs, .regs_n = ARRAY_SIZE(base_regs), \
+	 .rejects_set = base_rejects_set, .rejects_set_n = ARRAY_SIZE(base_rejects_set), \
+	 .skips_set = base_skips_set, .skips_set_n = ARRAY_SIZE(base_skips_set),}
+#define H_REGS_SUBLIST \
+	{"h", .feature = KVM_RISCV_ISA_EXT_H, .regs = h_regs, .regs_n = ARRAY_SIZE(h_regs),}
+#define ZICBOM_REGS_SUBLIST \
+	{"zicbom", .feature = KVM_RISCV_ISA_EXT_ZICBOM, .regs = zicbom_regs, .regs_n = ARRAY_SIZE(zicbom_regs), \
+	 .rejects_set = zicbom_rejects_set, .rejects_set_n = ARRAY_SIZE(zicbom_rejects_set),}
+#define ZICBOZ_REGS_SUBLIST \
+	{"zicboz", .feature = KVM_RISCV_ISA_EXT_ZICBOZ, .regs = zicboz_regs, .regs_n = ARRAY_SIZE(zicboz_regs), \
+	 .rejects_set = zicboz_rejects_set, .rejects_set_n = ARRAY_SIZE(zicboz_rejects_set),}
+#define SVPBMT_REGS_SUBLIST \
+	{"svpbmt", .feature = KVM_RISCV_ISA_EXT_SVPBMT, .regs = svpbmt_regs, .regs_n = ARRAY_SIZE(svpbmt_regs),}
+#define SSTC_REGS_SUBLIST \
+	{"sstc", .feature = KVM_RISCV_ISA_EXT_SSTC, .regs = sstc_regs, .regs_n = ARRAY_SIZE(sstc_regs),}
+#define SVINVAL_REGS_SUBLIST \
+	{"svinval", .feature = KVM_RISCV_ISA_EXT_SVINVAL, .regs = svinval_regs, .regs_n = ARRAY_SIZE(svinval_regs),}
+#define ZIHINTPAUSE_REGS_SUBLIST \
+	{"zihintpause", .feature = KVM_RISCV_ISA_EXT_ZIHINTPAUSE, .regs = zihintpause_regs, .regs_n = ARRAY_SIZE(zihintpause_regs),}
+#define ZBB_REGS_SUBLIST \
+	{"zbb", .feature = KVM_RISCV_ISA_EXT_ZBB, .regs = zbb_regs, .regs_n = ARRAY_SIZE(zbb_regs),}
+#define AIA_REGS_SUBLIST \
+	{"aia", .feature = KVM_RISCV_ISA_EXT_SSAIA, .regs = aia_regs, .regs_n = ARRAY_SIZE(aia_regs),}
+#define FP_F_REGS_SUBLIST \
+	{"fp_f", .feature = KVM_RISCV_ISA_EXT_F, .regs = fp_f_regs, \
+	 .regs_n = ARRAY_SIZE(fp_f_regs),}
+#define FP_D_REGS_SUBLIST \
+	{"fp_d", .feature = KVM_RISCV_ISA_EXT_D, .regs = fp_d_regs, \
+	 .regs_n = ARRAY_SIZE(fp_d_regs),}
+
+static struct vcpu_reg_list h_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	H_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list zicbom_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	ZICBOM_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list zicboz_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	ZICBOZ_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list svpbmt_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	SVPBMT_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list sstc_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	SSTC_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list svinval_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	SVINVAL_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list zihintpause_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	ZIHINTPAUSE_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list zbb_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	ZBB_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list aia_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	AIA_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list fp_f_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	FP_F_REGS_SUBLIST,
+	{0},
+	},
+};
+
+static struct vcpu_reg_list fp_d_config = {
+	.sublists = {
+	BASE_SUBLIST,
+	FP_D_REGS_SUBLIST,
+	{0},
+	},
+};
+
+struct vcpu_reg_list *vcpu_configs[] = {
+	&h_config,
+	&zicbom_config,
+	&zicboz_config,
+	&svpbmt_config,
+	&sstc_config,
+	&svinval_config,
+	&zihintpause_config,
+	&zbb_config,
+	&aia_config,
+	&fp_f_config,
+	&fp_d_config,
+};
+int vcpu_configs_n = ARRAY_SIZE(vcpu_configs);
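For readers reviewing the register lists above, here is a minimal, illustrative sketch (not part of the patch) of how one of these register IDs is consumed through the KVM_GET_ONE_REG ioctl. The helper name read_sstatus() and the vcpu_fd parameter are made up for the example, the KVM_REG_RISCV_* encoding macros are assumed to be visible from the usual asm/kvm.h and selftest headers, and error handling is omitted:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Illustrative sketch only -- not part of this patch.  Reads the guest
 * sstatus CSR using the same register ID encoding as base_regs[] above.
 * 'vcpu_fd' is assumed to be a vCPU file descriptor obtained via
 * KVM_CREATE_VCPU.
 */
static unsigned long read_sstatus(int vcpu_fd)
{
	unsigned long val = 0;
	struct kvm_one_reg reg = {
		.id = KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR |
		      KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sstatus),
		.addr = (unsigned long)&val,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	return val;
}

The IDs in the per-extension arrays above use exactly this layout, so the test's expected lists and a userspace GET/SET_ONE_REG caller agree on how each register is addressed.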