From patchwork Wed May 24 16:57:06 2023
X-Patchwork-Submitter: "Chang S. Bae"
X-Patchwork-Id: 98632
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar, "H. Peter Anvin", Jonathan Corbet, linux-doc@vger.kernel.org
Peter Anvin" , Jonathan Corbet , linux-doc@vger.kernel.org Subject: [PATCH v7 01/12] Documentation/x86: Document Key Locker Date: Wed, 24 May 2023 09:57:06 -0700 Message-Id: <20230524165717.14062-2-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1762832818291668582?= X-GMAIL-MSGID: =?utf-8?q?1766796925970889731?= Document the overview of the feature along with relevant consideration when provisioning dm-crypt volumes with AES-KL instead of AES-NI. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: Jonathan Corbet Cc: x86@kernel.org Cc: linux-doc@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Rebase on the upstream -- commit ff61f0791ce9 ("docs: move x86 documentation into Documentation/arch/"). (Nathan Huckleberry) * Remove a duplicated sentence -- 'But there is no AES-KL instruction to process a 192-bit key.' * Update the text for clarity and readability: - Clarify the error code and exemplify the backup failure - Use 'wrapping key' instead of less readable 'IWKey' Changes from v5: * Fix a typo: 'feature feature' -> 'feature' Changes from RFC v2: * Add as a new patch. The preview is available here: https://htmlpreview.github.io/?https://github.com/intel-staging/keylocker/kdoc/arch/x86/keylocker.html --- Documentation/arch/x86/index.rst | 1 + Documentation/arch/x86/keylocker.rst | 97 ++++++++++++++++++++++++++++ 2 files changed, 98 insertions(+) create mode 100644 Documentation/arch/x86/keylocker.rst diff --git a/Documentation/arch/x86/index.rst b/Documentation/arch/x86/index.rst index c73d133fd37c..256359c24669 100644 --- a/Documentation/arch/x86/index.rst +++ b/Documentation/arch/x86/index.rst @@ -42,3 +42,4 @@ x86-specific Documentation features elf_auxvec xstate + keylocker diff --git a/Documentation/arch/x86/keylocker.rst b/Documentation/arch/x86/keylocker.rst new file mode 100644 index 000000000000..5557b8d0659a --- /dev/null +++ b/Documentation/arch/x86/keylocker.rst @@ -0,0 +1,97 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============== +x86 Key Locker +============== + +Introduction +============ + +Key Locker is a CPU feature to reduce key exfiltration opportunities +while maintaining a programming interface similar to AES-NI. It +converts the AES key into an encoded form, called the 'key handle'. +The key handle is a wrapped version of the clear-text key where the +wrapping key has limited exposure. Once converted, all subsequent data +encryption using new AES instructions (AES-KL) uses this key handle, +reducing the exposure of private key material in memory. + +CPU-internal Wrapping Key +========================= + +The CPU-internal wrapping key is an entity in a software-invisible CPU +state. On every system boot, a new key is loaded. 
So the key handle that +was encoded by the old wrapping key is no longer usable on system shutdown +or reboot. + +And the key may be lost on the following exceptional situation upon wakeup: + +Wrapping Key Restore Failure +---------------------------- + +The CPU state is volatile with the ACPI S3/4 sleep states. When the system +supports those states, the key has to be backed up so that it is restored +on wake up. The kernel saves the key in non-volatile media. + +The event of a wrapping key restore failure upon resume from suspend, all +established key handles become invalid. In flight dm-crypt operations +receive error results from pending operations. In the likely scenario that +dm-crypt is hosting the root filesystem the recovery is identical to if a +storage controller failed to resume from suspend, reboot. If the volume +impacted by a wrapping key restore failure is a data-volume then it is +possible that I/O errors on that volume do not bring down the rest of the +system. However, a reboot is still required because the kernel will have +soft-disabled Key Locker. Upon the failure, the crypto library code will +return -ENODEV on every AES-KL function call. The Key Locker implementation +only loads a new wrapping key at initial boot, not any time after like +resume from suspend. + +Use Case and Non-use Cases +========================== + +Bare metal disk encryption is the only intended use case. + +Userspace usage is not supported because there is no ABI provided to +communicate and coordinate wrapping-key restore failure to userspace. For +now, key restore failures are only coordinated with kernel users. But the +kernel can not prevent userspace from using the feature's AES instructions +('AES-KL') when the feature has been enabled. So, the lack of userspace +support is only documented, not actively enforced. + +Key Locker is not expected to be advertised to guest VMs and the kernel +implementation ignores it even if the VMM enumerates the capability. The +expectation is that a guest VM wants private wrapping key state, but the +architecture does not provide that. An emulation of that capability, by +caching per-VM wrapping keys in memory, defeats the purpose of Key Locker. +The backup / restore facility is also not performant enough to be suitable +for guest VM context switches. + +AES Instruction Set +=================== + +The feature accompanies a new AES instruction set. This instruction set is +analogous to AES-NI. A set of AES-NI instructions can be mapped to an +AES-KL instruction. For example, AESENC128KL is responsible for ten rounds +of transformation, which is equivalent to nine times AESENC and one +AESENCLAST in AES-NI. + +But they have some notable differences: + +* AES-KL provides a secure data transformation using an encrypted key. + +* If an invalid key handle is provided, e.g. a corrupted one or a handle + restriction failure, the instruction fails with setting RFLAGS.ZF. The + crypto library implementation includes the flag check to return -EINVAL. + Note that this flag is also set if the wrapping key is changed, e.g., + because of the backup error. + +* AES-KL implements support for 128-bit and 256-bit keys, but there is no + AES-KL instruction to process an 192-bit key. The AES-KL cipher + implementation logs a warning message with a 192-bit key and then falls + back to AES-NI. So, this 192-bit key-size limitation is only documented, + not enforced. It means the key will remain in clear-text in memory. 
This + is to meet Linux crypto-cipher expectation that each implementation must + support all the AES-compliant key sizes. + +* Some AES-KL hardware implementation may have noticeable performance + overhead when compared with AES-NI instructions. + From patchwork Wed May 24 16:57:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. Bae" X-Patchwork-Id: 98636 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp67796vqr; Wed, 24 May 2023 10:27:07 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6ZkT+0I4SYkNiUun3nVM7T6YyqTo9aPunwIq+uq+xLWhZrYJKRk9rfUHuPOkUlgInsGPKJ X-Received: by 2002:a17:902:b7c5:b0:1ae:4013:986c with SMTP id v5-20020a170902b7c500b001ae4013986cmr17334238plz.63.1684949227078; Wed, 24 May 2023 10:27:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684949227; cv=none; d=google.com; s=arc-20160816; b=VzDC0UHKSq5kGqXqWR8c1YyzJ5vRJT8IB2BspAfY5IIsjEaoPJpzRBKxqvqWnXVCzH FPt/NX4C4ivXu++lg4sztoaP/3WLUDGEUoMluX/ohyS8AbHO4t87Iq5z9MbDjTtvjmja HXOmOMwkzDSh8yF+VYIH38VeQv3c2/hHxILbnX6JsAqomSgGPq+9uaqR9+Cb3vF9eLsp 31QxIcV7k8N2W1QTIkKuTCOrcc7/pECLmL5bEAJmO8RxE9yekepeH/BjhCl804MPUQ/y X4G1SuWMoRqrbN9kAd8Tp0+u35mQtTuXW/OwGQixph+u/DIYjCGRWFh2+4AOiTGDKFHS N2Ug== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=tjUbKMTAT8hy7WtvSusmKvDQXpy5s1qREOCS8T+etlo=; b=YoWPJhMctkziNkLMJk9rMUxd7xF+yaXpWO7veO/rXGUq59iYM2LZrmACVY2arECVr2 EQzzBfRLdOjE7Ag7/uwqVqrosH7/0Km04qP3tpjX3r3PR7Zg0QP2y1PrqDEStrYfeKMY A8kVTIJW4pacP2tOb9PPQQbjHmmqvuExdXTueBRTcz0HwJ+2RHe+LuK/gNU1NiauwYPI QaRuWdA3TYlJaa5tqujBHcZbK60neWRktTvApDGoAs6Bd0CkWxRoQDqd497bUwiTAqEi XdE7Qc+BGzY/XDv7xy2by7zm/QtsA8oUirbsD9rNZMdfd2AJuj1NfBUzXIKbtSBn6tcc y8ig== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=IL0cchXp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From patchwork Wed May 24 16:57:07 2023
X-Patchwork-Submitter: "Chang S. Bae"
X-Patchwork-Id: 98636
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar, "H. Peter Anvin", Peter Zijlstra
Peter Anvin" , Peter Zijlstra Subject: [PATCH v7 02/12] x86/cpufeature: Enumerate Key Locker feature Date: Wed, 24 May 2023 09:57:07 -0700 Message-Id: <20230524165717.14062-3-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1762832812507842270?= X-GMAIL-MSGID: =?utf-8?q?1766797320894982802?= Key Locker is a CPU feature to minimize exposure of clear-text key material. An encoded form, called 'key handle', is referenced for data encryption or decryption instead of accessing the clear text key. A wrapping key loaded in the CPU's software-inaccessible state is used to transform a user key into a key handle. On rarely unexpected hardware failure, the key could be lost. Here enumerate this hardware capability. It will not be shown up in /proc/cpuinfo as userspace usage is not supported. This is because there is no ABI to coordinate the wrapping-key failure. The feature supports Advanced Encryption Standard (AES) cipher algorithm with new SIMD instruction set like its predecessor (AES-NI). Mark the feature depending on XMM2. The new AES implementation will be in the crypto library. Add X86_FEATURE_KEYLOCKER to the disabled features mask at the moment. It will be enabled under a new config option. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: Peter Zijlstra Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Massage the changelog -- re-organize the change descriptions Changes from RFC v2: * Do not publish the feature flag to userspace. * Update the changelog. Changes from RFC v1: * Updated the changelog. 
---
 arch/x86/include/asm/cpufeatures.h          | 1 +
 arch/x86/include/asm/disabled-features.h    | 8 +++++++-
 arch/x86/include/uapi/asm/processor-flags.h | 2 ++
 arch/x86/kernel/cpu/cpuid-deps.c            | 1 +
 4 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index cb8ca46213be..4a12673df7e9 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -389,6 +389,7 @@
 #define X86_FEATURE_AVX512_VPOPCNTDQ	(16*32+14) /* POPCNT for vectors of DW/QW */
 #define X86_FEATURE_LA57		(16*32+16) /* 5-level page tables */
 #define X86_FEATURE_RDPID		(16*32+22) /* RDPID instruction */
+#define X86_FEATURE_KEYLOCKER		(16*32+23) /* "" Key Locker */
 #define X86_FEATURE_BUS_LOCK_DETECT	(16*32+24) /* Bus Lock detect */
 #define X86_FEATURE_CLDEMOTE		(16*32+25) /* CLDEMOTE instruction */
 #define X86_FEATURE_MOVDIRI		(16*32+27) /* MOVDIRI instruction */
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index fafe9be7a6f4..eb841d694ed9 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -38,6 +38,12 @@
 # define DISABLE_OSPKE		(1<<(X86_FEATURE_OSPKE & 31))
 #endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
 
+#ifdef CONFIG_X86_KEYLOCKER
+# define DISABLE_KEYLOCKER	0
+#else
+# define DISABLE_KEYLOCKER	(1<<(X86_FEATURE_KEYLOCKER & 31))
+#endif /* CONFIG_X86_KEYLOCKER */
+
 #ifdef CONFIG_X86_5LEVEL
 # define DISABLE_LA57	0
 #else
@@ -126,7 +132,7 @@
 #define DISABLED_MASK14	0
 #define DISABLED_MASK15	0
 #define DISABLED_MASK16	(DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \
-			 DISABLE_ENQCMD)
+			 DISABLE_ENQCMD|DISABLE_KEYLOCKER)
 #define DISABLED_MASK17	0
 #define DISABLED_MASK18	0
 #define DISABLED_MASK19	0
diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h
index d898432947ff..262348aeaad1 100644
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -128,6 +128,8 @@
 #define X86_CR4_PCIDE		_BITUL(X86_CR4_PCIDE_BIT)
 #define X86_CR4_OSXSAVE_BIT	18 /* enable xsave and xrestore */
 #define X86_CR4_OSXSAVE		_BITUL(X86_CR4_OSXSAVE_BIT)
+#define X86_CR4_KEYLOCKER_BIT	19 /* enable Key Locker */
+#define X86_CR4_KEYLOCKER	_BITUL(X86_CR4_KEYLOCKER_BIT)
 #define X86_CR4_SMEP_BIT	20 /* enable SMEP support */
 #define X86_CR4_SMEP		_BITUL(X86_CR4_SMEP_BIT)
 #define X86_CR4_SMAP_BIT	21 /* enable SMAP support */
diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
index f6748c8bd647..200c5e69f78c 100644
--- a/arch/x86/kernel/cpu/cpuid-deps.c
+++ b/arch/x86/kernel/cpu/cpuid-deps.c
@@ -81,6 +81,7 @@ static const struct cpuid_dep cpuid_deps[] = {
 	{ X86_FEATURE_XFD,		X86_FEATURE_XSAVES    },
 	{ X86_FEATURE_XFD,		X86_FEATURE_XGETBV1   },
 	{ X86_FEATURE_AMX_TILE,		X86_FEATURE_XFD       },
+	{ X86_FEATURE_KEYLOCKER,	X86_FEATURE_XMM2      },
 	{}
 };
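As a minimal sketch of how later kernel code might gate on this flag
(the function below is hypothetical, not from this series): because
X86_FEATURE_KEYLOCKER sits in DISABLED_MASK16 unless
CONFIG_X86_KEYLOCKER=y, cpu_feature_enabled() folds to a constant
'false' on kernels built without the option and the guarded path
compiles away.

#include <linux/errno.h>
#include <linux/init.h>
#include <asm/cpufeature.h>

/* Hypothetical sketch: bail out early when the feature is unavailable. */
static int __init keylocker_gate_sketch(void)
{
	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
		return -ENODEV;

	/* ... wrapping-key setup would follow here ... */
	return 0;
}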
Bae" X-Patchwork-Id: 98625 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp59899vqr; Wed, 24 May 2023 10:13:04 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7Bg2KtZLok/erBza+olw23S+VAl0G+6pbVnp9nvOACRcPgM88rX+OqXLdZU0hUPxd8AmNy X-Received: by 2002:a05:6a00:2195:b0:64a:f730:154c with SMTP id h21-20020a056a00219500b0064af730154cmr4171053pfi.13.1684948383595; Wed, 24 May 2023 10:13:03 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684948383; cv=none; d=google.com; s=arc-20160816; b=E0/qasHh2MkYwr+nY6KAqJv4lWkrFSHP1RRLXV3QWRnu3LR4dQBVgIbFUDlKJLM+cN Wjg08kLI4uyLknjIzDx87tMviFn12F4c8Txu0XFnefRIODDyX/HrSjs+Txfkm76gmGs1 CtUrRF1Tm0ggmmY+OxngfoXIJR7rRmTtyshx7tHDfTQJ6TKqa6SmcHkMB+ev+k6rBzn0 BXVi5pYHuKlHWt3m7RPzze0/GCU1IMsjYFgXmnybia7t3ui49PY5y/S98uz6EAoGhyVi oiV1IJno5dqQjEa3S9w5aP+ZeTCcAeTBkDqPXugGcFblob8PBYV9+bIrDeJYO2fvZgAz /EDw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=/8w1L4QNqLcpzGhat5+6uzg9oK8ddjwOveuHLi1IN44=; b=zxCm5WhW4gPhkJtaTblDLceEoYxV1WHO6kC7m+x21gfFlKxKsRjfrza7FLwkfwiUUm rG4ZJxh8CcGGvsJdm9fdxs6EvQ49NsOXx7aOumN31ALSD61ZQQoIJoSKxpJ3U61HgPFq XHMD6G7R5Y6LI4NDsi4lJ1sDGoiq7CkESH8OfFsjNydVovgYOSV765cd+cm/gzuA3LqA A7b9ZfY6mrGh5j/Iuvjriq6DWhg/aDn/UsfwmL9q+1HgeUFZMgPAEK9Y+HVIOI6Dgj7j 4aVY3Y6rzkF5HlK3hETZb8Uzi6JZ4X/sFqUKgHgkwpbN8OB2IKcL/9HUjZp49m4jffIu wW9Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=RgmQVIRO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id c62-20020a624e41000000b0064583a79521si8797465pfb.283.2023.05.24.10.12.50; Wed, 24 May 2023 10:13:03 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=RgmQVIRO; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235862AbjEXRKn (ORCPT + 99 others); Wed, 24 May 2023 13:10:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36544 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235438AbjEXRKW (ORCPT ); Wed, 24 May 2023 13:10:22 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1C4C0123; Wed, 24 May 2023 10:10:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1684948221; x=1716484221; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=F/SLedw3zCsrs8J8avtju/mMJX+xNkamKKwbSMdGWQY=; b=RgmQVIRO9W344dus/NWeDhVKmqzB4GPq+65HFUemXOthVSKbkpOQZVaE C2D/+O8VPVMSQRyCIxzQ8osNMXmki/fgDfSuDiFoDwPEzVDluLrNvPCPz 2+t3z7bSPmmh5OMhafVcrzhpiY3TnoWoAXntSIBhsFGW6TQUip7+g4JKf 2d7ZfEprsUz6Y7t78bk4/sqo+tdC2spWmBFvcZTVmpKBQv4I01wikxzSu ETSlDE06kmgr9Lu8qO+/P31JYvgq4cKegRzv+QZvgxaV+Z738US+q7ICM QtxLZW2fw2ruF+FfQLE/0LqRnxrFVBv4LgaTiRZVw2lp7wedJ1UkwIfcy Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="338206686" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="338206686" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 10:09:52 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="704427341" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="704427341" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga002.jf.intel.com with ESMTP; 24 May 2023 10:09:51 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar , "H. 
Peter Anvin" Subject: [PATCH v7 03/12] x86/insn: Add Key Locker instructions to the opcode map Date: Wed, 24 May 2023 09:57:08 -0700 Message-Id: <20230524165717.14062-4-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1762832874570504473?= X-GMAIL-MSGID: =?utf-8?q?1766796435930398314?= The x86 instruction decoder needs to know these new instructions that are going to be used in the crypto library as well as the x86 core code. Add the following: LOADIWKEY: Load a CPU-internal wrapping key. ENCODEKEY128: Wrap a 128-bit AES key to a key handle. ENCODEKEY256: Wrap a 256-bit AES key to a key handle. AESENC128KL: Encrypt a 128-bit block of data using a 128-bit AES key indicated by a key handle. AESENC256KL: Encrypt a 128-bit block of data using a 256-bit AES key indicated by a key handle. AESDEC128KL: Decrypt a 128-bit block of data using a 128-bit AES key indicated by a key handle. AESDEC256KL: Decrypt a 128-bit block of data using a 256-bit AES key indicated by a key handle. AESENCWIDE128KL: Encrypt 8 128-bit blocks of data using a 128-bit AES key indicated by a key handle. AESENCWIDE256KL: Encrypt 8 128-bit blocks of data using a 256-bit AES key indicated by a key handle. AESDECWIDE128KL: Decrypt 8 128-bit blocks of data using a 128-bit AES key indicated by a key handle. AESDECWIDE256KL: Decrypt 8 128-bit blocks of data using a 256-bit AES key indicated by a key handle. The detail can be found in Intel Software Developer Manual. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Massage the changelog -- add the reason a bit. Changes from RFC v1: * Separated out the LOADIWKEY addition in a new patch. * Included AES instructions to avoid warning messages when the AES Key Locker module is built. 
---
 arch/x86/lib/x86-opcode-map.txt       | 11 +++++++----
 tools/arch/x86/lib/x86-opcode-map.txt | 11 +++++++----
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt
index 5168ee0360b2..87e9696c47d2 100644
--- a/arch/x86/lib/x86-opcode-map.txt
+++ b/arch/x86/lib/x86-opcode-map.txt
@@ -800,11 +800,12 @@ cb: sha256rnds2 Vdq,Wdq | vrcp28ss/d Vx,Hx,Wx (66),(ev)
 cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev)
 cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev)
 cf: vgf2p8mulb Vx,Wx (66)
+d8: AESENCWIDE128KL Qpi (F3),(000),(00B) | AESENCWIDE256KL Qpi (F3),(000),(10B) | AESDECWIDE128KL Qpi (F3),(000),(01B) | AESDECWIDE256KL Qpi (F3),(000),(11B)
 db: VAESIMC Vdq,Wdq (66),(v1)
-dc: vaesenc Vx,Hx,Wx (66)
-dd: vaesenclast Vx,Hx,Wx (66)
-de: vaesdec Vx,Hx,Wx (66)
-df: vaesdeclast Vx,Hx,Wx (66)
+dc: vaesenc Vx,Hx,Wx (66) | LOADIWKEY Vx,Hx (F3) | AESENC128KL Vpd,Qpi (F3)
+dd: vaesenclast Vx,Hx,Wx (66) | AESDEC128KL Vpd,Qpi (F3)
+de: vaesdec Vx,Hx,Wx (66) | AESENC256KL Vpd,Qpi (F3)
+df: vaesdeclast Vx,Hx,Wx (66) | AESDEC256KL Vpd,Qpi (F3)
 f0: MOVBE Gy,My | MOVBE Gw,Mw (66) | CRC32 Gd,Eb (F2) | CRC32 Gd,Eb (66&F2)
 f1: MOVBE My,Gy | MOVBE Mw,Gw (66) | CRC32 Gd,Ey (F2) | CRC32 Gd,Ew (66&F2)
 f2: ANDN Gy,By,Ey (v)
@@ -814,6 +815,8 @@ f6: ADCX Gy,Ey (66) | ADOX Gy,Ey (F3) | MULX By,Gy,rDX,Ey (F2),(v) | WRSSD/Q My,
 f7: BEXTR Gy,Ey,By (v) | SHLX Gy,Ey,By (66),(v) | SARX Gy,Ey,By (F3),(v) | SHRX Gy,Ey,By (F2),(v)
 f8: MOVDIR64B Gv,Mdqq (66) | ENQCMD Gv,Mdqq (F2) | ENQCMDS Gv,Mdqq (F3)
 f9: MOVDIRI My,Gy
+fa: ENCODEKEY128 Ew,Ew (F3)
+fb: ENCODEKEY256 Ew,Ew (F3)
 EndTable
 
 Table: 3-byte opcode 2 (0x0f 0x3a)
diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
index 5168ee0360b2..87e9696c47d2 100644
--- a/tools/arch/x86/lib/x86-opcode-map.txt
+++ b/tools/arch/x86/lib/x86-opcode-map.txt
@@ -800,11 +800,12 @@ cb: sha256rnds2 Vdq,Wdq | vrcp28ss/d Vx,Hx,Wx (66),(ev)
 cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev)
 cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev)
 cf: vgf2p8mulb Vx,Wx (66)
+d8: AESENCWIDE128KL Qpi (F3),(000),(00B) | AESENCWIDE256KL Qpi (F3),(000),(10B) | AESDECWIDE128KL Qpi (F3),(000),(01B) | AESDECWIDE256KL Qpi (F3),(000),(11B)
 db: VAESIMC Vdq,Wdq (66),(v1)
-dc: vaesenc Vx,Hx,Wx (66)
-dd: vaesenclast Vx,Hx,Wx (66)
-de: vaesdec Vx,Hx,Wx (66)
-df: vaesdeclast Vx,Hx,Wx (66)
+dc: vaesenc Vx,Hx,Wx (66) | LOADIWKEY Vx,Hx (F3) | AESENC128KL Vpd,Qpi (F3)
+dd: vaesenclast Vx,Hx,Wx (66) | AESDEC128KL Vpd,Qpi (F3)
+de: vaesdec Vx,Hx,Wx (66) | AESENC256KL Vpd,Qpi (F3)
+df: vaesdeclast Vx,Hx,Wx (66) | AESDEC256KL Vpd,Qpi (F3)
 f0: MOVBE Gy,My | MOVBE Gw,Mw (66) | CRC32 Gd,Eb (F2) | CRC32 Gd,Eb (66&F2)
 f1: MOVBE My,Gy | MOVBE Mw,Gw (66) | CRC32 Gd,Ey (F2) | CRC32 Gd,Ew (66&F2)
 f2: ANDN Gy,By,Ey (v)
@@ -814,6 +815,8 @@ f6: ADCX Gy,Ey (66) | ADOX Gy,Ey (F3) | MULX By,Gy,rDX,Ey (F2),(v) | WRSSD/Q My,
 f7: BEXTR Gy,Ey,By (v) | SHLX Gy,Ey,By (66),(v) | SARX Gy,Ey,By (F3),(v) | SHRX Gy,Ey,By (F2),(v)
 f8: MOVDIR64B Gv,Mdqq (66) | ENQCMD Gv,Mdqq (F2) | ENQCMDS Gv,Mdqq (F3)
 f9: MOVDIRI My,Gy
+fa: ENCODEKEY128 Ew,Ew (F3)
+fb: ENCODEKEY256 Ew,Ew (F3)
 EndTable
 
 Table: 3-byte opcode 2 (0x0f 0x3a)
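To see the effect of the opcode-map update, a hypothetical sketch
using the kernel's instruction decoder. The byte pattern is the
F3-prefixed 0x0f 0x38 0xfa opcode from the new 'fa: ENCODEKEY128' row,
followed by a register-register ModRM byte; the helper name and the
64-bit decode context are assumptions of this sketch.

#include <linux/types.h>
#include <asm/insn.h>

/* Hypothetical sketch: the decoder now resolves an ENCODEKEY128 pattern. */
static bool sketch_decode_encodekey128(void)
{
	const u8 kbytes[] = { 0xf3, 0x0f, 0x38, 0xfa, 0xc1 };
	struct insn insn;

	if (insn_decode(&insn, kbytes, sizeof(kbytes), INSN_MODE_64))
		return false;	/* decode error */

	return insn.length == sizeof(kbytes);
}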
Bae" X-Patchwork-Id: 98626 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp60114vqr; Wed, 24 May 2023 10:13:25 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6lMo1XschKixSpHMjbRtKGqjc/GnA4NuOzxlsU/wcz7hpFJw9oOfI5HNXLNp1iRALm2Iws X-Received: by 2002:a05:6a20:430b:b0:10c:7676:73bd with SMTP id h11-20020a056a20430b00b0010c767673bdmr94500pzk.5.1684948404769; Wed, 24 May 2023 10:13:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684948404; cv=none; d=google.com; s=arc-20160816; b=Zjw8tBgNml2CgQRzN+qgMSlQ3oKPsqxW9N6yBWJniueDH97fh9U14N7NJEEniEjVAW Frc5L2IiFtRtK5WrcZajuEmHGaTAqIu7gv9sHTqE0ueLSd2gWkbE/+N/ERuCNJXtFbU5 ABrD84Pz9tuv/rQP9Z3CvN1VX8cEjiLRtEXexiaNN9nUtdKTk9o2nS1ObiGcVSjBZtrE VAGrdjnKhwyo3voGbpUBcc6cUb+emY+jLRAQOBD0/TyiAOd56jfb/VAoBmu5NO9TTNyi tmdGHh1N+lklU6vMpcdbOFAXybZvdeoJSZFDLwq4ekOXkynxmin1eJCgpzrurGe4qGst LyPQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=0UGlK3NRhda3tfxfOFzgQInbgF26UmfSUa0JnTt2wyc=; b=lRO7hW/zLvc0Dgy+y9uPsWc5AbSqVgOZ04pE30+AobGk6hhxqPRsw1x/4/VjvNJAna 60QaFp2/yu7WSYDmMCyfyKn02JofHK7bSzRlcbptY/IY81i12vaN01uRmY9YfrL/cHYJ C/jFm+jJNDCNEGkS+sKUcA5DRGEzkyAxWR9hWFInVW68E+Tlzl3zmqPH2mR6RX8VgNlm FAnjFzYbAAQsvtFh1U9oTK6FOWDpcp3hJ+m2eLxX0bOlNqs1tSXzeM1az6Cu3qE4MuZa V69ijXvXlCNrXldPzDvIZ9AhOsG48W4Rgai40lQfBzfE885067PQ5BUpeCTJl9biqtc0 ljwA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=Px9bjV8K; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h185-20020a6383c2000000b00533fce755a8si1348130pge.471.2023.05.24.10.13.12; Wed, 24 May 2023 10:13:24 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=Px9bjV8K; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235933AbjEXRKs (ORCPT + 99 others); Wed, 24 May 2023 13:10:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36528 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235354AbjEXRKX (ORCPT ); Wed, 24 May 2023 13:10:23 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 72014E9; Wed, 24 May 2023 10:10:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1684948222; x=1716484222; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=ijlcQTB25JWDSKBgGP5PTsXN7m/nFm/uJsgOpwUsehM=; b=Px9bjV8KcPQp8HlylKwq9Ltagdu/1WgscYrrdRJzGGTiv6bJNJddONRC xCO0URfB1srE7vNMaeHaoeEmlygq3PKs/EZxuP+hTp57jGMLwKX4dsb5K mxZGQx0txQQRpAuZ4+mAazllr6MJ1MWY5IgurGc4tL4c2tkAqCXBnPNMJ EM0ELV0mLdKu8SNC4EHHR4SI9XNKsAWtZ0HENK6DyHjNm0eSasXdMxNeI 9ODPVHsbBZnZjZufBMrhD7/9SI7ziHMA4G3chWZj60pMWhTaE+CS3GO53 cnVoDyBuk5KYfBCca3vTqq9TvgQFK3XdFhJmJIFyu61vf98azwKGSTesc A==; X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="338206700" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="338206700" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 10:09:52 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="704427344" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="704427344" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga002.jf.intel.com with ESMTP; 24 May 2023 10:09:52 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar , "H. Peter Anvin" , "Rafael J. 
Wysocki" , Peter Zijlstra Subject: [PATCH v7 04/12] x86/asm: Add a wrapper function for the LOADIWKEY instruction Date: Wed, 24 May 2023 09:57:09 -0700 Message-Id: <20230524165717.14062-5-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1762833403439461570?= X-GMAIL-MSGID: =?utf-8?q?1766796458624648361?= Key Locker introduces a CPU-internal wrapping key to encode a user key to a key handle. Then a key handle is referenced instead of the plain text key. LOADIWKEY loads a wrapping key in the software-inaccessible CPU state. It operates only in kernel mode. The kernel will use this to load a new key at boot time. Establish a wrapper to prepare for this use. Also, define struct iwkey to pass the key value to it. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: "Rafael J. Wysocki" Cc: Peter Zijlstra Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Massage the changelog -- clarify the reason and the changes a bit. Changes from v5: * Fix a typo: kernel_cpu_begin() -> kernel_fpu_begin() Changes from RFC v2: * Separate out the code as a new patch. * Improve the usability with the new struct as an argument. (Dan Williams) Note, Dan wondered if: WARN_ON(!irq_fpu_usable()); would be appropriate in the load_xmm_iwkey() function. --- arch/x86/include/asm/keylocker.h | 25 ++++++++++++++++++++++ arch/x86/include/asm/special_insns.h | 32 ++++++++++++++++++++++++++++ 2 files changed, 57 insertions(+) create mode 100644 arch/x86/include/asm/keylocker.h diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h new file mode 100644 index 000000000000..9b3bec452b31 --- /dev/null +++ b/arch/x86/include/asm/keylocker.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef _ASM_KEYLOCKER_H +#define _ASM_KEYLOCKER_H + +#ifndef __ASSEMBLY__ + +#include + +/** + * struct iwkey - A temporary wrapping key storage. + * @integrity_key: A 128-bit key to check that key handles have not + * been tampered with. + * @encryption_key: A 256-bit encryption key used in + * wrapping/unwrapping a clear text key. + * + * This storage should be flushed immediately after loaded. 
+ */ +struct iwkey { + struct reg_128_bit integrity_key; + struct reg_128_bit encryption_key[2]; +}; + +#endif /*__ASSEMBLY__ */ +#endif /* _ASM_KEYLOCKER_H */ diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h index de48d1389936..dd2d8b40fce3 100644 --- a/arch/x86/include/asm/special_insns.h +++ b/arch/x86/include/asm/special_insns.h @@ -9,6 +9,7 @@ #include #include #include +#include /* * The compiler should not reorder volatile asm statements with respect to each @@ -283,6 +284,37 @@ static __always_inline void tile_release(void) asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0"); } +/** + * load_xmm_iwkey - Load a CPU-internal wrapping key + * @key: A struct iwkey pointer. + * + * Load @key to XMMs then do LOADIWKEY. After this, flush XMM + * registers. Caller is responsible for kernel_fpu_begin(). + */ +static inline void load_xmm_iwkey(struct iwkey *key) +{ + struct reg_128_bit zeros = { 0 }; + + asm volatile ("movdqu %0, %%xmm0; movdqu %1, %%xmm1; movdqu %2, %%xmm2;" + :: "m"(key->integrity_key), "m"(key->encryption_key[0]), + "m"(key->encryption_key[1])); + + /* + * LOADIWKEY %xmm1,%xmm2 + * + * EAX and XMM0 are implicit operands. Load a key value + * from XMM0-2 to a software-invisible CPU state. With zero + * in EAX, CPU does not do hardware randomization and the key + * backup is allowed. + * + * This instruction is supported by binutils >= 2.36. + */ + asm volatile (".byte 0xf3,0x0f,0x38,0xdc,0xd1" :: "a"(0)); + + asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;" + :: "m"(zeros)); +} + #endif /* __KERNEL__ */ #endif /* _ASM_X86_SPECIAL_INSNS_H */ From patchwork Wed May 24 16:57:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. 
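A minimal usage sketch for the new wrapper (the caller below is
hypothetical, not part of this patch), following the kdoc contract
that the caller owns the FPU context and flushes the in-memory copy:

#include <linux/random.h>
#include <linux/string.h>
#include <asm/fpu/api.h>
#include <asm/special_insns.h>

/*
 * Hypothetical caller: generate a random wrapping key, load it while
 * owning the FPU context (load_xmm_iwkey() clobbers XMM0-2), then
 * wipe the in-memory copy so the CPU holds the only instance.
 */
static void sketch_load_random_iwkey(void)
{
	struct iwkey key;

	get_random_bytes(&key, sizeof(key));

	kernel_fpu_begin();
	load_xmm_iwkey(&key);
	kernel_fpu_end();

	memzero_explicit(&key, sizeof(key));	/* flush the clear-text copy */
}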
Bae" X-Patchwork-Id: 98628 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp60626vqr; Wed, 24 May 2023 10:14:19 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4Ny/IFaaj0ZnB66Hf2daR1vE6k38tRJXJ+s4i9UScwmT/PlmVRXmAJIzjSVgv6DamPLCB+ X-Received: by 2002:a05:6a20:2594:b0:109:38b4:a210 with SMTP id k20-20020a056a20259400b0010938b4a210mr18743117pzd.29.1684948458912; Wed, 24 May 2023 10:14:18 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684948458; cv=none; d=google.com; s=arc-20160816; b=rq5ub9gzmIu4DS174jS5kJ3hPskxE+l/J6QjwT6tTcZYevOjh4Dj2k4f0UQxQQNkIy fryyawjpyC23Dori8kQFuIQqdq03LPQZMZdv9bq3TLj0HA404vucIcEEqwyKGXpcegDX RUkbSmeSFaqvHZlQl2lguEh1JTl5R/S1VOg42lxb1mW6kFJnkREzD1QDBJCNSaGbays7 RcPoocKx08aLmk08PknqoXQ9Ry+JRPCKOTE/oQz2Zl/pxpfHMXVHjtACzxZfTgZ05b95 SNAcfNf6mNPcL0oJ7Y0I9UW7dVcLE6DbNEDoKTex8zkEqwBwIDxRkLPUj/4jvOK1jAy1 QpRQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=uzHCyItZW3LBSRfVWi4skKKs5K3MuNZzgpG5jmIy0Wc=; b=mgk3sHsI+HZvBvN9s8NtFGo0WmYPFLqH0bdLJEaMBYlOqzKTfjE50/A+BqtYtmTfPg XvXFg4ZEvjEGRkxAmGGBVRWSXcvbvouROokvWMlAtOWLl3DZ7RhuEgo6fRRxgnerrBDb e6Dm4OA3tn7nXhCxwA1wiyHFNxBFyR2dhSNFR61DwM7zYDEOyY7IwQ2CSUMQX5Wa43QF 05K1CUTN8K7qK/i0KrzFXyLmCQOfDI+nbtW3jYRSa4vujnL0DpZjK/DOhDdq0CJoqIZl 26v46rplejNTYLcnbcPWyzKA4CGS7++1LZeipqpQLtMcfzfrFF4imjIDjRo53Z6yG1qf OtRw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=WrlKRdic; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id h185-20020a6383c2000000b00533fce755a8si1348130pge.471.2023.05.24.10.14.05; Wed, 24 May 2023 10:14:18 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=WrlKRdic; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230191AbjEXRKy (ORCPT + 99 others); Wed, 24 May 2023 13:10:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235479AbjEXRKX (ORCPT ); Wed, 24 May 2023 13:10:23 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B312212E; Wed, 24 May 2023 10:10:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1684948222; x=1716484222; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=WDTSl+X1XPquNKqotXNY0s7uU9PS6vgLhf4WACna8JI=; b=WrlKRdicL43PumcC2o26eSslUaBSm0AYEGCDlCXP8q400wzmattc3dCt jHpMqFq6FJl0Tkte5iAnCNy98bpa6Qa+zpdEk+2JQNCSXrdUJNg+A8sMO 1NGo+1kdK4Q29RlELyPUucFdayd3LoGEOYXwVKtHG6rsgviV8ehaXpHLg 5AoE+DhgpCREP7dEpFCQoLxRtC3I+bh06Qoo9tmuA+wnku2UavW+9VnI/ 02CfwsnL08jkZ8D9/7BRIw48JhokZOtld5p0dnEXkkieeFzGWLylvqR2h k9V9HoqiZkhGCpamENpafujbQjPBigNcBkBCq7iZ13Rl5oAl5snfBfdWz A==; X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="338206713" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="338206713" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 10:09:53 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="704427349" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="704427349" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga002.jf.intel.com with ESMTP; 24 May 2023 10:09:52 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar , "H. 
Peter Anvin" , Peter Zijlstra Subject: [PATCH v7 05/12] x86/msr-index: Add MSRs for Key Locker wrapping key Date: Wed, 24 May 2023 09:57:10 -0700 Message-Id: <20230524165717.14062-6-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766796515076526635?= X-GMAIL-MSGID: =?utf-8?q?1766796515076526635?= The CPU state that contains the wrapping key is in the same power domain as the cache. So any sleep state that would invalidate the cache (like S3) also invalidates the state of the wrapping key. But, since the state is inaccessible to software, it needs a special mechanism to save and restore the key during deep sleep. A set of new MSRs are provided as an abstract interface to save and restore the wrapping key, and to check the key status. The wrapping key is saved in a platform-scoped state of non-volatile media. The backup itself and its path from the CPU are encrypted and integrity protected. Define those MSRs to be used to save and restore the key for S3/4 sleep states. But the backup storage's non-volatility is not architecturally guaranteed across off-states, such as S5 and G3. Then, the kernel may generate a new key on the next boot. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: Peter Zijlstra Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Tweak the changelog -- put the last for those about other sleep states Changes from RFC v2: * Update the changelog. (Dan Williams) * Rename the MSRs. (Dan Williams) --- arch/x86/include/asm/msr-index.h | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 3aedae61af4f..cd8555c0f3c2 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -1117,4 +1117,10 @@ * a #GP */ +/* MSRs for managing a CPU-internal wrapping key for Key Locker. */ +#define MSR_IA32_IWKEY_COPY_STATUS 0x00000990 +#define MSR_IA32_IWKEY_BACKUP_STATUS 0x00000991 +#define MSR_IA32_BACKUP_IWKEY_TO_PLATFORM 0x00000d91 +#define MSR_IA32_COPY_IWKEY_TO_LOCAL 0x00000d92 + #endif /* _ASM_X86_MSR_INDEX_H */ From patchwork Wed May 24 16:57:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. 
Bae" X-Patchwork-Id: 98627 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp60413vqr; Wed, 24 May 2023 10:13:53 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5f8coAsJSwmgOOXlI7jGIn9o3oUSpXSsHoCHT4wLrjcK8L+GcBMwavzg2tlUmm35ZSJgn4 X-Received: by 2002:a17:903:1208:b0:1aa:feca:b616 with SMTP id l8-20020a170903120800b001aafecab616mr19568151plh.65.1684948433234; Wed, 24 May 2023 10:13:53 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684948433; cv=none; d=google.com; s=arc-20160816; b=EUT1U6PrGrFysq3c7zJE7+3tGNbyC2dub8Y+e5FC7UsqXt8hyBc8T8PNMvRAm0nJ0M E4ZeMdwYdFhjFKtCDj+IElom5aval/oRXwDMF/YTESq6mQv4d4ae4Vj/n5wxhx72T3KX q4hNjJFvRoTkIRdABJsX1fHadwGj6jeKLNIWTx5/CQWi25Hu4IjRWowq4SZ6dm2Vmtiq nvF6BhdpA7Ih0D0opQJx/B/f2Kj3trJt1NGTsnoiM/0Vaujh6yvbw4gyl2dWmUjLNzNA Z3t06CsZyd97yCtJvkv7eQ8SZr3VGxGJSeEKvyUuPeZW9UqJJBqg7qL5T3hAj1e/hNBf uSEg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=sMXA8tJj5/J80f85Xp7o52yE+t2uclTA7hbewjYTG88=; b=zcehMmpdX58j7CuKs/jRM5ekUyd2L7e/OhB2jzz8zA0VkBo6b2q3DJCzS1FxesOj6V v1QmYGaWzkkShmDbsgpfUSjljS53k/ekTHFnK0KO1xvooJCCCSftQ0HZk3GJcmQGFX7E lhFifO86m+8pLbyIba4ySjsoJNlAQwJdRt3Hn2nuu6/4/RjtyF4+PI983HD4ga+g2Yd9 BnIArrJUwsrzEjzZ1VyD8VDcGJ1rm8mscSukIQuqywjCycYpERXeji10RvJgesW8OKfx 6KCGJnz1X3UHtT1/sq9aFNL7ZYeZI/pJr8PvJMk8SfnJk57Ft9GxurWdh++7uHtbAm8Y FNCQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=PMF2gEbv; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id b5-20020a170902e94500b001ab286e740bsi5868238pll.328.2023.05.24.10.13.41; Wed, 24 May 2023 10:13:53 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=PMF2gEbv; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235952AbjEXRKw (ORCPT + 99 others); Wed, 24 May 2023 13:10:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36584 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235414AbjEXRKY (ORCPT ); Wed, 24 May 2023 13:10:24 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F59ABB; Wed, 24 May 2023 10:10:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1684948223; x=1716484223; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=+YuUSbnWKYlNrEuY9wZwu6rqIXV81jiJurK8saoa8ho=; b=PMF2gEbvIbagokWkViQWbp84stDhh3SsejV4F9LLuzMaPn+mDgsOdKzt KJOncQvORfliKZkoKh88CzQDGtSsx1X/RsF4uvzp/GFq14o0jxUwDy1+0 09xjR88IQROTk1Iyume+rUoGf85PNoxHe15VQqBp3QZQcx0R3SPQzirhs sCTrEjHfpmalO3HQYI/mnIE5Y+s2WLs0yo+z+GYNksSi7Mw9pngHr0pEJ 9Fv1nt0UlvKmLXdFGd29IQBvPqkAIErFEPymys68jHe+aHlkV2K6IgGpi oVgWK5BtUn40ahBxacgtB6ZNgsTKqY7wMcku/a4WBdJzdCj9eWQNa52cT A==; X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="338206726" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="338206726" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 10:09:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="704427352" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="704427352" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga002.jf.intel.com with ESMTP; 24 May 2023 10:09:53 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar , "H. 
Peter Anvin" Subject: [PATCH v7 06/12] x86/keylocker: Define Key Locker CPUID leaf Date: Wed, 24 May 2023 09:57:11 -0700 Message-Id: <20230524165717.14062-7-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1762833284730392449?= X-GMAIL-MSGID: =?utf-8?q?1766796488312906732?= Both Key Locker enabling code in the x86 core and AES Key Locker code in the crypto library will need to reference some CPUID bits. Define these feature-specific CPUID leaf and bits. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Tweak the changelog -- comment the reason first and then brief the change. Changes from RFC v2: * Separate out the code as a new patch. --- arch/x86/include/asm/keylocker.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h index 9b3bec452b31..1717e0866254 100644 --- a/arch/x86/include/asm/keylocker.h +++ b/arch/x86/include/asm/keylocker.h @@ -5,6 +5,7 @@ #ifndef __ASSEMBLY__ +#include #include /** @@ -21,5 +22,11 @@ struct iwkey { struct reg_128_bit encryption_key[2]; }; +#define KEYLOCKER_CPUID 0x019 +#define KEYLOCKER_CPUID_EAX_SUPERVISOR BIT(0) +#define KEYLOCKER_CPUID_EBX_AESKLE BIT(0) +#define KEYLOCKER_CPUID_EBX_WIDE BIT(2) +#define KEYLOCKER_CPUID_EBX_BACKUP BIT(4) + #endif /*__ASSEMBLY__ */ #endif /* _ASM_KEYLOCKER_H */ From patchwork Wed May 24 16:57:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. 
Bae" X-Patchwork-Id: 98634 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp66928vqr; Wed, 24 May 2023 10:25:24 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7r1YfQN7TQZjxSbjx+qGnHGwqTjLEgzLliyFxE4Hic23nbu8/XRJE+xfm2muxGIuO6x+04 X-Received: by 2002:a05:6a00:16d0:b0:64f:74d9:eb4a with SMTP id l16-20020a056a0016d000b0064f74d9eb4amr1642808pfc.8.1684949124481; Wed, 24 May 2023 10:25:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684949124; cv=none; d=google.com; s=arc-20160816; b=rpZOeXUoI1DArUGBhDf+fw9xciqnohlLo100vDCnIbJ9YcXks0ioJQ1bBRjZOvfY/s FHKPqWoKEkwchRAsNB84IWU2lUJplkn0dr4AT7w9CQCelziIi9Wpm9dyY/2F2+oS8Ide saMb8lkL8W/oCdwsvwmp5g0QRFdBse7PeahtcWkWhkxKUV/YKT91WgrJMq24Igw4EoRE +q1zFBApMMCA+NpKmWYsUAOAsY1QdGhxhDCS1xeqRPAWSJ3yOkUnzQDDv9VOyOr7bICs Ih78RSFqR7/0FhfNyI+MeRIV7oKzSCuHZxHqBT2GbDjBK4JVdmNKckKm8g4N0TKyfu8c leRg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=QU5HQwNBtyn1s9/TJpcQ53cG0Jf28B1r8dTrcRCwzhQ=; b=nnSQ3b9cabVCvI6tIoBiqIAcEkSE5UsBv+KI2zZ2R+N/x9XSn19xkxCWhVz00x66BN pdHJYlwMkff2e6Kl0bYTRrDXhEW6yI8K5yBDkQzcFJHbnVCNaAdAbcwITFxrbRFaQY90 a+QH7ICCyoOC6Lj8MDze2EqprZcn+GI4dVHIQFBnZs63fVvNtCNKWBEOATYzipuupANO wQ61Z/JJwj89i+ZieFzt9sQgz6C720RbekK50kR0Vxsq0/YM6XelSUZL8RWL9ff1zxsK FoCRbaljtQ5nqZQ4cfVlYKQAtuUuCzqNEf5VcNXiwSupUwJFN74I8eLGMHN657WEgMjs NtZw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=OOfnugxp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id x30-20020aa79a5e000000b0064f37ac8ea3si319218pfj.296.2023.05.24.10.25.09; Wed, 24 May 2023 10:25:24 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=OOfnugxp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235815AbjEXRK6 (ORCPT + 99 others); Wed, 24 May 2023 13:10:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36824 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235164AbjEXRKm (ORCPT ); Wed, 24 May 2023 13:10:42 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 16F9F180; Wed, 24 May 2023 10:10:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1684948224; x=1716484224; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=hNwP92Fv0O2IjL5reMXC0VMmleDiOlnYLtu6Ii1PZt4=; b=OOfnugxpGyccFuvrftoCdTAD7G9FXxOc/WMnMGBRFEbWk072N8V3FmhW DfhyqDf6yf4hmI+Wxo+nxkISsTb9U47OhOTk9L5Ood5s6vYCht7KLeF5R 7PNJE1xK5O1Eq7dOjAlDbv2HAoEp7Ia+n4yR6bT4mK4RLo/sDK+V1AlPJ PAy6IYY1wHuwtTEkVvA21IhnaXckOc1JHUdjwhmgrwKOVMooWoy78AQPl OMxKzTM9Op8kbQRYqUpHSTowXNCksJv6tCifdBlxRks1nqIp/icxZxflO L/8oV5n7nshGTz03WW+gGq4sKXkk345SMJ4mkISq6hXYvOW5nKtFtMAVU Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="338206739" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="338206739" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 10:09:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="704427356" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="704427356" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga002.jf.intel.com with ESMTP; 24 May 2023 10:09:54 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar , "H. 
Peter Anvin" , Peter Zijlstra Subject: [PATCH v7 07/12] x86/cpu/keylocker: Load a wrapping key at boot-time Date: Wed, 24 May 2023 09:57:12 -0700 Message-Id: <20230524165717.14062-8-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766797212819628559?= X-GMAIL-MSGID: =?utf-8?q?1766797212819628559?= The wrapping key is an entity to encode a clear text key into a key handle. This key is a pivot in protecting user keys. So the value has to be randomized before being loaded in the software-invisible CPU state. The wrapping key needs to be established before the first user. Given that the only proposed Linux use case for Key Locker is dm-crypt, the feature could be lazily enabled when the first dm-crypt user arrives. But there is no precedent for late enabling of CPU features and it adds maintenance burden without demonstrative benefit outside of minimizing the visibility of Key Locker to userspace. Therefore, generate random bytes and load them at boot time. Ensure the dynamic CPU bit (CPUID.AESKLE) is set before the load and flush out these bytes after that. Given that the Linux Key Locker support is only intended for bare metal dm-crypt consumption, and that switching wrapping key per VM is untenable, explicitly skip this setup in the X86_FEATURE_HYPERVISOR case. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: Peter Zijlstra Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Switch to use 'static inline' for the empty functions, instead of macro that disallows type checks. (Eric Biggers and Dave Hansen) * Use memzero_explicit() to wipe out the key data instead of writing the poison value over there. (Robert Elliott) * Massage the changelog for the better readability. Changes from v5: * Call out the disabling when the feature is available on a virtual machine. Then, it will turn off the feature flag Changes from RFC v2: * Make bare metal only. * Clean up the code (e.g. dynamically allocate the key cache). (Dan Williams) * Massage the changelog. * Move out the LOADIWKEY wrapper and the Key Locker CPUID defines. Note, Dan wonders that given that the only proposed Linux use case for Key Locker is dm-crypt, the feature could be lazily enabled when the first dm-crypt user arrives, but as Dave notes there is no precedent for late enabling of CPU features and it adds maintenance burden without demonstrative benefit outside of minimizing the visibility of Key Locker to userspace. 
---
 arch/x86/include/asm/keylocker.h |  9 ++++
 arch/x86/kernel/Makefile         |  1 +
 arch/x86/kernel/cpu/common.c     |  5 +-
 arch/x86/kernel/keylocker.c      | 82 ++++++++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c        |  2 +
 5 files changed, 98 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kernel/keylocker.c

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
index 1717e0866254..9040e10ab648 100644
--- a/arch/x86/include/asm/keylocker.h
+++ b/arch/x86/include/asm/keylocker.h
@@ -5,6 +5,7 @@
 #ifndef __ASSEMBLY__
 
+#include <...>
 #include <linux/bits.h>
 #include <asm/fpu/types.h>
 
@@ -28,5 +29,13 @@ struct iwkey {
 #define KEYLOCKER_CPUID_EBX_WIDE       BIT(2)
 #define KEYLOCKER_CPUID_EBX_BACKUP     BIT(4)
 
+#ifdef CONFIG_X86_KEYLOCKER
+void setup_keylocker(struct cpuinfo_x86 *c);
+void destroy_keylocker_data(void);
+#else
+static inline void setup_keylocker(struct cpuinfo_x86 *c) { }
+static inline void destroy_keylocker_data(void) { }
+#endif
+
 #endif /*__ASSEMBLY__ */
 #endif /* _ASM_KEYLOCKER_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4070a01c11b7..9e2647320dc6 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -134,6 +134,7 @@ obj-$(CONFIG_PERF_EVENTS) += perf_regs.o
 obj-$(CONFIG_TRACING) += tracepoint.o
 obj-$(CONFIG_SCHED_MC_PRIO) += itmt.o
 obj-$(CONFIG_X86_UMIP) += umip.o
+obj-$(CONFIG_X86_KEYLOCKER) += keylocker.o
 obj-$(CONFIG_UNWINDER_ORC) += unwind_orc.o
 obj-$(CONFIG_UNWINDER_FRAME_POINTER) += unwind_frame.o
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8f284e185aea..5882ff6e3c6b 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -58,6 +58,8 @@
 #include <...>
 #include <...>
 #include <...>
+#include <asm/keylocker.h>
+
 #include <...>
 #include <...>
 #include <...>
@@ -1834,10 +1836,11 @@ static void identify_cpu(struct cpuinfo_x86 *c)
        /* Disable the PN if appropriate */
        squash_the_stupid_serial_number(c);
 
-       /* Set up SMEP/SMAP/UMIP */
+       /* Setup various Intel-specific CPU security features */
        setup_smep(c);
        setup_smap(c);
        setup_umip(c);
+       setup_keylocker(c);
 
        /* Enable FSGSBASE instructions if available. */
        if (cpu_has(c, X86_FEATURE_FSGSBASE)) {
diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
new file mode 100644
index 000000000000..038e5268e6b8
--- /dev/null
+++ b/arch/x86/kernel/keylocker.c
@@ -0,0 +1,82 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Setup Key Locker feature and support the wrapping key management.
+ */
+
+#include <linux/random.h>
+#include <linux/string.h>
+#include <...>
+
+#include <asm/fpu/api.h>
+#include <asm/keylocker.h>
+#include <asm/tlbflush.h>
+
+static __initdata struct keylocker_setup_data {
+       struct iwkey key;
+} kl_setup;
+
+static void __init generate_keylocker_data(void)
+{
+       get_random_bytes(&kl_setup.key.integrity_key, sizeof(kl_setup.key.integrity_key));
+       get_random_bytes(&kl_setup.key.encryption_key, sizeof(kl_setup.key.encryption_key));
+}
+
+void __init destroy_keylocker_data(void)
+{
+       memzero_explicit(&kl_setup.key, sizeof(kl_setup.key));
+}
+
+static void __init load_keylocker(void)
+{
+       kernel_fpu_begin();
+       load_xmm_iwkey(&kl_setup.key);
+       kernel_fpu_end();
+}
+
+/**
+ * setup_keylocker - Enable the feature.
+ * @c: A pointer to struct cpuinfo_x86
+ */
+void __ref setup_keylocker(struct cpuinfo_x86 *c)
+{
+       if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
+               goto out;
+
+       if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) {
+               pr_debug("x86/keylocker: Not compatible with a hypervisor.\n");
+               goto disable;
+       }
+
+       cr4_set_bits(X86_CR4_KEYLOCKER);
+
+       if (c == &boot_cpu_data) {
+               u32 eax, ebx, ecx, edx;
+
+               cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
+               /*
+                * Check the feature readiness via CPUID. Note that the
+                * CPUID AESKLE bit is conditionally set only when CR4.KL
+                * is set.
+                */
+               if (!(ebx & KEYLOCKER_CPUID_EBX_AESKLE) ||
+                   !(eax & KEYLOCKER_CPUID_EAX_SUPERVISOR)) {
+                       pr_debug("x86/keylocker: Not fully supported.\n");
+                       goto disable;
+               }
+
+               generate_keylocker_data();
+       }
+
+       load_keylocker();
+
+       pr_info_once("x86/keylocker: Enabled.\n");
+       return;
+
+disable:
+       setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
+       pr_info_once("x86/keylocker: Disabled.\n");
+out:
+       /* Make sure the feature is disabled for kexec-reboot. */
+       cr4_clear_bits(X86_CR4_KEYLOCKER);
+}
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index e82be82cda55..41f24ab74fb9 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -86,6 +86,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <asm/keylocker.h>
 
 /* representing HT siblings of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
@@ -1386,6 +1387,7 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
        nmi_selftest();
        impress_friends();
        cache_aps_init();
+       destroy_keylocker_data();
 }
 
 static int __initdata setup_possible_cpus = -1;
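One v6->v7 change above deserves a note: wiping the key copy with
memzero_explicit() instead of writing a poison value (or using a plain
memset()) matters because a compiler may elide an ordinary store to an
object that is never read again. A minimal illustration of the hazard,
in plain C (the handle_secret() function is invented for this example):

#include <string.h>

static void handle_secret(void)
{
        unsigned char key[32];

        /* ... derive and use key ... */

        /*
         * 'key' is dead after this point, so the compiler may legally
         * drop this memset() as a dead store, leaving the secret in
         * memory. memzero_explicit() inserts a barrier so the wipe
         * cannot be optimized away.
         */
        memset(key, 0, sizeof(key));
}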
Bae" X-Patchwork-Id: 98631 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp63808vqr; Wed, 24 May 2023 10:19:26 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6kbkrrouTIT0pofuae8NIEZ/2llPs8+292FHrtUaguophx93ZXwRp9VE0QtMbua3XB1CnD X-Received: by 2002:a17:90a:cb84:b0:255:9453:3780 with SMTP id a4-20020a17090acb8400b0025594533780mr7445930pju.32.1684948766460; Wed, 24 May 2023 10:19:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684948766; cv=none; d=google.com; s=arc-20160816; b=K1T62D4pmx26R8CDnraVE8rMUX5+73rmpoASNcl8LWAY4KW1h5jaxrt+LFMsz/HwDa +Y0ROHVWjvoQJnJvwOvD+8C4fXLYiCtbFjV8h5tRpAIkxpYRxGiC4nFy8YnuRyh/3kB3 cvDHirLfUIpFxDfWFn3ZRiZDm9XiKt6CGdmZPZ1bC454+k0sf9q4uRmofcuzQcaXHAwo LPuzJlo/cMbdpFPaSq1+hjC8pG0/PNWii9FyhFHmEgFPi3yUaXAJUAdXW1e6+Rlq1+zb AFWIBQyyaP415nJ8Z5cIpEiF83iwysxG2s1lXu74WgcKC9aO9jjWou3AkevrLXwJ2/E/ 8drg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=9WxPv1EC9GpDsb3GBH3opGYBCwmR3LDuBPRUfcf3rtM=; b=iqe4l828w4bEJ/YFJTYDEF2LNjTBIGcuRiCbwNbukUUjyvyHEeRP1l0Loew2ZaC7we k1DrOMsGhKcXaPJDMn9USyxqeFUhcYJVrg7zjj0IHprpjVgGzYvWJmgG+h4ifkuGcrby oXd3Il2PMKcZDWkcNiZNjaU4FGzcTmhCrGOzHSjVeE0Bm0IUTQl9kSJ17qeTmGxp4ScN KuFg7Av3QpaCwfYgjCat8m78Kos7unsn/8nmCfd4u77TzgIYeVpXMtG6rRc8jf7w9T7e 0kcsBpKHjUxPEPxkvfIHsUAKY90YQRT7WrT1UdBrBwJhPdNCARF2O3uIDalkRwlclEvQ Y5UQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=a40fkCs8; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id d63-20020a17090a6f4500b00252ad7ab4a5si1770835pjk.5.2023.05.24.10.19.11; Wed, 24 May 2023 10:19:26 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=a40fkCs8; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231352AbjEXRLC (ORCPT + 99 others); Wed, 24 May 2023 13:11:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36838 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235670AbjEXRKm (ORCPT ); Wed, 24 May 2023 13:10:42 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1BCB718C; Wed, 24 May 2023 10:10:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1684948224; x=1716484224; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=46ODicMEHTm3WSA9Zthp4yjDFP1jOE55xd09xeCbdG8=; b=a40fkCs8WDErTtd0sOmUDvdZI6OJ4s4DHSB9VF4O+d76FlSDFcAX3bmw Uea8T0iIX96X2VrOPUsDAdB8NevMZ4+MciiTakqiWmhO2xY7Y3acqx387 w3bkyY9qa2aSXNr25h7qQSL4SvVDEojFfXXwNqEqT06vA15OomKCdsXZx 3JCTg7sX58LU2BhoXmI4YMYtIGd66czgMcBK3NBx5vBBE8rH2L60eb/hS 61Y2O1c7T4iVZ0XXFFKjigOnk1p7OyQ0OdriFYl3hY4uTaNmdabRdmXog j8PwaJKWbSkyBnyquQUewyGHfdM5BJkt9+l7vvKI8fOgtzEHlwbXqqghD g==; X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="338206755" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="338206755" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 10:09:55 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="704427361" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="704427361" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga002.jf.intel.com with ESMTP; 24 May 2023 10:09:54 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Ingo Molnar , "H. Peter Anvin" , "Rafael J. 
Wysocki" , linux-pm@vger.kernel.org Subject: [PATCH v7 08/12] x86/PM/keylocker: Restore the wrapping key on the resume from ACPI S3/4 Date: Wed, 24 May 2023 09:57:13 -0700 Message-Id: <20230524165717.14062-9-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766796837896322781?= X-GMAIL-MSGID: =?utf-8?q?1766796837896322781?= The primary use case for the feature is bare metal dm-crypt. The key needs to be restored properly on wakeup, as dm-crypt does not prompt for the key on resume from suspend. Even if the prompt performs for unlocking the volume, where the hibernation image is stored, it still expects to reuse the key handles within the hibernation image once it is loaded. == Wrapping-key Restore == So it is motivated to meet dm-crypt's expectation that the key handles in the suspend-image remain valid after resume from an S-state. But, when the system enters the ACPI S3 or S4 sleep states, the wrapping key is discarded. Key Locker provides a mechanism to back up the wrapping key in non-volatile storage. So, request a backup right after the key is loaded at boot time, and copy it back to each CPU upon wakeup. Then, the entirety of Key Locker has to be disabled if the backup mechanism is not available unless CONFIG_SUSPEND=n. == Restore Failure == In the event of a key restore failure, the kernel proceeds with an initialized wrapping key state. This has the effect of invalidating any key handles that might be present in a suspend-image. When this happens, dm-crypt will see I/O errors resulting from error returns from crypto_skcipher_encrypt()/decrypt(). While this will disrupt operations in the current boot, data is not at risk and access is restored with new handles created by the new wrapping key at the next boot. Also, manage a feature-specific flag to communicate with the crypto implementation. This ensures to stop using the AES instructions upon the key restore failure while not turning off the feature. == Off-states == While the backup may be maintained in non-volatile media across S5 and G3 "off" states, it is neither architecturally guaranteed nor it is expected by dm-crypt as prompting for the key whenever the volume is started. Then, a reboot can cover this case with a new wrapping key. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Acked-by: Rafael J. Wysocki Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: "Rafael J. Wysocki" Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-pm@vger.kernel.org --- Changes from v6: * Limit the symbol export only when needed. * Improve the coding style -- reduce an indent after 'if() { ... return; }'. (Eric Biggers) * Fix the coding style -- reduce an indent after if() {...return;}. (Eric Biggers) Tweak the comment along with that. * Improve the function prototype, instead of using a macro. 
---
Changes from v6:
* Limit the symbol export to only when needed.
* Fix the coding style -- reduce an indent after 'if () { ... return; }',
  and tweak the comment along with that. (Eric Biggers)
* Improve the function prototype, instead of using a macro.
  (Eric Biggers and Dave Hansen)
* Update the documentation:
  - Massage the changelog to clarify the problem-and-solution by
    sections.
  - Clarify the comment about the key restore failure.

Changes from v5:
* Fix the 'valid_kl' flag so it is not set when the feature is disabled.
  (Reported by Marvin Hsu <marvin.hsu@intel.com>) Add a function comment
  about this.
* Improve the error handling in setup_keylocker(). All error cases fall
  through to the end, which disables the feature; all successful cases
  return immediately.

Changes from v4:
* Update the changelog and title. (Rafael Wysocki)

Changes from v3:
* Fix the build issue with !X86_KEYLOCKER. (Eric Biggers)

Changes from RFC v2:
* Change the backup key failure handling. (Dan Williams)

Changes from RFC v1:
* Folded the warning message into the if condition check.
  (Rafael Wysocki)
* Rebased on the changes of the previous patches.
* Added an error code for key restoration failures.
* Moved the restore helper.
* Added function descriptions.
---
 arch/x86/include/asm/keylocker.h |   4 +
 arch/x86/kernel/keylocker.c      | 136 ++++++++++++++++++++++++++++++-
 arch/x86/power/cpu.c             |   2 +
 3 files changed, 139 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
index 9040e10ab648..8621234c5897 100644
--- a/arch/x86/include/asm/keylocker.h
+++ b/arch/x86/include/asm/keylocker.h
@@ -32,9 +32,13 @@ struct iwkey {
 #ifdef CONFIG_X86_KEYLOCKER
 void setup_keylocker(struct cpuinfo_x86 *c);
 void destroy_keylocker_data(void);
+void restore_keylocker(void);
+extern bool valid_keylocker(void);
 #else
 static inline void setup_keylocker(struct cpuinfo_x86 *c) { }
 static inline void destroy_keylocker_data(void) { }
+static inline void restore_keylocker(void) { }
+static inline bool valid_keylocker(void) { return false; }
 #endif
 
 #endif /*__ASSEMBLY__ */
diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
index 038e5268e6b8..8f3acc34b6da 100644
--- a/arch/x86/kernel/keylocker.c
+++ b/arch/x86/kernel/keylocker.c
@@ -11,20 +11,48 @@
 #include <asm/fpu/api.h>
 #include <asm/keylocker.h>
 #include <asm/tlbflush.h>
+#include <asm/msr.h>
 
 static __initdata struct keylocker_setup_data {
+       bool initialized;
        struct iwkey key;
 } kl_setup;
 
+/*
+ * This flag is set with the wrapping key load. When the key restore
+ * fails, it is reset. This restore state is exported to the crypto
+ * library, so that Key Locker will not be used there. The feature is
+ * thus soft-disabled with this flag.
+ */
+static bool valid_kl;
+
+bool valid_keylocker(void)
+{
+       return valid_kl;
+}
+#if IS_MODULE(CONFIG_CRYPTO_AES_KL)
+EXPORT_SYMBOL_GPL(valid_keylocker);
+#endif
+
 static void __init generate_keylocker_data(void)
 {
        get_random_bytes(&kl_setup.key.integrity_key, sizeof(kl_setup.key.integrity_key));
        get_random_bytes(&kl_setup.key.encryption_key, sizeof(kl_setup.key.encryption_key));
 }
 
+/*
+ * This is invoked when the boot-up is finished, which means the wrapping
+ * key is loaded. The 'valid_kl' flag is set here, as the feature is now
+ * enabled.
+ */
 void __init destroy_keylocker_data(void)
 {
+       if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
+               return;
+
        memzero_explicit(&kl_setup.key, sizeof(kl_setup.key));
+       kl_setup.initialized = true;
+       valid_kl = true;
 }
 
 static void __init load_keylocker(void)
@@ -34,6 +62,27 @@ static void __init load_keylocker(void)
        kernel_fpu_end();
 }
 
+/**
+ * copy_keylocker - Copy the wrapping key from the backup.
+ *
+ * Request hardware to copy the key in non-volatile storage to the CPU
+ * state.
+ *
+ * Returns: -EBUSY if the copy fails, 0 if successful.
+ */
+static int copy_keylocker(void)
+{
+       u64 status;
+
+       wrmsrl(MSR_IA32_COPY_IWKEY_TO_LOCAL, 1);
+
+       rdmsrl(MSR_IA32_IWKEY_COPY_STATUS, status);
+       if (status & BIT(0))
+               return 0;
+       else
+               return -EBUSY;
+}
+
 /**
  * setup_keylocker - Enable the feature.
  * @c: A pointer to struct cpuinfo_x86
@@ -52,6 +101,7 @@ void __ref setup_keylocker(struct cpuinfo_x86 *c)
 
        if (c == &boot_cpu_data) {
                u32 eax, ebx, ecx, edx;
+               bool backup_available;
 
                cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
                /*
@@ -65,13 +115,53 @@ void __ref setup_keylocker(struct cpuinfo_x86 *c)
                        goto disable;
                }
 
+               backup_available = !!(ebx & KEYLOCKER_CPUID_EBX_BACKUP);
+               /*
+                * The wrapping key in CPU state is volatile in S3/4
+                * states. So ensure the backup capability along with
+                * S-states.
+                */
+               if (!backup_available && IS_ENABLED(CONFIG_SUSPEND)) {
+                       pr_debug("x86/keylocker: No key backup support with possible S3/4.\n");
+                       goto disable;
+               }
+
                generate_keylocker_data();
+               load_keylocker();
+
+               /* Backup a wrapping key in non-volatile media. */
+               if (backup_available)
+                       wrmsrl(MSR_IA32_BACKUP_IWKEY_TO_PLATFORM, 1);
+
+               pr_info("x86/keylocker: Enabled.\n");
+               return;
        }
 
-       load_keylocker();
+       /*
+        * At boot time, the key is loaded directly from the memory.
+        * Otherwise, this path performs the key recovery on each CPU
+        * wake-up; the backup in the platform-scoped state is copied
+        * to the CPU state.
+        */
+       if (!kl_setup.initialized) {
+               load_keylocker();
+               return;
+       } else if (valid_kl) {
+               int rc;
 
-       pr_info_once("x86/keylocker: Enabled.\n");
-       return;
+               rc = copy_keylocker();
+               if (!rc)
+                       return;
+
+               /*
+                * The key copy failed here. Subsequent feature use
+                * would have inconsistent keys and failures. So,
+                * invalidate the feature via the flag, as with a
+                * backup failure.
+                */
+               valid_kl = false;
+               pr_err_once("x86/keylocker: Invalid copy status (rc: %d).\n", rc);
+       }
 
 disable:
        setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
@@ -80,3 +170,43 @@ void __ref setup_keylocker(struct cpuinfo_x86 *c)
        /* Make sure the feature is disabled for kexec-reboot. */
        cr4_clear_bits(X86_CR4_KEYLOCKER);
 }
+
+/**
+ * restore_keylocker - Restore the wrapping key.
+ *
+ * The boot CPU executes this while other CPUs restore it through the
+ * setup function.
+ */
+void restore_keylocker(void)
+{
+       u64 backup_status;
+       int rc;
+
+       if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER) || !valid_kl)
+               return;
+
+       /*
+        * The IA32_IWKEYBACKUP_STATUS MSR contains a bitmap that
+        * indicates an invalid backup if bit 0 is set and a read (or
+        * write) error if bit 2 is set.
+        */
+       rdmsrl(MSR_IA32_IWKEY_BACKUP_STATUS, backup_status);
+       if (backup_status & BIT(0)) {
+               rc = copy_keylocker();
+               if (rc)
+                       pr_err("x86/keylocker: Invalid copy state (rc: %d).\n", rc);
+               else
+                       return;
+       } else {
+               pr_err("x86/keylocker: The key backup access failed with %s.\n",
+                      (backup_status & BIT(2)) ? "read error" : "invalid status");
+       }
+
+       /*
+        * Now the backup key is not available. Invalidate the feature
+        * via the flag to avoid any subsequent use. But keep the
+        * feature with zeroed wrapping key instead of disabling it.
+        */
+       pr_err("x86/keylocker: Failed to restore wrapping key.\n");
+       valid_kl = false;
+}
diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index 63230ff8cf4f..e99be45354cd 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -27,6 +27,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <asm/keylocker.h>
 
 #ifdef CONFIG_X86_32
 __visible unsigned long saved_context_ebx;
@@ -264,6 +265,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
        x86_platform.restore_sched_clock_state();
        cache_bp_restore();
        perf_restore_debug_store();
+       restore_keylocker();
 
        c = &cpu_data(smp_processor_id());
        if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
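valid_keylocker() is the contract with the AES-KL cipher module added
later in the series. A hedged sketch of the expected consumer-side check
(aes_kl_usable() is illustrative, not a function from this series):

#include <asm/keylocker.h>

/* Illustrative only: gate each AES-KL operation on the soft flag. */
static int aes_kl_usable(void)
{
        /*
         * X86_FEATURE_KEYLOCKER may remain set after a failed key
         * restore; only valid_keylocker() says whether existing key
         * handles are still consistent with the loaded wrapping key.
         */
        return valid_keylocker() ? 0 : -ENODEV;
}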

From patchwork Wed May 24 16:57:14 2023
X-Patchwork-Submitter: "Chang S. Bae"
X-Patchwork-Id: 98629
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, Jonathan Corbet, Ingo Molnar, "H. Peter Anvin", linux-doc@vger.kernel.org
Peter Anvin" , linux-doc@vger.kernel.org Subject: [PATCH v7 09/12] x86/cpu: Add a configuration and command line option for Key Locker Date: Wed, 24 May 2023 09:57:14 -0700 Message-Id: <20230524165717.14062-10-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1762833404025384177?= X-GMAIL-MSGID: =?utf-8?q?1766796636854740091?= Add CONFIG_X86_KEYLOCKER to gate whether Key Locker is initialized at boot. The option is selected by the Key Locker cipher module CRYPTO_AES_KL (to be added in a later patch). Add a new command line option "nokeylocker" to optionally override the default CONFIG_X86_KEYLOCKER=y behavior. Signed-off-by: Chang S. Bae Reviewed-by: Dan Williams Cc: Jonathan Corbet Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: linux-doc@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Rebase on the upstream: commit a894a8a56b57 ("Documentation: kernel-parameters: sort all "no..." parameters") Changes from RFC v2: * Make the option selected by CRYPTO_AES_KL. (Dan Williams) * Massage the changelog and the config option description. --- Documentation/admin-guide/kernel-parameters.txt | 2 ++ arch/x86/Kconfig | 3 +++ arch/x86/kernel/cpu/common.c | 16 ++++++++++++++++ 3 files changed, 21 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index c1247ec4589a..b42fc53cbcf9 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -3749,6 +3749,8 @@ kernel and module base offset ASLR (Address Space Layout Randomization). + nokeylocker [X86] Disable Key Locker hardware feature. + no-kvmapf [X86,KVM] Disable paravirtualized asynchronous page fault handling. diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index a98c5f82be48..f9788b477db1 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1879,6 +1879,9 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS If unsure, say y. +config X86_KEYLOCKER + bool + choice prompt "TSX enable mode" depends on CPU_SUP_INTEL diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 5882ff6e3c6b..718ff1b1d6dd 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -402,6 +402,22 @@ static __always_inline void setup_umip(struct cpuinfo_x86 *c) static const unsigned long cr4_pinned_mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP | X86_CR4_FSGSBASE | X86_CR4_CET; + +static __init int x86_nokeylocker_setup(char *arg) +{ + /* Expect an exact match without trailing characters. 
+       if (strlen(arg))
+               return 0;
+
+       if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
+               return 1;
+
+       setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
+       pr_info("x86/keylocker: Disabled by kernel command line.\n");
+       return 1;
+}
+__setup("nokeylocker", x86_nokeylocker_setup);
+
 static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
 static unsigned long cr4_pinned_bits __ro_after_init;
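For readers unfamiliar with __setup(): the registered string is matched
as a prefix of the boot parameter, the handler receives whatever follows
it, and the return value decides whether the option is consumed (1) or
passed on as unknown (0). A hedged sketch with a made-up "nofoo" knob:

static int __init nofoo_setup(char *arg)
{
        /*
         * "nofoo"    -> arg == ""    -> accept, return 1
         * "nofoobar" -> arg == "bar" -> reject, return 0
         * "nofoo=1"  -> arg == "=1"  -> reject, return 0
         */
        if (strlen(arg))
                return 0;

        pr_info("nofoo: disabled by command line\n");
        return 1;
}
__setup("nofoo", nofoo_setup);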

From patchwork Wed May 24 16:57:15 2023
X-Patchwork-Submitter: "Chang S. Bae"
X-Patchwork-Id: 98633
From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, "David S. Miller", Ingo Molnar, "H. Peter Anvin"
Peter Anvin" Subject: [PATCH v7 10/12] crypto: x86/aesni - Use the proper data type in struct aesni_xts_ctx Date: Wed, 24 May 2023 09:57:15 -0700 Message-Id: <20230524165717.14062-11-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766796956170496905?= X-GMAIL-MSGID: =?utf-8?q?1766796956170496905?= Every field in struct aesni_xts_ctx is a pointer to a byte array. Each array has a size of struct crypto_aes_ctx. Then, the field can be redefined as that struct type instead of the obscure pointer. Subsequently, the address to struct aesni_xts_ctx should be aligned right away. This can also simplify the runtime alignment code. Redefine struct aesni_xts_ctx, and align its address on the front. This draws a rework by refactoring the common alignment code. Then, clean up the alignment for the old pointers. Suggested-by: Eric Biggers Signed-off-by: Chang S. Bae Cc: Herbert Xu Cc: "David S. Miller" Cc: Eric Biggers Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: x86@kernel.org Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Add as a new patch. (Eric Biggers) This fix was considered to be better addressed before the preparatory AES-NI code rework. --- arch/x86/crypto/aesni-intel_glue.c | 38 +++++++++++++++++------------- 1 file changed, 22 insertions(+), 16 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index a5b0cb3efeba..97a1629b84c4 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -61,8 +61,8 @@ struct generic_gcmaes_ctx { }; struct aesni_xts_ctx { - u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR; - u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR; + struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR; + struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR; }; #define GCM_BLOCK_LEN 16 @@ -219,14 +219,20 @@ generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm) } #endif +static inline unsigned long aes_align_addr(unsigned long addr) +{ + return (crypto_tfm_ctx_alignment() >= AESNI_ALIGN) ? 
+              ALIGN(addr, 1) : ALIGN(addr, AESNI_ALIGN);
+}
+
 static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
 {
-       unsigned long addr = (unsigned long)raw_ctx;
-       unsigned long align = AESNI_ALIGN;
+       return (struct crypto_aes_ctx *)aes_align_addr((unsigned long)raw_ctx);
+}
 
-       if (align <= crypto_tfm_ctx_alignment())
-               align = 1;
-       return (struct crypto_aes_ctx *)ALIGN(addr, align);
+static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
+{
+       return (struct aesni_xts_ctx *)aes_align_addr((unsigned long)crypto_skcipher_ctx(tfm));
 }
 
 static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
@@ -883,7 +889,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
                            unsigned int keylen)
 {
-       struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+       struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
        int err;
 
        err = xts_verify_key(tfm, key, keylen);
@@ -893,20 +899,20 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
        keylen /= 2;
 
        /* first half of xts-key is for crypt */
-       err = aes_set_key_common(crypto_skcipher_tfm(tfm), ctx->raw_crypt_ctx,
+       err = aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->crypt_ctx,
                                 key, keylen);
        if (err)
                return err;
 
        /* second half of xts-key is for tweak */
-       return aes_set_key_common(crypto_skcipher_tfm(tfm), ctx->raw_tweak_ctx,
+       return aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->tweak_ctx,
                                  key + keylen, keylen);
 }
 
 static int xts_crypt(struct skcipher_request *req, bool encrypt)
 {
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-       struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+       struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
        int tail = req->cryptlen % AES_BLOCK_SIZE;
        struct skcipher_request subreq;
        struct skcipher_walk walk;
@@ -942,7 +948,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
        kernel_fpu_begin();
 
        /* calculate first value of T */
-       aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv);
+       aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
 
        while (walk.nbytes > 0) {
                int nbytes = walk.nbytes;
@@ -951,11 +957,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
                        nbytes &= ~(AES_BLOCK_SIZE - 1);
 
                if (encrypt)
-                       aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_encrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          nbytes, walk.iv);
                else
-                       aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_decrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          nbytes, walk.iv);
                kernel_fpu_end();
@@ -983,11 +989,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
                kernel_fpu_begin();
 
                if (encrypt)
-                       aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_encrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          walk.nbytes, walk.iv);
                else
-                       aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_decrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          walk.nbytes, walk.iv);
                kernel_fpu_end();
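The aes_align_addr() ternary is easy to misread, so here is a worked
miniature of the arithmetic. It is illustrative only; align_ctx() and the
fixed api_align parameter stand in for crypto_tfm_ctx_alignment(), and
ALIGN(x, 1) is the identity, so the helper does nothing when the crypto
API already guarantees sufficiently aligned contexts:

#define AESNI_ALIGN     16
#define ALIGN(x, a)     (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* Illustrative stand-in for aes_align_addr(). */
static unsigned long align_ctx(unsigned long addr, unsigned long api_align)
{
        return (api_align >= AESNI_ALIGN) ? ALIGN(addr, 1)
                                          : ALIGN(addr, AESNI_ALIGN);
}

/*
 * align_ctx(0x1008, 16) == 0x1008  (API alignment suffices, no-op)
 * align_ctx(0x1008,  8) == 0x1010  (rounded up to the next 16 bytes)
 *
 * Rounding up is only safe because the context allocation reserves
 * AESNI_ALIGN - 1 bytes of slack for exactly this purpose.
 */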
Bae" X-Patchwork-Id: 98635 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp66933vqr; Wed, 24 May 2023 10:25:25 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7pt5yROehcZIJP4EuEoGRs2qCrl5wLaFqSzzDH39ftbFrZMZBAYNSNg5nJsEixwzcyOBw+ X-Received: by 2002:a05:6a00:1254:b0:640:f313:efba with SMTP id u20-20020a056a00125400b00640f313efbamr4539202pfi.19.1684949125152; Wed, 24 May 2023 10:25:25 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684949125; cv=none; d=google.com; s=arc-20160816; b=h8ea5z9uWNZvUwWYj0b56JAM//LEpu/IDJamJAX8oFKwYeJ3FWTs3k4a6xHhGpIsIU KbHmUJ5r1bn8RgpFfdSytM5RXYHcwh0TtA4140nfxb8BlycJI5UlUpfBAHOVlHVwkGnY 2QbVOEGX2fX81aLx+YLAKxluMjfMRwa0EuDVR+JdQ5808sia5+KlqXP57OgOyDJDQfEi A6C2w/6U6YvAHRwA9lr7QjNH6d0dscsgRVgkuVTSXnyf4FfnC0lLS33w2lZK7jHO6X9N Ku6/q0CsVEW0W05QW+N171NzUi/CQUTCiDLs4w+j/0xqu2HbgDpLzae6Rw2ks8eDJ6S4 STKw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=Y8hFij9wIB6KR29sCcpikCOcFONTsFOKQzyT6SoOJbE=; b=FqdjF+Uz+DTD3yZHpt9Lq7V7UoP+qMBjs1D7ben+eYiCVpkkUCbGbeJ4MHCqpCjEyh BK3dODwatkDG9qBa3VcGbgr8k5L2iLu4nqm3+QzSymOoaGYqCmHMm1nuKB4iEjhOTKfk E2ZoG2tWiCijdbb3AHfw4q6GT+H7I3qL3e3Pq4zkFtWrXQidEXvgRFQ0ZU2aiEg9TQDw AWIq4hn81fq8nLmOljCH301/H+ZWQo9PcEbA5IjxwNlFsPaqXiYrSYeHgj0MOoL5PL3d cQ0rbGNdtMLbfQcacoleQRQNxU0K4Qs00vS9/ezZ1Rc/c30LJJ3nktNbwmr6MNqQr2LQ l6+Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="W/t1W6G6"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id l21-20020a637015000000b0053efe1af2c2si141728pgc.677.2023.05.24.10.25.10; Wed, 24 May 2023 10:25:25 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="W/t1W6G6"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235528AbjEXRLW (ORCPT + 99 others); Wed, 24 May 2023 13:11:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36584 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235354AbjEXRKw (ORCPT ); Wed, 24 May 2023 13:10:52 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F40461B1; Wed, 24 May 2023 10:10:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1684948229; x=1716484229; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=FS1MVcvymvzwGb9PeKlzvr92KXLR2ebij/E0D73r3lo=; b=W/t1W6G6iY7VVvuw+fnJKvezthoucI7oWSYh/o4gRIUOXOhLhYqtEsIe KIIAw/KjL5Jaa/2Zuucnx3y3sn4PnrYTMwwSAWHnxwhRXIpELCgeL48jy i9c0EDuC/h7cQUOBc1ULnOOTd1IXZDCf7pzrkc5Utfgmh6+7trS0ACTun jfOY2oP2n29JfS9UmHsKcSGMxdH1++ZkjayBGCnVSPTzZW/uA+YSr3mx4 SfJYxe1QfN8n96p9D9XZts84FuGrbPD02ZjItfAFuMulB0B5njpYF2guF FKtW5hJX9bynwTg8lN+8mDScpiN3PYfC6mGe9G1Ktzo8RPxEeIB3I00d6 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="338206812" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="338206812" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2023 10:09:57 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10720"; a="704427372" X-IronPort-AV: E=Sophos;i="6.00,189,1681196400"; d="scan'208";a="704427372" Received: from chang-linux-3.sc.intel.com ([172.25.66.173]) by orsmga002.jf.intel.com with ESMTP; 24 May 2023 10:09:56 -0700 From: "Chang S. Bae" To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, "David S. Miller" , Ingo Molnar , "H. 
Peter Anvin" Subject: [PATCH v7 11/12] crypto: x86/aes - Prepare for a new AES implementation Date: Wed, 24 May 2023 09:57:16 -0700 Message-Id: <20230524165717.14062-12-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1762833298567459259?= X-GMAIL-MSGID: =?utf-8?q?1766797214032698699?= Key Locker's AES instruction set ('AES-KL') has a similar programming interface to AES-NI. The internal ABI in the assembly code will have the same prototype as AES-NI. Then, the glue code will be the same as AES-NI's. The new AES code will support the XTS mode alone as disk encryption is the only intended use case. Refactor the XTS-related code to avoid code duplication, and also move some constant values to be shareable. Introduce wrappers for data transformation functions to return an error code as AES-KL may populate it. The refactored code needs to invoke the implementation-specific functions. Then, while reusable, the code has to be inlined to the caller to avoid the indirect call for its possible overhead. Neither functional change nor performance regression is intended. Signed-off-by: Chang S. Bae Acked-by: Dan Williams Cc: Herbert Xu Cc: "David S. Miller" Cc: Eric Biggers Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Borislav Petkov Cc: Dave Hansen Cc: "H. Peter Anvin" Cc: Ard Biesheuvel Cc: x86@kernel.org Cc: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org --- Changes from v6: * Inline the helper code to avoid the indirect call. (Eric Biggers) * Rename the filename: aes-intel* -> aes-helper*. (Eric Biggers) * Don't export symbols yet here. Instead, do it when needed later. * Improve the coding style: - Follow the symbol convention: '_' -> '__' (Eric Biggers) - Fix a style issue -- 'dst = src = ...' catched by checkpatch.pl: "CHECK: multiple assignments should be avoided" * Cleanup: move some define back to AES-NI code as not used by AES-KL Changes from v5: * Clean up the staled function definition -- cbc_crypt_common(). * Ensure kernel_fpu_end() for the possible error return from xts_crypt_common()->crypt1_fn(). Changes from v4: * Drop CBC mode changes. (Eric Biggers) Changes from v3: * Drop ECB and CTR mode changes. (Eric Biggers) * Export symbols. (Eric Biggers) Changes from RFC v2: * Massage the changelog. (Dan Williams) Changes from RFC v1: * Added as a new patch. 
---
Changes from v6:
* Inline the helper code to avoid the indirect call. (Eric Biggers)
* Rename the files: aes-intel* -> aes-helper*. (Eric Biggers)
* Don't export symbols yet here; do it when needed later.
* Improve the coding style:
  - Follow the symbol convention: '_' -> '__'. (Eric Biggers)
  - Fix a style issue -- 'dst = src = ...' caught by checkpatch.pl:
    "CHECK: multiple assignments should be avoided"
* Cleanup: move some defines back to the AES-NI code, as they are not
  used by AES-KL.

Changes from v5:
* Clean up the stale function definition -- cbc_crypt_common().
* Ensure kernel_fpu_end() for the possible error return from
  xts_crypt_common()->crypt1_fn().

Changes from v4:
* Drop CBC mode changes. (Eric Biggers)

Changes from v3:
* Drop ECB and CTR mode changes. (Eric Biggers)
* Export symbols. (Eric Biggers)

Changes from RFC v2:
* Massage the changelog. (Dan Williams)

Changes from RFC v1:
* Added as a new patch. (Ard Biesheuvel)
  Link: https://lore.kernel.org/lkml/CAMj1kXGa4f21eH0mdxd1pQsZMUjUr1Btq+Dgw-gC=O-yYft7xw@mail.gmail.com/
---
 arch/x86/crypto/aes-helper_asm.S   |  22 +++
 arch/x86/crypto/aes-helper_glue.h  | 161 +++++++++++++++++++++
 arch/x86/crypto/aesni-intel_asm.S  |  47 +++----
 arch/x86/crypto/aesni-intel_glue.c | 218 ++++++++---------------------
 4 files changed, 257 insertions(+), 191 deletions(-)
 create mode 100644 arch/x86/crypto/aes-helper_asm.S
 create mode 100644 arch/x86/crypto/aes-helper_glue.h

diff --git a/arch/x86/crypto/aes-helper_asm.S b/arch/x86/crypto/aes-helper_asm.S
new file mode 100644
index 000000000000..b31abcdf63cb
--- /dev/null
+++ b/arch/x86/crypto/aes-helper_asm.S
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+/*
+ * Constant values shared between AES implementations:
+ */
+
+.pushsection .rodata
+.align 16
+.Lcts_permute_table:
+       .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80
+       .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80
+       .byte 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07
+       .byte 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f
+       .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80
+       .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80
+.popsection
+
+.section .rodata.cst16.gf128mul_x_ble_mask, "aM", @progbits, 16
+.align 16
+.Lgf128mul_x_ble_mask:
+       .octa 0x00000000000000010000000000000087
+.previous
diff --git a/arch/x86/crypto/aes-helper_glue.h b/arch/x86/crypto/aes-helper_glue.h
new file mode 100644
index 000000000000..af422e2c9263
--- /dev/null
+++ b/arch/x86/crypto/aes-helper_glue.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Shared glue code between AES implementations, refactored from the
+ * AES-NI's.
+ *
+ * The helper code is inlined for a performance reason. With the
+ * mitigation for speculative executions like retpoline, indirect calls
+ * become very expensive at a cost of measurable overhead.
+ */
+
+#ifndef _AES_INTEL_GLUE_H
+#define _AES_INTEL_GLUE_H
+
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+
+#define AES_ALIGN              16
+#define AES_ALIGN_ATTR         __attribute__((__aligned__(AES_ALIGN)))
+#define AES_ALIGN_EXTRA                ((AES_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
+#define XTS_AES_CTX_SIZE       (sizeof(struct aes_xts_ctx) + AES_ALIGN_EXTRA)
+
+struct aes_xts_ctx {
+       struct crypto_aes_ctx tweak_ctx AES_ALIGN_ATTR;
+       struct crypto_aes_ctx crypt_ctx AES_ALIGN_ATTR;
+};
+
+static inline unsigned long aes_align_addr(unsigned long addr)
+{
+       return (crypto_tfm_ctx_alignment() >= AES_ALIGN) ?
ALIGN(addr, 1) : ALIGN(addr, AES_ALIGN); +} + +static inline struct aes_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm) +{ + return (struct aes_xts_ctx *)aes_align_addr((unsigned long)crypto_skcipher_ctx(tfm)); +} + +static inline int +xts_setkey_common(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen, + int (*fn)(struct crypto_tfm *tfm, void *ctx, const u8 *in_key, + unsigned int key_len)) +{ + struct aes_xts_ctx *ctx = aes_xts_ctx(tfm); + int err; + + err = xts_verify_key(tfm, key, keylen); + if (err) + return err; + + keylen /= 2; + + /* first half of xts-key is for crypt */ + err = fn(crypto_skcipher_tfm(tfm), &ctx->crypt_ctx, key, keylen); + if (err) + return err; + + /* second half of xts-key is for tweak */ + return fn(crypto_skcipher_tfm(tfm), &ctx->tweak_ctx, key + keylen, keylen); +} + +static inline int +xts_crypt_common(struct skcipher_request *req, + int (*crypt_fn)(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv), + int (*crypt1_fn)(const void *ctx, u8 *out, const u8 *in)) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct aes_xts_ctx *ctx = aes_xts_ctx(tfm); + int tail = req->cryptlen % AES_BLOCK_SIZE; + struct skcipher_request subreq; + struct skcipher_walk walk; + int err; + + if (req->cryptlen < AES_BLOCK_SIZE) + return -EINVAL; + + err = skcipher_walk_virt(&walk, req, false); + if (!walk.nbytes) + return err; + + if (unlikely(tail > 0 && walk.nbytes < walk.total)) { + int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; + + skcipher_walk_abort(&walk); + + skcipher_request_set_tfm(&subreq, tfm); + skcipher_request_set_callback(&subreq, + skcipher_request_flags(req), + NULL, NULL); + skcipher_request_set_crypt(&subreq, req->src, req->dst, + blocks * AES_BLOCK_SIZE, req->iv); + req = &subreq; + + err = skcipher_walk_virt(&walk, req, false); + if (!walk.nbytes) + return err; + } else { + tail = 0; + } + + kernel_fpu_begin(); + + /* calculate first value of T */ + err = crypt1_fn(&ctx->tweak_ctx, walk.iv, walk.iv); + if (err) { + kernel_fpu_end(); + return err; + } + + while (walk.nbytes > 0) { + int nbytes = walk.nbytes; + + if (nbytes < walk.total) + nbytes &= ~(AES_BLOCK_SIZE - 1); + + err = crypt_fn(&ctx->crypt_ctx, walk.dst.virt.addr, walk.src.virt.addr, + nbytes, walk.iv); + kernel_fpu_end(); + if (err) + return err; + + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + + if (walk.nbytes > 0) + kernel_fpu_begin(); + } + + if (unlikely(tail > 0 && !err)) { + struct scatterlist sg_src[2], sg_dst[2]; + struct scatterlist *src, *dst; + + src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen); + if (req->dst != req->src) + dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen); + else + dst = src; + + skcipher_request_set_crypt(req, src, dst, AES_BLOCK_SIZE + tail, + req->iv); + + err = skcipher_walk_virt(&walk, &subreq, false); + if (err) + return err; + + kernel_fpu_begin(); + err = crypt_fn(&ctx->crypt_ctx, walk.dst.virt.addr, walk.src.virt.addr, + walk.nbytes, walk.iv); + kernel_fpu_end(); + if (err) + return err; + + err = skcipher_walk_done(&walk, 0); + } + return err; +} + +#endif /* _AES_INTEL_GLUE_H */ diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S index 3ac7487ecad2..3922d24cae2b 100644 --- a/arch/x86/crypto/aesni-intel_asm.S +++ b/arch/x86/crypto/aesni-intel_asm.S @@ -28,6 +28,7 @@ #include #include #include +#include "aes-helper_asm.S" /* * The following macros are used to move an (un)aligned 16 byte value to/from @@ -1935,9 +1936,9 @@ 
SYM_FUNC_START(aesni_set_key) SYM_FUNC_END(aesni_set_key) /* - * void aesni_enc(const void *ctx, u8 *dst, const u8 *src) + * void __aesni_enc(const void *ctx, u8 *dst, const u8 *src) */ -SYM_FUNC_START(aesni_enc) +SYM_FUNC_START(__aesni_enc) FRAME_BEGIN #ifndef __x86_64__ pushl KEYP @@ -1956,7 +1957,7 @@ SYM_FUNC_START(aesni_enc) #endif FRAME_END RET -SYM_FUNC_END(aesni_enc) +SYM_FUNC_END(__aesni_enc) /* * _aesni_enc1: internal ABI @@ -2124,9 +2125,9 @@ SYM_FUNC_START_LOCAL(_aesni_enc4) SYM_FUNC_END(_aesni_enc4) /* - * void aesni_dec (const void *ctx, u8 *dst, const u8 *src) + * void __aesni_dec (const void *ctx, u8 *dst, const u8 *src) */ -SYM_FUNC_START(aesni_dec) +SYM_FUNC_START(__aesni_dec) FRAME_BEGIN #ifndef __x86_64__ pushl KEYP @@ -2146,7 +2147,7 @@ SYM_FUNC_START(aesni_dec) #endif FRAME_END RET -SYM_FUNC_END(aesni_dec) +SYM_FUNC_END(__aesni_dec) /* * _aesni_dec1: internal ABI @@ -2689,22 +2690,14 @@ SYM_FUNC_START(aesni_cts_cbc_dec) RET SYM_FUNC_END(aesni_cts_cbc_dec) +#ifdef __x86_64__ + .pushsection .rodata .align 16 -.Lcts_permute_table: - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 - .byte 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 - .byte 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 - .byte 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 -#ifdef __x86_64__ .Lbswap_mask: .byte 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 -#endif .popsection -#ifdef __x86_64__ /* * _aesni_inc_init: internal ABI * setup registers used by _aesni_inc @@ -2819,12 +2812,6 @@ SYM_FUNC_END(aesni_ctr_enc) #endif -.section .rodata.cst16.gf128mul_x_ble_mask, "aM", @progbits, 16 -.align 16 -.Lgf128mul_x_ble_mask: - .octa 0x00000000000000010000000000000087 -.previous - /* * _aesni_gf128mul_x_ble: internal ABI * Multiply in GF(2^128) for XTS IVs @@ -2844,10 +2831,10 @@ SYM_FUNC_END(aesni_ctr_enc) pxor KEY, IV; /* - * void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst, - * const u8 *src, unsigned int len, le128 *iv) + * void __aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst, + * const u8 *src, unsigned int len, le128 *iv) */ -SYM_FUNC_START(aesni_xts_encrypt) +SYM_FUNC_START(__aesni_xts_encrypt) FRAME_BEGIN #ifndef __x86_64__ pushl IVP @@ -2996,13 +2983,13 @@ SYM_FUNC_START(aesni_xts_encrypt) movups STATE, (OUTP) jmp .Lxts_enc_ret -SYM_FUNC_END(aesni_xts_encrypt) +SYM_FUNC_END(__aesni_xts_encrypt) /* - * void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst, - * const u8 *src, unsigned int len, le128 *iv) + * void __aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst, + * const u8 *src, unsigned int len, le128 *iv) */ -SYM_FUNC_START(aesni_xts_decrypt) +SYM_FUNC_START(__aesni_xts_decrypt) FRAME_BEGIN #ifndef __x86_64__ pushl IVP @@ -3158,4 +3145,4 @@ SYM_FUNC_START(aesni_xts_decrypt) movups STATE, (OUTP) jmp .Lxts_dec_ret -SYM_FUNC_END(aesni_xts_decrypt) +SYM_FUNC_END(__aesni_xts_decrypt) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 97a1629b84c4..857cff484bb3 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -36,33 +36,25 @@ #include #include +#include "aes-helper_glue.h" -#define AESNI_ALIGN 16 -#define AESNI_ALIGN_ATTR __attribute__ ((__aligned__(AESNI_ALIGN))) -#define AES_BLOCK_MASK (~(AES_BLOCK_SIZE - 1)) #define RFC4106_HASH_SUBKEY_SIZE 16 -#define AESNI_ALIGN_EXTRA ((AESNI_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1)) -#define CRYPTO_AES_CTX_SIZE 
(sizeof(struct crypto_aes_ctx) + AESNI_ALIGN_EXTRA) -#define XTS_AES_CTX_SIZE (sizeof(struct aesni_xts_ctx) + AESNI_ALIGN_EXTRA) +#define AES_BLOCK_MASK (~(AES_BLOCK_SIZE - 1)) +#define CRYPTO_AES_CTX_SIZE (sizeof(struct crypto_aes_ctx) + AES_ALIGN_EXTRA) /* This data is stored at the end of the crypto_tfm struct. * It's a type of per "session" data storage location. * This needs to be 16 byte aligned. */ struct aesni_rfc4106_gcm_ctx { - u8 hash_subkey[16] AESNI_ALIGN_ATTR; - struct crypto_aes_ctx aes_key_expanded AESNI_ALIGN_ATTR; + u8 hash_subkey[16] AES_ALIGN_ATTR; + struct crypto_aes_ctx aes_key_expanded AES_ALIGN_ATTR; u8 nonce[4]; }; struct generic_gcmaes_ctx { - u8 hash_subkey[16] AESNI_ALIGN_ATTR; - struct crypto_aes_ctx aes_key_expanded AESNI_ALIGN_ATTR; -}; - -struct aesni_xts_ctx { - struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR; - struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR; + u8 hash_subkey[16] AES_ALIGN_ATTR; + struct crypto_aes_ctx aes_key_expanded AES_ALIGN_ATTR; }; #define GCM_BLOCK_LEN 16 @@ -82,8 +74,8 @@ struct gcm_context_data { asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len); -asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in); -asmlinkage void aesni_dec(const void *ctx, u8 *out, const u8 *in); +asmlinkage void __aesni_enc(const void *ctx, u8 *out, const u8 *in); +asmlinkage void __aesni_dec(const void *ctx, u8 *out, const u8 *in); asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len); asmlinkage void aesni_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out, @@ -97,14 +89,40 @@ asmlinkage void aesni_cts_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out, asmlinkage void aesni_cts_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len, u8 *iv); +static int aesni_enc(const void *ctx, u8 *out, const u8 *in) +{ + __aesni_enc(ctx, out, in); + return 0; +} + +static int aesni_dec(const void *ctx, u8 *out, const u8 *in) +{ + __aesni_dec(ctx, out, in); + return 0; +} + #define AVX_GEN2_OPTSIZE 640 #define AVX_GEN4_OPTSIZE 4096 -asmlinkage void aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, - const u8 *in, unsigned int len, u8 *iv); +asmlinkage void __aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, + const u8 *in, unsigned int len, u8 *iv); -asmlinkage void aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, - const u8 *in, unsigned int len, u8 *iv); +asmlinkage void __aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, + const u8 *in, unsigned int len, u8 *iv); + +static int aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) +{ + __aesni_xts_encrypt(ctx, out, in, len, iv); + return 0; +} + +static int aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) +{ + __aesni_xts_decrypt(ctx, out, in, len, iv); + return 0; +} #ifdef CONFIG_X86_64 @@ -201,7 +219,7 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2); static inline struct aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm) { - unsigned long align = AESNI_ALIGN; + unsigned long align = AES_ALIGN; if (align <= crypto_tfm_ctx_alignment()) align = 1; @@ -211,7 +229,7 @@ aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm) static inline struct generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm) { - unsigned long align = AESNI_ALIGN; + unsigned long align = AES_ALIGN; if (align <= 
crypto_tfm_ctx_alignment()) align = 1; @@ -219,22 +237,11 @@ generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm) } #endif -static inline unsigned long aes_align_addr(unsigned long addr) -{ - return (crypto_tfm_ctx_alignment() >= AESNI_ALIGN) ? - ALIGN(addr, 1) : ALIGN(addr, AESNI_ALIGN); -} - static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx) { return (struct crypto_aes_ctx *)aes_align_addr((unsigned long)raw_ctx); } -static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm) -{ - return (struct aesni_xts_ctx *)aes_align_addr((unsigned long)crypto_skcipher_ctx(tfm)); -} - static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx, const u8 *in_key, unsigned int key_len) { @@ -270,7 +277,7 @@ static void aesni_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) aes_encrypt(ctx, dst, src); } else { kernel_fpu_begin(); - aesni_enc(ctx, dst, src); + __aesni_enc(ctx, dst, src); kernel_fpu_end(); } } @@ -283,7 +290,7 @@ static void aesni_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) aes_decrypt(ctx, dst, src); } else { kernel_fpu_begin(); - aesni_dec(ctx, dst, src); + __aesni_dec(ctx, dst, src); kernel_fpu_end(); } } @@ -534,7 +541,7 @@ static int ctr_crypt(struct skcipher_request *req) nbytes &= ~AES_BLOCK_MASK; if (walk.nbytes == walk.total && nbytes > 0) { - aesni_enc(ctx, keystream, walk.iv); + __aesni_enc(ctx, keystream, walk.iv); crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes - nbytes, walk.src.virt.addr + walk.nbytes - nbytes, keystream, nbytes); @@ -679,8 +686,8 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, u8 *iv, void *aes_ctx, u8 *auth_tag, unsigned long auth_tag_len) { - u8 databuf[sizeof(struct gcm_context_data) + (AESNI_ALIGN - 8)] __aligned(8); - struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AESNI_ALIGN); + u8 databuf[sizeof(struct gcm_context_data) + (AES_ALIGN - 8)] __aligned(8); + struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AES_ALIGN); unsigned long left = req->cryptlen; struct scatter_walk assoc_sg_walk; struct skcipher_walk walk; @@ -835,8 +842,8 @@ static int helper_rfc4106_encrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm); void *aes_ctx = &(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 *iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); unsigned int i; __be32 counter = cpu_to_be32(1); @@ -863,8 +870,8 @@ static int helper_rfc4106_decrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm); void *aes_ctx = &(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 *iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); unsigned int i; if (unlikely(req->assoclen != 16 && req->assoclen != 20)) @@ -889,128 +896,17 @@ static int helper_rfc4106_decrypt(struct aead_request *req) static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key, unsigned int keylen) { - struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm); - int err; - - err = xts_verify_key(tfm, key, keylen); - if (err) - return err; - - keylen /= 2; - - /* first half of xts-key is for crypt */ - err = aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->crypt_ctx, - key, keylen); - if (err) - return err; - 
- /* second half of xts-key is for tweak */ - return aes_set_key_common(crypto_skcipher_tfm(tfm), &ctx->tweak_ctx, - key + keylen, keylen); -} - -static int xts_crypt(struct skcipher_request *req, bool encrypt) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm); - int tail = req->cryptlen % AES_BLOCK_SIZE; - struct skcipher_request subreq; - struct skcipher_walk walk; - int err; - - if (req->cryptlen < AES_BLOCK_SIZE) - return -EINVAL; - - err = skcipher_walk_virt(&walk, req, false); - if (!walk.nbytes) - return err; - - if (unlikely(tail > 0 && walk.nbytes < walk.total)) { - int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; - - skcipher_walk_abort(&walk); - - skcipher_request_set_tfm(&subreq, tfm); - skcipher_request_set_callback(&subreq, - skcipher_request_flags(req), - NULL, NULL); - skcipher_request_set_crypt(&subreq, req->src, req->dst, - blocks * AES_BLOCK_SIZE, req->iv); - req = &subreq; - - err = skcipher_walk_virt(&walk, req, false); - if (!walk.nbytes) - return err; - } else { - tail = 0; - } - - kernel_fpu_begin(); - - /* calculate first value of T */ - aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv); - - while (walk.nbytes > 0) { - int nbytes = walk.nbytes; - - if (nbytes < walk.total) - nbytes &= ~(AES_BLOCK_SIZE - 1); - - if (encrypt) - aesni_xts_encrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - nbytes, walk.iv); - else - aesni_xts_decrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - nbytes, walk.iv); - kernel_fpu_end(); - - err = skcipher_walk_done(&walk, walk.nbytes - nbytes); - - if (walk.nbytes > 0) - kernel_fpu_begin(); - } - - if (unlikely(tail > 0 && !err)) { - struct scatterlist sg_src[2], sg_dst[2]; - struct scatterlist *src, *dst; - - dst = src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen); - if (req->dst != req->src) - dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen); - - skcipher_request_set_crypt(req, src, dst, AES_BLOCK_SIZE + tail, - req->iv); - - err = skcipher_walk_virt(&walk, &subreq, false); - if (err) - return err; - - kernel_fpu_begin(); - if (encrypt) - aesni_xts_encrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - walk.nbytes, walk.iv); - else - aesni_xts_decrypt(&ctx->crypt_ctx, - walk.dst.virt.addr, walk.src.virt.addr, - walk.nbytes, walk.iv); - kernel_fpu_end(); - - err = skcipher_walk_done(&walk, 0); - } - return err; + return xts_setkey_common(tfm, key, keylen, aes_set_key_common); } static int xts_encrypt(struct skcipher_request *req) { - return xts_crypt(req, true); + return xts_crypt_common(req, aesni_xts_encrypt, aesni_enc); } static int xts_decrypt(struct skcipher_request *req) { - return xts_crypt(req, false); + return xts_crypt_common(req, aesni_xts_decrypt, aesni_enc); } static struct crypto_alg aesni_cipher_alg = { @@ -1166,8 +1062,8 @@ static int generic_gcmaes_encrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm); void *aes_ctx = &(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 *iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); __be32 counter = cpu_to_be32(1); memcpy(iv, req->iv, 12); @@ -1183,8 +1079,8 @@ static int generic_gcmaes_decrypt(struct aead_request *req) struct crypto_aead *tfm = crypto_aead_reqtfm(req); struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm); void *aes_ctx = 
&(ctx->aes_key_expanded); - u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); - u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + u8 ivbuf[16 + (AES_ALIGN - 8)] __aligned(8); + u8 *iv = PTR_ALIGN(&ivbuf[0], AES_ALIGN); memcpy(iv, req->iv, 12); *((__be32 *)(iv+12)) = counter; From patchwork Wed May 24 16:57:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Chang S. Bae" X-Patchwork-Id: 98630 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp62382vqr; Wed, 24 May 2023 10:16:53 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5RGBGVtmUort6d+ZPnaHIjzwa6FM6vBQQShYJTMp4ZDcIU/qcbib+Jvp3uztfLbZfm8E3C X-Received: by 2002:a17:90a:d513:b0:24d:d377:d1 with SMTP id t19-20020a17090ad51300b0024dd37700d1mr18089563pju.45.1684948613314; Wed, 24 May 2023 10:16:53 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1684948613; cv=none; d=google.com; s=arc-20160816; b=P4AruAnsV1kBW0KOifX0CGwDv65lQQPJoTrsK610SK/w9/axOCyd6W64tKFZH7NBFf YcY9F85Kn2W0UTAwJltmcZpUwKMbyaOP1NOGHDRkthSk3+QWSPgtAPxutk/khXfh7Jhf lNB/Py8CURy6IsUWU0b0WqS+dnn/dZpjHuRePeWb+GXMMmQzwCMmbhnf/szQ9Rgupx4C hrMiLdIy7AZpb1TNqpZnP5yowOJVjg8vu6NkBWS8XlSWm6N6uP8sKRUCBxVu9mzWLwju 9oASjkQnB0kaQV7P4Vd1C+FFw/b1psBrZNHo9AhTV+EWGg99DHxUKyynsFHpx8QilsYE GL4Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=k8ET96bdMXc8jVDiz+mlvB8emS3MT0r+RoJv3XG5+hw=; b=ibYFSVjcLYTeDVgPDq5tvMKtvbWicSuXBZK2fs7Qjql1+oWG9znePX1+wEfc2NafWo pYPMLD3FL3HEZGjCjpjeUFZHTf0QmAL16RL4IEWlroUeJt5tEPBLhE0Vgnrx3qWYy+eM aCqmvQ3/W5FzJM2uljwiH4MW3+KeZwhOhhSt4rV8+PXiq9dKj1DhywrrRMEXOxAMZ+aE hdg7MAWtRJonni8Vnb5OI/Gl3QkGIEaW/bwSPWn5X8URDh6rsyTIV8qyZGEepbMuFnQm 98JyPs3D+dgbqEiVRQc4SuieGf9IbeXps4QqtBz9IXojSWM8KuublOlJAJ97XjkMSmBe pXJw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=NEb0W29Q; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From patchwork Wed May 24 16:57:17 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Chang S. Bae"
X-Patchwork-Id: 98630
From: "Chang S. Bae"
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org, dm-devel@redhat.com
Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com, luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de, bp@alien8.de, mingo@kernel.org, x86@kernel.org, herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com, bernie.keany@intel.com, charishma1.gairuboyina@intel.com, lalithambika.krishnakumar@intel.com, nhuck@google.com, chang.seok.bae@intel.com, "David S. Miller", Ingo Molnar, "H. Peter Anvin"
Peter Anvin" Subject: [PATCH v7 12/12] crypto: x86/aes-kl - Implement the AES-XTS algorithm Date: Wed, 24 May 2023 09:57:17 -0700 Message-Id: <20230524165717.14062-13-chang.seok.bae@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230524165717.14062-1-chang.seok.bae@intel.com> References: <20230410225936.8940-1-chang.seok.bae@intel.com> <20230524165717.14062-1-chang.seok.bae@intel.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766796677303115151?= X-GMAIL-MSGID: =?utf-8?q?1766796677303115151?= Key Locker is a CPU feature to reduce key exfiltration opportunities. It converts the AES key into an encoded form, called 'key handle', to reduce the exposure of private key material in memory. This key conversion as well as all subsequent data transformation are provided by new AES instructions ('AES-KL'). AES-KL is analogous to that of AES-NI as maintains a similar programming interface. Support the XTS mode as the primary use case is dm-crypt. The implementation has some details worth mentioning, which differentiate itself from others, that users may need to be aware of: == Key Handle Restriction == A key handle may be encoded with some restrictions. Restrict every handle only available in kernel mode via setkey(). Subsequently the key handle could be corrupted or fail with handle restrictions. Then, encrypt()/decrypt() returns -EINVAL. == API Limitation == The setkey() function transforms an AES key into a handle. But, an extended key is a usual outcome of setkey() in other AES cipher implementations. For this reason, a setkey() failure does not fall back to the other. So, expose AES-KL methods via synchronous interfaces only. === AES Compliance === Key Locker is not AES compliant as it lacks 192-bit key support. However, per the expectations of Linux crypto-cipher implementations the software cipher implementation must support all the AES-compliant key sizes. The AES-KL cipher implementation achieves this constraint by logging a warning and falling back to AES-NI. In other words, the 192-bit key-size limitation for what can be converted into a key handle is only documented, not enforced. == Wrapping Key Restore Failure == The failure of setkey() as well as encode()/decode() is also possible with the wrapping key failure. In the event of hardware failure, the wrapping key is lost from deep sleep states. Then, those functions return -ENODEV as the feature is disabled. == Userspace Exposition == Some hardware implementations may have some performance penalties. E.g., the cryptsetup benchmark indicates the raw throughput is measurably slower than AES-NI. But, for disk encryption, storage bandwidth may be the bottleneck before encryption bandwidth. This, along with the above points, is an end-user consideration for selecting AES-KL over AES-NI. Thus, advertise it with a unique name 'xts-aes-aeskl' in /proc/crypto while not replacing AES-NI under the generic name 'xts(aes)' with a lower priority. == 64-bit Only == AES-KL provides wide instructions that process eight blocks at once which can boost the AES performance. 
Leveraging those, the code needs to clobber more than eight 128-bit
registers, but 32-bit mode does not have enough wide registers. Its
performance would thus be unlikely to beat 64-bit mode, which already
has a gap vs. AES-NI. So, simply build the code for the 64-bit mode only
at the moment.

Signed-off-by: Chang S. Bae
Acked-by: Dan Williams
Cc: Herbert Xu
Cc: "David S. Miller"
Cc: Eric Biggers
Cc: Milan Broz
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: x86@kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
Changes from v6:
* Merge all the AES-KL patches. (Eric Biggers)
* Make the driver 64-bit only. (Eric Biggers)
* Rework the key-size check code:
  - Trim unnecessary checks. (Eric Biggers)
  - Document the reason.
  - Make sure both XTS keys have the same size.
* Adjust the Kconfig change:
  - Move the location. (Robert Elliott)
  - Trim the description to follow others such as AES-NI.
* Update the changelog:
  - Explain the priority value for the common name under 'User
    Exposition' (renamed from 'Performance'). (Eric Biggers)
  - Trim the introduction.
  - Switch to a more imperative mood for the parts explaining the code
    change.
  - Add a new section, '64-bit Only'.
* Adjust the ASM code to return a proper error code. (Eric Biggers)
* Update the assembly code macros:
  - Remove the unused one.
  - Document the reason for the duplicated ones.

Changes from v5:
* Replace the ret instruction with RET as rebased on the upstream --
  commit f94909ceb1ed ("x86: Prepare asm files for
  straight-line-speculation").

Changes from v3:
* Exclude non-AES-KL objects. (Eric Biggers)
* Simplify the assembler dependency check. (Peter Zijlstra)
* Trim the Kconfig help text. (Dan Williams)
* Fix a defined-but-not-used warning.

Changes from RFC v2:
* Move each mode's support out into new patches.
* Update the changelog to describe the limitation and the tradeoff
  clearly. (Andy Lutomirski)

Changes from RFC v1:
* Rebased on the refactored code. (Ard Biesheuvel)
* Dropped exporting the single-block interface. (Ard Biesheuvel)
* Fixed the fallback and error handling paths. (Ard Biesheuvel)
* Revised the module description. (Dave Hansen and Peter Zijlstra)
* Made the build depend on the binutils version that supports the new
  instructions. (Borislav Petkov and Peter Zijlstra)
* Updated the changelog accordingly.
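As a usage note on the 'Userspace Exposition' point above: because the
algorithm keeps its own driver name, a kernel-side user can request
AES-KL explicitly instead of taking whichever driver wins the 'xts(aes)'
priority contest. A minimal sketch, assuming this patch is applied;
error handling is omitted:

	#include <crypto/skcipher.h>

	/* Look up the AES-KL driver explicitly by its driver name. */
	struct crypto_skcipher *kl =
		crypto_alloc_skcipher("xts-aes-aeskl", 0, 0);

	/*
	 * The generic name still resolves to the highest-priority
	 * driver, which remains AES-NI on hardware that has it.
	 */
	struct crypto_skcipher *best =
		crypto_alloc_skcipher("xts(aes)", 0, 0);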
Link: https://lore.kernel.org/lkml/CAMj1kXGa4f21eH0mdxd1pQsZMUjUr1Btq+Dgw-gC=O-yYft7xw@mail.gmail.com/ --- arch/x86/crypto/Kconfig | 22 ++ arch/x86/crypto/Makefile | 3 + arch/x86/crypto/aeskl-intel_asm.S | 580 +++++++++++++++++++++++++++++ arch/x86/crypto/aeskl-intel_glue.c | 216 +++++++++++ arch/x86/crypto/aesni-intel_asm.S | 8 +- arch/x86/crypto/aesni-intel_glue.c | 40 +- arch/x86/crypto/aesni-intel_glue.h | 17 + 7 files changed, 873 insertions(+), 13 deletions(-) create mode 100644 arch/x86/crypto/aeskl-intel_asm.S create mode 100644 arch/x86/crypto/aeskl-intel_glue.c create mode 100644 arch/x86/crypto/aesni-intel_glue.h diff --git a/arch/x86/crypto/Kconfig b/arch/x86/crypto/Kconfig index 9bbfd01cfa2f..658adfd7aebf 100644 --- a/arch/x86/crypto/Kconfig +++ b/arch/x86/crypto/Kconfig @@ -2,6 +2,11 @@ menu "Accelerated Cryptographic Algorithms for CPU (x86)" +config AS_HAS_KEYLOCKER + def_bool $(as-instr,encodekey256 %eax$(comma)%eax) + help + Supported by binutils >= 2.36 and LLVM integrated assembler >= V12 + config CRYPTO_CURVE25519_X86 tristate "Public key crypto: Curve25519 (ADX)" depends on X86 && 64BIT @@ -29,6 +34,23 @@ config CRYPTO_AES_NI_INTEL Architecture: x86 (32-bit and 64-bit) using: - AES-NI (AES new instructions) +config CRYPTO_AES_KL + tristate "Ciphers: AES, modes: XTS (AES-KL)" + depends on X86 && 64BIT + depends on AS_HAS_KEYLOCKER + depends on CRYPTO_AES_NI_INTEL + select X86_KEYLOCKER + + help + Block cipher: AES cipher algorithms + Length-preserving ciphers: AES with XTS + + Architecture: x86 (64-bit) using: + - AES-KL (AES Key Locker) + - AES-NI for a 192-bit key + + See Documentation/arch/x86/keylocker.rst for more details. + config CRYPTO_BLOWFISH_X86_64 tristate "Ciphers: Blowfish, modes: ECB, CBC" depends on X86 && 64BIT diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index 9aa46093c91b..ae2aa7abd151 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -50,6 +50,9 @@ obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o +obj-$(CONFIG_CRYPTO_AES_KL) += aeskl-intel.o +aeskl-intel-y := aeskl-intel_asm.o aeskl-intel_glue.o + obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ssse3_glue.o sha1-ssse3-$(CONFIG_AS_SHA1_NI) += sha1_ni_asm.o diff --git a/arch/x86/crypto/aeskl-intel_asm.S b/arch/x86/crypto/aeskl-intel_asm.S new file mode 100644 index 000000000000..402dd7796375 --- /dev/null +++ b/arch/x86/crypto/aeskl-intel_asm.S @@ -0,0 +1,580 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Implement AES algorithm using AES Key Locker instructions. 
+ * + * Most code is based from the AES-NI implementation, aesni-intel_asm.S + * + */ + +#include +#include +#include +#include +#include +#include "aes-helper_asm.S" + +.text + +#define STATE1 %xmm0 +#define STATE2 %xmm1 +#define STATE3 %xmm2 +#define STATE4 %xmm3 +#define STATE5 %xmm4 +#define STATE6 %xmm5 +#define STATE7 %xmm6 +#define STATE8 %xmm7 +#define STATE STATE1 + +#define IV %xmm9 +#define KEY %xmm10 +#define INC %xmm13 + +#define IN1 %xmm8 +#define IN IN1 + +#define AREG %rax +#define HANDLEP %rdi +#define OUTP %rsi +#define KLEN %r9d +#define INP %rdx +#define T1 %r10 +#define LEN %rcx +#define IVP %r8 + +#define UKEYP OUTP +#define GF128MUL_MASK %xmm11 + +/* + * int aeskl_setkey(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len) + */ +SYM_FUNC_START(aeskl_setkey) + FRAME_BEGIN + movl %edx, 480(HANDLEP) + movdqu (UKEYP), STATE1 + mov $1, %eax + cmp $16, %dl + je .Lsetkey_128 + + movdqu 0x10(UKEYP), STATE2 + encodekey256 %eax, %eax + movdqu STATE4, 0x30(HANDLEP) + jmp .Lsetkey_end +.Lsetkey_128: + encodekey128 %eax, %eax + +.Lsetkey_end: + movdqu STATE1, (HANDLEP) + movdqu STATE2, 0x10(HANDLEP) + movdqu STATE3, 0x20(HANDLEP) + + xor AREG, AREG + FRAME_END + RET +SYM_FUNC_END(aeskl_setkey) + +/* + * int __aeskl_enc(const void *ctx, u8 *dst, const u8 *src) + */ +SYM_FUNC_START(__aeskl_enc) + FRAME_BEGIN + movdqu (INP), STATE + movl 480(HANDLEP), KLEN + + cmp $16, KLEN + je .Lenc_128 + aesenc256kl (HANDLEP), STATE + jz .Lenc_err + jmp .Lenc_noerr +.Lenc_128: + aesenc128kl (HANDLEP), STATE + jz .Lenc_err + +.Lenc_noerr: + xor AREG, AREG + jmp .Lenc_end +.Lenc_err: + mov $(-EINVAL), AREG +.Lenc_end: + movdqu STATE, (OUTP) + FRAME_END + RET +SYM_FUNC_END(__aeskl_enc) + +/* + * int __aeskl_dec(const void *ctx, u8 *dst, const u8 *src) + */ +SYM_FUNC_START(__aeskl_dec) + FRAME_BEGIN + movdqu (INP), STATE + mov 480(HANDLEP), KLEN + + cmp $16, KLEN + je .Ldec_128 + aesdec256kl (HANDLEP), STATE + jz .Ldec_err + jmp .Ldec_noerr +.Ldec_128: + aesdec128kl (HANDLEP), STATE + jz .Ldec_err + +.Ldec_noerr: + xor AREG, AREG + jmp .Ldec_end +.Ldec_err: + mov $(-EINVAL), AREG +.Ldec_end: + movdqu STATE, (OUTP) + FRAME_END + RET +SYM_FUNC_END(__aeskl_dec) + +/* + * XTS implementation + */ + +/* + * _aeskl_gf128mul_x_ble: internal ABI + * Multiply in GF(2^128) for XTS IVs + * input: + * IV: current IV + * GF128MUL_MASK == mask with 0x87 and 0x01 + * output: + * IV: next IV + * changed: + * CTR: == temporary value + * + * While based on the AES-NI code, this macro is separated here due to + * the register constraint. E.g., aesencwide256kl has implicit + * operands: XMM0-7. 
+ */ +#define _aeskl_gf128mul_x_ble() \ + pshufd $0x13, IV, KEY; \ + paddq IV, IV; \ + psrad $31, KEY; \ + pand GF128MUL_MASK, KEY; \ + pxor KEY, IV; + +/* + * int __aeskl_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *dst, + * const u8 *src, unsigned int len, le128 *iv) + */ +SYM_FUNC_START(__aeskl_xts_encrypt) + FRAME_BEGIN + movdqa .Lgf128mul_x_ble_mask(%rip), GF128MUL_MASK + movups (IVP), IV + + mov 480(HANDLEP), KLEN + +.Lxts_enc8: + sub $128, LEN + jl .Lxts_enc1_pre + + movdqa IV, STATE1 + movdqu (INP), INC + pxor INC, STATE1 + movdqu IV, (OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE2 + movdqu 0x10(INP), INC + pxor INC, STATE2 + movdqu IV, 0x10(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE3 + movdqu 0x20(INP), INC + pxor INC, STATE3 + movdqu IV, 0x20(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE4 + movdqu 0x30(INP), INC + pxor INC, STATE4 + movdqu IV, 0x30(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE5 + movdqu 0x40(INP), INC + pxor INC, STATE5 + movdqu IV, 0x40(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE6 + movdqu 0x50(INP), INC + pxor INC, STATE6 + movdqu IV, 0x50(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE7 + movdqu 0x60(INP), INC + pxor INC, STATE7 + movdqu IV, 0x60(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE8 + movdqu 0x70(INP), INC + pxor INC, STATE8 + movdqu IV, 0x70(OUTP) + + cmp $16, KLEN + je .Lxts_enc8_128 + aesencwide256kl (%rdi) + jz .Lxts_enc_ret_err + jmp .Lxts_enc8_end +.Lxts_enc8_128: + aesencwide128kl (%rdi) + jz .Lxts_enc_ret_err + +.Lxts_enc8_end: + movdqu 0x00(OUTP), INC + pxor INC, STATE1 + movdqu STATE1, 0x00(OUTP) + + movdqu 0x10(OUTP), INC + pxor INC, STATE2 + movdqu STATE2, 0x10(OUTP) + + movdqu 0x20(OUTP), INC + pxor INC, STATE3 + movdqu STATE3, 0x20(OUTP) + + movdqu 0x30(OUTP), INC + pxor INC, STATE4 + movdqu STATE4, 0x30(OUTP) + + movdqu 0x40(OUTP), INC + pxor INC, STATE5 + movdqu STATE5, 0x40(OUTP) + + movdqu 0x50(OUTP), INC + pxor INC, STATE6 + movdqu STATE6, 0x50(OUTP) + + movdqu 0x60(OUTP), INC + pxor INC, STATE7 + movdqu STATE7, 0x60(OUTP) + + movdqu 0x70(OUTP), INC + pxor INC, STATE8 + movdqu STATE8, 0x70(OUTP) + + _aeskl_gf128mul_x_ble() + + add $128, INP + add $128, OUTP + test LEN, LEN + jnz .Lxts_enc8 + +.Lxts_enc_ret_iv: + movups IV, (IVP) +.Lxts_enc_ret_noerr: + xor AREG, AREG + jmp .Lxts_enc_ret +.Lxts_enc_ret_err: + mov $(-EINVAL), AREG +.Lxts_enc_ret: + FRAME_END + RET + +.Lxts_enc1_pre: + add $128, LEN + jz .Lxts_enc_ret_iv + sub $16, LEN + jl .Lxts_enc_cts4 + +.Lxts_enc1: + movdqu (INP), STATE1 + pxor IV, STATE1 + + cmp $16, KLEN + je .Lxts_enc1_128 + aesenc256kl (HANDLEP), STATE1 + jz .Lxts_enc_ret_err + jmp .Lxts_enc1_end +.Lxts_enc1_128: + aesenc128kl (HANDLEP), STATE1 + jz .Lxts_enc_ret_err + +.Lxts_enc1_end: + pxor IV, STATE1 + _aeskl_gf128mul_x_ble() + + test LEN, LEN + jz .Lxts_enc1_out + + add $16, INP + sub $16, LEN + jl .Lxts_enc_cts1 + + movdqu STATE1, (OUTP) + add $16, OUTP + jmp .Lxts_enc1 + +.Lxts_enc1_out: + movdqu STATE1, (OUTP) + jmp .Lxts_enc_ret_iv + +.Lxts_enc_cts4: + movdqu STATE8, STATE1 + sub $16, OUTP + +.Lxts_enc_cts1: + lea .Lcts_permute_table(%rip), T1 + add LEN, INP /* rewind input pointer */ + add $16, LEN /* # bytes in final block */ + movups (INP), IN1 + + mov T1, IVP + add $32, IVP + add LEN, T1 + sub LEN, IVP + add OUTP, LEN + + movups (T1), STATE2 + movaps STATE1, STATE3 + pshufb STATE2, STATE1 + movups STATE1, (LEN) + + movups (IVP), STATE1 + pshufb STATE1, IN1 + pblendvb STATE3, IN1 + movaps IN1, STATE1 + + pxor IV, STATE1 + + cmp $16, KLEN + je 
.Lxts_enc1_cts_128 + aesenc256kl (HANDLEP), STATE1 + jz .Lxts_enc_ret_err + jmp .Lxts_enc1_cts_end +.Lxts_enc1_cts_128: + aesenc128kl (HANDLEP), STATE1 + jz .Lxts_enc_ret_err + +.Lxts_enc1_cts_end: + pxor IV, STATE1 + movups STATE1, (OUTP) + jmp .Lxts_enc_ret_noerr +SYM_FUNC_END(__aeskl_xts_encrypt) + +/* + * int __aeskl_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *dst, + * const u8 *src, unsigned int len, le128 *iv) + */ +SYM_FUNC_START(__aeskl_xts_decrypt) + FRAME_BEGIN + movdqa .Lgf128mul_x_ble_mask(%rip), GF128MUL_MASK + movups (IVP), IV + + mov 480(HANDLEP), KLEN + + test $15, LEN + jz .Lxts_dec8 + sub $16, LEN + +.Lxts_dec8: + sub $128, LEN + jl .Lxts_dec1_pre + + movdqa IV, STATE1 + movdqu (INP), INC + pxor INC, STATE1 + movdqu IV, (OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE2 + movdqu 0x10(INP), INC + pxor INC, STATE2 + movdqu IV, 0x10(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE3 + movdqu 0x20(INP), INC + pxor INC, STATE3 + movdqu IV, 0x20(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE4 + movdqu 0x30(INP), INC + pxor INC, STATE4 + movdqu IV, 0x30(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE5 + movdqu 0x40(INP), INC + pxor INC, STATE5 + movdqu IV, 0x40(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE6 + movdqu 0x50(INP), INC + pxor INC, STATE6 + movdqu IV, 0x50(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE7 + movdqu 0x60(INP), INC + pxor INC, STATE7 + movdqu IV, 0x60(OUTP) + + _aeskl_gf128mul_x_ble() + movdqa IV, STATE8 + movdqu 0x70(INP), INC + pxor INC, STATE8 + movdqu IV, 0x70(OUTP) + + cmp $16, KLEN + je .Lxts_dec8_128 + aesdecwide256kl (%rdi) + jz .Lxts_dec_ret_err + jmp .Lxts_dec8_end +.Lxts_dec8_128: + aesdecwide128kl (%rdi) + jz .Lxts_dec_ret_err + +.Lxts_dec8_end: + movdqu 0x00(OUTP), INC + pxor INC, STATE1 + movdqu STATE1, 0x00(OUTP) + + movdqu 0x10(OUTP), INC + pxor INC, STATE2 + movdqu STATE2, 0x10(OUTP) + + movdqu 0x20(OUTP), INC + pxor INC, STATE3 + movdqu STATE3, 0x20(OUTP) + + movdqu 0x30(OUTP), INC + pxor INC, STATE4 + movdqu STATE4, 0x30(OUTP) + + movdqu 0x40(OUTP), INC + pxor INC, STATE5 + movdqu STATE5, 0x40(OUTP) + + movdqu 0x50(OUTP), INC + pxor INC, STATE6 + movdqu STATE6, 0x50(OUTP) + + movdqu 0x60(OUTP), INC + pxor INC, STATE7 + movdqu STATE7, 0x60(OUTP) + + movdqu 0x70(OUTP), INC + pxor INC, STATE8 + movdqu STATE8, 0x70(OUTP) + + _aeskl_gf128mul_x_ble() + + add $128, INP + add $128, OUTP + test LEN, LEN + jnz .Lxts_dec8 + +.Lxts_dec_ret_iv: + movups IV, (IVP) +.Lxts_dec_ret_noerr: + xor AREG, AREG + jmp .Lxts_dec_ret +.Lxts_dec_ret_err: + mov $(-EINVAL), AREG +.Lxts_dec_ret: + FRAME_END + RET + +.Lxts_dec1_pre: + add $128, LEN + jz .Lxts_dec_ret_iv + +.Lxts_dec1: + movdqu (INP), STATE1 + + add $16, INP + sub $16, LEN + jl .Lxts_dec_cts1 + + pxor IV, STATE1 + + cmp $16, KLEN + je .Lxts_dec1_128 + aesdec256kl (HANDLEP), STATE1 + jz .Lxts_dec_ret_err + jmp .Lxts_dec1_end +.Lxts_dec1_128: + aesdec128kl (HANDLEP), STATE1 + jz .Lxts_dec_ret_err + +.Lxts_dec1_end: + pxor IV, STATE1 + _aeskl_gf128mul_x_ble() + + test LEN, LEN + jz .Lxts_dec1_out + + movdqu STATE1, (OUTP) + add $16, OUTP + jmp .Lxts_dec1 + +.Lxts_dec1_out: + movdqu STATE1, (OUTP) + jmp .Lxts_dec_ret_iv + +.Lxts_dec_cts1: + movdqa IV, STATE5 + _aeskl_gf128mul_x_ble() + + pxor IV, STATE1 + + cmp $16, KLEN + je .Lxts_dec1_cts_pre_128 + aesdec256kl (HANDLEP), STATE1 + jz .Lxts_dec_ret_err + jmp .Lxts_dec1_cts_pre_end +.Lxts_dec1_cts_pre_128: + aesdec128kl (HANDLEP), STATE1 + jz .Lxts_dec_ret_err + +.Lxts_dec1_cts_pre_end: + pxor IV, STATE1 + + lea 
.Lcts_permute_table(%rip), T1 + add LEN, INP /* rewind input pointer */ + add $16, LEN /* # bytes in final block */ + movups (INP), IN1 + + mov T1, IVP + add $32, IVP + add LEN, T1 + sub LEN, IVP + add OUTP, LEN + + movups (T1), STATE2 + movaps STATE1, STATE3 + pshufb STATE2, STATE1 + movups STATE1, (LEN) + + movups (IVP), STATE1 + pshufb STATE1, IN1 + pblendvb STATE3, IN1 + movaps IN1, STATE1 + + pxor STATE5, STATE1 + + cmp $16, KLEN + je .Lxts_dec1_cts_128 + aesdec256kl (HANDLEP), STATE1 + jz .Lxts_dec_ret_err + jmp .Lxts_dec1_cts_end +.Lxts_dec1_cts_128: + aesdec128kl (HANDLEP), STATE1 + jz .Lxts_dec_ret_err + +.Lxts_dec1_cts_end: + pxor STATE5, STATE1 + + movups STATE1, (OUTP) + jmp .Lxts_dec_ret_noerr + +SYM_FUNC_END(__aeskl_xts_decrypt) + diff --git a/arch/x86/crypto/aeskl-intel_glue.c b/arch/x86/crypto/aeskl-intel_glue.c new file mode 100644 index 000000000000..c6824c50fc72 --- /dev/null +++ b/arch/x86/crypto/aeskl-intel_glue.c @@ -0,0 +1,216 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Support for AES Key Locker instructions. This file contains glue + * code and the real AES implementation is in aeskl-intel_asm.S. + * + * Most code is based on AES-NI glue code, aesni-intel_glue.c + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "aes-helper_glue.h" +#include "aesni-intel_glue.h" + +asmlinkage int aeskl_setkey(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int keylen); + +asmlinkage int __aeskl_enc(const void *ctx, u8 *out, const u8 *in); +asmlinkage int __aeskl_dec(const void *ctx, u8 *out, const u8 *in); + +asmlinkage int __aeskl_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv); +asmlinkage int __aeskl_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv); + +static int aeskl_setkey_common(struct crypto_tfm *tfm, void *raw_ctx, const u8 *in_key, + unsigned int keylen) +{ + /* raw_ctx is an aligned address via xts_setkey_common() */ + struct crypto_aes_ctx *ctx = (struct crypto_aes_ctx *)raw_ctx; + int err; + + if (!crypto_simd_usable()) + return -EBUSY; + + if (keylen != AES_KEYSIZE_128 && keylen != AES_KEYSIZE_192 && + keylen != AES_KEYSIZE_256) + return -EINVAL; + + kernel_fpu_begin(); + if (unlikely(keylen == AES_KEYSIZE_192)) { + pr_warn_once("AES-KL does not support 192-bit key. Use AES-NI.\n"); + err = aesni_set_key(ctx, in_key, keylen); + } else { + if (!valid_keylocker()) + err = -ENODEV; + else + err = aeskl_setkey(ctx, in_key, keylen); + } + kernel_fpu_end(); + + return err; +} + +/* + * The below wrappers for the encryption/decryption functions + * incorporate the feature availability check: + * + * In the rare event of hardware failure, the wrapping key can be lost + * after wake-up from a deep sleep state. Then, this check helps to + * avoid any subsequent misuse with populating a proper error code. 
+ */ + +static inline int aeskl_enc(const void *ctx, u8 *out, const u8 *in) +{ + if (!valid_keylocker()) + return -ENODEV; + + return __aeskl_enc(ctx, out, in); +} + +static inline int aeskl_dec(const void *ctx, u8 *out, const u8 *in) +{ + if (!valid_keylocker()) + return -ENODEV; + + return __aeskl_dec(ctx, out, in); +} + +static inline int aeskl_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) +{ + if (!valid_keylocker()) + return -ENODEV; + + return __aeskl_xts_encrypt(ctx, out, in, len, iv); +} + +static inline int aeskl_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) +{ + if (!valid_keylocker()) + return -ENODEV; + + return __aeskl_xts_decrypt(ctx, out, in, len, iv); +} + +static int aeskl_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int keylen) +{ + return xts_setkey_common(tfm, key, keylen, aeskl_setkey_common); +} + +static inline int xts_keylen(struct skcipher_request *req, u32 *keylen) +{ + struct aes_xts_ctx *ctx = aes_xts_ctx(crypto_skcipher_reqtfm(req)); + + if (ctx->crypt_ctx.key_length != ctx->tweak_ctx.key_length) + return -EINVAL; + + *keylen = ctx->crypt_ctx.key_length; + return 0; +} + +static int xts_encrypt(struct skcipher_request *req) +{ + u32 keylen; + int err; + + err = xts_keylen(req, &keylen); + if (err) + return err; + + if (likely(keylen != AES_KEYSIZE_192)) + return xts_crypt_common(req, aeskl_xts_encrypt, aeskl_enc); + else + return xts_crypt_common(req, aesni_xts_encrypt, aesni_enc); +} + +static int xts_decrypt(struct skcipher_request *req) +{ + u32 keylen; + int rc; + + rc = xts_keylen(req, &keylen); + if (rc) + return rc; + + if (likely(keylen != AES_KEYSIZE_192)) + return xts_crypt_common(req, aeskl_xts_decrypt, aeskl_enc); + else + return xts_crypt_common(req, aesni_xts_decrypt, aesni_enc); +} + +static struct skcipher_alg aeskl_skciphers[] = { + { + .base = { + .cra_name = "__xts(aes)", + .cra_driver_name = "__xts-aes-aeskl", + .cra_priority = 200, + .cra_flags = CRYPTO_ALG_INTERNAL, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = XTS_AES_CTX_SIZE, + .cra_module = THIS_MODULE, + }, + .min_keysize = 2 * AES_MIN_KEY_SIZE, + .max_keysize = 2 * AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .walksize = 2 * AES_BLOCK_SIZE, + .setkey = aeskl_xts_setkey, + .encrypt = xts_encrypt, + .decrypt = xts_decrypt, + } +}; + +static struct simd_skcipher_alg *aeskl_simd_skciphers[ARRAY_SIZE(aeskl_skciphers)]; + +static int __init aeskl_init(void) +{ + u32 eax, ebx, ecx, edx; + int err; + + if (!valid_keylocker()) + return -ENODEV; + + cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx); + if (!(ebx & KEYLOCKER_CPUID_EBX_WIDE)) + return -ENODEV; + + /* + * AES-KL itself does not depend on AES-NI. But AES-KL does not + * support 192-bit keys. To make itself AES-compliant, it falls + * back to AES-NI. 
+ */ + if (!boot_cpu_has(X86_FEATURE_AES)) + return -ENODEV; + + err = simd_register_skciphers_compat(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers), + aeskl_simd_skciphers); + if (err) + return err; + + return 0; +} + +static void __exit aeskl_exit(void) +{ + simd_unregister_skciphers(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers), + aeskl_simd_skciphers); +} + +late_initcall(aeskl_init); +module_exit(aeskl_exit); + +MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, AES Key Locker implementation"); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("aes"); diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S index 3922d24cae2b..d38abcc69d9e 100644 --- a/arch/x86/crypto/aesni-intel_asm.S +++ b/arch/x86/crypto/aesni-intel_asm.S @@ -1821,10 +1821,10 @@ SYM_FUNC_START_LOCAL(_key_expansion_256b) SYM_FUNC_END(_key_expansion_256b) /* - * int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - * unsigned int key_len) + * int __aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, + * unsigned int key_len) */ -SYM_FUNC_START(aesni_set_key) +SYM_FUNC_START(__aesni_set_key) FRAME_BEGIN #ifndef __x86_64__ pushl KEYP @@ -1933,7 +1933,7 @@ SYM_FUNC_START(aesni_set_key) #endif FRAME_END RET -SYM_FUNC_END(aesni_set_key) +SYM_FUNC_END(__aesni_set_key) /* * void __aesni_enc(const void *ctx, u8 *dst, const u8 *src) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 857cff484bb3..3aaf5504e349 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -37,6 +37,7 @@ #include #include "aes-helper_glue.h" +#include "aesni-intel_glue.h" #define RFC4106_HASH_SUBKEY_SIZE 16 #define AES_BLOCK_MASK (~(AES_BLOCK_SIZE - 1)) @@ -72,8 +73,8 @@ struct gcm_context_data { u8 hash_keys[GCM_BLOCK_LEN * 16]; }; -asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, - unsigned int key_len); +asmlinkage int __aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, + unsigned int key_len); asmlinkage void __aesni_enc(const void *ctx, u8 *out, const u8 *in); asmlinkage void __aesni_dec(const void *ctx, u8 *out, const u8 *in); asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out, @@ -89,17 +90,32 @@ asmlinkage void aesni_cts_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out, asmlinkage void aesni_cts_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len, u8 *iv); -static int aesni_enc(const void *ctx, u8 *out, const u8 *in) +int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, + unsigned int key_len) +{ + return __aesni_set_key(ctx, in_key, key_len); +} +#if IS_MODULE(CONFIG_CRYPTO_AES_KL) +EXPORT_SYMBOL_GPL(aesni_set_key); +#endif + +int aesni_enc(const void *ctx, u8 *out, const u8 *in) { __aesni_enc(ctx, out, in); return 0; } +#if IS_MODULE(CONFIG_CRYPTO_AES_KL) +EXPORT_SYMBOL_GPL(aesni_enc); +#endif -static int aesni_dec(const void *ctx, u8 *out, const u8 *in) +int aesni_dec(const void *ctx, u8 *out, const u8 *in) { __aesni_dec(ctx, out, in); return 0; } +#if IS_MODULE(CONFIG_CRYPTO_AES_KL) +EXPORT_SYMBOL_GPL(aesni_dec); +#endif #define AVX_GEN2_OPTSIZE 640 #define AVX_GEN4_OPTSIZE 4096 @@ -110,19 +126,25 @@ asmlinkage void __aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, asmlinkage void __aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len, u8 *iv); -static int aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, - unsigned int len, u8 *iv) +int aesni_xts_encrypt(const struct 
crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) { __aesni_xts_encrypt(ctx, out, in, len, iv); return 0; } +#if IS_MODULE(CONFIG_CRYPTO_AES_KL) +EXPORT_SYMBOL_GPL(aesni_xts_encrypt); +#endif -static int aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, - unsigned int len, u8 *iv) +int aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv) { __aesni_xts_decrypt(ctx, out, in, len, iv); return 0; } +#if IS_MODULE(CONFIG_CRYPTO_AES_KL) +EXPORT_SYMBOL_GPL(aesni_xts_decrypt); +#endif #ifdef CONFIG_X86_64 @@ -256,7 +278,7 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx, err = aes_expandkey(ctx, in_key, key_len); else { kernel_fpu_begin(); - err = aesni_set_key(ctx, in_key, key_len); + err = __aesni_set_key(ctx, in_key, key_len); kernel_fpu_end(); } diff --git a/arch/x86/crypto/aesni-intel_glue.h b/arch/x86/crypto/aesni-intel_glue.h new file mode 100644 index 000000000000..81ecacb4e54c --- /dev/null +++ b/arch/x86/crypto/aesni-intel_glue.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Support for Intel AES-NI instructions. This file contains function + * prototypes to be referenced for other AES implementations + */ + +int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len); + +int aesni_enc(const void *ctx, u8 *out, const u8 *in); +int aesni_dec(const void *ctx, u8 *out, const u8 *in); + +int aesni_xts_encrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv); +int aesni_xts_decrypt(const struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, + unsigned int len, u8 *iv); +
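A closing note on the XTS tweak arithmetic shared by the AES-NI and
AES-KL assembly above: _aeskl_gf128mul_x_ble() and the
.Lgf128mul_x_ble_mask constant implement multiplication by x in
GF(2^128) in XTS's little-endian convention. The same operation in C, as
an illustrative sketch (the kernel has its own gf128mul helpers; this
stand-alone version is only for reference):

	#include <stdint.h>

	/*
	 * Multiply a 128-bit XTS tweak by x in GF(2^128), with t[0] the
	 * low 64 bits and t[1] the high 64 bits. When bit 127 shifts
	 * out, reduce by the XTS polynomial x^128 + x^7 + x^2 + x + 1,
	 * i.e. XOR the low byte with 0x87.
	 */
	static void gf128mul_x_ble_sketch(uint64_t t[2])
	{
		uint64_t carry = t[1] >> 63;

		t[1] = (t[1] << 1) | (t[0] >> 63);
		t[0] = (t[0] << 1) ^ (carry * 0x87);
	}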