From patchwork Thu Sep 21 20:33:23 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 143305
Reply-To: Sean Christopherson
Date: Thu, 21 Sep 2023 13:33:23 -0700
In-Reply-To: <20230921203331.3746712-1-seanjc@google.com>
References: <20230921203331.3746712-1-seanjc@google.com>
Message-ID: <20230921203331.3746712-7-seanjc@google.com>
Subject: [PATCH 06/13] KVM: Disallow hugepages for incompatible gmem bindings,
 but let 'em succeed
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Michael Roth,
 Binbin Wu

Remove the restriction that a guest_memfd instance that supports
hugepages can *only* be bound by memslots that are 100% compatible with
hugepage mappings, and instead force KVM to use an order-0 mapping if
the binding isn't compatible with hugepages.

The intent of the draconian binding restriction was purely to simplify
the guest_memfd implementation, e.g. to avoid repeating the existing
logic in KVM x86 for precisely tracking which GFNs support hugepages.
But checking that the binding's offset and size are compatible is just
as easy to do when KVM wants to create a mapping.

And on the other hand, completely rejecting bindings that are
incompatible with hugepages makes it practically impossible for
userspace to use a single guest_memfd instance for all guest memory,
e.g. on x86 it would be impossible to skip the legacy VGA hole while
still allowing hugepage mappings for the rest of guest memory.
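To make the VGA hole example concrete, here's a rough userspace sketch
(illustrative only; it assumes the uAPI proposed earlier in this series,
i.e. KVM_CREATE_GUEST_MEMFD, KVM_GUEST_MEMFD_ALLOW_HUGEPAGE, and
KVM_SET_USER_MEMORY_REGION2 with gmem_fd/gmem_offset fields, all of
which are still in flight and may change; userspace_addr setup for the
shared side is omitted):

	/* One hugepage-capable guest_memfd backing all guest memory. */
	struct kvm_create_guest_memfd gmem = {
		.size  = 4ull << 30,	/* 4GiB, PMD-size aligned */
		.flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE,
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	/*
	 * Bind a slot covering memory below the VGA hole.  0xa0000 isn't
	 * 2MiB aligned, so this binding was previously rejected outright;
	 * with this patch it succeeds, and KVM simply maps the slot with
	 * order-0 pages.
	 */
	struct kvm_userspace_memory_region2 low = {
		.slot		 = 0,
		.flags		 = KVM_MEM_PRIVATE,
		.guest_phys_addr = 0x0,
		.memory_size	 = 0xa0000,
		.gmem_fd	 = gmem_fd,
		.gmem_offset	 = 0x0,
	};
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &low);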
Suggested-by: Michael Roth
Link: https://lore.kernel.org/all/20230918163647.m6bjgwusc7ww5tyu@amd.com
Signed-off-by: Sean Christopherson
---
 virt/kvm/guest_mem.c | 54 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index 68528e9cddd7..4f3a313f5532 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -434,20 +434,6 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
 	return err;
 }
 
-static bool kvm_gmem_is_valid_size(loff_t size, u64 flags)
-{
-	if (size < 0 || !PAGE_ALIGNED(size))
-		return false;
-
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
-	    !IS_ALIGNED(size, HPAGE_PMD_SIZE))
-		return false;
-#endif
-
-	return true;
-}
-
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 {
 	loff_t size = args->size;
@@ -460,9 +446,15 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	if (flags & ~valid_flags)
 		return -EINVAL;
 
-	if (!kvm_gmem_is_valid_size(size, flags))
+	if (size < 0 || !PAGE_ALIGNED(size))
 		return -EINVAL;
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
+	    !IS_ALIGNED(size, HPAGE_PMD_SIZE))
+		return -EINVAL;
+#endif
+
 	return __kvm_gmem_create(kvm, size, flags, kvm_gmem_mnt);
 }
 
@@ -470,7 +462,7 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset)
 {
 	loff_t size = slot->npages << PAGE_SHIFT;
-	unsigned long start, end, flags;
+	unsigned long start, end;
 	struct kvm_gmem *gmem;
 	struct inode *inode;
 	struct file *file;
@@ -489,16 +481,9 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		goto err;
 
 	inode = file_inode(file);
-	flags = (unsigned long)inode->i_private;
 
-	/*
-	 * For simplicity, require the offset into the file and the size of the
-	 * memslot to be aligned to the largest possible page size used to back
-	 * the file (same as the size of the file itself).
-	 */
-	if (!kvm_gmem_is_valid_size(offset, flags) ||
-	    !kvm_gmem_is_valid_size(size, flags))
-		goto err;
+	if (offset < 0 || !PAGE_ALIGNED(offset))
+		goto err;
 
 	if (offset + size > i_size_read(inode))
 		goto err;
@@ -599,8 +584,23 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 	page = folio_file_page(folio, index);
 
 	*pfn = page_to_pfn(page);
-	if (max_order)
-		*max_order = compound_order(compound_head(page));
+	if (!max_order)
+		goto success;
+
+	*max_order = compound_order(compound_head(page));
+	if (!*max_order)
+		goto success;
+
+	/*
+	 * For simplicity, allow mapping a hugepage if and only if the entire
+	 * binding is compatible, i.e. don't bother supporting mapping interior
+	 * sub-ranges with hugepages (unless userspace comes up with a *really*
+	 * strong use case for needing hugepages within unaligned bindings).
+	 */
+	if (!IS_ALIGNED(slot->gmem.pgoff, 1ull << *max_order) ||
+	    !IS_ALIGNED(slot->npages, 1ull << *max_order))
+		*max_order = 0;
+success:
 	r = 0;
 
 out_unlock:
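For reference, the clamping arithmetic can be sanity checked in
isolation.  A minimal standalone sketch, with hypothetical values and
IS_ALIGNED open-coded rather than taken from the kernel headers:

	#include <stdio.h>

	#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

	int main(void)
	{
		/* Hypothetical binding starting just past the VGA hole. */
		unsigned long long pgoff = 0xc0;	/* GPA 0xc0000 */
		unsigned long long npages = 0x1f40;
		int max_order = 9;	/* PMD hugepage = 512 base pages */

		/* Mirrors the check kvm_gmem_get_pfn() now performs. */
		if (!IS_ALIGNED(pgoff, 1ull << max_order) ||
		    !IS_ALIGNED(npages, 1ull << max_order))
			max_order = 0;

		/* pgoff 0xc0 isn't 512-page aligned, so this prints 0. */
		printf("max_order = %d\n", max_order);
		return 0;
	}

I.e. the binding itself is accepted, and only the mapping order is
degraded for the incompatible range.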