Message ID | 20231222235209.32143-16-kirill.shutemov@linux.intel.com |
---|---|
State | New |
Headers |
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, x86@kernel.org
Cc: "Rafael J. Wysocki" <rafael@kernel.org>, Peter Zijlstra <peterz@infradead.org>, Adrian Hunter <adrian.hunter@intel.com>, Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>, Elena Reshetova <elena.reshetova@intel.com>, Jun Nakajima <jun.nakajima@intel.com>, Rick Edgecombe <rick.p.edgecombe@intel.com>, Tom Lendacky <thomas.lendacky@amd.com>, "Kalra, Ashish" <ashish.kalra@amd.com>, Sean Christopherson <seanjc@google.com>, "Huang, Kai" <kai.huang@intel.com>, Baoquan He <bhe@redhat.com>, kexec@lists.infradead.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv5 15/16] x86/mm: Introduce kernel_ident_mapping_free()
Date: Sat, 23 Dec 2023 02:52:07 +0300
Message-ID: <20231222235209.32143-16-kirill.shutemov@linux.intel.com>
In-Reply-To: <20231222235209.32143-1-kirill.shutemov@linux.intel.com>
References: <20231222235209.32143-1-kirill.shutemov@linux.intel.com>
Series | x86/tdx: Add kexec support |
Commit Message
Kirill A. Shutemov
Dec. 22, 2023, 11:52 p.m. UTC
The helper complements kernel_ident_mapping_init(): it frees the
identity mapping that was previously allocated. It will be used in the
error path to free a partially allocated mapping, or to free a mapping
that is no longer needed.
The caller provides a struct x86_mapping_info with the free_pgt_page()
callback hooked up and the pgd_t to free.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
arch/x86/include/asm/init.h | 3 ++
arch/x86/mm/ident_map.c | 73 +++++++++++++++++++++++++++++++++++++
2 files changed, 76 insertions(+)
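
Since the commit message only describes the contract, here is a minimal sketch of how a caller might wire the callback pair together and use the free helper in the error path. All names (ident_alloc_pgt_page, ident_free_pgt_page, ident_map_example) and the range/flag choices are hypothetical, assuming a regular-kernel context where get_zeroed_page()/free_page() are available; this is not code from the series.

#include <linux/gfp.h>
#include <linux/sizes.h>
#include <asm/init.h>
#include <asm/pgtable_types.h>

static void *ident_alloc_pgt_page(void *context)
{
	/* One zeroed page per page table. */
	return (void *)get_zeroed_page(GFP_KERNEL);
}

static void ident_free_pgt_page(void *pgt, void *context)
{
	free_page((unsigned long)pgt);
}

static int ident_map_example(unsigned long pstart, unsigned long pend)
{
	struct x86_mapping_info info = {
		.alloc_pgt_page	= ident_alloc_pgt_page,
		.free_pgt_page	= ident_free_pgt_page,
		.page_flag	= __PAGE_KERNEL_LARGE_EXEC,
	};
	pgd_t *pgd;
	int ret;

	pgd = ident_alloc_pgt_page(NULL);
	if (!pgd)
		return -ENOMEM;

	ret = kernel_ident_mapping_init(&info, pgd, pstart, pend);
	if (ret) {
		/* Error path: tear down the partially built mapping. */
		kernel_ident_mapping_free(&info, pgd);
		return ret;
	}

	/* ... use the mapping; later, once it is no longer needed: */
	kernel_ident_mapping_free(&info, pgd);

	return 0;
}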
Comments
On Sat, 2023-12-23 at 02:52 +0300, Kirill A. Shutemov wrote:
> The helper complements kernel_ident_mapping_init(): it frees the
> identity mapping that was previously allocated. It will be used in the
> error path to free a partially allocated mapping, or to free a mapping
> that is no longer needed.
>
> The caller provides a struct x86_mapping_info with the free_pgt_page()
> callback hooked up and the pgd_t to free.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
[...]
> +void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd);

Maybe a range-based free function can provide more flexibility (e.g., you
can directly call the free function to clean up in
kernel_ident_mapping_init() internally when something goes wrong), but I
guess this is sufficient for the current use case (and perhaps the
majority of use cases).

Reviewed-by: Kai Huang <kai.huang@intel.com>
On Mon, 2024-01-08 at 03:13 +0000, Huang, Kai wrote:
> On Sat, 2023-12-23 at 02:52 +0300, Kirill A. Shutemov wrote:
[...]
> Maybe a range-based free function can provide more flexibility (e.g., you
> can directly call the free function to clean up in
> kernel_ident_mapping_init() internally when something goes wrong), but I
> guess this is sufficient for the current use case (and perhaps the
> majority of use cases).
>
> Reviewed-by: Kai Huang <kai.huang@intel.com>

Another argument for a range-based free function is that, theoretically,
you can build the identity mapping table using different x86_mapping_info
structures for different ranges, so it makes less sense to use a single
'struct x86_mapping_info *info' to free the entire page table, albeit in
this implementation only the 'free_pgt_page()' callback is used.
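
For concreteness, the range-based alternative under discussion would presumably look something like the following (a hypothetical prototype, not part of the patch), mirroring the pstart/pend parameters of kernel_ident_mapping_init():

/* Hypothetical range-based variant; not in this series. */
void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd,
			       unsigned long pstart, unsigned long pend);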
On Mon, Jan 08, 2024 at 03:30:21AM +0000, Huang, Kai wrote:
[...]
> Another argument for a range-based free function is that, theoretically,
> you can build the identity mapping table using different x86_mapping_info
> structures for different ranges, so it makes less sense to use a single
> 'struct x86_mapping_info *info' to free the entire page table, albeit in
> this implementation only the 'free_pgt_page()' callback is used.

The interface can be changed if there is ever a need for such behaviour.
This kind of future-proofing is rarely helpful.
On Mon, 2024-01-08 at 13:17 +0300, kirill.shutemov@linux.intel.com wrote:
[...]
> The interface can be changed if there is ever a need for such behaviour.
> This kind of future-proofing is rarely helpful.

Do you want to just pass the 'free_pgt_page' function pointer to
kernel_ident_mapping_free(), instead of 'struct x86_mapping_info *info'?
As mentioned above, conceptually the page table can be built from multiple
x86_mapping_info structures for multiple ranges.
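
The signature Kai suggests here would presumably be along these lines (hypothetical, not part of the patch), decoupling the free path from any particular x86_mapping_info:

/* Hypothetical callback-based variant; not in this series. */
void kernel_ident_mapping_free(void (*free_pgt_page)(void *, void *),
			       void *context, pgd_t *pgd);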
On Mon, Jan 08, 2024 at 01:13:18PM +0000, Huang, Kai wrote:
[...]
> Do you want to just pass the 'free_pgt_page' function pointer to
> kernel_ident_mapping_free(), instead of 'struct x86_mapping_info *info'?
> As mentioned above, conceptually the page table can be built from
> multiple x86_mapping_info structures for multiple ranges.

I don't think we have such cases in the kernel. Let's not overcomplicate
things. I see value in keeping the interface symmetric. We can always
change things as the need arises.
diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index cc9ccf61b6bd..14d72727d7ee 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -6,6 +6,7 @@
 
 struct x86_mapping_info {
 	void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
+	void (*free_pgt_page)(void *, void *); /* free buf for page table */
 	void *context;			 /* context for alloc_pgt_page */
 	unsigned long page_flag;	 /* page flag for PMD or PUD entry */
 	unsigned long offset;		 /* ident mapping offset */
@@ -16,4 +17,6 @@ struct x86_mapping_info {
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 			      unsigned long pstart, unsigned long pend);
 
+void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd);
+
 #endif /* _ASM_X86_INIT_H */
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a7..3996af7b4abf 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -4,6 +4,79 @@
  * included by both the compressed kernel and the regular kernel.
  */
 
+static void free_pte(struct x86_mapping_info *info, pmd_t *pmd)
+{
+	pte_t *pte = pte_offset_kernel(pmd, 0);
+
+	info->free_pgt_page(pte, info->context);
+}
+
+static void free_pmd(struct x86_mapping_info *info, pud_t *pud)
+{
+	pmd_t *pmd = pmd_offset(pud, 0);
+	int i;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_present(pmd[i]))
+			continue;
+
+		if (pmd_leaf(pmd[i]))
+			continue;
+
+		free_pte(info, &pmd[i]);
+	}
+
+	info->free_pgt_page(pmd, info->context);
+}
+
+static void free_pud(struct x86_mapping_info *info, p4d_t *p4d)
+{
+	pud_t *pud = pud_offset(p4d, 0);
+	int i;
+
+	for (i = 0; i < PTRS_PER_PUD; i++) {
+		if (!pud_present(pud[i]))
+			continue;
+
+		if (pud_leaf(pud[i]))
+			continue;
+
+		free_pmd(info, &pud[i]);
+	}
+
+	info->free_pgt_page(pud, info->context);
+}
+
+static void free_p4d(struct x86_mapping_info *info, pgd_t *pgd)
+{
+	p4d_t *p4d = p4d_offset(pgd, 0);
+	int i;
+
+	for (i = 0; i < PTRS_PER_P4D; i++) {
+		if (!p4d_present(p4d[i]))
+			continue;
+
+		free_pud(info, &p4d[i]);
+	}
+
+	if (pgtable_l5_enabled())
+		info->free_pgt_page(p4d, info->context);
+}
+
+void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd)
+{
+	int i;
+
+	for (i = 0; i < PTRS_PER_PGD; i++) {
+		if (!pgd_present(pgd[i]))
+			continue;
+
+		free_p4d(info, &pgd[i]);
+	}
+
+	info->free_pgt_page(pgd, info->context);
+}
+
 static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
 			   unsigned long addr, unsigned long end)
 {
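
One subtlety in free_p4d() above: with 4-level paging the p4d level is folded into the pgd, so there is no separate p4d page to free; p4d_offset() simply returns its pgd argument. The x86 definition is essentially the following (paraphrased from arch/x86/include/asm/pgtable.h):

/* Paraphrased from arch/x86/include/asm/pgtable.h. */
static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
{
	if (!pgtable_l5_enabled())
		return (p4d_t *)pgd;	/* folded: same page as the pgd */
	return (p4d_t *)pgd_page_vaddr(*pgd) + p4d_index(address);
}

Hence the pgtable_l5_enabled() check in free_p4d(): the p4d table is freed there only when it really is a separate page; otherwise the top-level pgd page is freed exactly once, at the end of kernel_ident_mapping_free().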