[RFC,4/5] EXPERIMENTAL: x86: use __GFP_UNMAPPED for module_alloc()
Commit Message
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
arch/x86/kernel/module.c | 2 +-
mm/vmalloc.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
Comments
On Wed, 2023-03-08 at 11:41 +0200, Mike Rapoport wrote:
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ef910bf349e1..84220ec45ec2 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2892,7 +2892,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  	 * to fails, fallback to a single page allocator that is
>  	 * more permissive.
>  	 */
> -	if (!order) {
> +	if (!order && !(gfp & __GFP_UNMAPPED)) {
>  		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
>  
>  		while (nr_allocated < nr_pages) {
This is obviously a quick POC patch, but I guess we should skip the
whole vm_remove_mappings() thing, since it would reset the direct map
to RW for these unmapped pages. Or rather, modules shouldn't set
VM_FLUSH_RESET_PERMS if they use this.
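The reviewer's point can be modeled outside the kernel as a small predicate: a caller that allocates with __GFP_UNMAPPED (pages already removed from the direct map) must not also request VM_FLUSH_RESET_PERMS, because the reset-on-free path would re-establish RW direct-map mappings for those pages. The flag values below are illustrative stand-ins, not the real kernel definitions (which live in include/linux/gfp_types.h and include/linux/vmalloc.h):

```c
/* Hypothetical flag values for illustration only. */
#define __GFP_UNMAPPED        0x1u
#define VM_FLUSH_RESET_PERMS  0x2u

/*
 * Sketch of the suggested rule: compute the vm flags a
 * module_alloc()-style caller should pass, given its gfp mask.
 * When the pages are unmapped from the direct map, dropping
 * VM_FLUSH_RESET_PERMS avoids resetting them back to RW on free.
 */
static unsigned int module_vm_flags(unsigned int gfp)
{
	return (gfp & __GFP_UNMAPPED) ? 0u : VM_FLUSH_RESET_PERMS;
}
```

This is only a sketch of the constraint the comment describes, not the actual kernel change.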
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -67,7 +67,7 @@ static unsigned long int get_module_load_offset(void)
 void *module_alloc(unsigned long size)
 {
-	gfp_t gfp_mask = GFP_KERNEL;
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_UNMAPPED;
 	void *p;
 
 	if (PAGE_ALIGN(size) > MODULES_LEN)
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2892,7 +2892,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * to fails, fallback to a single page allocator that is
 	 * more permissive.
 	 */
-	if (!order) {
+	if (!order && !(gfp & __GFP_UNMAPPED)) {
 		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
 
 		while (nr_allocated < nr_pages) {
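The vmalloc.c hunk gates the bulk page allocator on the gfp mask: the bulk path is taken only for order-0 requests that do not carry __GFP_UNMAPPED, so unmapped allocations always fall through to the single-page allocator. A minimal userspace model of that condition, with a stand-in flag value (the real __GFP_UNMAPPED bit is defined in the kernel's gfp headers):

```c
/* Hypothetical flag value for illustration only. */
#define __GFP_UNMAPPED  0x1u

/*
 * Model of the condition in the hunk above: use the bulk
 * allocator only for order-0, non-__GFP_UNMAPPED requests.
 */
static int use_bulk_allocator(unsigned int order, unsigned int gfp)
{
	return order == 0 && !(gfp & __GFP_UNMAPPED);
}
```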