[v7,2/4] mseal: add mseal syscall

Message ID 20240122152905.2220849-3-jeffxu@chromium.org
State New
Series: Introduce mseal()

Commit Message

Jeff Xu Jan. 22, 2024, 3:28 p.m. UTC
  From: Jeff Xu <jeffxu@chromium.org>

The new mseal() is a syscall on 64-bit CPUs, with the
following signature:

int mseal(void *addr, size_t len, unsigned long flags)
addr/len: memory range.
flags: reserved.

mseal() blocks the following operations for the given memory range.

1> Unmapping, moving to another location, and shrinking the size,
   via munmap() and mremap(). These can leave an empty space, which can
   then be filled by a VMA with a new set of attributes.

2> Moving or expanding a different VMA into the current location,
   via mremap().

3> Modifying a VMA via mmap(MAP_FIXED).

4> Size expansion, via mremap(), does not appear to pose any specific
   risks to sealed VMAs. It is included anyway because the use case is
   unclear. In any case, users can rely on merging to expand a sealed VMA.

5> mprotect() and pkey_mprotect().

6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED) for anonymous
   memory, when users don't have write permission to the memory. Those
   behaviors can alter region contents by discarding pages, effectively a
   memset(0) for anonymous memory.

In addition, mmap() has two related changes.

The PROT_SEAL bit in the prot field of mmap(). When present, it marks
the map as sealed from creation.

The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
the map as sealable. A map created without MAP_SEALABLE will not support
sealing, i.e. mseal() will fail.

Applications that don't care about sealing can expect their behavior
to be unchanged. Those that need sealing support opt in by adding
MAP_SEALABLE to their mmap() calls.
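
For illustration only (not part of this patch), a minimal user-space
sketch of the intended usage. It assumes __NR_mseal is available from
headers built against this series; the PROT_SEAL/MAP_SEALABLE fallback
values below are the ones defined in the uapi change:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef PROT_SEAL
#define PROT_SEAL	0x04000000	/* seal at creation time */
#endif
#ifndef MAP_SEALABLE
#define MAP_SEALABLE	0x08000000	/* opt in to later sealing */
#endif

int main(void)
{
	size_t len = 4096;	/* assume 4 KiB pages for brevity */

	/* opt in: without MAP_SEALABLE, a later mseal() on this range fails */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_SEALABLE, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* ... fill in the region, drop write permission, then seal it */
	mprotect(p, len, PROT_READ);
	if (syscall(__NR_mseal, p, len, 0))
		perror("mseal");

	/* munmap/mremap/mprotect on [p, p+len) now fail with -EPERM */
	printf("mprotect after seal: %d\n",
	       mprotect(p, len, PROT_READ | PROT_WRITE));
	return 0;
}

Alternatively, passing PROT_SEAL in the prot argument creates the
mapping already sealed, combining the two steps.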

I would like to formally acknowledge the valuable contributions
received during the RFC process, which were instrumental
in shaping this patch:

Jann Horn: raising awareness and providing valuable insights on the
destructive madvise operations.
Linus Torvalds: assisting in defining system call signature and scope.
Pedro Falcato: suggesting sealing in the mmap().
Theo de Raadt: sharing the experiences and insights gained from
implementing mimmutable() in OpenBSD.

Finally, the idea that inspired this patch comes from Stephen Röttger’s
work in Chrome V8 CFI.

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 include/linux/mm.h                     |  48 ++++
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/mman-common.h |   8 +
 mm/Makefile                            |   4 +
 mm/madvise.c                           |  12 +
 mm/mmap.c                              |  27 ++
 mm/mprotect.c                          |  10 +
 mm/mremap.c                            |  31 +++
 mm/mseal.c                             | 343 +++++++++++++++++++++++++
 9 files changed, 484 insertions(+)
 create mode 100644 mm/mseal.c
  

Comments

Jeff Xu Jan. 24, 2024, 5:50 p.m. UTC | #1
On Tue, Jan 23, 2024 at 10:15 AM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * jeffxu@chromium.org <jeffxu@chromium.org> [240122 10:29]:
> > From: Jeff Xu <jeffxu@chromium.org>
> >
> > The new mseal() is an syscall on 64 bit CPU, and with
> > following signature:
> >
> > int mseal(void addr, size_t len, unsigned long flags)
> > addr/len: memory range.
> > flags: reserved.
> >
> > mseal() blocks following operations for the given memory range.
> >
> > 1> Unmapping, moving to another location, and shrinking the size,
> >    via munmap() and mremap(), can leave an empty space, therefore can
> >    be replaced with a VMA with a new set of attributes.
> >
> > 2> Moving or expanding a different VMA into the current location,
> >    via mremap().
> >
> > 3> Modifying a VMA via mmap(MAP_FIXED).
> >
> > 4> Size expansion, via mremap(), does not appear to pose any specific
> >    risks to sealed VMAs. It is included anyway because the use case is
> >    unclear. In any case, users can rely on merging to expand a sealed VMA.
> >
> > 5> mprotect() and pkey_mprotect().
> >
> > 6> Some destructive madvice() behaviors (e.g. MADV_DONTNEED) for anonymous
> >    memory, when users don't have write permission to the memory. Those
> >    behaviors can alter region contents by discarding pages, effectively a
> >    memset(0) for anonymous memory.
> >
> > In addition: mmap() has two related changes.
> >
> > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > the map sealed since creation.
> >
> > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > the map as sealable. A map created without MAP_SEALABLE will not support
> > sealing, i.e. mseal() will fail.
> >
> > Applications that don't care about sealing will expect their behavior
> > unchanged. For those that need sealing support, opt-in by adding
> > MAP_SEALABLE in mmap().
> >
> > I would like to formally acknowledge the valuable contributions
> > received during the RFC process, which were instrumental
> > in shaping this patch:
> >
> > Jann Horn: raising awareness and providing valuable insights on the
> > destructive madvise operations.
> > Linus Torvalds: assisting in defining system call signature and scope.
> > Pedro Falcato: suggesting sealing in the mmap().
> > Theo de Raadt: sharing the experiences and insights gained from
> > implementing mimmutable() in OpenBSD.
> >
> > Finally, the idea that inspired this patch comes from Stephen Röttger’s
> > work in Chrome V8 CFI.
> >
> > Signed-off-by: Jeff Xu <jeffxu@chromium.org>
> > ---
> >  include/linux/mm.h                     |  48 ++++
> >  include/linux/syscalls.h               |   1 +
> >  include/uapi/asm-generic/mman-common.h |   8 +
> >  mm/Makefile                            |   4 +
> >  mm/madvise.c                           |  12 +
> >  mm/mmap.c                              |  27 ++
> >  mm/mprotect.c                          |  10 +
> >  mm/mremap.c                            |  31 +++
> >  mm/mseal.c                             | 343 +++++++++++++++++++++++++
> >  9 files changed, 484 insertions(+)
> >  create mode 100644 mm/mseal.c
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index f5a97dec5169..bdd9a53e9291 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
>
> None of this can live in mm/internal.h ?
>
Will move. Thanks.


> > @@ -328,6 +328,14 @@ extern unsigned int kobjsize(const void *objp);
> >  #define VM_HIGH_ARCH_5       BIT(VM_HIGH_ARCH_BIT_5)
> >  #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
> >
> > +#ifdef CONFIG_64BIT
> > +/* VM is sealable, in vm_flags */
> > +#define VM_SEALABLE  _BITUL(63)
> > +
> > +/* VM is sealed, in vm_flags */
> > +#define VM_SEALED    _BITUL(62)
> > +#endif
> > +
> >  #ifdef CONFIG_ARCH_HAS_PKEYS
> >  # define VM_PKEY_SHIFT       VM_HIGH_ARCH_BIT_0
> >  # define VM_PKEY_BIT0        VM_HIGH_ARCH_0  /* A protection key is a 4-bit value */
> > @@ -4182,4 +4190,44 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
> >       return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
> >  }
> >
> > +#ifdef CONFIG_64BIT
> > +static inline int can_do_mseal(unsigned long flags)
> > +{
> > +     if (flags)
> > +             return -EINVAL;
> > +
> > +     return 0;
> > +}
> > +
> > +bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end);
> > +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end, int behavior);
> > +unsigned long get_mmap_seals(unsigned long prot,
> > +             unsigned long flags);
> > +#else
> > +static inline int can_do_mseal(unsigned long flags)
> > +{
> > +     return -EPERM;
> > +}
> > +
> > +static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end)
> > +{
> > +     return true;
> > +}
> > +
> > +static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end, int behavior)
> > +{
> > +     return true;
> > +}
> > +
> > +static inline unsigned long get_mmap_seals(unsigned long prot,
> > +     unsigned long flags)
> > +{
> > +     return 0;
> > +}
> > +#endif
> > +
> >  #endif /* _LINUX_MM_H */
>
> ...
>
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index b78e83d351d2..32bc2179aed0 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >  {
> >       struct mm_struct *mm = current->mm;
> >       int pkey = 0;
> > +     unsigned long vm_seals;
> >
> >       *populate = 0;
> >
> > @@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >       if (flags & MAP_FIXED_NOREPLACE)
> >               flags |= MAP_FIXED;
> >
> > +     vm_seals = get_mmap_seals(prot, flags);
> > +
> >       if (!(flags & MAP_FIXED))
> >               addr = round_hint_to_min(addr);
> >
> > @@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >                       return -EEXIST;
> >       }
> >
> > +     /*
> > +      * Check if the address range is sealed for do_mmap().
> > +      * can_modify_mm assumes we have acquired the lock on MM.
> > +      */
> > +     if (!can_modify_mm(mm, addr, addr + len))
> > +             return -EPERM;
> > +
>
> This is called after get_unmapped_area(), so this area is either going
> to be MAP_FIXED and return the "hint" addr or it's going to be empty.
> You can probably avoid walking the VMAs in the non-FIXED case.  This
> would remove the overhead of your check in the most common case.
>

Thanks for flagging this!

I wasn't entirely sure about get_unmapped_area() after reading the
code; it calls a few variants of the arch_get_unmapped_area_xxx()
functions.

e.g. it seems like generic_get_unmapped_area_topdown() returns
a non-null address even when MAP_FIXED is not set:

 ----------------------------------------------------------------------------
generic_get_unmapped_area_topdown(
	...
	if (flags & MAP_FIXED)			/* <-- MAP_FIXED case */
		return addr;

	/* requesting a specific address */
	if (addr) {				/* <-- note: not MAP_FIXED */
		addr = PAGE_ALIGN(addr);
		vma = find_vma_prev(mm, addr, &prev);
		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
		    (!vma || addr + len <= vm_start_gap(vma)) &&
		    (!prev || addr >= vm_end_gap(prev)))
			return addr;		/* <-- note: returns a non-null addr here */
	}
 ----------------------------------------------------------------------------
I also thought about adding a check for addr != NULL instead, i.e.

if (addr && !can_modify_mm(mm, addr, addr + len))
    return -EPERM;

But using MAP_FIXED to allocate memory at address 0 is legitimate, e.g.
allocating a PROT_NONE | PROT_SEAL mapping at address 0.

Another factor to consider: what would be the cost of passing an
empty address into can_modify_mm()? The search would cover 0 to len.

> >       if (prot == PROT_EXEC) {
> >               pkey = execute_only_pkey(mm);
> >               if (pkey < 0)
> > @@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >                       vm_flags |= VM_NORESERVE;
> >       }
> >
> > +     vm_flags |= vm_seals;
> >       addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
> >       if (!IS_ERR_VALUE(addr) &&
> >           ((vm_flags & VM_LOCKED) ||
> > @@ -2679,6 +2690,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
> >       if (end == start)
> >               return -EINVAL;
> >
> > +     /*
> > +      * Check if memory is sealed before arch_unmap.
> > +      * Prevent unmapping a sealed VMA.
> > +      * can_modify_mm assumes we have acquired the lock on MM.
> > +      */
> > +     if (!can_modify_mm(mm, start, end))
> > +             return -EPERM;
> > +
>
> This function is currently called from mmap_region(), so we are going to
> run this check twice as you have it; once in do_mmap() then again in
> mmap_region() -> do_vmi_munmap().  This effectively doubles your impact
> to MAP_FIXED calls.
>
Yes. Addressing this would require a new flag for do_vmi_munmap():
after passing the first check in mmap(), we could set the flag to false
so that do_vmi_munmap() does not check it again.

However, this approach was attempted in v1 and v2 of the patch [1] [2],
and was strongly opposed by Linus. It was considered too random and
to hurt readability.

Below is my text from v2: [3]

"When handing the mmap/munmap/mremap/mmap, once the code passed
can_modify_mm(), it means the memory area is not sealed, if the code
continues to call the other utility functions, we don't need to check
the seal again. This is the case for mremap(), the seal of src address
and dest address (when applicable) are checked first, later when the
code calls  do_vmi_munmap(), it no longer needs to check the seal
again."

Considering this is the MAP_FIXED case, which is perhaps not used
that often in practice, I think this is acceptable performance-wise,
unless you know of another solution that would help.

[1] https://lore.kernel.org/lkml/20231016143828.647848-6-jeffxu@chromium.org/
[2] https://lore.kernel.org/lkml/20231017090815.1067790-6-jeffxu@chromium.org/
[3] https://lore.kernel.org/lkml/CALmYWFux2m=9189Gs0o8-xhPNW4dnFvtqj7ptcT5QvzxVgfvYQ@mail.gmail.com/


> >        /* arch_unmap() might do unmaps itself.  */
> >       arch_unmap(mm, start, end);
> >
> > @@ -3102,6 +3121,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >  {
> >       struct mm_struct *mm = vma->vm_mm;
> >
> > +     /*
> > +      * Check if memory is sealed before arch_unmap.
> > +      * Prevent unmapping a sealed VMA.
> > +      * can_modify_mm assumes we have acquired the lock on MM.
> > +      */
> > +     if (!can_modify_mm(mm, start, end))
> > +             return -EPERM;
> > +
>
> I am sure you've looked at the callers, from what I found there are two:
>
> The brk call uses this function, so it may check more than one VMA in
> that path.  Will the brk VMAs potentially be msealed?  I guess someone
> could do that?
>
> The other place this is used is in ipc/shm.c where the start/end is just
> the vma start/end, so we only really need to check that one vma.
>
Yes. Those two cases were looked at, and they are the main reason that
MAP_SEALABLE was introduced as part of mmap().

As in the open discussion of the V3/V4 patch: [4] [5]

[4] https://lore.kernel.org/linux-mm/20231212231706.2680890-1-jeffxu@chromium.org/T/
[5] https://lore.kernel.org/linux-mm/20240104185138.169307-3-jeffxu@chromium.org/T/

Copied here for ease of reading:
---------------------------------------------------------------------------------------------

During the development of v3, I had new questions and thoughts that I
wished to discuss.

1> shm/aio
From reading the code, it seems to me that aio/shm can mmap/munmap
mappings on behalf of userspace, e.g. ksys_shmdt() in shm.c. The lifetime
of those mappings is not tied to the lifetime of the process. If that
memory is sealed from userspace, then the unmap will fail. This isn't a
huge problem, since the memory will eventually be freed at exit or
exec. However, it feels like the solution is not complete, because of
the leaks in VMA address space during the lifetime of the process.

2> Brk (heap/stack)
Currently, userspace applications can seal parts of the heap by
calling malloc() and mseal(). This raises the question of what the
expected behavior is when sealing the heap is attempted.

Let's assume the following calls from user space:

ptr = malloc(size);
mprotect(ptr, size, RO);
mseal(ptr, size, SEAL_PROT_PKEY);
free(ptr);

Technically, before mseal() is added, the user can change the
protection of the heap by calling mprotect(RO). As long as the user
changes the protection back to RW before free(), the memory can be
reused.

With mseal() in the picture, however, the heap is then partially
sealed: the user can still free it, but the memory remains RO,
and the result of a brk-shrink is nondeterministic, depending on
whether munmap() will try to free the sealed memory (brk uses munmap
to shrink the heap).

3> The above two cases led to the third topic:
There is one option to address the problems mentioned above.
Option 1: a “MAP_SEALABLE” flag in mmap().
If a map is created without this flag, the mseal() operation will
fail. Applications that are not concerned with sealing will expect
their behavior to be unchanged. For those that are concerned, adding a
flag at mmap time to opt in is not difficult. For the short term, this
solves problems 1 and 2 above. The memory in shm/aio/brk will not have
the MAP_SEALABLE flag at mmap(), and the same is true for the heap.

If we choose not to go with this path, all mappings will be sealable
by default. We could document the above-mentioned limitations so devs
are more careful when choosing what memory to seal. I think denial of
service through mseal() by an attacker is probably not a concern:
if attackers have access to mseal() and unsealed memory, then they can
also do other harmful things to the memory, such as munmap, etc.

4>
I think it might be possible to seal the stack or other special
mappings created at runtime (vdso, vsyscall, vvar). This means we can
enforce and seal W^X for certain types of applications. For instance,
the stack is typically used in read-write mode, but in some cases, it
can become executable. To defend against the unintended addition of the
executable bit to the stack, we could let the application seal it.

Sealing the heap (for adding X) requires special handling, since the
heap can shrink, and shrink is implemented through munmap().

Indeed, it might be possible that all virtual memory accessible to user
space, regardless of its usage pattern, could be sealed. However, this
would require additional research and development work.

-----------------------------------------------------------------------------------------------------


> Is there a way to avoid walking the tree for the single known VMA?
Are you thinking about a hash table to record the brk VMA? Or a dedicated
tree for sealed VMAs? That is possible, but it would be a lot more code.

> Does
> it make sense to deny mseal writing to brk VMAs?
>
Yes. It makes sense. Since brk memory doesn't have MAP_SEALABLE at
the moment, mseal will fail even if someone tries to seal it.
Sealing brk memory would require more research and design.
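
To make that concrete, a hedged user-space sketch (again assuming
__NR_mseal is available; -EACCES is what check_mm_seal() returns in
this revision for a non-sealable mapping, and may change later):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	/* heap memory: created without MAP_SEALABLE by the allocator/brk */
	void *p = aligned_alloc(page, page);

	if (syscall(__NR_mseal, p, page, 0) < 0)
		printf("mseal on heap memory failed: %s\n", strerror(errno));

	free(p);
	return 0;
}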

>
> >       arch_unmap(mm, start, end);
> >       return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
> >  }
>
> ...
>
>
> Ah, I see them now.  Yes, this is what I expected to see.  Does this not
> have any impact on mmap/munmap benchmarks?
>
Thanks for bringing up this topic! I was kind of hoping for
performance-related questions.

I haven't done any benchmarks, due to lack of knowledge on how those
tests are usually performed.

For mseal(), since it will be called in only a few places (libc/ELF
loading), I'm expecting no real-world impact, and that can be
measured once we have implementations in place in libc and
ELF loading.

The hot path could be on mmap() and munmap(), as you pointed out.

mmap() was discussed above (adding a check for MAP_FIXED).

For munmap(), there is a cost in calling can_modify_mm(). I thought about
calling can_modify_vma() in do_vmi_align_munmap() instead, but there are
two reasons not to:

a. it would skip arch_unmap(), and arch_unmap() can unmap the memory.
b. the current logic of checking sealing is: if one of the VMAs between
start and end is sealed, mprotect/mmap/munmap fails without any of the VMAs
being modified. This means an additional walk over the VMA tree is needed anyway.

> > +bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end)
> > +{
> > +     struct vm_area_struct *vma;
> > +
> > +     VMA_ITERATOR(vmi, mm, start);
> > +
> > +     /* going through each vma to check. */
> > +     for_each_vma_range(vmi, vma, end) {
> > +             if (!can_modify_vma(vma))
> > +                     return false;
> > +     }
> > +
> > +     /* Allow by default. */
> > +     return true;
> > +}
> > +
> > +/*
> > + * Check if the vmas of a memory range are allowed to be modified by madvise.
> > + * the memory ranger can have a gap (unallocated memory).
> > + * return true, if it is allowed.
> > + */
> > +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end,
> > +             int behavior)
> > +{
> > +     struct vm_area_struct *vma;
> > +
> > +     VMA_ITERATOR(vmi, mm, start);
> > +
> > +     if (!is_madv_discard(behavior))
> > +             return true;
> > +
> > +     /* going through each vma to check. */
> > +     for_each_vma_range(vmi, vma, end)
> > +             if (is_ro_anon(vma) && !can_modify_vma(vma))
> > +                     return false;
> > +
> > +     /* Allow by default. */
> > +     return true;
> > +}
> > +
>
> ...
>
> > +static int check_mm_seal(unsigned long start, unsigned long end)
> > +{
> > +     struct vm_area_struct *vma;
> > +     unsigned long nstart = start;
> > +
> > +     VMA_ITERATOR(vmi, current->mm, start);
> > +
> > +     /* going through each vma to check. */
> > +     for_each_vma_range(vmi, vma, end) {
> > +             if (vma->vm_start > nstart)
> > +                     /* unallocated memory found. */
> > +                     return -ENOMEM;
>
> Ah, another potential user for a contiguous iterator of VMAs.
>
> > +
> > +             if (!can_add_vma_seal(vma))
> > +                     return -EACCES;
> > +
> > +             if (vma->vm_end >= end)
> > +                     return 0;
> > +
> > +             nstart = vma->vm_end;
> > +     }
> > +
> > +     return -ENOMEM;
> > +}
> > +
> > +/*
> > + * Apply sealing.
> > + */
> > +static int apply_mm_seal(unsigned long start, unsigned long end)
> > +{
> > +     unsigned long nstart;
> > +     struct vm_area_struct *vma, *prev;
> > +
> > +     VMA_ITERATOR(vmi, current->mm, start);
> > +
> > +     vma = vma_iter_load(&vmi);
> > +     /*
> > +      * Note: check_mm_seal should already checked ENOMEM case.
> > +      * so vma should not be null, same for the other ENOMEM cases.
>
> The start to end is contiguous, right?
Yes.  check_mm_seal makes sure the start to end is contiguous.

>
> > +      */
> > +     prev = vma_prev(&vmi);
> > +     if (start > vma->vm_start)
> > +             prev = vma;
> > +
> > +     nstart = start;
> > +     for_each_vma_range(vmi, vma, end) {
> > +             int error;
> > +             unsigned long tmp;
> > +             vm_flags_t newflags;
> > +
> > +             newflags = vma->vm_flags | VM_SEALED;
> > +             tmp = vma->vm_end;
> > +             if (tmp > end)
> > +                     tmp = end;
> > +             error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> > +             if (error)
> > +                     return error;
>
> > +             tmp = vma_iter_end(&vmi);
> > +             nstart = tmp;
>
> You set tmp before using it unconditionally to vma->vm_end above, so you
> can set nstart = vma_iter_end(&vmi) here.  But, also we know the
> VMAs are contiguous from your check_mm_seal() call, so we know nstart ==
> vma->vm_start on the next loop.
The code is almost the same as in mlock.c, except that we know the
VMAs are contiguous, so we don't check for some of the ENOMEM cases.
There might be ways to improve this code. For ease of code review, I
chose consistency (same as mlock) for now.

>
> ...
  
Theo de Raadt Jan. 24, 2024, 8:37 p.m. UTC | #2
Liam R. Howlett <Liam.Howlett@Oracle.com> wrote:

> > Adding mseal() into picture, however, the heap is then sealed
> > partially, user can still free it, but the memory remains to be RO,
> > and the result of brk-shrink is nondeterministic, depending on if
> > munmap() will try to free the sealed memory.(brk uses munmap to shrink
> > the heap).

"You are holding it wrong".

> > [...]. We could document above mentioned limitations so devs are
> > more careful at the time to choose what memory to seal.

You mean like they need to be careful what memory they map, careful
what memory they unmap, careful what they do with mprotect, careful
about not writing or reading out of bounds, etc.  They need to be
careful about everything.

Programmers have complete control over the address space in a program.
This is Linux we are talking about, it still doesn't have strict policy
on W | X memory, but misuse of mseal is suddenly a developer crisis?

Why is this memory attribute different, and how does it actually help?

When they use mseal on objects with unproven future, the program will
crash later, beautifully demonstrating that they held it wrong.  Then
they can fix their abusive incorrect code.

This discussion about the malloc heap is ridiculous.  Obviously it is
programmer error to lock the permissions on memory you will free for
reuse.  But you can't fix this problem with malloc(), without breaking
other extremely common circumstances where the allocation of memory
and PERMANENT-USE-WITHOUT-RELEASE of such memory are separated over a
memory boundary, unless you start telling all open source library authors
to always use MAP_SEALABLE in their mmap() calls.
  
Theo de Raadt Jan. 24, 2024, 8:51 p.m. UTC | #3
Theo de Raadt <deraadt@openbsd.org> wrote:

> This discussion about the malloc heap is ridiculous.  Obviously it is
> programmer error to lock the permissions on memory you will free for
> reuse.  But you can't fix this problem with malloc(), without breaking
> other extremely common circumstances where the allocation of memory
> and PERMANENT-USE-WITHOUT-RELEASE of such memory are separated over a
> memory boundary, unless you start telling all open source library authors

  ^^^^^^^^^^^^^^^ library boundary, sorry

> to always use MAP_SEALABLE in their mmap() calls.

Example:

1. libcrypto (or some other library) has some ways to allocate memory and
   provide it to an application.
2. Even if this is using malloc(), heap allocations over a pagesize are
   page-aligned, so even then the following assumptions are sound.
3. I have an application which uses that memory, but will never release the memory
   until program termination
4. The library interface is public and used by many programs, so the library
   author has a choice of using MAP_SEALABLE or not using MAP_SEALABLE

Due to your choice, my application cannot lock the memory permissions
unless that library author chooses MAP_SEALABLE.

If they choose to use MAP_SEALABLE, all programs get this memory you consider
less safe.

Exactly what is being gained here?
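
[A minimal sketch of the boundary being described; the library and all
names here are hypothetical, only to show where the MAP_SEALABLE
decision ends up living:]

#include <stddef.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define TABLE_LEN	(16UL * 4096)

/* library side: the library author decides, once, whether sealing can work */
void *lib_alloc_table(size_t len)
{
	return mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS /* | MAP_SEALABLE ? */, -1, 0);
}

/* application side: holds the memory until exit and wants to lock it down */
int app_init(void)
{
	void *tbl = lib_alloc_table(TABLE_LEN);

	if (tbl == MAP_FAILED)
		return -1;
	/* ... populate the table, then drop write and try to seal it ... */
	mprotect(tbl, TABLE_LEN, PROT_READ);
	/* under the opt-in model this fails unless the library passed MAP_SEALABLE */
	return syscall(__NR_mseal, tbl, TABLE_LEN, 0);
}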
  
Jeff Xu Jan. 24, 2024, 10:49 p.m. UTC | #4
On Wed, Jan 24, 2024 at 12:06 PM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * Jeff Xu <jeffxu@chromium.org> [240124 12:50]:
> > On Tue, Jan 23, 2024 at 10:15 AM Liam R. Howlett
> > <Liam.Howlett@oracle.com> wrote:
> > >
> > > * jeffxu@chromium.org <jeffxu@chromium.org> [240122 10:29]:
> > > > From: Jeff Xu <jeffxu@chromium.org>
> > > >
> > > > The new mseal() is an syscall on 64 bit CPU, and with
> > > > following signature:
> > > >
> > > > int mseal(void addr, size_t len, unsigned long flags)
> > > > addr/len: memory range.
> > > > flags: reserved.
> > > >
> > > > mseal() blocks following operations for the given memory range.
> > > >
> > > > 1> Unmapping, moving to another location, and shrinking the size,
> > > >    via munmap() and mremap(), can leave an empty space, therefore can
> > > >    be replaced with a VMA with a new set of attributes.
> > > >
> > > > 2> Moving or expanding a different VMA into the current location,
> > > >    via mremap().
> > > >
> > > > 3> Modifying a VMA via mmap(MAP_FIXED).
> > > >
> > > > 4> Size expansion, via mremap(), does not appear to pose any specific
> > > >    risks to sealed VMAs. It is included anyway because the use case is
> > > >    unclear. In any case, users can rely on merging to expand a sealed VMA.
> > > >
> > > > 5> mprotect() and pkey_mprotect().
> > > >
> > > > 6> Some destructive madvice() behaviors (e.g. MADV_DONTNEED) for anonymous
> > > >    memory, when users don't have write permission to the memory. Those
> > > >    behaviors can alter region contents by discarding pages, effectively a
> > > >    memset(0) for anonymous memory.
> > > >
> > > > In addition: mmap() has two related changes.
> > > >
> > > > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > > > the map sealed since creation.
> > > >
> > > > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > > > the map as sealable. A map created without MAP_SEALABLE will not support
> > > > sealing, i.e. mseal() will fail.
> > > >
> > > > Applications that don't care about sealing will expect their behavior
> > > > unchanged. For those that need sealing support, opt-in by adding
> > > > MAP_SEALABLE in mmap().
> > > >
> > > > I would like to formally acknowledge the valuable contributions
> > > > received during the RFC process, which were instrumental
> > > > in shaping this patch:
> > > >
> > > > Jann Horn: raising awareness and providing valuable insights on the
> > > > destructive madvise operations.
> > > > Linus Torvalds: assisting in defining system call signature and scope.
> > > > Pedro Falcato: suggesting sealing in the mmap().
> > > > Theo de Raadt: sharing the experiences and insights gained from
> > > > implementing mimmutable() in OpenBSD.
> > > >
> > > > Finally, the idea that inspired this patch comes from Stephen Röttger’s
> > > > work in Chrome V8 CFI.
> > > >
> > > > Signed-off-by: Jeff Xu <jeffxu@chromium.org>
> > > > ---
> > > >  include/linux/mm.h                     |  48 ++++
> > > >  include/linux/syscalls.h               |   1 +
> > > >  include/uapi/asm-generic/mman-common.h |   8 +
> > > >  mm/Makefile                            |   4 +
> > > >  mm/madvise.c                           |  12 +
> > > >  mm/mmap.c                              |  27 ++
> > > >  mm/mprotect.c                          |  10 +
> > > >  mm/mremap.c                            |  31 +++
> > > >  mm/mseal.c                             | 343 +++++++++++++++++++++++++
> > > >  9 files changed, 484 insertions(+)
> > > >  create mode 100644 mm/mseal.c
> > > >
>
> ...
>
> > >
> > > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > > index b78e83d351d2..32bc2179aed0 100644
> > > > --- a/mm/mmap.c
> > > > +++ b/mm/mmap.c
> > > > @@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >  {
> > > >       struct mm_struct *mm = current->mm;
> > > >       int pkey = 0;
> > > > +     unsigned long vm_seals;
> > > >
> > > >       *populate = 0;
> > > >
> > > > @@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >       if (flags & MAP_FIXED_NOREPLACE)
> > > >               flags |= MAP_FIXED;
> > > >
> > > > +     vm_seals = get_mmap_seals(prot, flags);
> > > > +
> > > >       if (!(flags & MAP_FIXED))
> > > >               addr = round_hint_to_min(addr);
> > > >
> > > > @@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >                       return -EEXIST;
> > > >       }
> > > >
> > > > +     /*
> > > > +      * Check if the address range is sealed for do_mmap().
> > > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > > +      */
> > > > +     if (!can_modify_mm(mm, addr, addr + len))
> > > > +             return -EPERM;
> > > > +
> > >
> > > This is called after get_unmapped_area(), so this area is either going
> > > to be MAP_FIXED and return the "hint" addr or it's going to be empty.
> > > You can probably avoid walking the VMAs in the non-FIXED case.  This
> > > would remove the overhead of your check in the most common case.
> > >
> >
> > Thanks for flagging this!
> >
> > I wasn't entirely sure about get_unmapped_area() after reading the
> > code,  It calls a few variants of  arch_get_unmapped_area_xxx()
> > functions.
> >
> > e.g. it seems like the generic_get_unmapped_area_topdown  is returning
> > a non-null address even when MAP_FIXED is set to false
> >
> >  ----------------------------------------------------------------------------
> > generic_get_unmapped_area_topdown (
> > ...
> > if (flags & MAP_FIXED)  <-- MAP_FIXED case.
> > return addr;
> >
> > /* requesting a specific address */
> > if (addr) {  <--  note not MAP_FIXED
> > addr = PAGE_ALIGN(addr);
> > vma = find_vma_prev(mm, addr, &prev);
> > if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> > (!vma || addr + len <= vm_start_gap(vma)) &&
> > (!prev || addr >= vm_end_gap(prev)))
> > return addr;                         <--- note return not null addr here.
> > }
>
> Sorry, I was not clear.  Either MAP_FIXED will just return the addr, or
> the addr that is returned has no VMA (the memory area is empty).  This
> function finds a gap to place your data and the gap is (at least) as big
> as you want (usually oversized, but that doesn't matter here).  The
> mmap_lock is held, so we know it's going to remain empty.
>
> So there are two scenarios:
> 1. MAP_FIXED which may or may not have a VMA over the range
> 2. An address which has no VMA over the range
>
> Anyways, this is probably not needed, because of what I say later.
>
> >
> > ----------------------------------------------------------------------------
> > I thought also about adding a check for addr != null  instead, i.e.
> > if (addr && !can_modify_mm(mm, addr, addr + len))
> >     return -EPERM;
> > }
> >
> > But using MAP_FIXED to allocate memory at address 0 is legit, e.g.
> > allocating a PROT_NONE | PROT_SEAL at address 0.
> >
> > Another factor to consider is: what will be the cost of passing an
> > empty address into can_modify_mm() ? the search will be 0 to len.
>
> Almost always zero VMAs to check, it's not worth optimising.  The maple
> tree will walk to the first range and it'll be 0 to some very large
> number, most likely.
>
Got you.

> >
> > > >       if (prot == PROT_EXEC) {
> > > >               pkey = execute_only_pkey(mm);
> > > >               if (pkey < 0)
> > > > @@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >                       vm_flags |= VM_NORESERVE;
> > > >       }
> > > >
> > > > +     vm_flags |= vm_seals;
> > > >       addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
> > > >       if (!IS_ERR_VALUE(addr) &&
> > > >           ((vm_flags & VM_LOCKED) ||
> > > > @@ -2679,6 +2690,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
> > > >       if (end == start)
> > > >               return -EINVAL;
> > > >
> > > > +     /*
> > > > +      * Check if memory is sealed before arch_unmap.
> > > > +      * Prevent unmapping a sealed VMA.
> > > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > > +      */
> > > > +     if (!can_modify_mm(mm, start, end))
> > > > +             return -EPERM;
> > > > +
> > >
> > > This function is currently called from mmap_region(), so we are going to
> > > run this check twice as you have it; once in do_mmap() then again in
> > > mmap_region() -> do_vmi_munmap().  This effectively doubles your impact
> > > to MAP_FIXED calls.
> > >
> > Yes. To address this would require a new flag in the do_vmi_munmap(),
> > after passing the first check in mmap(), we could set the flag as false,
> > so do_vmi_munmap() would not check it again.
> >
> > However, this approach was attempted in v1 and V2 of the patch [1] [2],
> > and was strongly opposed by Linus. It was considered as too random and
> > decreased the readability.
>
> Oh yes, I recall that now.  He was not pleased.
>
> >
> > Below is my  text in V2: [3]
> >
> > "When handing the mmap/munmap/mremap/mmap, once the code passed
> > can_modify_mm(), it means the memory area is not sealed, if the code
> > continues to call the other utility functions, we don't need to check
> > the seal again. This is the case for mremap(), the seal of src address
> > and dest address (when applicable) are checked first, later when the
> > code calls  do_vmi_munmap(), it no longer needs to check the seal
> > again."
> >
> > Considering this is the MAP_FIXED case, and maybe that is not used
> > that often in practice, I think this is acceptable performance-wise,
> > unless you know another solution to help this.
>
> Okay, sure, I haven't been yelled at on the ML for a few weeks.  Here
> goes:
>
> do_mmap() will call get_unmapped_area(), which will return an empty area
> (no need to check mseal, I hope - or we have larger issues here) or a
> MAP_FIXED address.
>
> do_mmap() will pass the address along to mmap_region()
>
> mmap_region() will then call do_vmi_munmap() - which will either remove
> the VMA(s) in the way, or do nothing... or error.
>
> mmap_region() will return -ENOMEM in the case of an error returned from
> do_vmi_munmap() today.  Change that to return the error code, and let
> do_vmi_munmap() do the mseal check.  If mseal check fails then the error
> is propagated the same way -ENOMEM is propagated today.
>
> This relies on the fact that we only really need to check the mseal
> status of existing VMAs and we can only really map over existing VMAs by
> first munmapping them.
>
> It does move your error return to much later in the call stack, but it
> removes duplicate work and less code.  Considering this should be a rare
> event, I don't think that's of concern.
>
I think that is a great idea. I will try to implement it and get back
to you on this.

> >
> > [1] https://lore.kernel.org/lkml/20231016143828.647848-6-jeffxu@chromium.org/
> > [2] https://lore.kernel.org/lkml/20231017090815.1067790-6-jeffxu@chromium.org/
> > [3] https://lore.kernel.org/lkml/CALmYWFux2m=9189Gs0o8-xhPNW4dnFvtqj7ptcT5QvzxVgfvYQ@mail.gmail.com/
> >
> >
> > > >        /* arch_unmap() might do unmaps itself.  */
> > > >       arch_unmap(mm, start, end);
> > > >
> > > > @@ -3102,6 +3121,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > > >  {
> > > >       struct mm_struct *mm = vma->vm_mm;
> > > >
> > > > +     /*
> > > > +      * Check if memory is sealed before arch_unmap.
> > > > +      * Prevent unmapping a sealed VMA.
> > > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > > +      */
> > > > +     if (!can_modify_mm(mm, start, end))
> > > > +             return -EPERM;
> > > > +
> > >
> > > I am sure you've looked at the callers, from what I found there are two:
> > >
> > > The brk call uses this function, so it may check more than one VMA in
> > > that path.  Will the brk VMAs potentially be msealed?  I guess someone
> > > could do that?
> > >
> > > The other place this is used is in ipc/shm.c where the start/end is just
> > > the vma start/end, so we only really need to check that one vma.
> > >
> > Yes. Those two cases were looked at, and was the main reason that
> > MAP_SEALABLE is introduced as part of mmap().
> >
> > As in the open discussion of the V3/V4 patch: [4] [5]
> >
> > [4] https://lore.kernel.org/linux-mm/20231212231706.2680890-1-jeffxu@chromium.org/T/
> > [5] https://lore.kernel.org/linux-mm/20240104185138.169307-3-jeffxu@chromium.org/T/
> >
> > Copied here for ease of reading:
> > ---------------------------------------------------------------------------------------------
> >
> > During the development of V3, I had new questions and thoughts and
> > wished to discuss.
> >
> > 1> shm/aio
> > From reading the code, it seems to me that aio/shm can mmap/munmap
> > maps on behalf of userspace, e.g. ksys_shmdt() in shm.c. The lifetime
> > of those mapping are not tied to the lifetime of the process. If those
> > memories are sealed from userspace, then unmap will fail. This isn’t a
> > huge problem, since the memory will eventually be freed at exit or
> > exec. However, it feels like the solution is not complete, because of
> > the leaks in VMA address space during the lifetime of the process.
> >
> > 2> Brk (heap/stack)
> > Currently, userspace applications can seal parts of the heap by
> > calling malloc() and mseal(). This raises the question of what the
> > expected behavior is when sealing the heap is attempted.
> >
> > let's assume following calls from user space:
> >
> > ptr = malloc(size);
> > mprotect(ptr, size, RO);
> > mseal(ptr, size, SEAL_PROT_PKEY);
> > free(ptr);
> >
> > Technically, before mseal() is added, the user can change the
> > protection of the heap by calling mprotect(RO). As long as the user
> > changes the protection back to RW before free(), the memory can be
> > reused.
> >
> > Adding mseal() into picture, however, the heap is then sealed
> > partially, user can still free it, but the memory remains to be RO,
> > and the result of brk-shrink is nondeterministic, depending on if
> > munmap() will try to free the sealed memory.(brk uses munmap to shrink
> > the heap).
> >
> > 3> Above two cases led to the third topic:
> > There one option to address the problem mentioned above.
> > Option 1:  A “MAP_SEALABLE” flag in mmap().
> > If a map is created without this flag, the mseal() operation will
> > fail. Applications that are not concerned with sealing will expect
> > their behavior to be unchanged. For those that are concerned, adding a
> > flag at mmap time to opt in is not difficult. For the short term, this
> > solves problems 1 and 2 above. The memory in shm/aio/brk will not have
> > the MAP_SEALABLE flag at mmap(), and the same is true for the heap.
> >
> > If we choose not to go with path, all mapping will by default
> > sealable. We could document above mentioned limitations so devs are
> > more careful at the time to choose what memory to seal. I think
> > deny of service through mseal() by attacker is probably not a concern,
> > if attackers have access to mseal() and unsealed memory, then they can
> > also do other harmful thing to the memory, such as munmap, etc.
> >
> > 4>
> > I think it might be possible to seal the stack or other special
> > mappings created at runtime (vdso, vsyscall, vvar). This means we can
> > enforce and seal W^X for certain types of application. For instance,
> > the stack is typically used in read-write mode, but in some cases, it
> > can become executable. To defend against unintented addition of
> > executable bit to stack, we could let the application to seal it.
> >
> > Sealing the heap (for adding X) requires special handling, since the
> > heap can shrink, and shrink is implemented through munmap().
> >
> > Indeed, it might be possible that all virtual memory accessible to user
> > space, regardless of its usage pattern, could be sealed. However, this
> > would require additional research and development work.
> >
> > -----------------------------------------------------------------------------------------------------
> >
> >
> > > Is there a way to avoid walking the tree for the single known VMA?
> > Are you thinking about a hash table to record brk VMA ? or a dedicated
> > tree for sealed VMAs? possible. code will be a lot more though.
>
> No, instead of calling a loop to walk the tree to find the same VMA,
> just check the single VMA.
>
> ipc/shm.c: do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end...
>
> So if you just check the single VMA then we don't need to worry about
> re-walking.
>
If you mean: have a new function do_single_vma_munmap() which checks the
sealing flag and does the munmap for one VMA, and is used by ipc/shm.c,
then yes, we can have that.
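
Something along these lines, as a hypothetical sketch only (it assumes
the sealing check from mm/mseal.c is reachable here, which this patch
does not provide):

/*
 * Hypothetical helper, not in this patch: ipc/shm.c unmaps exactly one
 * VMA, so check that single VMA instead of walking the range.
 */
static int do_single_vma_munmap(struct vma_iterator *vmi,
				struct vm_area_struct *vma,
				struct list_head *uf, bool unlock)
{
	struct mm_struct *mm = vma->vm_mm;
	unsigned long start = vma->vm_start;
	unsigned long end = vma->vm_end;

	/* O(1) sealing check (VM_SEALED only exists on CONFIG_64BIT here) */
	if (vma->vm_flags & VM_SEALED)
		return -EPERM;

	arch_unmap(mm, start, end);
	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
}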

> I think this is a moot point if my outline above works.
>
Yes, I agree. That has a performance impact only for shm. We can do
this optimization as a follow-up patch set.

> >
> > > Does
> > > it make sense to deny mseal writing to brk VMAs?
> > >
> > Yes. It makes sense. Since brk memory doesn't have MAP_SEALABLE at
> > this moment,  mseal will fail even if someone tries to seal it.
> > Sealing brk memory would require more research and design.
> >
> > >
> > > >       arch_unmap(mm, start, end);
> > > >       return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
> > > >  }
> > >
> > > ...
> > >
> > >
> > > Ah, I see them now.  Yes, this is what I expected to see.  Does this not
> > > have any impact on mmap/munmap benchmarks?
> > >
> > Thanks for bringing this topic! I'm kind of hoping for performance related
> > questions.
> >
> > I haven't done any benchmarks, due to lack of knowledge on how those
> > tests are usually performed.
> >
> > For mseal(), since it will be called only in a few places (libc/elf
> > loading),  I'm expecting no real world  impact, and that can be
> > measured when we have implementations in place in libc and
> > elf-loading.
> >
> > The hot path could be on mmap() and munmap(), as you pointed out.
> >
> > mmap() was discussed above (adding a check for FIXED )
>
> That can probably be dropped as discussed above.
>
Ok.

> >
> > munmap(), There is a cost in calling can_modify_mm(). I thought about
> > calling can_modify_vma in do_vmi_align_munmap, but there are two reasons:
> >
> > a. it skips arch_unmap, and arch_unmap can unmap the memory.
> > b. Current logic of checking sealing is: if one of VMAs between start to end is
> > sealed, mprotect/mmap/munmap will fail without any of VMAs being modified.
> > This means we will need additional walking over the VMA tree.
>
> Certainly, but it comes at a cost.  I was just surprised with the
> statement that there is no negative from the previous discussion, as I
> replied to the cover letter.
>
Ah, the context of my "no downside" comment was specifically about
"having the PROT_SEAL flag in mmap()", i.e. combining mmap() and
mseal() in one call.


> > > > +/*
> > > > + * Apply sealing.
> > > > + */
> > > > +static int apply_mm_seal(unsigned long start, unsigned long end)
> > > > +{
> > > > +     unsigned long nstart;
> > > > +     struct vm_area_struct *vma, *prev;
> > > > +
> > > > +     VMA_ITERATOR(vmi, current->mm, start);
> > > > +
> > > > +     vma = vma_iter_load(&vmi);
> > > > +     /*
> > > > +      * Note: check_mm_seal should already checked ENOMEM case.
> > > > +      * so vma should not be null, same for the other ENOMEM cases.
> > >
> > > The start to end is contiguous, right?
> > Yes.  check_mm_seal makes sure the start to end is contiguous.
> >
> > >
> > > > +      */
> > > > +     prev = vma_prev(&vmi);
> > > > +     if (start > vma->vm_start)
> > > > +             prev = vma;
> > > > +
> > > > +     nstart = start;
> > > > +     for_each_vma_range(vmi, vma, end) {
> > > > +             int error;
> > > > +             unsigned long tmp;
> > > > +             vm_flags_t newflags;
> > > > +
> > > > +             newflags = vma->vm_flags | VM_SEALED;
> > > > +             tmp = vma->vm_end;
> > > > +             if (tmp > end)
> > > > +                     tmp = end;
> > > > +             error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> > > > +             if (error)
> > > > +                     return error;
> > >
> > > > +             tmp = vma_iter_end(&vmi);
> > > > +             nstart = tmp;
> > >
> > > You set tmp before using it unconditionally to vma->vm_end above, so you
> > > can set nstart = vma_iter_end(&vmi) here.  But, also we know the
> > > VMAs are contiguous from your check_mm_seal() call, so we know nstart ==
> > > vma->vm_start on the next loop.
> > The code is almost the same as in mlock.c, except that we know the
> > VMAs are contiguous, so we don't check for some of the ENOMEM cases.
> > There might be ways to improve this code. For ease of code review, I
> > choose a consistency (same as mlock)  for now.
>
> Yes, I thought that was the case.  tmp is updated in that code to ensure
> we have reached the end of the range without a gap at the end.  Since
> you already checked that the VMAs are contiguous, the last tmp update in
> your loop is not needed.
>
> Thanks,
> Liam
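
[In other words, a sketch of the simplified loop being suggested, based
on the apply_mm_seal() in this patch; untested:]

	nstart = start;
	for_each_vma_range(vmi, vma, end) {
		int error;
		unsigned long tmp = min(vma->vm_end, end);
		vm_flags_t newflags = vma->vm_flags | VM_SEALED;

		error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
		if (error)
			return error;

		/* VMAs are contiguous (per check_mm_seal), no gap tracking needed */
		nstart = vma_iter_end(&vmi);
	}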
  
Jeff Xu Jan. 25, 2024, 2:04 a.m. UTC | #5
On Wed, Jan 24, 2024 at 2:49 PM Jeff Xu <jeffxu@chromium.org> wrote:
>
> On Wed, Jan 24, 2024 at 12:06 PM Liam R. Howlett
> <Liam.Howlett@oracle.com> wrote:
> >
> > > Considering this is the MAP_FIXED case, and maybe that is not used
> > > that often in practice, I think this is acceptable performance-wise,
> > > unless you know another solution to help this.
> >
> > Okay, sure, I haven't been yelled at on the ML for a few weeks.  Here
> > goes:
> >
> > do_mmap() will call get_unmapped_area(), which will return an empty area
> > (no need to check mseal, I hope - or we have larger issues here) or a
> > MAP_FIXED address.
> >
> > do_mmap() will pass the address along to mmap_region()
> >
> > mmap_region() will then call do_vmi_munmap() - which will either remove
> > the VMA(s) in the way, or do nothing... or error.
> >
> > mmap_region() will return -ENOMEM in the case of an error returned from
> > do_vmi_munmap() today.  Change that to return the error code, and let
> > do_vmi_munmap() do the mseal check.  If mseal check fails then the error
> > is propagated the same way -ENOMEM is propagated today.
> >
> > This relies on the fact that we only really need to check the mseal
> > status of existing VMAs and we can only really map over existing VMAs by
> > first munmapping them.
> >
> > It does move your error return to much later in the call stack, but it
> > removes duplicate work and less code.  Considering this should be a rare
> > event, I don't think that's of concern.
> >
> I think that is a great idea, I will try to implement it and get back
> to you on this.
>
I confirm this works. I will add that in the next version. Thanks for
the suggestion.

-Jeff
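
[For reference, a rough sketch of what the change Liam outlines would
look like in mmap_region(); hedged, the exact form would land in the
next version of the series:]

	/*
	 * mmap_region(): unmap any existing mapping in the area, and let
	 * do_vmi_munmap() do the sealing check; propagate its error code
	 * (e.g. -EPERM) instead of collapsing everything to -ENOMEM.
	 */
	error = do_vmi_munmap(&vmi, mm, addr, len, uf, false);
	if (error)
		return error;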
  

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..bdd9a53e9291 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -328,6 +328,14 @@  extern unsigned int kobjsize(const void *objp);
 #define VM_HIGH_ARCH_5	BIT(VM_HIGH_ARCH_BIT_5)
 #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
 
+#ifdef CONFIG_64BIT
+/* VM is sealable, in vm_flags */
+#define VM_SEALABLE	_BITUL(63)
+
+/* VM is sealed, in vm_flags */
+#define VM_SEALED	_BITUL(62)
+#endif
+
 #ifdef CONFIG_ARCH_HAS_PKEYS
 # define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
 # define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* A protection key is a 4-bit value */
@@ -4182,4 +4190,44 @@  static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+#ifdef CONFIG_64BIT
+static inline int can_do_mseal(unsigned long flags)
+{
+	if (flags)
+		return -EINVAL;
+
+	return 0;
+}
+
+bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+		unsigned long end);
+bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+		unsigned long end, int behavior);
+unsigned long get_mmap_seals(unsigned long prot,
+		unsigned long flags);
+#else
+static inline int can_do_mseal(unsigned long flags)
+{
+	return -EPERM;
+}
+
+static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+		unsigned long end)
+{
+	return true;
+}
+
+static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+		unsigned long end, int behavior)
+{
+	return true;
+}
+
+static inline unsigned long get_mmap_seals(unsigned long prot,
+	unsigned long flags)
+{
+	return 0;
+}
+#endif
+
 #endif /* _LINUX_MM_H */
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index cdba4d0c6d4a..2d44e0d99e37 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -820,6 +820,7 @@  asmlinkage long sys_process_mrelease(int pidfd, unsigned int flags);
 asmlinkage long sys_remap_file_pages(unsigned long start, unsigned long size,
 			unsigned long prot, unsigned long pgoff,
 			unsigned long flags);
+asmlinkage long sys_mseal(unsigned long start, size_t len, unsigned long flags);
 asmlinkage long sys_mbind(unsigned long start, unsigned long len,
 				unsigned long mode,
 				const unsigned long __user *nmask,
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..3ca4d694a621 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -17,6 +17,11 @@ 
 #define PROT_GROWSDOWN	0x01000000	/* mprotect flag: extend change to start of growsdown vma */
 #define PROT_GROWSUP	0x02000000	/* mprotect flag: extend change to end of growsup vma */
 
+/*
+ * The PROT_SEAL defines memory sealing in the prot argument of mmap().
+ */
+#define PROT_SEAL	0x04000000	/* _BITUL(26) */
+
 /* 0x01 - 0x03 are defined in linux/mman.h */
 #define MAP_TYPE	0x0f		/* Mask for type of mapping */
 #define MAP_FIXED	0x10		/* Interpret addr exactly */
@@ -33,6 +38,9 @@ 
 #define MAP_UNINITIALIZED 0x4000000	/* For anonymous mmap, memory could be
 					 * uninitialized */
 
+/* map is sealable */
+#define MAP_SEALABLE	0x8000000	/* _BITUL(27) */
+
 /*
  * Flags for mlock
  */
diff --git a/mm/Makefile b/mm/Makefile
index e4b5b75aaec9..cbae83f74642 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -43,6 +43,10 @@  ifdef CONFIG_CROSS_MEMORY_ATTACH
 mmu-$(CONFIG_MMU)	+= process_vm_access.o
 endif
 
+ifdef CONFIG_64BIT
+mmu-$(CONFIG_MMU)	+= mseal.o
+endif
+
 obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   maccess.o page-writeback.o folio-compat.o \
 			   readahead.o swap.o truncate.o vmscan.o shrinker.o \
diff --git a/mm/madvise.c b/mm/madvise.c
index 912155a94ed5..41eb5163ed1f 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1393,6 +1393,7 @@  int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  *  -EIO    - an I/O error occurred while paging in data.
  *  -EBADF  - map exists, but area maps something that isn't a file.
  *  -EAGAIN - a kernel resource was temporarily unavailable.
+ *  -EACCES - memory is sealed.
  */
 int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior)
 {
@@ -1436,10 +1437,21 @@  int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
 	start = untagged_addr_remote(mm, start);
 	end = start + len;
 
+	/*
+	 * Check if the address range is sealed for do_madvise().
+	 * can_modify_mm_madv assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm_madv(mm, start, end, behavior)) {
+		error = -EPERM;
+		goto out;
+	}
+
 	blk_start_plug(&plug);
 	error = madvise_walk_vmas(mm, start, end, behavior,
 			madvise_vma_behavior);
 	blk_finish_plug(&plug);
+
+out:
 	if (write)
 		mmap_write_unlock(mm);
 	else
diff --git a/mm/mmap.c b/mm/mmap.c
index b78e83d351d2..32bc2179aed0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1213,6 +1213,7 @@  unsigned long do_mmap(struct file *file, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	int pkey = 0;
+	unsigned long vm_seals;
 
 	*populate = 0;
 
@@ -1233,6 +1234,8 @@  unsigned long do_mmap(struct file *file, unsigned long addr,
 	if (flags & MAP_FIXED_NOREPLACE)
 		flags |= MAP_FIXED;
 
+	vm_seals = get_mmap_seals(prot, flags);
+
 	if (!(flags & MAP_FIXED))
 		addr = round_hint_to_min(addr);
 
@@ -1261,6 +1264,13 @@  unsigned long do_mmap(struct file *file, unsigned long addr,
 			return -EEXIST;
 	}
 
+	/*
+	 * Check if the address range is sealed for do_mmap().
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, addr, addr + len))
+		return -EPERM;
+
 	if (prot == PROT_EXEC) {
 		pkey = execute_only_pkey(mm);
 		if (pkey < 0)
@@ -1376,6 +1386,7 @@  unsigned long do_mmap(struct file *file, unsigned long addr,
 			vm_flags |= VM_NORESERVE;
 	}
 
+	vm_flags |= vm_seals;
 	addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
 	if (!IS_ERR_VALUE(addr) &&
 	    ((vm_flags & VM_LOCKED) ||
@@ -2679,6 +2690,14 @@  int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 	if (end == start)
 		return -EINVAL;
 
+	/*
+	 * Check if memory is sealed before arch_unmap.
+	 * Prevent unmapping a sealed VMA.
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, start, end))
+		return -EPERM;
+
 	 /* arch_unmap() might do unmaps itself.  */
 	arch_unmap(mm, start, end);
 
@@ -3102,6 +3121,14 @@  int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
+	/*
+	 * Check if memory is sealed before arch_unmap.
+	 * Prevent unmapping a sealed VMA.
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, start, end))
+		return -EPERM;
+
 	arch_unmap(mm, start, end);
 	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 81991102f785..5f0f716bf4ae 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,6 +32,7 @@ 
 #include <linux/sched/sysctl.h>
 #include <linux/userfaultfd_k.h>
 #include <linux/memory-tiers.h>
+#include <uapi/linux/mman.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -743,6 +744,15 @@  static int do_mprotect_pkey(unsigned long start, size_t len,
 		}
 	}
 
+	/*
+	 * checking if memory is sealed.
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(current->mm, start, end)) {
+		error = -EPERM;
+		goto out;
+	}
+
 	prev = vma_prev(&vmi);
 	if (start > vma->vm_start)
 		prev = vma;
diff --git a/mm/mremap.c b/mm/mremap.c
index 38d98465f3d8..d69b438dcf83 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -902,7 +902,25 @@  static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 	if ((mm->map_count + 2) >= sysctl_max_map_count - 3)
 		return -ENOMEM;
 
+	/*
+	 * In mremap_to().
+	 * Move a VMA to another location, check if src addr is sealed.
+	 *
+	 * Place can_modify_mm here because mremap_to()
+	 * does its own checking for address range, and we only
+	 * check the sealing after passing those checks.
+	 *
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, addr, addr + old_len))
+		return -EPERM;
+
 	if (flags & MREMAP_FIXED) {
+		/*
+		 * In mremap_to().
+		 * VMA is moved to dst address, and munmap dst first.
+		 * do_munmap will check if dst is sealed.
+		 */
 		ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
 		if (ret)
 			goto out;
@@ -1061,6 +1079,19 @@  SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 		goto out;
 	}
 
+	/*
+	 * Below is shrink/expand case (not mremap_to())
+	 * Check if src address is sealed, if so, reject.
+	 * In other words, prevent shrinking or expanding a sealed VMA.
+	 *
+	 * Place can_modify_mm here so we can keep the logic related to
+	 * shrink/expand together.
+	 */
+	if (!can_modify_mm(mm, addr, addr + old_len)) {
+		ret = -EPERM;
+		goto out;
+	}
+
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
diff --git a/mm/mseal.c b/mm/mseal.c
new file mode 100644
index 000000000000..abc00c0b9895
--- /dev/null
+++ b/mm/mseal.c
@@ -0,0 +1,343 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  Implement mseal() syscall.
+ *
+ *  Copyright (c) 2023,2024 Google, Inc.
+ *
+ *  Author: Jeff Xu <jeffxu@chromium.org>
+ */
+
+#include <linux/mempolicy.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/mm_inline.h>
+#include <linux/mmu_context.h>
+#include <linux/syscalls.h>
+#include <linux/sched.h>
+#include "internal.h"
+
+static inline bool vma_is_sealed(struct vm_area_struct *vma)
+{
+	return (vma->vm_flags & VM_SEALED);
+}
+
+static inline bool vma_is_sealable(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & VM_SEALABLE;
+}
+
+static inline void set_vma_sealed(struct vm_area_struct *vma)
+{
+	vm_flags_set(vma, VM_SEALED);
+}
+
+/*
+ * Check if a vma is sealed for modification.
+ * Return true if modification is allowed.
+ */
+static bool can_modify_vma(struct vm_area_struct *vma)
+{
+	if (vma_is_sealed(vma))
+		return false;
+
+	return true;
+}
+
+/* madvise behavior values are not bit flags; compare each explicitly. */
+static bool is_madv_discard(int behavior)
+{
+	return	behavior == MADV_FREE || behavior == MADV_DONTNEED ||
+		behavior == MADV_DONTNEED_LOCKED || behavior == MADV_REMOVE ||
+		behavior == MADV_DONTFORK || behavior == MADV_WIPEONFORK;
+}
+
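+/*
+ * True for private anonymous mappings that are not writable, either
+ * because PROT_WRITE is absent or the pkey does not permit writes.
+ */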
+static bool is_ro_anon(struct vm_area_struct *vma)
+{
+	/* check anonymous mapping. */
+	if (vma->vm_file || vma->vm_flags & VM_SHARED)
+		return false;
+
+	/*
+	 * Check for non-writable:
+	 * PROT is read-only, or the pkey does not permit writes.
+	 */
+	if (!(vma->vm_flags & VM_WRITE) ||
+		!arch_vma_access_permitted(vma, true, false, false))
+		return true;
+
+	return false;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if modification is allowed.
+ */
+bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma;
+
+	VMA_ITERATOR(vmi, mm, start);
+
+	/* going through each vma to check. */
+	for_each_vma_range(vmi, vma, end) {
+		if (!can_modify_vma(vma))
+			return false;
+	}
+
+	/* Allow by default. */
+	return true;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified by madvise.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if modification is allowed.
+ */
+bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end,
+		int behavior)
+{
+	struct vm_area_struct *vma;
+
+	VMA_ITERATOR(vmi, mm, start);
+
+	if (!is_madv_discard(behavior))
+		return true;
+
+	/* going through each vma to check. */
+	for_each_vma_range(vmi, vma, end)
+		if (is_ro_anon(vma) && !can_modify_vma(vma))
+			return false;
+
+	/* Allow by default. */
+	return true;
+}
+
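+/*
+ * Translate mmap()'s PROT_SEAL and MAP_SEALABLE into the corresponding
+ * VM_SEALED / VM_SEALABLE vm_flags for the new mapping.
+ */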
+unsigned long get_mmap_seals(unsigned long prot,
+		unsigned long flags)
+{
+	unsigned long vm_seals;
+
+	if (prot & PROT_SEAL)
+		vm_seals = VM_SEALED | VM_SEALABLE;
+	else
+		vm_seals = (flags & MAP_SEALABLE) ? VM_SEALABLE : 0;
+
+	return vm_seals;
+}
+
+/*
+ * Check if a seal type can be added to VMA.
+ */
+static bool can_add_vma_seal(struct vm_area_struct *vma)
+{
+	/* if map is not sealable, reject. */
+	if (!vma_is_sealable(vma))
+		return false;
+
+	return true;
+}
+
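+/*
+ * Adjust (split/merge) the VMA so the [start, end) portion can be
+ * updated on its own, then mark it VM_SEALED.
+ */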
+static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		struct vm_area_struct **prev, unsigned long start,
+		unsigned long end, vm_flags_t newflags)
+{
+	int ret = 0;
+	vm_flags_t oldflags = vma->vm_flags;
+
+	if (newflags == oldflags)
+		goto out;
+
+	vma = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto out;
+	}
+
+	set_vma_sealed(vma);
+out:
+	*prev = vma;
+	return ret;
+}
+
+/*
+ * Check for do_mseal:
+ * 1> start is part of a valid vma.
+ * 2> end is part of a valid vma.
+ * 3> No gap (unallocated address) between start and end.
+ * 4> map is sealable.
+ */
+static int check_mm_seal(unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma;
+	unsigned long nstart = start;
+
+	VMA_ITERATOR(vmi, current->mm, start);
+
+	/* going through each vma to check. */
+	for_each_vma_range(vmi, vma, end) {
+		if (vma->vm_start > nstart)
+			/* unallocated memory found. */
+			return -ENOMEM;
+
+		if (!can_add_vma_seal(vma))
+			return -EACCES;
+
+		if (vma->vm_end >= end)
+			return 0;
+
+		nstart = vma->vm_end;
+	}
+
+	return -ENOMEM;
+}
+
+/*
+ * Apply sealing to [start, end); the range must have passed check_mm_seal().
+ */
+static int apply_mm_seal(unsigned long start, unsigned long end)
+{
+	unsigned long nstart;
+	struct vm_area_struct *vma, *prev;
+
+	VMA_ITERATOR(vmi, current->mm, start);
+
+	vma = vma_iter_load(&vmi);
+	/*
+	 * Note: check_mm_seal() should have already checked the ENOMEM case,
+	 * so vma should not be NULL here; the same holds for the other
+	 * ENOMEM cases below.
+	 */
+	prev = vma_prev(&vmi);
+	if (start > vma->vm_start)
+		prev = vma;
+
+	nstart = start;
+	for_each_vma_range(vmi, vma, end) {
+		int error;
+		unsigned long tmp;
+		vm_flags_t newflags;
+
+		newflags = vma->vm_flags | VM_SEALED;
+		tmp = vma->vm_end;
+		if (tmp > end)
+			tmp = end;
+		error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
+		if (error)
+			return error;
+		tmp = vma_iter_end(&vmi);
+		nstart = tmp;
+	}
+
+	return 0;
+}
+
+/*
+ * mseal(2) seals the VMA's metadata from
+ * selected syscalls.
+ *
+ * addr/len: VM address range.
+ *
+ *  The address range by addr/len must meet:
+ *   start (addr) must be in a valid VMA.
+ *   end (addr + len) must be in a valid VMA.
+ *   no gap (unallocated memory) between start and end.
+ *   start (addr) must be page aligned.
+ *
+ *  len: len will be page aligned implicitly.
+ *
+ *   Below VMA operations are blocked after sealing.
+ *   1> Unmapping, moving to another location, and shrinking
+ *	the size, via munmap() and mremap(), can leave an empty
+ *	space, therefore can be replaced with a VMA with a new
+ *	set of attributes.
+ *   2> Moving or expanding a different vma into the current location,
+ *	via mremap().
+ *   3> Modifying a VMA via mmap(MAP_FIXED).
+ *   4> Size expansion, via mremap(), does not appear to pose any
+ *	specific risks to sealed VMAs. It is included anyway because
+ *	the use case is unclear. In any case, users can rely on
+ *	merging to expand a sealed VMA.
+ *   5> mprotect and pkey_mprotect.
+ *   6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED)
+ *      for anonymous memory, when users don't have write permission to the
+ *	memory. Those behaviors can alter region contents by discarding pages,
+ *	effectively a memset(0) for anonymous memory.
+ *
+ *  flags: reserved.
+ *
+ * return values:
+ *  zero: success.
+ *  -EINVAL:
+ *   invalid input flags.
+ *   start address is not page aligned.
+ *   address range (start + len) overflow.
+ *  -ENOMEM:
+ *   addr is not a valid address (not allocated).
+ *   end (start + len) is not a valid address.
+ *   a gap (unallocated memory) between start and end.
+ *  -EACCES:
+ *   the mapping is not sealable (MAP_SEALABLE is not set).
+ *  -EPERM:
+ *   sealing is not supported on 32-bit architectures.
+ * Note:
+ *  A user can call mseal(2) multiple times; sealing already
+ *  sealed memory is a no-op (no error is returned).
+ *
+ *  unseal() is not supported.
+ */
+static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
+{
+	size_t len;
+	int ret = 0;
+	unsigned long end;
+	struct mm_struct *mm = current->mm;
+
+	ret = can_do_mseal(flags);
+	if (ret)
+		return ret;
+
+	start = untagged_addr(start);
+	if (!PAGE_ALIGNED(start))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len_in);
+	/* Check to see whether len was rounded up from small -ve to zero. */
+	if (len_in && !len)
+		return -EINVAL;
+
+	end = start + len;
+	if (end < start)
+		return -EINVAL;
+
+	if (end == start)
+		return 0;
+
+	if (mmap_write_lock_killable(mm))
+		return -EINTR;
+
+	/*
+	 * First pass: check the entire address range. This helps to
+	 * avoid partial sealing when the input range is invalid,
+	 * e.g. on ENOMEM or EACCES errors.
+	 */
+	ret = check_mm_seal(start, end);
+	if (ret)
+		goto out;
+
+	/*
+	 * Second pass: apply sealing. This should succeed unless
+	 * vma_modify_flags() fails, e.g. on a merge/split error or when
+	 * the process reaches the maximum number of supported VMAs;
+	 * those cases should be rare.
+	 */
+	ret = apply_mm_seal(start, end);
+
+out:
+	mmap_write_unlock(mm);
+	return ret;
+}
+
+SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
+		flags)
+{
+	return do_mseal(start, len, flags);
+}
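
For illustration only (not part of the patch), a minimal userspace
sketch of the intended flow, assuming __NR_mseal, PROT_SEAL and
MAP_SEALABLE are provided by uapi headers built from this series:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4096;

	/* Opt in to sealing at map time with MAP_SEALABLE (assumed from this series). */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_SEALABLE, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Seal the range; the flags argument is reserved and must be 0. */
	if (syscall(__NR_mseal, p, len, 0))
		return 1;

	/* Later attempts to modify the sealed VMA are rejected with EPERM. */
	if (mprotect(p, len, PROT_READ) == -1)
		printf("mprotect: %s\n", strerror(errno));

	return 0;
}

Alternatively, passing PROT_SEAL in mmap()'s prot would seal the
mapping at creation, making the explicit mseal() call unnecessary.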