[RESEND,1/1] um: oops on accessing a non-present page in the vmalloc area

Message ID 20240223140435.1240-1-petrtesarik@huaweicloud.com
State New
Series [RESEND,1/1] um: oops on accessing a non-present page in the vmalloc area

Commit Message

Petr Tesarik Feb. 23, 2024, 2:04 p.m. UTC
  From: Petr Tesarik <petr.tesarik1@huawei-partners.com>

If a segmentation fault is caused by accessing an address in the vmalloc
area, check that the target page is present.

Currently, if the kernel hits a guard page in the vmalloc area, UML blindly
assumes that the fault is caused by a stale mapping and will be fixed by
flush_tlb_kernel_vm(). However, no mapping is ever created for a guard page,
so when the faulting instruction is restarted, it causes exactly the same
fault again, effectively creating an infinite loop.
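
This can be reproduced with a small test module (purely illustrative; the
module and message text below are hypothetical) that reads one byte past a
vmalloc() allocation and thereby lands on the guard page which vmalloc
inserts after each area:

/*
 * Hypothetical test module: deliberately touch the guard page that
 * follows a vmalloc() allocation.  Without this patch, UML keeps
 * re-faulting on the access forever; with it, the kernel oopses.
 */
#include <linux/module.h>
#include <linux/vmalloc.h>

static int __init guard_fault_init(void)
{
	char *buf = vmalloc(PAGE_SIZE);

	if (!buf)
		return -ENOMEM;

	/* Offsets 0..PAGE_SIZE-1 are mapped; PAGE_SIZE hits the guard page. */
	pr_info("byte past end: %d\n", buf[PAGE_SIZE]);

	vfree(buf);
	return 0;
}
module_init(guard_fault_init);

MODULE_LICENSE("GPL");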

Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
---
 arch/um/kernel/trap.c | 4 ++++
 1 file changed, 4 insertions(+)
  

Patch

diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index 6d8ae86ae978..d5b85f1bfe33 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -206,11 +206,15 @@  unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
 	int err;
 	int is_write = FAULT_WRITE(fi);
 	unsigned long address = FAULT_ADDRESS(fi);
+	pte_t *pte;
 
 	if (!is_user && regs)
 		current->thread.segv_regs = container_of(regs, struct pt_regs, regs);
 
 	if (!is_user && (address >= start_vm) && (address < end_vm)) {
+		pte = virt_to_pte(&init_mm, address);
+		if (!pte_present(*pte))
+			page_fault_oops(regs, address, ip);
 		flush_tlb_kernel_vm();
 		goto out;
 	}
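
For context, virt_to_pte() resolves a kernel virtual address to the PTE that
maps it in the given mm. A rough, illustrative equivalent using the generic
page-table accessors (an assumption about what the UML helper does, not its
actual implementation) would be:

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Walk init_mm's page tables down to the PTE for a kernel address. */
static pte_t *lookup_kernel_pte(unsigned long addr)
{
	pgd_t *pgd = pgd_offset(&init_mm, addr);
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	if (pgd_none(*pgd))
		return NULL;
	p4d = p4d_offset(pgd, addr);
	if (p4d_none(*p4d))
		return NULL;
	pud = pud_offset(p4d, addr);
	if (pud_none(*pud))
		return NULL;
	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd))
		return NULL;
	return pte_offset_kernel(pmd, addr);
}

With such a lookup in hand, pte_present() tells whether the faulting address
is actually backed by a mapped page; for a guard page it is not, so flushing
the TLB cannot help and the fault is reported as an oops instead.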