From patchwork Fri Oct 21 19:45:46 2022
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 6953
Date: Fri, 21 Oct 2022 15:45:46 -0400
From: Rik van Riel
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, kernel-team@meta.com, Mike Kravetz, Andrew Morton, David Hildenbrand
Subject: [PATCH] mm,madvise,hugetlb: fix unexpected data loss with MADV_DONTNEED on hugetlbfs
Message-ID: <20221021154546.57df96db@imladris.surriel.com>
X-Mailer: Claws Mail 4.1.0 (GTK 3.24.34; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Sender: riel@shelob.surriel.com
A common use case for hugetlbfs is for the application to create memory
pools backed by huge pages, which then get handed over to some malloc
library (e.g. jemalloc) for further management.

That malloc library may be doing MADV_DONTNEED calls on memory that is
no longer needed, expecting those calls to happen on PAGE_SIZE
boundaries. However, currently the MADV_DONTNEED code rounds up any such
request to HPAGE_PMD_SIZE boundaries. This leads to undesired outcomes
when jemalloc expects a 4kB MADV_DONTNEED, but 2MB of memory gets zeroed
out instead.

Use of pre-built shared libraries means that user code does not always
know the page size of every memory arena in use.

Avoid unexpected data loss with MADV_DONTNEED by rounding up only to
PAGE_SIZE (in do_madvise), and rounding down to huge page granularity.
That way programs will only get as much memory zeroed out as they
requested.

While we're here, refactor madvise_dontneed_free_valid_vma a little so
that mlocked hugetlb VMAs also need MADV_DONTNEED_LOCKED.
Cc: Mike Kravetz
Cc: Andrew Morton
Cc: David Hildenbrand
Fixes: 90e7e7f5ef3f ("mm: enable MADV_DONTNEED for hugetlb mappings")
---
 mm/madvise.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 2baa93ca2310..a60e8e23c323 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -799,21 +799,29 @@ static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
 					    unsigned long *end,
 					    int behavior)
 {
-	if (!is_vm_hugetlb_page(vma)) {
-		unsigned int forbidden = VM_PFNMAP;
+	unsigned int forbidden = VM_PFNMAP;
 
-		if (behavior != MADV_DONTNEED_LOCKED)
-			forbidden |= VM_LOCKED;
+	if (behavior != MADV_DONTNEED_LOCKED)
+		forbidden |= VM_LOCKED;
 
-		return !(vma->vm_flags & forbidden);
-	}
+	if (vma->vm_flags & forbidden)
+		return false;
+
+	if (!is_vm_hugetlb_page(vma))
+		return true;
 
 	if (behavior != MADV_DONTNEED && behavior != MADV_DONTNEED_LOCKED)
 		return false;
 
 	if (start & ~huge_page_mask(hstate_vma(vma)))
 		return false;
 
-	*end = ALIGN(*end, huge_page_size(hstate_vma(vma)));
+	/*
+	 * Madvise callers expect the length to be rounded up to the page
+	 * size, but they may not know the page size for this VMA is larger
+	 * than PAGE_SIZE! Round down huge pages to avoid unexpected data loss.
+	 */
+	*end = ALIGN_DOWN(*end, huge_page_size(hstate_vma(vma)));
+
 	return true;
 }
 
@@ -828,6 +836,10 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
 	if (!madvise_dontneed_free_valid_vma(vma, start, &end, behavior))
 		return -EINVAL;
 
+	/* A small MADV_DONTNEED on a huge page gets rounded down to zero. */
+	if (start == end)
+		return 0;
+
 	if (!userfaultfd_remove(vma, start, end)) {
 		*prev = NULL; /* mmap_lock has been dropped, prev is stale */