From patchwork Wed Nov 9 20:30:48 2022
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 17807

From: Linus Torvalds
To: Hugh Dickins, Johannes Weiner, Andrew Morton
Cc:
linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexander Gordeev
Subject: [PATCH 1/4] mm: introduce 'encoded' page pointers with embedded extra bits
Date: Wed, 9 Nov 2022 12:30:48 -0800
Message-Id: <20221109203051.1835763-1-torvalds@linux-foundation.org>

We already have this notion in parts of the MM code (see the mlock code
with the LRU_PAGE and NEW_PAGE bits), but I'm going to introduce a new
case, and I refuse to do the same thing we've done before where we just
put bits in the raw pointer and say it's still a normal pointer.

So this introduces a 'struct encoded_page' pointer that cannot be used
for anything else than to encode a real page pointer and a couple of
extra bits in the low bits.  That way the compiler can trivially track
the state of the pointer and you just explicitly encode and decode the
extra bits.

Note that this makes the alignment of 'struct page' explicit even for
the case where CONFIG_HAVE_ALIGNED_STRUCT_PAGE is not set.  That is
entirely redundant in almost all cases, since the page structure already
contains several word-sized entries.

However, on m68k, the alignment of even 32-bit data is just 16 bits, and
as such in theory the alignment of 'struct page' could be too.  So let's
just make it very explicit that the alignment needs to be at least 32
bits, giving us a guarantee of two unused low bits in the pointer.
Now, in practice, our page struct array is aligned much more than that
anyway, even on m68k, and our existing code in mm/mlock.c obviously
already depended on that.  But since the whole point of this change is
to be careful about the type system when hiding extra bits in the
pointer, let's also be explicit about the assumptions we make.

NOTE! This is being very careful in another way too: it has a build-time
assertion that the 'flags' added to the page pointer actually fit in the
two bits.  That means that this helper must be inlined, and can only be
used in contexts where the compiler can statically determine that the
value fits in the available bits.

Link: https://lore.kernel.org/all/Y2tKixpO4RO6DgW5@tuxmaker.boeblingen.de.ibm.com/
Cc: Alexander Gordeev
Acked-by: Johannes Weiner
Acked-by: Hugh Dickins
Signed-off-by: Linus Torvalds
Reviewed-by: David Hildenbrand
---
 include/linux/mm_types.h | 34 +++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 500e536796ca..0a38fcb08d85 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -67,7 +67,7 @@ struct mem_cgroup;
 #ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
 #define _struct_page_alignment	__aligned(2 * sizeof(unsigned long))
 #else
-#define _struct_page_alignment
+#define _struct_page_alignment	__aligned(sizeof(unsigned long))
 #endif
 
 struct page {
@@ -241,6 +241,38 @@ struct page {
 #endif
 } _struct_page_alignment;
 
+/**
+ * struct encoded_page - a nonexistent type marking this pointer
+ *
+ * An 'encoded_page' pointer is a pointer to a regular 'struct page', but
+ * with the low bits of the pointer indicating extra context-dependent
+ * information. Not super-common, but happens in mmu_gather and mlock
+ * handling, and this acts as a type system check on that use.
+ *
+ * We only really have two guaranteed bits in general, although you could
+ * play with 'struct page' alignment (see CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
+ * for more.
+ *
+ * Use the supplied helper functions to encode/decode the pointer and bits.
+ */
+struct encoded_page;
+#define ENCODE_PAGE_BITS	3ul
+static __always_inline struct encoded_page *encode_page(struct page *page, unsigned long flags)
+{
+	BUILD_BUG_ON(flags > ENCODE_PAGE_BITS);
+	return (struct encoded_page *)(flags | (unsigned long)page);
+}
+
+static inline unsigned long encoded_page_flags(struct encoded_page *page)
+{
+	return ENCODE_PAGE_BITS & (unsigned long)page;
+}
+
+static inline struct page *encoded_page_ptr(struct encoded_page *page)
+{
+	return (struct page *)(~ENCODE_PAGE_BITS & (unsigned long)page);
+}
+
 /**
  * struct folio - Represents a contiguous set of bytes.
  * @flags: Identical to the page flags.

From patchwork Wed Nov 9 20:30:49 2022
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 17810
From: Linus Torvalds
To: Hugh Dickins, Johannes Weiner, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/4] mm: teach release_pages() to take an array of encoded page pointers too
Date: Wed, 9 Nov 2022 12:30:49 -0800
Message-Id: <20221109203051.1835763-2-torvalds@linux-foundation.org>
In-Reply-To: <20221109203051.1835763-1-torvalds@linux-foundation.org>
References: <20221109203051.1835763-1-torvalds@linux-foundation.org>

release_pages() already could take either an array of page pointers, or
an array of folio pointers.
Expand it to also accept an array of encoded page pointers, which is
what both the existing mlock() use and the upcoming mmu_gather use of
encoded page pointers want.

Note that release_pages() won't actually use, or react to, any extra
encoded bits.  Instead, this is very much a case of "I have walked the
array of encoded pages and done everything the extra bits tell me to do,
now release it all".

Also, while the "either page or folio pointers" dual use was handled
with a cast of the pointer in "release_folios()", this takes a slightly
different approach and uses the "transparent union" attribute to
describe the set of arguments to the function:

   https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html

which has been supported by gcc forever, but the kernel hasn't used it
before.  That allows us to avoid using various wrappers with casts, and
just use the same function regardless of use.

Acked-by: Johannes Weiner
Acked-by: Hugh Dickins
Signed-off-by: Linus Torvalds
Reviewed-by: David Hildenbrand
---
 include/linux/mm.h | 21 +++++++++++++++++++--
 mm/swap.c          | 16 ++++++++++++----
 2 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8bbcccbc5565..d9fb5c3e3045 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1179,7 +1179,24 @@ static inline void folio_put_refs(struct folio *folio, int refs)
 		__folio_put(folio);
 }
 
-void release_pages(struct page **pages, int nr);
+/**
+ * release_pages - release an array of pages or folios
+ *
+ * This just releases a simple array of multiple pages, and
+ * accepts various different forms of said page array: either
+ * a regular old boring array of pages, an array of folios, or
+ * an array of encoded page pointers.
+ *
+ * The transparent union syntax for this kind of "any of these
+ * argument types" is all kinds of ugly, so look away.
+ */
+typedef union {
+	struct page **pages;
+	struct folio **folios;
+	struct encoded_page **encoded_pages;
+} release_pages_arg __attribute__ ((__transparent_union__));
+
+void release_pages(release_pages_arg, int nr);
 
 /**
  * folios_put - Decrement the reference count on an array of folios.
@@ -1195,7 +1212,7 @@ void release_pages(struct page **pages, int nr);
  */
 static inline void folios_put(struct folio **folios, unsigned int nr)
 {
-	release_pages((struct page **)folios, nr);
+	release_pages(folios, nr);
 }
 
 static inline void put_page(struct page *page)
diff --git a/mm/swap.c b/mm/swap.c
index 955930f41d20..596ed226ddb8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -968,22 +968,30 @@ void lru_cache_disable(void)
 
 /**
  * release_pages - batched put_page()
- * @pages: array of pages to release
+ * @arg: array of pages to release
  * @nr: number of pages
  *
- * Decrement the reference count on all the pages in @pages.  If it
+ * Decrement the reference count on all the pages in @arg.  If it
  * fell to zero, remove the page from the LRU and free it.
+ *
+ * Note that the argument can be an array of pages, encoded pages,
+ * or folio pointers. We ignore any encoded bits, and turn any of
+ * them into just a folio that gets free'd.
  */
-void release_pages(struct page **pages, int nr)
+void release_pages(release_pages_arg arg, int nr)
 {
 	int i;
+	struct encoded_page **encoded = arg.encoded_pages;
 	LIST_HEAD(pages_to_free);
 	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
 	unsigned int lock_batch;
 
 	for (i = 0; i < nr; i++) {
-		struct folio *folio = page_folio(pages[i]);
+		struct folio *folio;
+
+		/* Turn any of the argument types into a folio */
+		folio = page_folio(encoded_page_ptr(encoded[i]));
 
 		/*
 		 * Make sure the IRQ-safe lock-holding time does not get

From patchwork Wed Nov 9 20:30:50 2022
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 17808
From: Linus Torvalds
To: Hugh Dickins, Johannes Weiner, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 3/4] mm: mmu_gather: prepare to gather encoded page pointers with flags
Date: Wed, 9 Nov 2022 12:30:50 -0800
Message-Id: <20221109203051.1835763-3-torvalds@linux-foundation.org>
In-Reply-To: <20221109203051.1835763-1-torvalds@linux-foundation.org>
References: <20221109203051.1835763-1-torvalds@linux-foundation.org>

This is purely a preparatory patch that makes all the data structures
ready for encoding flags with the mmu_gather page pointers.

The code currently always sets the flag to zero and doesn't use it yet,
but now it's tracking the type state along.  The next step will be to
actually start using it.
Acked-by: Johannes Weiner
Acked-by: Hugh Dickins
Signed-off-by: Linus Torvalds
---
 arch/s390/include/asm/tlb.h |  8 +++++---
 include/asm-generic/tlb.h   |  9 +++++----
 include/linux/swap.h        |  2 +-
 mm/mmu_gather.c             |  8 ++++----
 mm/swap_state.c             | 11 ++++-------
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index 3a5c8fb590e5..05142226d65d 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -25,7 +25,8 @@ void __tlb_remove_table(void *_table);
 static inline void tlb_flush(struct mmu_gather *tlb);
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-					  struct page *page, int page_size);
+					  struct encoded_page *page,
+					  int page_size);
 
 #define tlb_flush tlb_flush
 #define pte_free_tlb pte_free_tlb
@@ -42,9 +43,10 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
  * has already been freed, so just do free_page_and_swap_cache.
  */
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-					  struct page *page, int page_size)
+					  struct encoded_page *page,
+					  int page_size)
 {
-	free_page_and_swap_cache(page);
+	free_page_and_swap_cache(encoded_page_ptr(page));
 	return false;
 }
 
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 492dce43236e..e5cd34393372 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -242,7 +242,7 @@ struct mmu_gather_batch {
 	struct mmu_gather_batch	*next;
 	unsigned int		nr;
 	unsigned int		max;
-	struct page		*pages[];
+	struct encoded_page	*encoded_pages[];
 };
 
 #define MAX_GATHER_BATCH	\
@@ -256,7 +256,8 @@ struct mmu_gather_batch {
  */
 #define MAX_GATHER_BATCH_COUNT	(10000UL/MAX_GATHER_BATCH)
 
-extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
+extern bool __tlb_remove_page_size(struct mmu_gather *tlb,
+				   struct encoded_page *page,
 				   int page_size);
 #endif
 
@@ -431,13 +432,13 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 static inline void
 tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size)
 {
-	if (__tlb_remove_page_size(tlb, page, page_size))
+	if (__tlb_remove_page_size(tlb, encode_page(page, 0), page_size))
 		tlb_flush_mmu(tlb);
 }
 
 static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 {
-	return __tlb_remove_page_size(tlb, page, PAGE_SIZE);
+	return __tlb_remove_page_size(tlb, encode_page(page, 0), PAGE_SIZE);
 }
 
 /* tlb_remove_page
diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..40e418e3461b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -470,7 +470,7 @@ static inline unsigned long total_swapcache_pages(void)
 extern void free_swap_cache(struct page *page);
 extern void free_page_and_swap_cache(struct page *);
-extern void free_pages_and_swap_cache(struct page **, int);
+extern void free_pages_and_swap_cache(struct encoded_page **, int);
 
 /* linux/mm/swapfile.c */
 extern atomic_long_t nr_swap_pages;
 extern long total_swap_pages;
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index add4244e5790..f44cc8a5b581 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -48,7 +48,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 	struct mmu_gather_batch *batch;
 
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		struct page **pages = batch->pages;
+		struct encoded_page **pages = batch->encoded_pages;
 
 		do {
 			/*
@@ -77,7 +77,7 @@ static void tlb_batch_list_free(struct mmu_gather *tlb)
 	tlb->local.next = NULL;
 }
 
-bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size)
+bool __tlb_remove_page_size(struct mmu_gather *tlb, struct encoded_page *page, int page_size)
 {
 	struct mmu_gather_batch *batch;
 
@@ -92,13 +92,13 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 	 * Add the page and check if we are full. If so
 	 * force a flush.
 	 */
-	batch->pages[batch->nr++] = page;
+	batch->encoded_pages[batch->nr++] = page;
 	if (batch->nr == batch->max) {
 		if (!tlb_next_batch(tlb))
 			return true;
 		batch = tlb->active;
 	}
-	VM_BUG_ON_PAGE(batch->nr > batch->max, page);
+	VM_BUG_ON_PAGE(batch->nr > batch->max, encoded_page_ptr(page));
 
 	return false;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 438d0676c5be..8bf08c313872 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -303,15 +303,12 @@ void free_page_and_swap_cache(struct page *page)
  * Passed an array of pages, drop them all from swapcache and then release
  * them.  They are removed from the LRU and freed if this is their last use.
  */
-void free_pages_and_swap_cache(struct page **pages, int nr)
+void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
 {
-	struct page **pagep = pages;
-	int i;
-
 	lru_add_drain();
-	for (i = 0; i < nr; i++)
-		free_swap_cache(pagep[i]);
-	release_pages(pagep, nr);
+	for (int i = 0; i < nr; i++)
+		free_swap_cache(encoded_page_ptr(pages[i]));
+	release_pages(pages, nr);
 }
 
 static inline bool swap_use_vma_readahead(void)

From patchwork Wed Nov 9 20:30:51 2022
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 17809
From: Linus Torvalds
To: Hugh Dickins, Johannes Weiner, Andrew Morton
Cc:
linux-kernel@vger.kernel.org, linux-mm@kvack.org, Nadav Amit,
	Will Deacon, Aneesh Kumar, Nick Piggin, Heiko Carstens,
	Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
	Sven Schnelle, Peter Zijlstra, Gerald Schaefer
Subject: [PATCH 4/4] mm: delay page_remove_rmap() until after the TLB has been flushed
Date: Wed, 9 Nov 2022 12:30:51 -0800
Message-Id: <20221109203051.1835763-4-torvalds@linux-foundation.org>
In-Reply-To: <20221109203051.1835763-1-torvalds@linux-foundation.org>
References: <20221109203051.1835763-1-torvalds@linux-foundation.org>

When we remove a page table entry, we are very careful to only free the
page after we have flushed the TLB, because other CPUs could still be
using the page through stale TLB entries until after the flush.

However, we have removed the rmap entry for that page early, which means
that functions like folio_mkclean() would end up not serializing with
the page table lock because the page had already been made invisible to
rmap.

And that is a problem, because while the TLB entry exists, we could end
up with the following situation:

 (a) one CPU could come in and clean it, never seeing our mapping of
     the page

 (b) another CPU could continue to use the stale and dirty TLB entry
     and continue to write to said page

resulting in a page that has been dirtied, but then marked clean again,
all while another CPU might have dirtied it some more.
End result: possibly lost dirty data.

This extends our current TLB gather infrastructure to optionally track a
"should I do a delayed page_remove_rmap() for this page after flushing
the TLB". It uses the newly introduced 'encoded page pointer' to do that
without having to keep separate data around.

Note, this is complicated by a couple of issues:

 - we want to delay the rmap removal, but not past the page table lock,
   because that simplifies the memcg accounting

 - only SMP configurations want to delay TLB flushing, since on UP
   there are obviously no remote TLBs to worry about, and the page
   table lock means there are no preemption issues either

 - s390 has its own mmu_gather model that doesn't delay TLB flushing,
   and as a result also does not want the delayed rmap. As such, we can
   treat S390 like the UP case and use a common fallback for the "no
   delays" case.

 - we can track an enormous number of pages in our mmu_gather structure,
   with MAX_GATHER_BATCH_COUNT batches of MAX_TABLE_BATCH pages each,
   all set up to be approximately 10k pending pages. We do not want to
   have a huge number of batched pages that we then need to check for
   delayed rmap handling inside the page table lock.

Particularly that last point results in a noteworthy detail, where the
normal page batch gathering is limited once we have delayed rmaps
pending, in such a way that only the last batch (the so-called "active
batch") in the mmu_gather structure can have any delayed entries.

NOTE! While the "possibly lost dirty data" sounds catastrophic, for this
all to happen you need to have a user thread doing either madvise() with
MADV_DONTNEED or a full re-mmap() of the area concurrently with another
thread continuing to use said mapping.

So arguably this is about user space doing crazy things, but from a VM
consistency standpoint it's better if we track the dirty bit properly
even when user space goes off the rails.
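[ Side note, not part of the patch: the 'encoded page pointer' relies on
  struct page pointers being aligned, so the low bits are always zero and
  can carry per-page flags such as "rmap removal delayed". A minimal
  user-space sketch of that tagging scheme, for illustration only - the
  function names mirror the patch's encode_page()/encoded_page_ptr()/
  encoded_page_flags(), but the toy 'struct page' and the flag mask here
  are assumptions, not the kernel definitions: ]

```c
#include <stdint.h>

/* Hypothetical stand-in for struct page: word-aligned, so the low
 * pointer bits are guaranteed to be zero and are free for flags. */
struct page { long dummy; };

/* Opaque type: a page pointer with flag bits OR'ed into its low bits. */
struct encoded_page;

#define ENCODED_PAGE_FLAG_BITS	1ul	/* bit 0: "rmap removal delayed" */

/* Pack flag bits into the low bits of an aligned page pointer. */
static struct encoded_page *encode_page(struct page *page, unsigned long flags)
{
	return (struct encoded_page *)((uintptr_t)page | (flags & ENCODED_PAGE_FLAG_BITS));
}

/* Read back the flag bits carried by an encoded pointer. */
static unsigned long encoded_page_flags(struct encoded_page *enc)
{
	return (uintptr_t)enc & ENCODED_PAGE_FLAG_BITS;
}

/* Mask the flag bits off to recover the real page pointer. */
static struct page *encoded_page_ptr(struct encoded_page *enc)
{
	return (struct page *)((uintptr_t)enc & ~ENCODED_PAGE_FLAG_BITS);
}
```

[ This is why no separate data structure is needed: the mmu_gather batch
  stays a flat array of pointers, and a set flag bit on an entry is what
  later tells tlb_flush_rmaps() that page_remove_rmap() was deferred for
  that page. ]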
Reported-and-tested-by: Nadav Amit
Link: https://lore.kernel.org/all/B88D3073-440A-41C7-95F4-895D3F657EF2@gmail.com/
Cc: Will Deacon
Cc: Aneesh Kumar
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Alexander Gordeev
Cc: Christian Borntraeger
Cc: Sven Schnelle
Cc: Peter Zijlstra (Intel)
Cc: Gerald Schaefer # s390
Acked-by: Johannes Weiner
Acked-by: Hugh Dickins
Signed-off-by: Linus Torvalds
---
 arch/s390/include/asm/tlb.h |  3 +++
 include/asm-generic/tlb.h   | 31 +++++++++++++++++++++++++++++--
 mm/memory.c                 | 23 +++++++++++++++++------
 mm/mmu_gather.c             | 31 +++++++++++++++++++++++++++++++
 4 files changed, 80 insertions(+), 8 deletions(-)

diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index 05142226d65d..b91f4a9b044c 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -41,6 +41,9 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
  * Release the page cache reference for a pte removed by
  * tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
  * has already been freed, so just do free_page_and_swap_cache.
+ *
+ * s390 doesn't delay rmap removal, so there is nothing encoded in
+ * the page pointer.
  */
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
 					  struct encoded_page *page,
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index e5cd34393372..154c774d6307 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -259,6 +259,28 @@ struct mmu_gather_batch {
 extern bool __tlb_remove_page_size(struct mmu_gather *tlb,
 				   struct encoded_page *page,
 				   int page_size);
+
+#ifdef CONFIG_SMP
+/*
+ * This both sets 'delayed_rmap', and returns true. It would be an inline
+ * function, except we define it before the 'struct mmu_gather'.
+ */
+#define tlb_delay_rmap(tlb) (((tlb)->delayed_rmap = 1), true)
+extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
+#endif
+
+#endif
+
+/*
+ * We have a no-op version of the rmap removal that doesn't
+ * delay anything. That is used on S390, which flushes remote
+ * TLBs synchronously, and on UP, which doesn't have any
+ * remote TLBs to flush and is not preemptible due to this
+ * all happening under the page table lock.
+ */
+#ifndef tlb_delay_rmap
+#define tlb_delay_rmap(tlb) (false)
+static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 #endif
 
 /*
@@ -291,6 +313,11 @@ struct mmu_gather {
 	 */
 	unsigned int		freed_tables : 1;
 
+	/*
+	 * Do we have pending delayed rmap removals?
+	 */
+	unsigned int		delayed_rmap : 1;
+
 	/*
 	 * at which levels have we cleared entries?
 	 */
@@ -436,9 +463,9 @@ static inline void tlb_remove_page_size(struct mmu_gather *tlb,
 		tlb_flush_mmu(tlb);
 }
 
-static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
+static __always_inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page, unsigned int flags)
 {
-	return __tlb_remove_page_size(tlb, encode_page(page, 0), PAGE_SIZE);
+	return __tlb_remove_page_size(tlb, encode_page(page, flags), PAGE_SIZE);
 }
 
 /* tlb_remove_page
diff --git a/mm/memory.c b/mm/memory.c
index f88c351aecd4..60a0f44f6e72 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1432,6 +1432,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			break;
 
 		if (pte_present(ptent)) {
+			unsigned int delay_rmap;
+
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
@@ -1443,20 +1445,26 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (unlikely(!page))
 				continue;
 
+			delay_rmap = 0;
 			if (!PageAnon(page)) {
 				if (pte_dirty(ptent)) {
-					force_flush = 1;
 					set_page_dirty(page);
+					if (tlb_delay_rmap(tlb)) {
+						delay_rmap = 1;
+						force_flush = 1;
+					}
 				}
 				if (pte_young(ptent) &&
 				    likely(!(vma->vm_flags & VM_SEQ_READ)))
 					mark_page_accessed(page);
 			}
 			rss[mm_counter(page)]--;
-			page_remove_rmap(page, vma, false);
-			if (unlikely(page_mapcount(page) < 0))
-				print_bad_pte(vma, addr, ptent, page);
-			if (unlikely(__tlb_remove_page(tlb, page))) {
+			if (!delay_rmap) {
+				page_remove_rmap(page, vma, false);
+				if (unlikely(page_mapcount(page) < 0))
+					print_bad_pte(vma, addr, ptent, page);
+			}
+			if (unlikely(__tlb_remove_page(tlb, page, delay_rmap))) {
 				force_flush = 1;
 				addr += PAGE_SIZE;
 				break;
@@ -1513,8 +1521,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_leave_lazy_mmu_mode();
 
 	/* Do the actual TLB flush before dropping ptl */
-	if (force_flush)
+	if (force_flush) {
 		tlb_flush_mmu_tlbonly(tlb);
+		if (tlb->delayed_rmap)
+			tlb_flush_rmaps(tlb, vma);
+	}
 	pte_unmap_unlock(start_pte, ptl);
 
 	/*
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index f44cc8a5b581..38592fba3826 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -9,6 +9,7 @@
 #include <linux/rcupdate.h>
 #include <linux/smp.h>
 #include <linux/swap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
@@ -19,6 +20,10 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
+	/* No more batching if we have delayed rmaps pending */
+	if (tlb->delayed_rmap)
+		return false;
+
 	batch = tlb->active;
 	if (batch->next) {
 		tlb->active = batch->next;
@@ -43,6 +48,31 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 	return true;
 }
 
+/**
+ * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
+ * @tlb: the current mmu_gather
+ *
+ * Note that because of how tlb_next_batch() above works, we will
+ * never start new batches with pending delayed rmaps, so we only
+ * need to walk through the current active batch.
+ */
+void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+	struct mmu_gather_batch *batch;
+
+	batch = tlb->active;
+	for (int i = 0; i < batch->nr; i++) {
+		struct encoded_page *enc = batch->encoded_pages[i];
+
+		if (encoded_page_flags(enc)) {
+			struct page *page = encoded_page_ptr(enc);
+			page_remove_rmap(page, vma, false);
+		}
+	}
+
+	tlb->delayed_rmap = 0;
+}
+
 static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
@@ -286,6 +316,7 @@ static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	tlb->active     = &tlb->local;
 	tlb->batch_count = 0;
 #endif
+	tlb->delayed_rmap = 0;
 	tlb_table_init(tlb);
 
 #ifdef CONFIG_MMU_GATHER_PAGE_SIZE