From patchwork Fri Jul 14 16:17:30 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 120557
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Matthew Wilcox, "Kirill A. Shutemov", Yin Fengwei,
    David Hildenbrand, Yu Zhao, Catalin Marinas, Will Deacon,
    Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan, Luis Chamberlain
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 1/4] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
Date: Fri, 14 Jul 2023 17:17:30 +0100
Message-Id: <20230714161733.4144503-1-ryan.roberts@arm.com>
In-Reply-To: <20230714160407.4142030-1-ryan.roberts@arm.com>
References: <20230714160407.4142030-1-ryan.roberts@arm.com>

In preparation for FLEXIBLE_THP support, improve folio_add_new_anon_rmap()
to allow a non-pmd-mappable, large folio to be passed to it. In this case,
all contained pages are accounted using the order-0 folio (or base page)
scheme.

Signed-off-by: Ryan Roberts
Reviewed-by: Yu Zhao
Reviewed-by: Yin Fengwei
---
 mm/rmap.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 0c0d8857dfce..f293d072368a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1278,31 +1278,45 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
  * This means the inc-and-test can be bypassed.
  * The folio does not have to be locked.
  *
- * If the folio is large, it is accounted as a THP. As the folio
+ * If the folio is pmd-mappable, it is accounted as a THP. As the folio
  * is new, it's assumed to be mapped exclusively by a single process.
  */
 void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
                unsigned long address)
 {
-       int nr;
+       int nr = folio_nr_pages(folio);
 
-       VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+       VM_BUG_ON_VMA(address < vma->vm_start ||
+                       address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
        __folio_set_swapbacked(folio);
 
-       if (likely(!folio_test_pmd_mappable(folio))) {
+       if (!folio_test_large(folio)) {
                /* increment count (starts at -1) */
                atomic_set(&folio->_mapcount, 0);
-               nr = 1;
+               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
+       } else if (!folio_test_pmd_mappable(folio)) {
+               int i;
+
+               for (i = 0; i < nr; i++) {
+                       struct page *page = folio_page(folio, i);
+
+                       /* increment count (starts at -1) */
+                       atomic_set(&page->_mapcount, 0);
+                       __page_set_anon_rmap(folio, page, vma,
+                                       address + (i << PAGE_SHIFT), 1);
+               }
+
+               /* increment count (starts at 0) */
+               atomic_set(&folio->_nr_pages_mapped, nr);
        } else {
                /* increment count (starts at -1) */
                atomic_set(&folio->_entire_mapcount, 0);
                atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
-               nr = folio_nr_pages(folio);
+               __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
                __lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
        }
 
        __lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
-       __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
 }
 
 /**
-- 
2.25.1
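The net effect is that a caller can hand a newly allocated, non-pmd-mappable
large folio to folio_add_new_anon_rmap() in a single call, with per-page
accounting handled internally. As a minimal sketch of the caller pattern this
enables (mirroring the do_anonymous_page() changes in patch 3 of this series;
the helper name is hypothetical, and locking, pte permission bits and error
handling are omitted):

static void map_new_anon_folio(struct vm_fault *vmf, struct folio *folio)
{
        struct vm_area_struct *vma = vmf->vma;
        int nr = folio_nr_pages(folio);
        unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
        int i;

        folio_ref_add(folio, nr - 1);           /* one ref per mapped page */
        add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
        folio_add_new_anon_rmap(folio, vma, addr); /* per-page rmap accounting */
        folio_add_lru_vma(folio, vma);

        /* vmf->pte is assumed to point at the pte for addr. */
        for (i = 0; i < nr; i++) {
                pte_t entry = mk_pte(folio_page(folio, i), vma->vm_page_prot);

                set_pte_at(vma->vm_mm, addr + i * PAGE_SIZE, vmf->pte + i, entry);
        }
}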
From patchwork Fri Jul 14 16:17:31 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 120566
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Matthew Wilcox, "Kirill A. Shutemov", Yin Fengwei,
    David Hildenbrand, Yu Zhao, Catalin Marinas, Will Deacon,
    Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan, Luis Chamberlain
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 2/4] mm: Default implementation of arch_wants_pte_order()
Date: Fri, 14 Jul 2023 17:17:31 +0100
Message-Id: <20230714161733.4144503-2-ryan.roberts@arm.com>
In-Reply-To: <20230714160407.4142030-1-ryan.roberts@arm.com>
References: <20230714160407.4142030-1-ryan.roberts@arm.com>

arch_wants_pte_order() can be overridden by the arch to return the
preferred folio order for pte-mapped memory. This is useful as some
architectures (e.g. arm64) can coalesce TLB entries when the physical
memory is suitably contiguous.

The first user for this hint will be FLEXIBLE_THP, which aims to
allocate large folios for anonymous memory to reduce page faults and
other per-page operation costs.

Here we add the default implementation of the function, used when the
architecture does not define it, which returns -1, implying that the HW
has no preference. In this case, mm will choose its own default order.

Signed-off-by: Ryan Roberts
Reviewed-by: Yu Zhao
Reviewed-by: Yin Fengwei
---
 include/linux/pgtable.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5063b482e34f..2a1d83775837 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -313,6 +313,19 @@ static inline bool arch_has_hw_pte_young(void)
 }
 #endif
 
+#ifndef arch_wants_pte_order
+/*
+ * Returns preferred folio order for pte-mapped memory. Must be in range
+ * [0, PMD_SHIFT - PAGE_SHIFT) and must not be order-1 since THP requires
+ * large folios to be at least order-2. A negative value implies that the
+ * HW has no preference and mm will choose its own default order.
+ */
+static inline int arch_wants_pte_order(void)
+{
+       return -1;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
                                       unsigned long address,
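A minimal sketch of how a consumer is expected to fold this hint into an
order choice (the helper name is hypothetical; the max() idiom is the one
adopted by anon_folio_order() in patch 3):

/* Sketch only: a negative hint (the default) collapses to mm's default. */
static int choose_pte_order(void)
{
        /*
         * max() folds both "no preference" (-1) and "very small
         * preference" into PAGE_ALLOC_COSTLY_ORDER (3, i.e. 32K folios
         * with 4K pages).
         */
        return max(arch_wants_pte_order(), PAGE_ALLOC_COSTLY_ORDER);
}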
From patchwork Fri Jul 14 16:17:32 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 120563
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Matthew Wilcox, "Kirill A. Shutemov", Yin Fengwei,
    David Hildenbrand, Yu Zhao, Catalin Marinas, Will Deacon,
    Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan, Luis Chamberlain
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 3/4] mm: FLEXIBLE_THP for improved performance
Date: Fri, 14 Jul 2023 17:17:32 +0100
Message-Id: <20230714161733.4144503-3-ryan.roberts@arm.com>
In-Reply-To: <20230714160407.4142030-1-ryan.roberts@arm.com>
References: <20230714160407.4142030-1-ryan.roberts@arm.com>

Introduce the FLEXIBLE_THP feature, which allows anonymous memory to be
allocated in large folios of a determined order. All pages of the large
folio are pte-mapped during the same page fault, significantly reducing
the number of page faults. The number of per-page operations (e.g. ref
counting, rmap management, lru list management) is also significantly
reduced since those ops now become per-folio.

The new behaviour is hidden behind the new FLEXIBLE_THP Kconfig, which
defaults to disabled for now; the long-term aim is for this to default
to enabled, but there are some risks around internal fragmentation that
need to be better understood first.

When enabled, the folio order is determined as follows: for a vma,
process or system that has explicitly disabled THP, we continue to
allocate order-0. THP is most likely disabled to avoid any possible
internal fragmentation, so we honour that request.

Otherwise, the return value of arch_wants_pte_order() is used.

For vmas that have not explicitly opted in to transparent hugepages
(e.g. where thp=madvise and the vma does not have MADV_HUGEPAGE),
arch_wants_pte_order() is limited by the new cmdline parameter,
flexthp_unhinted_max. This allows for a performance boost without
requiring any explicit opt-in from the workload, while still allowing
the sysadmin to tune between performance and internal fragmentation.

arch_wants_pte_order() can be overridden by the architecture if desired.
Some architectures (e.g. arm64) can coalesce TLB entries if a contiguous
set of ptes map physically contiguous, naturally aligned memory, so this
mechanism allows the architecture to optimize as required.

If the preferred order can't be used (e.g. because the folio would
breach the bounds of the vma, or because ptes in the region are already
mapped) then we fall back to a suitable lower order; first
PAGE_ALLOC_COSTLY_ORDER, then order-0.

Signed-off-by: Ryan Roberts
---
 .../admin-guide/kernel-parameters.txt |  10 +
 mm/Kconfig                            |  10 +
 mm/memory.c                           | 187 ++++++++++++++++--
 3 files changed, 190 insertions(+), 17 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a1457995fd41..405d624e2191 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1497,6 +1497,16 @@
                        See Documentation/admin-guide/sysctl/net.rst for
                        fb_tunnels_only_for_init_ns
 
+       flexthp_unhinted_max=
+                       [KNL] Requires CONFIG_FLEXIBLE_THP enabled. The maximum
+                       folio size that will be allocated for an anonymous vma
+                       that has neither explicitly opted in nor out of using
+                       transparent hugepages. The size must be a power-of-2 in
+                       the range [PAGE_SIZE, PMD_SIZE). A larger size improves
+                       performance by reducing page faults, while a smaller
+                       size reduces internal fragmentation. Default: max(64K,
+                       PAGE_SIZE). Format: size[KMG].
+
        floppy=         [HW]
                        See Documentation/admin-guide/blockdev/floppy.rst.
 
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..26c5e51ef11d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -848,6 +848,16 @@ config READ_ONLY_THP_FOR_FS
          support of file THPs will be developed in the next few release
          cycles.
 
+config FLEXIBLE_THP
+       bool "Flexible order THP"
+       depends on TRANSPARENT_HUGEPAGE
+       default n
+       help
+         Use large (bigger than order-0) folios to back anonymous memory where
+         possible, even for pte-mapped memory. This reduces the number of page
+         faults, as well as other per-page overheads to improve performance for
+         many workloads.
+
 endif # TRANSPARENT_HUGEPAGE
 
 #
diff --git a/mm/memory.c b/mm/memory.c
index 01f39e8144ef..e8bc729efb9d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4050,6 +4050,148 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
        return ret;
 }
 
+static bool vmf_pte_range_changed(struct vm_fault *vmf, int nr_pages)
+{
+       int i;
+
+       if (nr_pages == 1)
+               return vmf_pte_changed(vmf);
+
+       for (i = 0; i < nr_pages; i++) {
+               if (!pte_none(ptep_get_lockless(vmf->pte + i)))
+                       return true;
+       }
+
+       return false;
+}
+
+#ifdef CONFIG_FLEXIBLE_THP
+static int flexthp_unhinted_max_order =
+       ilog2(SZ_64K > PAGE_SIZE ? SZ_64K : PAGE_SIZE) - PAGE_SHIFT;
+
+static int __init parse_flexthp_unhinted_max(char *s)
+{
+       unsigned long long size = memparse(s, NULL);
+
+       if (!is_power_of_2(size) || size < PAGE_SIZE || size > PMD_SIZE) {
+               pr_warn("flexthp: flexthp_unhinted_max=%s must be power-of-2 between PAGE_SIZE (%lu) and PMD_SIZE (%lu), ignoring\n",
+                       s, PAGE_SIZE, PMD_SIZE);
+               return 1;
+       }
+
+       flexthp_unhinted_max_order = ilog2(size) - PAGE_SHIFT;
+
+       /* THP machinery requires at least 3 struct pages for meta data. */
+       if (flexthp_unhinted_max_order == 1)
+               flexthp_unhinted_max_order--;
+
+       return 1;
+}
+
+__setup("flexthp_unhinted_max=", parse_flexthp_unhinted_max);
+
+static int anon_folio_order(struct vm_area_struct *vma)
+{
+       int order;
+
+       /*
+        * If THP is explicitly disabled for either the vma, the process or the
+        * system, then this is very likely intended to limit internal
+        * fragmentation; in this case, don't attempt to allocate a large
+        * anonymous folio.
+        *
+        * Else, if the vma is eligible for thp, allocate a large folio of the
+        * size preferred by the arch. Or if the arch requested a very small
+        * size or didn't request a size, then use PAGE_ALLOC_COSTLY_ORDER,
+        * which still meets the arch's requirements but means we still take
+        * advantage of SW optimizations (e.g. fewer page faults).
+        *
+        * Finally if thp is enabled but the vma isn't eligible, take the
+        * arch-preferred size and limit it to the flexthp_unhinted_max cmdline
+        * parameter. This allows a sysadmin to tune performance vs internal
+        * fragmentation.
+        */
+
+       if ((vma->vm_flags & VM_NOHUGEPAGE) ||
+           test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags) ||
+           !hugepage_flags_enabled())
+               order = 0;
+       else {
+               order = max(arch_wants_pte_order(), PAGE_ALLOC_COSTLY_ORDER);
+
+               if (!hugepage_vma_check(vma, vma->vm_flags, false, true, true))
+                       order = min(order, flexthp_unhinted_max_order);
+       }
+
+       return order;
+}
+
+static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
+{
+       int i;
+       gfp_t gfp;
+       pte_t *pte;
+       unsigned long addr;
+       struct vm_area_struct *vma = vmf->vma;
+       int prefer = anon_folio_order(vma);
+       int orders[] = {
+               prefer,
+               prefer > PAGE_ALLOC_COSTLY_ORDER ? PAGE_ALLOC_COSTLY_ORDER : 0,
+               0,
+       };
+
+       *folio = NULL;
+
+       if (vmf_orig_pte_uffd_wp(vmf))
+               goto fallback;
+
+       for (i = 0; orders[i]; i++) {
+               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
+               if (addr >= vma->vm_start &&
+                   addr + (PAGE_SIZE << orders[i]) <= vma->vm_end)
+                       break;
+       }
+
+       if (!orders[i])
+               goto fallback;
+
+       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
+       if (!pte)
+               return -EAGAIN;
+
+       for (; orders[i]; i++) {
+               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
+               vmf->pte = pte + pte_index(addr);
+               if (!vmf_pte_range_changed(vmf, 1 << orders[i]))
+                       break;
+       }
+
+       vmf->pte = NULL;
+       pte_unmap(pte);
+
+       gfp = vma_thp_gfp_mask(vma);
+
+       for (; orders[i]; i++) {
+               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
+               *folio = vma_alloc_folio(gfp, orders[i], vma, addr, true);
+               if (*folio) {
+                       clear_huge_page(&(*folio)->page, addr, 1 << orders[i]);
+                       return 0;
+               }
+       }
+
+fallback:
+       *folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
+       return *folio ? 0 : -ENOMEM;
+}
+#else
+static inline int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
+{
+       *folio = vma_alloc_zeroed_movable_folio(vmf->vma, vmf->address);
+       return *folio ? 0 : -ENOMEM;
+}
+#endif
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -4057,11 +4199,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
  */
 static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 {
+       int i = 0;
+       int nr_pages = 1;
        bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
        struct vm_area_struct *vma = vmf->vma;
        struct folio *folio;
        vm_fault_t ret = 0;
        pte_t entry;
+       unsigned long addr;
 
        /* File mapping without ->vm_ops ? */
        if (vma->vm_flags & VM_SHARED)
@@ -4101,10 +4246,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
        /* Allocate our own private page. */
        if (unlikely(anon_vma_prepare(vma)))
                goto oom;
-       folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
+       ret = alloc_anon_folio(vmf, &folio);
+       if (unlikely(ret == -EAGAIN))
+               return 0;
        if (!folio)
                goto oom;
 
+       nr_pages = folio_nr_pages(folio);
+       addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
+
        if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
                goto oom_free_page;
        folio_throttle_swaprate(folio, GFP_KERNEL);
@@ -4116,17 +4266,12 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
         */
        __folio_mark_uptodate(folio);
 
-       entry = mk_pte(&folio->page, vma->vm_page_prot);
-       entry = pte_sw_mkyoung(entry);
-       if (vma->vm_flags & VM_WRITE)
-               entry = pte_mkwrite(pte_mkdirty(entry));
-
-       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-                       &vmf->ptl);
+       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
        if (!vmf->pte)
                goto release;
-       if (vmf_pte_changed(vmf)) {
-               update_mmu_tlb(vma, vmf->address, vmf->pte);
+       if (vmf_pte_range_changed(vmf, nr_pages)) {
+               for (i = 0; i < nr_pages; i++)
+                       update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
                goto release;
        }
 
@@ -4141,16 +4286,24 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
                return handle_userfault(vmf, VM_UFFD_MISSING);
        }
 
-       inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-       folio_add_new_anon_rmap(folio, vma, vmf->address);
+       folio_ref_add(folio, nr_pages - 1);
+       add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
+       folio_add_new_anon_rmap(folio, vma, addr);
        folio_add_lru_vma(folio, vma);
+
+       for (i = 0; i < nr_pages; i++) {
+               entry = mk_pte(folio_page(folio, i), vma->vm_page_prot);
+               entry = pte_sw_mkyoung(entry);
+               if (vma->vm_flags & VM_WRITE)
+                       entry = pte_mkwrite(pte_mkdirty(entry));
 setpte:
-       if (uffd_wp)
-               entry = pte_mkuffd_wp(entry);
-       set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
+               if (uffd_wp)
+                       entry = pte_mkuffd_wp(entry);
+               set_pte_at(vma->vm_mm, addr + PAGE_SIZE * i, vmf->pte + i, entry);
 
-       /* No need to invalidate - it was non-present before */
-       update_mmu_cache(vma, vmf->address, vmf->pte);
+               /* No need to invalidate - it was non-present before */
+               update_mmu_cache(vma, addr + PAGE_SIZE * i, vmf->pte + i);
+       }
 unlock:
        if (vmf->pte)
                pte_unmap_unlock(vmf->pte, vmf->ptl);
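To make the order-selection and fallback policy concrete, here is a worked
example; it is a sketch only, assuming 4K pages and an arch preference of
order-4 (what the arm64 override in patch 4 returns):

/*
 * Sketch only: with arch_wants_pte_order() == 4 and 4K pages,
 * anon_folio_order() returns 4 for a THP-eligible vma, and
 * alloc_anon_folio() then tries these candidates in turn. A candidate
 * is skipped if the naturally aligned range would breach the vma
 * bounds or if any pte in the range is already mapped.
 */
static const int example_orders[] = {
        4,      /* preferred: 64K folio */
        3,      /* PAGE_ALLOC_COSTLY_ORDER: 32K folio */
        0,      /* terminator: final fallback is a single zeroed 4K page */
};

For an unhinted vma (e.g. thp=madvise without MADV_HUGEPAGE), booting with
flexthp_unhinted_max=16K would instead cap the first candidate at order-2.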
From patchwork Fri Jul 14 16:17:33 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 120558
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Matthew Wilcox, "Kirill A. Shutemov", Yin Fengwei,
    David Hildenbrand, Yu Zhao, Catalin Marinas, Will Deacon,
    Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan, Luis Chamberlain
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 4/4] arm64: mm: Override arch_wants_pte_order()
Date: Fri, 14 Jul 2023 17:17:33 +0100
Message-Id: <20230714161733.4144503-4-ryan.roberts@arm.com>
In-Reply-To: <20230714160407.4142030-1-ryan.roberts@arm.com>
References: <20230714160407.4142030-1-ryan.roberts@arm.com>

Define an arch-specific override of arch_wants_pte_order() so that when
FLEXIBLE_THP is enabled, large folios will be allocated for anonymous
memory with an order that is compatible with arm64's contpte mappings.

Signed-off-by: Ryan Roberts
Reviewed-by: Yu Zhao
---
 arch/arm64/include/asm/pgtable.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0bd18de9fd97..d00bb26fe28f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1106,6 +1106,12 @@ extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
 extern void ptep_modify_prot_commit(struct vm_area_struct *vma,
                                    unsigned long addr, pte_t *ptep,
                                    pte_t old_pte, pte_t new_pte);
+
+#define arch_wants_pte_order arch_wants_pte_order
+static inline int arch_wants_pte_order(void)
+{
+       return CONT_PTE_SHIFT - PAGE_SHIFT;
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_H */
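For reference, a worked evaluation of this override; a sketch only, assuming
a 4K page granule, where one contpte span covers 16 ptes:

/*
 * Sketch only: arm64 with 4K pages.
 *
 *   CONT_PTE_SHIFT = 16    (one contpte span = 64K)
 *   PAGE_SHIFT     = 12    (4K pages)
 *   order          = 16 - 12 = 4  ->  2^4 pages = one 64K folio
 *
 * A naturally aligned 64K folio can then be mapped with the contiguous
 * bit set, letting the TLB cover all 16 ptes with a single entry.
 */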