From patchwork Fri Oct 21 10:11:34 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 6609
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Shuah Khan,
 Hugh Dickins, Vlastimil Babka, Peter Xu, Andrea Arcangeli,
 "Matthew Wilcox (Oracle)", Jason Gunthorpe, John Hubbard
Subject: [PATCH v2 2/9] mm/ksm: simplify break_ksm() to not rely on VM_FAULT_WRITE
Date: Fri, 21 Oct 2022 12:11:34 +0200
Message-Id: <20221021101141.84170-3-david@redhat.com>
In-Reply-To: <20221021101141.84170-1-david@redhat.com>
References: <20221021101141.84170-1-david@redhat.com>

Now that GUP no longer requires VM_FAULT_WRITE, break_ksm() is the sole
remaining user of VM_FAULT_WRITE. As we also want to stop triggering a
fake write fault and instead use FAULT_FLAG_UNSHARE -- similar to
GUP-triggered unsharing when taking a R/O pin on a shared anonymous
page (including KSM pages), let's stop relying on VM_FAULT_WRITE.

Let's rework break_ksm() to not rely on the return value of
handle_mm_fault() anymore to figure out whether COW-breaking was
successful. Simply perform another follow_page() lookup to verify the
result. While this makes break_ksm() slightly less efficient, we can
simplify handle_mm_fault() a little and easily switch to
FAULT_FLAG_UNSHARE without introducing similar KSM-specific behavior
for FAULT_FLAG_UNSHARE.

In my setup (AMD Ryzen 9 3900X), running the KSM selftest to test
unmerge performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048),
this results in a performance degradation of ~4% -- 5% (old: ~5250
MiB/s, new: ~5010 MiB/s). I don't think that we particularly care about
that performance drop when unmerging. If it ever turns out to be an
actual performance issue, we can think about a better alternative for
FAULT_FLAG_UNSHARE -- let's just keep it simple for now.

Acked-by: Peter Xu
Signed-off-by: David Hildenbrand
---
 mm/ksm.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index c19fcca9bc03..b884a22f3c3c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -440,26 +440,27 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 	vm_fault_t ret = 0;
 
 	do {
+		bool ksm_page = false;
+
 		cond_resched();
 		page = follow_page(vma, addr,
 				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
 		if (IS_ERR_OR_NULL(page))
 			break;
 		if (PageKsm(page))
-			ret = handle_mm_fault(vma, addr,
-					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
-					      NULL);
-		else
-			ret = VM_FAULT_WRITE;
+			ksm_page = true;
 		put_page(page);
-	} while (!(ret & (VM_FAULT_WRITE | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
+
+		if (!ksm_page)
+			return 0;
+		ret = handle_mm_fault(vma, addr,
+				      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+				      NULL);
+	} while (!(ret & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
 	/*
-	 * We must loop because handle_mm_fault() may back out if there's
-	 * any difficulty e.g. if pte accessed bit gets updated concurrently.
-	 *
-	 * VM_FAULT_WRITE is what we have been hoping for: it indicates that
-	 * COW has been broken, even if the vma does not permit VM_WRITE;
-	 * but note that a concurrent fault might break PageKsm for us.
+	 * We must loop until we no longer find a KSM page because
+	 * handle_mm_fault() may back out if there's any difficulty e.g. if
+	 * pte accessed bit gets updated concurrently.
 	 *
 	 * VM_FAULT_SIGBUS could occur if we race with truncation of the
 	 * backing file, which also invalidates anonymous pages: that's
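
For readability, here is roughly how break_ksm() reads with the hunk above
applied. This is a sketch assembled from the diff for illustration, not a
verbatim copy of the resulting mm/ksm.c; the declarations at the top and
the error mapping after the loop are reproduced from the surrounding
(unchanged) function as I remember it:

static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
{
	struct page *page;
	vm_fault_t ret = 0;

	do {
		bool ksm_page = false;

		cond_resched();
		/* Look up the page mapped at addr, waiting for migration entries. */
		page = follow_page(vma, addr,
				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
		if (IS_ERR_OR_NULL(page))
			break;
		if (PageKsm(page))
			ksm_page = true;
		put_page(page);

		/*
		 * Not (or no longer) a KSM page: either there was nothing to
		 * do, or a previous iteration successfully broke COW.
		 */
		if (!ksm_page)
			return 0;
		/* Trigger a write fault to break COW on the shared KSM page. */
		ret = handle_mm_fault(vma, addr,
				      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
				      NULL);
	} while (!(ret & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));

	/* Unchanged by this patch: only OOM is reported to the caller. */
	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
}

The sketch makes the new control flow explicit: success is no longer
deduced from a VM_FAULT_WRITE return value; instead, the follow_page()
lookup at the top of the next iteration verifies that the KSM page is
gone.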