From patchwork Mon May 1 23:11:47 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 89199
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
    Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
    David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Christian Brauner, Richard Cochran, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, Oleg Nesterov, Jason Gunthorpe, John Hubbard,
    Jan Kara, Kirill A. Shutemov, Pavel Begunkov, Mika Penttila,
    David Hildenbrand, Dave Chinner, Theodore Ts'o, Peter Xu, Lorenzo Stoakes
Subject: [PATCH v6 1/3] mm/mmap: separate writenotify and dirty tracking logic
Date: Tue, 2 May 2023 00:11:47 +0100
Message-Id: <72a90af5a9e4445a33ae44efa710f112c2694cb1.1682981880.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.40.1

vma_wants_writenotify() is specifically intended for setting PTE page table
flags, accounting for existing PTE flag state and whether the PTEs might
already be read-only, while mixing this check with a check of whether the
filesystem performs dirty tracking.

Separate out the notions of dirty tracking and PTE write notify checking so
that the dirty tracking check can be invoked from elsewhere.

Note that this change introduces a very small duplicate check of the
separated out vm_ops_needs_writenotify(). This is necessary to avoid making
vma_needs_dirty_tracking() needlessly complicated (e.g. passing a
check_writenotify flag or having it assume this check was already
performed). The check is small enough that this does not seem too egregious.
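As a purely illustrative sketch (not part of this patch - the function name
below is hypothetical, and the real call site arrives in patch 2 of this
series), a GUP-style caller would consume the new helper roughly as follows:

/* Hypothetical caller, for illustration only -- not added by this patch. */
static int hypothetical_longterm_pin_check(struct vm_area_struct *vma,
					   unsigned int gup_flags)
{
	/*
	 * A long-term writable pin of a mapping whose folios require dirty
	 * tracking is refused; everything else is allowed through.
	 */
	if ((gup_flags & FOLL_WRITE) && (gup_flags & FOLL_LONGTERM) &&
	    vma_needs_dirty_tracking(vma))
		return -EFAULT;

	return 0;
}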
Signed-off-by: Lorenzo Stoakes
Reviewed-by: John Hubbard
Reviewed-by: Mika Penttilä
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
---
 include/linux/mm.h |  1 +
 mm/mmap.c          | 36 +++++++++++++++++++++++++++---------
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27ce77080c79..7b1d4e7393ef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2422,6 +2422,7 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
 #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
 					    MM_CP_UFFD_WP_RESOLVE)
 
+bool vma_needs_dirty_tracking(struct vm_area_struct *vma);
 int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
 static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
 {
diff --git a/mm/mmap.c b/mm/mmap.c
index 5522130ae606..295c5f2e9bd9 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1475,6 +1475,31 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
 }
 #endif /* __ARCH_WANT_SYS_OLD_MMAP */
 
+/* Do VMA operations imply write notify is required? */
+static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
+{
+	return vm_ops && (vm_ops->page_mkwrite || vm_ops->pfn_mkwrite);
+}
+
+/*
+ * Does this VMA require the underlying folios to have their dirty state
+ * tracked?
+ */
+bool vma_needs_dirty_tracking(struct vm_area_struct *vma)
+{
+	/* Does the filesystem need to be notified? */
+	if (vm_ops_needs_writenotify(vma->vm_ops))
+		return true;
+
+	/* Specialty mapping? */
+	if (vma->vm_flags & VM_PFNMAP)
+		return false;
+
+	/* Can the mapping track the dirty pages? */
+	return vma->vm_file && vma->vm_file->f_mapping &&
+		mapping_can_writeback(vma->vm_file->f_mapping);
+}
+
 /*
  * Some shared mappings will want the pages marked read-only
  * to track write events. If so, we'll downgrade vm_page_prot
@@ -1484,14 +1509,13 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
 int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
 {
 	vm_flags_t vm_flags = vma->vm_flags;
-	const struct vm_operations_struct *vm_ops = vma->vm_ops;
 
 	/* If it was private or non-writable, the write bit is already clear */
 	if ((vm_flags & (VM_WRITE|VM_SHARED)) != ((VM_WRITE|VM_SHARED)))
 		return 0;
 
 	/* The backer wishes to know when pages are first written to? */
-	if (vm_ops && (vm_ops->page_mkwrite || vm_ops->pfn_mkwrite))
+	if (vm_ops_needs_writenotify(vma->vm_ops))
 		return 1;
 
 	/* The open routine did something to the protections that pgprot_modify
@@ -1511,13 +1535,7 @@ int vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
 	if (userfaultfd_wp(vma))
 		return 1;
 
-	/* Specialty mapping? */
-	if (vm_flags & VM_PFNMAP)
-		return 0;
-
-	/* Can the mapping track the dirty pages? */
-	return vma->vm_file && vma->vm_file->f_mapping &&
-		mapping_can_writeback(vma->vm_file->f_mapping);
+	return vma_needs_dirty_tracking(vma);
 }
 
 /*

From patchwork Mon May 1 23:11:48 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 89201
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
    Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
    David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Christian Brauner, Richard Cochran, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, Oleg Nesterov, Jason Gunthorpe, John Hubbard,
    Jan Kara, Kirill A. Shutemov, Pavel Begunkov, Mika Penttila,
    David Hildenbrand, Dave Chinner, Theodore Ts'o, Peter Xu, Lorenzo Stoakes
Subject: [PATCH v6 2/3] mm/gup: disallow FOLL_LONGTERM GUP-nonfast writing to file-backed mappings
Date: Tue, 2 May 2023 00:11:48 +0100
X-Mailer: git-send-email 2.40.1

Writing to file-backed mappings which require folio dirty tracking using
GUP is a fundamentally broken operation, as kernel write access to GUP
mappings does not adhere to the semantics expected by a file system.

A GUP caller uses the direct mapping to access the folio, which does not
cause write notify to trigger, nor does it enforce that the caller marks
the folio dirty.

The problem arises when, after an initial write to the folio, writeback
results in the folio being cleaned and then the caller, via the GUP
interface, writes to the folio again.

As a result of the use of this secondary, direct mapping to the folio no
write notify will occur, and if the caller does mark the folio dirty, this
will be done unexpectedly.

For example, consider the following scenario:

1. A folio is written to via GUP which write-faults the memory, notifying
   the file system and dirtying the folio.
2. Later, writeback is triggered, resulting in the folio being cleaned and
   the PTE being marked read-only.
3. The GUP caller writes to the folio, as it is mapped read/write via the
   direct mapping.
4. The GUP caller, now done with the page, unpins it and sets it dirty
   (though it does not have to).

This results in both data being written to a folio without writenotify,
and the folio being dirtied unexpectedly (if the caller decides to do so).

This issue was first reported by Jan Kara [1] in 2018, where the problem
resulted in file system crashes.

This is only relevant when the mappings are file-backed and the underlying
file system requires folio dirty tracking.
File systems which do not, such as shmem or hugetlb, are not at risk and
can therefore be written to without issue.

Unfortunately this limitation of GUP has been present for some time and
requires future rework of the GUP API in order to provide correct write
access to such mappings.

However, for the time being we introduce this check to prevent the most
egregious case of this occurring: use of the FOLL_LONGTERM pin. These
mappings are considerably more likely to be written to after folios are
cleaned and thus simply must not be permitted to do so.

This patch changes only the slow-path GUP functions; a following patch
adapts the GUP-fast path along similar lines.

[1]: https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz/

Suggested-by: Jason Gunthorpe
Signed-off-by: Lorenzo Stoakes
Reviewed-by: John Hubbard
Reviewed-by: Mika Penttilä
Reviewed-by: Jan Kara
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 41 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index ff689c88a357..0f09dec0906c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -959,16 +959,51 @@ static int faultin_page(struct vm_area_struct *vma,
 	return 0;
 }
 
+/*
+ * Writing to file-backed mappings which require folio dirty tracking using GUP
+ * is a fundamentally broken operation, as kernel write access to GUP mappings
+ * do not adhere to the semantics expected by a file system.
+ *
+ * Consider the following scenario:-
+ *
+ * 1. A folio is written to via GUP which write-faults the memory, notifying
+ *    the file system and dirtying the folio.
+ * 2. Later, writeback is triggered, resulting in the folio being cleaned and
+ *    the PTE being marked read-only.
+ * 3. The GUP caller writes to the folio, as it is mapped read/write via the
+ *    direct mapping.
+ * 4. The GUP caller, now done with the page, unpins it and sets it dirty
+ *    (though it does not have to).
+ *
+ * This results in both data being written to a folio without writenotify, and
+ * the folio being dirtied unexpectedly (if the caller decides to do so).
+ */
+static bool writeable_file_mapping_allowed(struct vm_area_struct *vma,
+					   unsigned long gup_flags)
+{
+	/* If we aren't pinning then no problematic write can occur. */
+	if (!(gup_flags & (FOLL_GET | FOLL_PIN)))
+		return true;
+
+	/* We limit this check to the most egregious case - a long term pin. */
+	if (!(gup_flags & FOLL_LONGTERM))
+		return true;
+
+	/* If the VMA requires dirty tracking then GUP will be problematic. */
+	return vma_needs_dirty_tracking(vma);
+}
+
 static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 {
 	vm_flags_t vm_flags = vma->vm_flags;
 	int write = (gup_flags & FOLL_WRITE);
 	int foreign = (gup_flags & FOLL_REMOTE);
+	bool vma_anon = vma_is_anonymous(vma);
 
 	if (vm_flags & (VM_IO | VM_PFNMAP))
 		return -EFAULT;
 
-	if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
+	if ((gup_flags & FOLL_ANON) && !vma_anon)
 		return -EFAULT;
 
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
@@ -978,6 +1013,10 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 		return -EFAULT;
 
 	if (write) {
+		if (!vma_anon &&
+		    !writeable_file_mapping_allowed(vma, gup_flags))
+			return -EFAULT;
+
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
 				return -EFAULT;

From patchwork Mon May 1 23:11:49 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 89197
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: Jason Gunthorpe, Jens Axboe, Matthew Wilcox, Dennis Dalessandro,
    Leon Romanovsky, Christian Benvenuti, Nelson Escobar, Bernard Metzler,
    Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter,
    Bjorn Topel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
    David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Christian Brauner, Richard Cochran, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, linux-fsdevel@vger.kernel.org,
    linux-perf-users@vger.kernel.org, netdev@vger.kernel.org,
    bpf@vger.kernel.org, Oleg Nesterov, Jason Gunthorpe, John Hubbard,
    Jan Kara, Kirill A. Shutemov, Pavel Begunkov, Mika Penttila,
    David Hildenbrand, Dave Chinner, Theodore Ts'o, Peter Xu, Lorenzo Stoakes
Subject: [PATCH v6 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings
Date: Tue, 2 May 2023 00:11:49 +0100
X-Mailer: git-send-email 2.40.1

Writing to file-backed dirty-tracked mappings via GUP is inherently broken
as we cannot rule out folios being cleaned and then a GUP user writing to
them again and possibly marking them dirty unexpectedly.

This is especially egregious for long-term mappings (as indicated by the
use of the FOLL_LONGTERM flag), so we disallow this case in GUP-fast as we
have already done in the slow path.

We have access to less information in the fast path as we cannot examine
the VMA containing the mapping; however, we can determine whether the
folio is anonymous and then whitelist known-good mappings - specifically
hugetlb and shmem mappings.

While we obtain a stable folio for this check, the mapping might not be
stable, as a truncate could nullify it at any time. Since truncation
requires the mapping to be zapped, we can synchronise against a TLB
shootdown operation.

For some architectures TLB shootdown is synchronised by IPI, against which
we are protected as the GUP-fast operation is performed with interrupts
disabled. However, other architectures which specify
CONFIG_MMU_GATHER_RCU_TABLE_FREE use an RCU lock for this operation.

In these instances, we acquire an RCU lock while performing our checks. If
we cannot get a stable mapping, we fall back to the slow path, as
otherwise we'd have to walk the page tables again and it's simpler and
more effective to just fall back.
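To make the rejected case concrete, below is a minimal userspace sketch
(purely illustrative, not part of this patch; it assumes liburing is
installed, uses a hypothetical /tmp path, and omits error handling) of the
kind of long-term writable pin of a file-backed mapping that this series
refuses - io_uring fixed-buffer registration is one such FOLL_LONGTERM pin:

/* Illustrative only: long-term pin of a shared file-backed mapping. */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	struct io_uring ring;
	struct iovec iov;
	int fd, ret;

	/*
	 * Assumes /tmp lives on a dirty-tracking filesystem such as ext4;
	 * on tmpfs (shmem) the registration would still be permitted.
	 */
	fd = open("/tmp/gup-test-file", O_RDWR | O_CREAT, 0600);
	ftruncate(fd, 4096);

	/* MAP_SHARED + file backing means folio dirty tracking is required. */
	iov.iov_base = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	iov.iov_len = 4096;

	io_uring_queue_init(8, &ring, 0);

	/*
	 * Registration takes a long-term writable pin on the mapped folios;
	 * with this series applied it is expected to fail rather than allow
	 * the problematic pin.
	 */
	ret = io_uring_register_buffers(&ring, &iov, 1);
	printf("register_buffers returned %d\n", ret);

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}

Anonymous, shmem-backed or hugetlb-backed buffers are unaffected and
continue to register successfully.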
It's important to note that there are no APIs allowing users to specify
FOLL_FAST_ONLY for a PUP-fast call, let alone in combination with
FOLL_LONGTERM, so we can always rely on the fact that if we fail to pin on
the fast path, the code will fall back to the slow path which can perform
the more thorough check.

Suggested-by: David Hildenbrand
Suggested-by: Kirill A. Shutemov
Signed-off-by: Lorenzo Stoakes
Reviewed-by: John Hubbard
---
 mm/gup.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 85 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 0f09dec0906c..431618048a03 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -95,6 +96,77 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
 	return folio;
 }
 
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
+static bool stabilise_mapping_rcu(struct folio *folio)
+{
+	struct address_space *mapping = READ_ONCE(folio->mapping);
+
+	rcu_read_lock();
+
+	return mapping == READ_ONCE(folio->mapping);
+}
+
+static void unlock_rcu(void)
+{
+	rcu_read_unlock();
+}
+#else
+static bool stabilise_mapping_rcu(struct folio *)
+{
+	return true;
+}
+
+static void unlock_rcu(void)
+{
+}
+#endif
+
+/*
+ * Used in the GUP-fast path to determine whether a FOLL_PIN | FOLL_LONGTERM |
+ * FOLL_WRITE pin is permitted for a specific folio.
+ *
+ * This assumes the folio is stable and pinned.
+ *
+ * Writing to pinned file-backed dirty tracked folios is inherently problematic
+ * (see comment describing the writeable_file_mapping_allowed() function). We
+ * therefore try to avoid the most egregious case of a long-term mapping doing
+ * so.
+ *
+ * This function cannot be as thorough as that one as the VMA is not available
+ * in the fast path, so instead we whitelist known good cases.
+ *
+ * The folio is stable, but the mapping might not be. When truncating for
+ * instance, a zap is performed which triggers TLB shootdown. IRQs are disabled
+ * so we are safe from an IPI, but some architectures use an RCU lock for this
+ * operation, so we acquire an RCU lock to ensure the mapping is stable.
+ */
+static bool folio_longterm_write_pin_allowed(struct folio *folio)
+{
+	bool ret;
+
+	/* hugetlb mappings do not require dirty tracking. */
+	if (folio_test_hugetlb(folio))
+		return true;
+
+	if (stabilise_mapping_rcu(folio)) {
+		struct address_space *mapping = folio_mapping(folio);
+
+		/*
+		 * Neither anonymous nor shmem-backed folios require
+		 * dirty tracking.
+		 */
+		ret = folio_test_anon(folio) ||
+			(mapping && shmem_mapping(mapping));
+	} else {
+		/* If the mapping is unstable, fallback to the slow path. */
+		ret = false;
+	}
+
+	unlock_rcu();
+
+	return ret;
+}
+
 /**
  * try_grab_folio() - Attempt to get or pin a folio.
  * @page:    pointer to page to be grabbed
@@ -123,6 +195,8 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
  */
 struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 {
+	bool is_longterm = flags & FOLL_LONGTERM;
+
 	if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
 		return NULL;
 
@@ -136,8 +210,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	 * right zone, so fail and let the caller fall back to the slow
 	 * path.
 	 */
-	if (unlikely((flags & FOLL_LONGTERM) &&
-		     !is_longterm_pinnable_page(page)))
+	if (unlikely(is_longterm && !is_longterm_pinnable_page(page)))
 		return NULL;
 
 	/*
@@ -148,6 +221,16 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
 	if (!folio)
 		return NULL;
 
+	/*
+	 * Can this folio be safely pinned? We need to perform this
+	 * check after the folio is stabilised.
+	 */
+	if ((flags & FOLL_WRITE) && is_longterm &&
+	    !folio_longterm_write_pin_allowed(folio)) {
+		folio_put_refs(folio, refs);
+		return NULL;
+	}
+
 	/*
 	 * When pinning a large folio, use an exact count to track it.
 	 *