From patchwork Wed Jan 11 08:31:57 2023
X-Patchwork-Submitter: Jingbo Xu
X-Patchwork-Id: 41821
From: Jingbo Xu <jefflexu@linux.alibaba.com>
To: xiang@kernel.org, chao@kernel.org, linux-erofs@lists.ozlabs.org
Cc: huyue2@coolpad.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v2 6/7] erofs: add helper checking if page cache sharing shall be enabled
Date: Wed, 11 Jan 2023 16:31:57 +0800
Message-Id: <20230111083158.23462-7-jefflexu@linux.alibaba.com>
In-Reply-To: <20230111083158.23462-1-jefflexu@linux.alibaba.com>
References: <20230111083158.23462-1-jefflexu@linux.alibaba.com>

Erofs supports chunk-based deduplication to reduce disk usage. Furthermore,
we can make inodes share the page cache of deduplicated chunks to reduce
memory usage. This is especially useful in container scenarios, since
deduplication is essential for container images.

This can be achieved by managing the page cache of deduplicated chunks in
the blob's address space. In this way, all inodes sharing a deduplicated
chunk refer to the same page cache in the blob's address space.

So far there are some restrictions on enabling this feature.

First, the page cache sharing feature also supports .mmap(). The reverse
mapping requires that a vma cannot be shared among inodes and can be
linked to only one inode. Since the vma is ultimately linked to the blob's
address space when page cache sharing is enabled, this reverse-mapping
restriction effectively requires that the mapped file area cannot be
mapped to multiple blobs. Thus page cache sharing can only be enabled for
files mapped to a single blob.

The chunk-based data layout guarantees that a chunk will not cross the
device (blob) boundary. Thus, with the chunk-based data layout, files
smaller than the chunk size are guaranteed to be mapped to a single blob.
As the chunk size is tunable on a per-file basis, this restriction can be
relaxed at image-building time.
As long as we ensure that the file cannot be deduplicated, the file's
chunk size can be set to a reasonable value larger than the file size, so
that the page cache sharing feature can be enabled on this file later.

The second restriction is that EROFS_BLKSIZ must be a multiple of
PAGE_SIZE to avoid data leakage. Otherwise, unrelated data may be exposed
at the end of the last page, since a file's data is arranged in units of
EROFS_BLKSIZ in the image.

Considering all these restrictions, add a helper checking whether page
cache sharing shall be enabled for a specific file.

Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/internal.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index 7c6a7a2d9acf..adf6be08b47c 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -368,6 +368,29 @@ static inline unsigned int erofs_inode_datalayout(unsigned int value)
 			      EROFS_I_DATALAYOUT_BITS);
 }
 
+static inline bool erofs_can_share_page(struct inode *inode)
+{
+	struct erofs_inode *vi = EROFS_I(inode);
+	struct erofs_sb_info *sbi = EROFS_SB(inode->i_sb);
+
+	/* enable page cache sharing only in share domain mode */
+	if (!erofs_is_fscache_mode(inode->i_sb) || !sbi->domain_id)
+		return false;
+
+	if (vi->datalayout != EROFS_INODE_CHUNK_BASED)
+		return false;
+
+	/* avoid crossing multiple devices/blobs */
+	if (inode->i_size > 1UL << vi->chunkbits)
+		return false;
+
+	/* avoid data leakage in mmap routine */
+	if (EROFS_BLKSIZ % PAGE_SIZE)
+		return false;
+
+	return true;
+}
+
 /*
  * Different from grab_cache_page_nowait(), reclaiming is never triggered
  * when allocating new pages.