Message ID | 20230621164557.3510324-4-willy@infradead.org |
---|---|
State | New |
Headers |
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org, linux-afs@lists.infradead.org, linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH 03/13] scatterlist: Add sg_set_folio()
Date: Wed, 21 Jun 2023 17:45:47 +0100
Message-Id: <20230621164557.3510324-4-willy@infradead.org>
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>
References: <20230621164557.3510324-1-willy@infradead.org>
Series | Remove pagevecs |
Commit Message
Matthew Wilcox
June 21, 2023, 4:45 p.m. UTC
This wrapper for sg_set_page() lets drivers add folios to a scatterlist
more easily. We could, perhaps, do better by using a different page
in the folio if offset is larger than UINT_MAX, but let's hope we get
a better data structure than this before we need to care about such
large folios.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/scatterlist.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
Comments
On 2023/6/22 0:45, Matthew Wilcox (Oracle) wrote:
> This wrapper for sg_set_page() lets drivers add folios to a scatterlist
> more easily. We could, perhaps, do better by using a different page
> in the folio if offset is larger than UINT_MAX, but let's hope we get
> a better data structure than this before we need to care about such
> large folios.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> [the full patch was quoted here; see the diff at the bottom of this page]

https://elixir.bootlin.com/linux/latest/source/lib/scatterlist.c#L451

Does the following function have a folio version?

"
int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
                struct page **pages, unsigned int n_pages, unsigned int offset,
                unsigned long size, unsigned int max_segment,
                unsigned int left_pages, gfp_t gfp_mask)
"

Thanks a lot.
Zhu Yanjun
On Sun, Jul 30, 2023 at 07:01:26PM +0800, Zhu Yanjun wrote:
> Does the following function have a folio version?
>
> "
> int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
>                 struct page **pages, unsigned int n_pages, unsigned int offset,
>                 unsigned long size, unsigned int max_segment,
>                 unsigned int left_pages, gfp_t gfp_mask)
> "

No -- I haven't needed to convert anything that uses
sg_alloc_append_table_from_pages() yet.  It doesn't look like it should
be _too_ hard to add a folio version.
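As a rough sketch of what such a folio version might look like (purely hypothetical: the name sg_alloc_append_table_from_folios() and everything below are illustrative and not part of this patch or the kernel), one simple approach would be to expand each folio into its constituent pages and reuse the existing page-based helper:

/* Hypothetical sketch only -- not in the kernel tree. */
static int sg_alloc_append_table_from_folios(struct sg_append_table *sgt_append,
                struct folio **folios, unsigned int n_folios,
                unsigned int offset, unsigned long size,
                unsigned int max_segment, unsigned int left_pages,
                gfp_t gfp_mask)
{
        struct page **pages;
        unsigned int i, j, n_pages = 0;
        int ret;

        /* Count how many base pages the folios cover. */
        for (i = 0; i < n_folios; i++)
                n_pages += folio_nr_pages(folios[i]);

        pages = kvmalloc_array(n_pages, sizeof(*pages), gfp_mask);
        if (!pages)
                return -ENOMEM;

        /* Expand each folio into its constituent pages... */
        for (i = 0, j = 0; i < n_folios; i++) {
                unsigned int k, n = folio_nr_pages(folios[i]);

                for (k = 0; k < n; k++)
                        pages[j++] = folio_page(folios[i], k);
        }

        /* ...and let the existing page-based helper do the real work. */
        ret = sg_alloc_append_table_from_pages(sgt_append, pages, n_pages,
                                               offset, size, max_segment,
                                               left_pages, gfp_mask);
        kvfree(pages);
        return ret;
}

A native implementation would presumably walk the folios directly and merge contiguous ranges rather than materialise a temporary page array; the sketch only shows that the interface maps over naturally.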
On 2023/7/30 19:18, Matthew Wilcox wrote:
> On Sun, Jul 30, 2023 at 07:01:26PM +0800, Zhu Yanjun wrote:
>> Does the following function have a folio version?
>>
>> "
>> int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
>>                 struct page **pages, unsigned int n_pages, unsigned int offset,
>>                 unsigned long size, unsigned int max_segment,
>>                 unsigned int left_pages, gfp_t gfp_mask)
>> "
>
> No -- I haven't needed to convert anything that uses
> sg_alloc_append_table_from_pages() yet.  It doesn't look like it should
> be _too_ hard to add a folio version.

This function is used in many places, so it needs a folio version.

Another question: once folios are in use, I would like to know what the
performance is. How should I run tests to measure it?

Thanks a lot.
Zhu Yanjun
On Sun, Jul 30, 2023 at 09:57:06PM +0800, Zhu Yanjun wrote:
> This function is used in many places, so it needs a folio version.

It's not used in very many places.  But in the first place I see it used
(drivers/infiniband/core/umem.c), you can't do a straightforward folio
conversion:

                pinned = pin_user_pages_fast(cur_base,
                                          min_t(unsigned long, npages,
                                                PAGE_SIZE /
                                                sizeof(struct page *)),
                                          gup_flags, page_list);
...
                ret = sg_alloc_append_table_from_pages(
                        &umem->sgt_append, page_list, pinned, 0,
                        pinned << PAGE_SHIFT, ib_dma_max_seg_size(device),
                        npages, GFP_KERNEL);

That can't be converted to folios.  The GUP might start in the middle of
the folio, and we have no way to communicate that.

This particular usage really needs the phyr work that Jason is doing so
we can efficiently communicate physically contiguous ranges from GUP to
sg.

> Another question: once folios are in use, I would like to know what the
> performance is. How should I run tests to measure it?

You know what you're working on ... I wouldn't know how best to test
your code.
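To illustrate the "GUP can start in the middle of a folio" point, here is a hypothetical fragment (not code from umem.c or this patch): each pinned entry in page_list may be a tail page of a large folio, so a conversion would have to recover the in-folio offset itself and group contiguous pages by hand, which the single offset argument of sg_alloc_append_table_from_pages() cannot express:

/* Hypothetical illustration only; none of this exists in umem.c. */
struct folio *folio = page_folio(page_list[0]);
size_t offset = folio_page_idx(folio, page_list[0]) * PAGE_SIZE;

/* One entry per pinned page -- losing the batching the append API provides. */
sg_set_folio(sg, folio, PAGE_SIZE, offset);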
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index ec46d8e8e49d..77df3d7b18a6 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -141,6 +141,30 @@ static inline void sg_set_page(struct scatterlist *sg, struct page *page,
 	sg->length = len;
 }
 
+/**
+ * sg_set_folio - Set sg entry to point at given folio
+ * @sg:		 SG entry
+ * @folio:	 The folio
+ * @len:	 Length of data
+ * @offset:	 Offset into folio
+ *
+ * Description:
+ *   Use this function to set an sg entry pointing at a folio, never assign
+ *   the folio directly.  We encode sg table information in the lower bits
+ *   of the folio pointer.  See sg_page() for looking up the page belonging
+ *   to an sg entry.
+ *
+ **/
+static inline void sg_set_folio(struct scatterlist *sg, struct folio *folio,
+		size_t len, size_t offset)
+{
+	WARN_ON_ONCE(len > UINT_MAX);
+	WARN_ON_ONCE(offset > UINT_MAX);
+	sg_assign_page(sg, &folio->page);
+	sg->offset = offset;
+	sg->length = len;
+}
+
 static inline struct page *sg_page(struct scatterlist *sg)
 {
 #ifdef CONFIG_DEBUG_SG
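For context, a hedged usage sketch of the new helper: the driver function and folio variables below are invented for illustration; only sg_init_table(), folio_size() and the sg_set_folio() added by this patch are real.

/* Illustrative sketch only; the driver context and names are invented. */
static void example_fill_sg(struct scatterlist *sgl,
			    struct folio *src, struct folio *dst, size_t len)
{
	/* Two-entry table; sg_init_table() also marks the last entry. */
	sg_init_table(sgl, 2);

	/* 'len' bytes from the start of the source folio. */
	sg_set_folio(&sgl[0], src, len, 0);

	/*
	 * 'len' bytes ending at the end of the destination folio
	 * (assumes len <= folio_size(dst)).
	 */
	sg_set_folio(&sgl[1], dst, len, folio_size(dst) - len);
}

Before this patch, each call would have had to pass a specific struct page (for example folio_page(src, 0)) to sg_set_page() and compute page-relative offsets by hand.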