Message ID | 20221123193519.3948105-1-tjmercier@google.com |
---|---|
State | New |
Series | dma-buf: A collection of typo and documentation fixes |
Commit Message
T.J. Mercier
Nov. 23, 2022, 7:35 p.m. UTC
I've been collecting these typo fixes for a while and it feels like
time to send them in.
Signed-off-by: T.J. Mercier <tjmercier@google.com>
---
drivers/dma-buf/dma-buf.c | 14 +++++++-------
include/linux/dma-buf.h | 6 +++---
2 files changed, 10 insertions(+), 10 deletions(-)
Comments
On 11/23/22 11:35, T.J. Mercier wrote:
> I've been collecting these typo fixes for a while and it feels like
> time to send them in.
>
> Signed-off-by: T.J. Mercier <tjmercier@google.com>

Reviewed-by: Randy Dunlap <rdunlap@infradead.org>

Thanks.
(although I would prefer to see CPU instead of cpu, but that's no reason
to hold up this patch)
On 23.11.22 20:35, T.J. Mercier wrote:
> I've been collecting these typo fixes for a while and it feels like
> time to send them in.
>
> Signed-off-by: T.J. Mercier <tjmercier@google.com>

Acked-by: Christian König <christian.koenig@amd.com>
Hi T.J,

On Wed, Nov 23, 2022 at 07:35:18PM +0000, T.J. Mercier wrote:
> I've been collecting these typo fixes for a while and it feels like
> time to send them in.
>
> Signed-off-by: T.J. Mercier <tjmercier@google.com>

Looks good to me.
Reviewed-by: Tommaso Merciai <tommaso.merciai@amarulasolutions.com>

Thanks & Regards,
Tommaso
On Thu, Nov 24, 2022 at 08:03:09AM +0100, Christian König wrote:
> On 23.11.22 20:35, T.J. Mercier wrote:
> > I've been collecting these typo fixes for a while and it feels like
> > time to send them in.
> >
> > Signed-off-by: T.J. Mercier <tjmercier@google.com>
>
> Acked-by: Christian König <christian.koenig@amd.com>

Will you also push this? I think tj doesn't have commit rights yet, and I
somehow can't see the patch locally (I guess it's stuck in moderation).
-Daniel
On 24.11.22 10:05, Daniel Vetter wrote:
> On Thu, Nov 24, 2022 at 08:03:09AM +0100, Christian König wrote:
>> On 23.11.22 20:35, T.J. Mercier wrote:
>>> I've been collecting these typo fixes for a while and it feels like
>>> time to send them in.
>>>
>>> Signed-off-by: T.J. Mercier <tjmercier@google.com>
>> Acked-by: Christian König <christian.koenig@amd.com>
> Will you also push this? I think tj doesn't have commit rights yet, and I
> somehow can't see the patch locally (I guess it's stuck in moderation).

I was just about to complain that this doesn't apply cleanly to
drm-misc-next.

Trivial problem, one of the typos was just removed by Dimitry a few
weeks ago.

I've fixed that up locally and pushed the result, but nevertheless
please make sure that DMA-buf patches are based on the drm branches.

Thanks,
Christian.
On Thu, Nov 24, 2022 at 1:43 AM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> On 24.11.22 10:05, Daniel Vetter wrote:
> > On Thu, Nov 24, 2022 at 08:03:09AM +0100, Christian König wrote:
> >> On 23.11.22 20:35, T.J. Mercier wrote:
> >>> I've been collecting these typo fixes for a while and it feels like
> >>> time to send them in.
> >>>
> >>> Signed-off-by: T.J. Mercier <tjmercier@google.com>
> >> Acked-by: Christian König <christian.koenig@amd.com>
> > Will you also push this? I think tj doesn't have commit rights yet, and I
> > somehow can't see the patch locally (I guess it's stuck in moderation).
>
> I was just about to complain that this doesn't apply cleanly to
> drm-misc-next.
>
> Trivial problem, one of the typos was just removed by Dimitry a few
> weeks ago.
>
> I've fixed that up locally and pushed the result, but nevertheless
> please make sure that DMA-buf patches are based on the drm branches.
>
I'm sorry, this was on top of a random spot in Linus's 6.1-rc5.
(84368d882b96 Merge tag 'soc-fixes-6.1-3') I'm not sure why I did that,
but I suspect it was after a fresh git pull. I have too many repos.

Thanks all for the reviews.

> Thanks,
> Christian.
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index dd0f83ee505b..614ccd208af4 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1141,7 +1141,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF);
  *
  * @dmabuf:	[in]	buffer which is moving
  *
- * Informs all attachmenst that they need to destroy and recreated all their
+ * Informs all attachments that they need to destroy and recreate all their
  * mappings.
  */
 void dma_buf_move_notify(struct dma_buf *dmabuf)
@@ -1159,11 +1159,11 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, DMA_BUF);
 /**
  * DOC: cpu access
  *
- * There are mutliple reasons for supporting CPU access to a dma buffer object:
+ * There are multiple reasons for supporting CPU access to a dma buffer object:
  *
  * - Fallback operations in the kernel, for example when a device is connected
  *   over USB and the kernel needs to shuffle the data around first before
- *   sending it away. Cache coherency is handled by braketing any transactions
+ *   sending it away. Cache coherency is handled by bracketing any transactions
  *   with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access()
  *   access.
  *
@@ -1190,7 +1190,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, DMA_BUF);
  * replace ION buffers mmap support was needed.
  *
  * There is no special interfaces, userspace simply calls mmap on the dma-buf
- * fd. But like for CPU access there's a need to braket the actual access,
+ * fd. But like for CPU access there's a need to bracket the actual access,
  * which is handled by the ioctl (DMA_BUF_IOCTL_SYNC). Note that
  * DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must
  * be restarted.
@@ -1264,10 +1264,10 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
  * preparations. Coherency is only guaranteed in the specified range for the
  * specified access direction.
  * @dmabuf:	[in]	buffer to prepare cpu access for.
- * @direction:	[in]	length of range for cpu access.
+ * @direction:	[in]	direction of access.
  *
  * After the cpu access is complete the caller should call
- * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
+ * dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls is
  * it guaranteed to be coherent with other DMA access.
  *
  * This function will also wait for any DMA transactions tracked through
@@ -1307,7 +1307,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_begin_cpu_access, DMA_BUF);
  * actions. Coherency is only guaranteed in the specified range for the
  * specified access direction.
  * @dmabuf:	[in]	buffer to complete cpu access for.
- * @direction:	[in]	length of range for cpu access.
+ * @direction:	[in]	direction of access.
  *
  * This terminates CPU access started with dma_buf_begin_cpu_access().
  *
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 71731796c8c3..1d61a4f6db35 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -330,7 +330,7 @@ struct dma_buf {
	 * @lock:
	 *
	 * Used internally to serialize list manipulation, attach/detach and
-	 * vmap/unmap. Note that in many cases this is superseeded by
+	 * vmap/unmap. Note that in many cases this is superseded by
	 * dma_resv_lock() on @resv.
	 */
	struct mutex lock;
@@ -365,7 +365,7 @@ struct dma_buf {
	 */
	const char *name;

-	/** @name_lock: Spinlock to protect name acces for read access. */
+	/** @name_lock: Spinlock to protect name access for read access. */
	spinlock_t name_lock;

	/**
@@ -402,7 +402,7 @@ struct dma_buf {
	 * anything the userspace API considers write access.
	 *
	 * - Drivers may just always add a write fence, since that only
-	 *   causes unecessarily synchronization, but no correctness issues.
+	 *   causes unnecessary synchronization, but no correctness issues.
	 *
	 * - Some drivers only expose a synchronous userspace API with no
	 *   pipelining across drivers. These do not set any fences for their
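The kernel-doc corrected above describes the in-kernel bracketing contract: CPU access is only coherent with other DMA access when it is wrapped in dma_buf_begin_cpu_access() and dma_buf_end_cpu_access() for the same direction. A minimal sketch of that pattern follows; the function name and the read-back scenario are made up for illustration, only the two dma-buf calls and their signatures come from the API itself.

/* Hypothetical kernel-side fallback path: bracket CPU reads of a dma-buf
 * with begin/end CPU access, as the kernel-doc above describes.
 */
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>

static int example_cpu_readback(struct dma_buf *dmabuf)
{
	int ret;

	/* Waits for pending DMA and prepares CPU caches for this direction. */
	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	/*
	 * ... CPU access through a mapping obtained elsewhere (for example
	 * via dma_buf_vmap()) would go here ...
	 */

	/* Coherency is only guaranteed when both calls bracket the access. */
	return dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
}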
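The mmap section of the same documentation requires userspace to bracket CPU access with DMA_BUF_IOCTL_SYNC and to restart the ioctl on -EAGAIN or -EINTR. A minimal userspace sketch, assuming dmabuf_fd and mapped_ptr were obtained beforehand; the helper and function names are hypothetical, while the ioctl, struct dma_buf_sync and the DMA_BUF_SYNC_* flags are the UAPI from <linux/dma-buf.h>. Error handling beyond the restart loop is omitted for brevity.

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Hypothetical helper: issue DMA_BUF_IOCTL_SYNC, restarting on EAGAIN/EINTR
 * as the kernel-doc requires.
 */
static int dmabuf_sync(int dmabuf_fd, __u64 flags)
{
	struct dma_buf_sync sync = { .flags = flags };
	int ret;

	do {
		ret = ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
	} while (ret == -1 && (errno == EAGAIN || errno == EINTR));

	return ret;
}

/* Usage: bracket a CPU write into the mmap'ed buffer. */
static void example_fill(int dmabuf_fd, void *mapped_ptr,
			 const void *data, size_t len)
{
	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE);
	memcpy(mapped_ptr, data, len);
	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE);
}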