From patchwork Wed Mar 29 07:32:16 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 76414
From: Zhao Liu
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie, Daniel Vetter, Matthew Auld, Thomas Hellström, Nirmoy Das, Maarten Lankhorst, Chris Wilson, Christian König, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Cc: Ira Weiny, Fabio M. De Francesco, Zhenyu Wang, Zhao Liu, Dave Hansen
Subject: [PATCH v2 5/9] drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_coherency.c
Date: Wed, 29 Mar 2023 15:32:16 +0800
Message-Id: <20230329073220.3982460-6-zhao1.liu@linux.intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230329073220.3982460-1-zhao1.liu@linux.intel.com>
References: <20230329073220.3982460-1-zhao1.liu@linux.intel.com>

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of kmap_local_page()[1], so this patch converts the calls from kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local mappings don't disable page faults or preemption (kmap_atomic() disables preemption in the !PREEMPT_RT case; on PREEMPT_RT it only disables migration). With kmap_local_page(), we can avoid the often unwanted side effects of unnecessarily disabling page faults or preemption.

In drm/i915/gem/selftests/i915_gem_coherency.c, the functions cpu_set() and cpu_get() mainly use the mapping to flush the cache and access the value. There are two reasons why cpu_set() and cpu_get() don't need to disable page faults and preemption for the mapping:

1. The flush operation is safe. cpu_set() and cpu_get() call drm_clflush_virt_range(), which flushes with CLFLUSHOPT or WBINVD. Since CLFLUSHOPT is global on x86 and WBINVD is called on each CPU in drm_clflush_virt_range(), the flush operation is global.

2. Any context switch caused by preemption or a page fault (a page fault may cause a sleep) doesn't affect the validity of the local mapping.

Therefore, cpu_set() and cpu_get() are functions where the use of kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
* Dropped the hot-plug-related description, since it has nothing to do with kmap_local_page().
* No code change since v1; added a description of the motivation for using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny

---

Suggested-by credits:
- Dave: Referred to his explanation about cache flush.
- Ira: Referred to his task document, review comments, and explanation about cache flush.
- Fabio: Referred to his boilerplate commit message and his description of why kmap_local_page() should be preferred.
---
 .../gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 3bef1beec7cb..beeb3e12eccc 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -24,7 +24,6 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 {
 	unsigned int needs_clflush;
 	struct page *page;
-	void *map;
 	u32 *cpu;
 	int err;
 
@@ -34,8 +33,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 		goto out;
 
 	page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
-	map = kmap_atomic(page);
-	cpu = map + offset_in_page(offset);
+	cpu = kmap_local_page(page) + offset_in_page(offset);
 
 	if (needs_clflush & CLFLUSH_BEFORE)
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
@@ -45,7 +43,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 	if (needs_clflush & CLFLUSH_AFTER)
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
 
-	kunmap_atomic(map);
+	kunmap_local(cpu);
 	i915_gem_object_finish_access(ctx->obj);
 
 out:
@@ -57,7 +55,6 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
 {
 	unsigned int needs_clflush;
 	struct page *page;
-	void *map;
 	u32 *cpu;
 	int err;
 
@@ -67,15 +64,14 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
 		goto out;
 
 	page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
-	map = kmap_atomic(page);
-	cpu = map + offset_in_page(offset);
+	cpu = kmap_local_page(page) + offset_in_page(offset);
 
 	if (needs_clflush & CLFLUSH_BEFORE)
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
 
 	*v = *cpu;
 
-	kunmap_atomic(map);
+	kunmap_local(cpu);
 	i915_gem_object_finish_access(ctx->obj);
 
 out: