From patchwork Mon Oct 17 09:37:21 2022
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 3360
From: Zhao Liu
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
	David Airlie, Daniel Vetter, Matthew Auld, Thomas Hellström,
	Nirmoy Das, Maarten Lankhorst, Chris Wilson, Christian König,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org
Cc: Ira Weiny, "Fabio M. De Francesco", Zhenyu Wang, Zhao Liu,
	Dave Hansen
Subject: [PATCH 5/9] drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_coherency.c
Date: Mon, 17 Oct 2022 17:37:21 +0800
Message-Id: <20221017093726.2070674-6-zhao1.liu@linux.intel.com>
In-Reply-To: <20221017093726.2070674-1-zhao1.liu@linux.intel.com>
References: <20221017093726.2070674-1-zhao1.liu@linux.intel.com>

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1]. The main difference between atomic and local
mappings is that local mappings don't disable page faults or preemption.
In drm/i915/gem/selftests/i915_gem_coherency.c, the functions cpu_set()
and cpu_get() mainly use the mapping to flush the cache and to access
the value. There are two reasons why cpu_set() and cpu_get() don't need
to disable page faults and preemption around the mapping:

1. The flush operation is safe for CPU hotplug even when preemption is
   not disabled. cpu_set() and cpu_get() call drm_clflush_virt_range(),
   which uses CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global
   on x86 and WBINVD is issued on each CPU in drm_clflush_virt_range(),
   the flush operation is global, and CPUs being added or removed can
   be handled safely.

2. Any context switch caused by preemption or sleep (a page fault may
   cause a sleep) doesn't affect the validity of the local mapping.

Therefore, cpu_set() and cpu_get() are functions where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
---
Suggested-by credits:
- Dave: Referred to his explanation about cache flush.
- Ira: Referred to his task document, review comments and explanation
  about cache flush.
- Fabio: Referred to his boilerplate commit message.
---
 .../gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index a666d7e610f5..b12402c74424 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -24,7 +24,6 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 {
 	unsigned int needs_clflush;
 	struct page *page;
-	void *map;
 	u32 *cpu;
 	int err;
 
@@ -34,8 +33,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 		goto out;
 
 	page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
-	map = kmap_atomic(page);
-	cpu = map + offset_in_page(offset);
+	cpu = kmap_local_page(page) + offset_in_page(offset);
 
 	if (needs_clflush & CLFLUSH_BEFORE)
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
@@ -45,7 +43,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 	if (needs_clflush & CLFLUSH_AFTER)
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
 
-	kunmap_atomic(map);
+	kunmap_local(cpu);
 	i915_gem_object_finish_access(ctx->obj);
 
 out:
@@ -57,7 +55,6 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
 {
 	unsigned int needs_clflush;
 	struct page *page;
-	void *map;
 	u32 *cpu;
 	int err;
 
@@ -67,15 +64,14 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
 		goto out;
 
 	page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
-	map = kmap_atomic(page);
-	cpu = map + offset_in_page(offset);
+	cpu = kmap_local_page(page) + offset_in_page(offset);
 
 	if (needs_clflush & CLFLUSH_BEFORE)
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
 
 	*v = *cpu;
 
-	kunmap_atomic(map);
+	kunmap_local(cpu);
 	i915_gem_object_finish_access(ctx->obj);
 
 out: