From patchwork Mon Oct 17 09:37:22 2022
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 3341
From: Zhao Liu
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
	David Airlie, Daniel Vetter, Matthew Auld, Thomas Hellström,
	Nirmoy Das, Maarten Lankhorst, Chris Wilson, Christian König,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org
Cc: Ira Weiny, "Fabio M. De Francesco", Zhenyu Wang, Zhao Liu,
	Dave Hansen
Subject: [PATCH 6/9] drm/i915: Use kmap_local_page() in gem/selftests/i915_gem_context.c
Date: Mon, 17 Oct 2022 17:37:22 +0800
Message-Id: <20221017093726.2070674-7-zhao1.liu@linux.intel.com>
In-Reply-To: <20221017093726.2070674-1-zhao1.liu@linux.intel.com>
References: <20221017093726.2070674-1-zhao1.liu@linux.intel.com>

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1]. The main difference between atomic and local
mappings is that local mappings don't disable page faults or
preemption.
In drm/i915/gem/selftests/i915_gem_context.c, the functions cpu_fill()
and cpu_check() mainly use the mapping to flush the cache and to
check/assign values. There are two reasons why cpu_fill() and
cpu_check() don't need to disable page faults and preemption for the
mapping:

1. The flush operation is safe for CPU hotplug even when preemption is
   not disabled. cpu_fill() and cpu_check() call
   drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to flush.
   Since CLFLUSHOPT is global on x86 and WBINVD is issued on each CPU
   in drm_clflush_virt_range(), the flush operation is global, and any
   CPUs being added or removed are handled safely.

2. Any context switch caused by preemption or sleep (a page fault may
   cause a sleep) doesn't affect the validity of the local mapping.

Therefore, cpu_fill() and cpu_check() are functions where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
---
Suggested-by credits:
- Dave: Referred to his explanation about cache flush.
- Ira: Referred to his task document, review comments and explanation
  about cache flush.
- Fabio: Referred to his boilerplate commit message.
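As a general sketch, the conversion applied by this patch follows the
pattern below (illustrative fragment only, not compilable on its own;
`page` stands in for the page returned by i915_gem_object_get_page()):

```c
/* Before: kmap_atomic() disables page faults and preemption for the
 * lifetime of the mapping.
 */
u32 *map = kmap_atomic(page);
/* ... access map ... */
kunmap_atomic(map);

/* After: a local mapping stays valid across preemption and page
 * faults, but remains usable only in the thread that created it,
 * and nested mappings must be unmapped in reverse order.
 */
u32 *map = kmap_local_page(page);
/* ... access map ... */
kunmap_local(map);
```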
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index c6ad67b90e8a..736337f23f78 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -466,12 +466,12 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
 	for (n = 0; n < real_page_count(obj); n++) {
 		u32 *map;
 
-		map = kmap_atomic(i915_gem_object_get_page(obj, n));
+		map = kmap_local_page(i915_gem_object_get_page(obj, n));
 		for (m = 0; m < DW_PER_PAGE; m++)
 			map[m] = value;
 		if (!has_llc)
 			drm_clflush_virt_range(map, PAGE_SIZE);
-		kunmap_atomic(map);
+		kunmap_local(map);
 	}
 
 	i915_gem_object_finish_access(obj);
@@ -496,7 +496,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
 	for (n = 0; n < real_page_count(obj); n++) {
 		u32 *map;
 
-		map = kmap_atomic(i915_gem_object_get_page(obj, n));
+		map = kmap_local_page(i915_gem_object_get_page(obj, n));
 		if (needs_flush & CLFLUSH_BEFORE)
 			drm_clflush_virt_range(map, PAGE_SIZE);
 
@@ -522,7 +522,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
 	}
 
 out_unmap:
-	kunmap_atomic(map);
+	kunmap_local(map);
 	if (err)
 		break;
 	}