From patchwork Wed Oct 19 14:44:30 2022
From: Niklas Schnelle
To: Matthew Rosato, iommu@lists.linux.dev, Joerg Roedel, Will Deacon,
	Robin Murphy, Jason Gunthorpe
Cc: Gerd Bayer, Pierre Morel, linux-s390@vger.kernel.org,
	borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com,
	gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
	svens@linux.ibm.com, linux-kernel@vger.kernel.org, Wenjia Zhang,
	Julian Ruess
Subject: [RFC 1/6] s390/ism: Set DMA coherent mask
Date: Wed, 19 Oct 2022 16:44:30 +0200
Message-Id: <20221019144435.369902-2-schnelle@linux.ibm.com>
In-Reply-To: <20221019144435.369902-1-schnelle@linux.ibm.com>
References: <20221019144435.369902-1-schnelle@linux.ibm.com>

A future change will convert the DMA API implementation from the
architecture-specific arch/s390/pci/pci_dma.c to the common code in
drivers/iommu/dma-iommu.c, which utilizes the same IOMMU hardware through
the s390-iommu driver. Unlike the s390-specific DMA API, this requires
devices to correctly call dma_set_coherent_mask() to be allowed to use
IOVAs >2^32 in dma_alloc_coherent(). This was, however, not done for ISM
devices, which require such addresses, since the DMA aperture for PCI
devices currently starts at 2^32 and all calls to dma_alloc_coherent()
would thus fail.
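For illustration only, not part of this patch: with the common dma-iommu
code the driver-side expectation is roughly the minimal sketch below. The
probe function name and the 4 KiB buffer size are made up for the example;
dma_set_coherent_mask(), DMA_BIT_MASK() and dma_alloc_coherent() are the
standard kernel APIs referred to above.

/*
 * Illustrative sketch only (hypothetical example_probe() and buffer size);
 * shows the probe-time pattern the commit message assumes.
 */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/pci.h>

static int example_probe(struct pci_dev *pdev)
{
	dma_addr_t dma_handle;
	void *buf;
	int rc;

	/* Declare that the device can use 64-bit coherent DMA addresses. */
	rc = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
	if (rc)
		return rc;

	/* With dma-iommu this allocation may now be backed by an IOVA >2^32. */
	buf = dma_alloc_coherent(&pdev->dev, 4096, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* ... use buf/dma_handle; dma_free_coherent() on teardown ... */
	return 0;
}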
Signed-off-by: Niklas Schnelle
---
 drivers/s390/net/ism_drv.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
index d34bb6ec1490..5890c32a9e1f 100644
--- a/drivers/s390/net/ism_drv.c
+++ b/drivers/s390/net/ism_drv.c
@@ -560,6 +560,10 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (ret)
 		goto err_resource;
 
+	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret)
+		goto err_resource;
+
 	dma_set_seg_boundary(&pdev->dev, SZ_1M - 1);
 	dma_set_max_seg_size(&pdev->dev, SZ_1M);
 	pci_set_master(pdev);

From patchwork Wed Oct 19 14:44:31 2022
From: Niklas Schnelle
To: Matthew Rosato, iommu@lists.linux.dev, Joerg Roedel, Will Deacon,
	Robin Murphy, Jason Gunthorpe
Cc: Gerd Bayer, Pierre Morel, linux-s390@vger.kernel.org,
	borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com,
	gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
	svens@linux.ibm.com, linux-kernel@vger.kernel.org, Wenjia Zhang,
	Julian Ruess
Subject: [RFC 2/6] s390/pci: prepare is_passed_through() for dma-iommu
Date: Wed, 19 Oct 2022 16:44:31 +0200
Message-Id: <20221019144435.369902-3-schnelle@linux.ibm.com>
In-Reply-To: <20221019144435.369902-1-schnelle@linux.ibm.com>
References: <20221019144435.369902-1-schnelle@linux.ibm.com>

With the IOMMU always controlled through the IOMMU driver, testing for
zdev->s390_domain is not a valid indication of the device being passed
through. Instead, test whether zdev->kzdev is set.

Reviewed-by: Pierre Morel
Signed-off-by: Niklas Schnelle
---
 arch/s390/pci/pci_event.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
index b9324ca2eb94..4ef5a6a1d618 100644
--- a/arch/s390/pci/pci_event.c
+++ b/arch/s390/pci/pci_event.c
@@ -59,9 +59,16 @@ static inline bool ers_result_indicates_abort(pci_ers_result_t ers_res)
 	}
 }
 
-static bool is_passed_through(struct zpci_dev *zdev)
+static bool is_passed_through(struct pci_dev *pdev)
 {
-	return zdev->s390_domain;
+	struct zpci_dev *zdev = to_zpci(pdev);
+	bool ret;
+
+	mutex_lock(&zdev->kzdev_lock);
+	ret = !!zdev->kzdev;
+	mutex_unlock(&zdev->kzdev_lock);
+
+	return ret;
 }
 
 static bool is_driver_supported(struct pci_driver *driver)
@@ -176,7 +183,7 @@ static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
 	}
 	pdev->error_state = pci_channel_io_frozen;
 
-	if (is_passed_through(to_zpci(pdev))) {
+	if (is_passed_through(pdev)) {
 		pr_info("%s: Cannot be recovered in the host because it is a pass-through device\n",
 			pci_name(pdev));
 		goto out_unlock;
@@ -239,7 +246,7 @@ static void zpci_event_io_failure(struct pci_dev *pdev, pci_channel_state_t es)
 	 * we will inject the error event and let the guest recover the device
 	 * itself.
 	 */
-	if (is_passed_through(to_zpci(pdev)))
+	if (is_passed_through(pdev))
 		goto out;
 	driver = to_pci_driver(pdev->dev.driver);
 	if (driver && driver->err_handler && driver->err_handler->error_detected)

From patchwork Wed Oct 19 14:44:32 2022
From: Niklas Schnelle
To: Matthew Rosato, iommu@lists.linux.dev, Joerg Roedel, Will Deacon,
	Robin Murphy, Jason Gunthorpe
Cc: Gerd Bayer, Pierre Morel, linux-s390@vger.kernel.org,
	borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com,
	gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
	svens@linux.ibm.com, linux-kernel@vger.kernel.org, Wenjia Zhang,
	Julian Ruess
Subject: [RFC 3/6] s390/pci: Use dma-iommu layer
Date: Wed, 19 Oct 2022 16:44:32 +0200
Message-Id: <20221019144435.369902-4-schnelle@linux.ibm.com>
In-Reply-To: <20221019144435.369902-1-schnelle@linux.ibm.com>
References: <20221019144435.369902-1-schnelle@linux.ibm.com>

While s390 already has a standard IOMMU driver and previous changes have
added I/O TLB flushing operations, this driver is currently only used for
user-space PCI access such as vfio-pci. For the DMA API, s390 instead
utilizes its own implementation in arch/s390/pci/pci_dma.c, which drives
the same hardware and shares some code, but requires a complex and fragile
hand-over between DMA API and IOMMU API use of a device and, despite the
code sharing, still leads to significant duplication and maintenance
effort. Let's utilize the common DMA API implementation from
drivers/iommu/dma-iommu.c instead, allowing us to get rid of
arch/s390/pci/pci_dma.c.

Signed-off-by: Niklas Schnelle
---
 .../admin-guide/kernel-parameters.txt |   9 +-
 arch/s390/include/asm/pci.h           |   7 -
 arch/s390/include/asm/pci_dma.h       | 120 +--
 arch/s390/pci/Makefile                |   2 +-
 arch/s390/pci/pci.c                   |  22 +-
 arch/s390/pci/pci_bus.c               |   5 -
 arch/s390/pci/pci_debug.c             |  13 +-
 arch/s390/pci/pci_dma.c               | 732 ------------------
 arch/s390/pci/pci_event.c             |   2 -
 arch/s390/pci/pci_sysfs.c             |  19 +-
 drivers/iommu/Kconfig                 |   3 +-
 drivers/iommu/s390-iommu.c            | 392 +++++++++-
 12 files changed, 409 insertions(+), 917 deletions(-)
 delete mode 100644 arch/s390/pci/pci_dma.c

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index a465d5242774..633312c4f800 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2154,7 +2154,7 @@ forcing Dual Address Cycle for PCI cards supporting greater than 32-bit addressing. - iommu.strict= [ARM64, X86] Configure TLB invalidation behaviour + iommu.strict= [ARM64, X86, S390] Configure TLB invalidation behaviour Format: { "0" | "1" } 0 - Lazy mode.
Request that DMA unmap operations use deferred @@ -5387,9 +5387,10 @@ s390_iommu= [HW,S390] Set s390 IOTLB flushing mode strict - With strict flushing every unmap operation will result in - an IOTLB flush. Default is lazy flushing before reuse, - which is faster. + With strict flushing every unmap operation will result + in an IOTLB flush. Default is lazy flushing before + reuse, which is faster. Deprecated, equivalent to + iommu.strict=1. s390_iommu_aperture= [KNL,S390] Specifies the size of the per device DMA address space diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h index b248694e0024..3f74f1cf37df 100644 --- a/arch/s390/include/asm/pci.h +++ b/arch/s390/include/asm/pci.h @@ -159,13 +159,6 @@ struct zpci_dev { unsigned long *dma_table; int tlb_refresh; - spinlock_t iommu_bitmap_lock; - unsigned long *iommu_bitmap; - unsigned long *lazy_bitmap; - unsigned long iommu_size; - unsigned long iommu_pages; - unsigned int next_bit; - struct iommu_device iommu_dev; /* IOMMU core handle */ char res_name[16]; diff --git a/arch/s390/include/asm/pci_dma.h b/arch/s390/include/asm/pci_dma.h index 91e63426bdc5..42d7cc4262ca 100644 --- a/arch/s390/include/asm/pci_dma.h +++ b/arch/s390/include/asm/pci_dma.h @@ -82,116 +82,16 @@ enum zpci_ioat_dtype { #define ZPCI_TABLE_VALID_MASK 0x20 #define ZPCI_TABLE_PROT_MASK 0x200 -static inline unsigned int calc_rtx(dma_addr_t ptr) -{ - return ((unsigned long) ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK; -} - -static inline unsigned int calc_sx(dma_addr_t ptr) -{ - return ((unsigned long) ptr >> ZPCI_ST_SHIFT) & ZPCI_INDEX_MASK; -} - -static inline unsigned int calc_px(dma_addr_t ptr) -{ - return ((unsigned long) ptr >> PAGE_SHIFT) & ZPCI_PT_MASK; -} - -static inline void set_pt_pfaa(unsigned long *entry, phys_addr_t pfaa) -{ - *entry &= ZPCI_PTE_FLAG_MASK; - *entry |= (pfaa & ZPCI_PTE_ADDR_MASK); -} - -static inline void set_rt_sto(unsigned long *entry, phys_addr_t sto) -{ - *entry &= ZPCI_RTE_FLAG_MASK; - *entry |= (sto & ZPCI_RTE_ADDR_MASK); - *entry |= ZPCI_TABLE_TYPE_RTX; -} - -static inline void set_st_pto(unsigned long *entry, phys_addr_t pto) -{ - *entry &= ZPCI_STE_FLAG_MASK; - *entry |= (pto & ZPCI_STE_ADDR_MASK); - *entry |= ZPCI_TABLE_TYPE_SX; -} - -static inline void validate_rt_entry(unsigned long *entry) -{ - *entry &= ~ZPCI_TABLE_VALID_MASK; - *entry &= ~ZPCI_TABLE_OFFSET_MASK; - *entry |= ZPCI_TABLE_VALID; - *entry |= ZPCI_TABLE_LEN_RTX; -} - -static inline void validate_st_entry(unsigned long *entry) -{ - *entry &= ~ZPCI_TABLE_VALID_MASK; - *entry |= ZPCI_TABLE_VALID; -} - -static inline void invalidate_pt_entry(unsigned long *entry) -{ - WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_INVALID); - *entry &= ~ZPCI_PTE_VALID_MASK; - *entry |= ZPCI_PTE_INVALID; -} - -static inline void validate_pt_entry(unsigned long *entry) -{ - WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID); - *entry &= ~ZPCI_PTE_VALID_MASK; - *entry |= ZPCI_PTE_VALID; -} - -static inline void entry_set_protected(unsigned long *entry) -{ - *entry &= ~ZPCI_TABLE_PROT_MASK; - *entry |= ZPCI_TABLE_PROTECTED; -} - -static inline void entry_clr_protected(unsigned long *entry) -{ - *entry &= ~ZPCI_TABLE_PROT_MASK; - *entry |= ZPCI_TABLE_UNPROTECTED; -} - -static inline int reg_entry_isvalid(unsigned long entry) -{ - return (entry & ZPCI_TABLE_VALID_MASK) == ZPCI_TABLE_VALID; -} - -static inline int pt_entry_isvalid(unsigned long entry) -{ - return (entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID; -} - -static inline unsigned long 
*get_rt_sto(unsigned long entry) -{ - if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX) - return phys_to_virt(entry & ZPCI_RTE_ADDR_MASK); - else - return NULL; - -} - -static inline unsigned long *get_st_pto(unsigned long entry) -{ - if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX) - return phys_to_virt(entry & ZPCI_STE_ADDR_MASK); - else - return NULL; -} - -/* Prototypes */ -void dma_free_seg_table(unsigned long); -unsigned long *dma_alloc_cpu_table(void); -void dma_cleanup_tables(unsigned long *); -unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr); -void dma_update_cpu_trans(unsigned long *entry, phys_addr_t page_addr, int flags); - -extern const struct dma_map_ops s390_pci_dma_ops; +struct zpci_iommu_ctrs { + atomic64_t mapped_pages; + atomic64_t unmapped_pages; + atomic64_t global_rpcits; + atomic64_t sync_map_rpcits; + atomic64_t sync_rpcits; +}; + +struct zpci_dev; +struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev); #endif diff --git a/arch/s390/pci/Makefile b/arch/s390/pci/Makefile index 5ae31ca9dd44..0547a10406e7 100644 --- a/arch/s390/pci/Makefile +++ b/arch/s390/pci/Makefile @@ -3,7 +3,7 @@ # Makefile for the s390 PCI subsystem. # -obj-$(CONFIG_PCI) += pci.o pci_irq.o pci_dma.o pci_clp.o pci_sysfs.o \ +obj-$(CONFIG_PCI) += pci.o pci_irq.o pci_clp.o pci_sysfs.o \ pci_event.o pci_debug.o pci_insn.o pci_mmio.o \ pci_bus.o pci_kvm_hook.o obj-$(CONFIG_PCI_IOV) += pci_iov.o diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c index ef38b1514c77..6b0fe8761509 100644 --- a/arch/s390/pci/pci.c +++ b/arch/s390/pci/pci.c @@ -124,7 +124,11 @@ int zpci_register_ioat(struct zpci_dev *zdev, u8 dmaas, WARN_ON_ONCE(iota & 0x3fff); fib.pba = base; - fib.pal = limit; + /* Work around off by one in ISM virt device */ + if (zdev->pft == 0x5 && limit > base) + fib.pal = limit + (1 << 12); + else + fib.pal = limit; fib.iota = iota | ZPCI_IOTA_RTTO_FLAG; fib.gd = zdev->gisa; cc = zpci_mod_fc(req, &fib, status); @@ -615,7 +619,6 @@ int pcibios_device_add(struct pci_dev *pdev) pdev->no_vf_scan = 1; pdev->dev.groups = zpci_attr_groups; - pdev->dev.dma_ops = &s390_pci_dma_ops; zpci_map_resources(pdev); for (i = 0; i < PCI_STD_NUM_BARS; i++) { @@ -789,8 +792,6 @@ int zpci_hot_reset_device(struct zpci_dev *zdev) if (zdev->dma_table) rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, virt_to_phys(zdev->dma_table), &status); - else - rc = zpci_dma_init_device(zdev); if (rc) { zpci_disable_device(zdev); return rc; @@ -915,11 +916,6 @@ int zpci_deconfigure_device(struct zpci_dev *zdev) if (zdev->zbus->bus) zpci_bus_remove_device(zdev, false); - if (zdev->dma_table) { - rc = zpci_dma_exit_device(zdev); - if (rc) - return rc; - } if (zdev_enabled(zdev)) { rc = zpci_disable_device(zdev); if (rc) @@ -968,8 +964,6 @@ void zpci_release_device(struct kref *kref) if (zdev->zbus->bus) zpci_bus_remove_device(zdev, false); - if (zdev->dma_table) - zpci_dma_exit_device(zdev); if (zdev_enabled(zdev)) zpci_disable_device(zdev); @@ -1159,10 +1153,6 @@ static int __init pci_base_init(void) if (rc) goto out_irq; - rc = zpci_dma_init(); - if (rc) - goto out_dma; - rc = clp_scan_pci_devices(); if (rc) goto out_find; @@ -1172,8 +1162,6 @@ static int __init pci_base_init(void) return 0; out_find: - zpci_dma_exit(); -out_dma: zpci_irq_exit(); out_irq: zpci_mem_exit(); diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c index 6a8da1b742ae..b15ad15999f8 100644 --- a/arch/s390/pci/pci_bus.c +++ b/arch/s390/pci/pci_bus.c @@ -49,11 +49,6 @@ static int 
zpci_bus_prepare_device(struct zpci_dev *zdev) rc = zpci_enable_device(zdev); if (rc) return rc; - rc = zpci_dma_init_device(zdev); - if (rc) { - zpci_disable_device(zdev); - return rc; - } } if (!zdev->has_resources) { diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c index ca6bd98eec13..60cec57a3907 100644 --- a/arch/s390/pci/pci_debug.c +++ b/arch/s390/pci/pci_debug.c @@ -53,9 +53,12 @@ static char *pci_fmt3_names[] = { }; static char *pci_sw_names[] = { - "Allocated pages", +/* TODO "Allocated pages", */ "Mapped pages", "Unmapped pages", + "Global RPCITs", + "Sync Map RPCITs", + "Sync RPCITs", }; static void pci_fmb_show(struct seq_file *m, char *name[], int length, @@ -69,10 +72,14 @@ static void pci_fmb_show(struct seq_file *m, char *name[], int length, static void pci_sw_counter_show(struct seq_file *m) { - struct zpci_dev *zdev = m->private; - atomic64_t *counter = &zdev->allocated_pages; + struct zpci_iommu_ctrs *ctrs = zpci_get_iommu_ctrs(m->private); + atomic64_t *counter; int i; + if (!ctrs) + return; + + counter = &ctrs->mapped_pages; for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++) seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i], atomic64_read(counter)); diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c deleted file mode 100644 index ea478d11fbd1..000000000000 --- a/arch/s390/pci/pci_dma.c +++ /dev/null @@ -1,732 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Copyright IBM Corp. 2012 - * - * Author(s): - * Jan Glauber - */ - -#include -#include -#include -#include -#include -#include -#include -#include - -static struct kmem_cache *dma_region_table_cache; -static struct kmem_cache *dma_page_table_cache; -static int s390_iommu_strict; -static u64 s390_iommu_aperture; -static u32 s390_iommu_aperture_factor = 1; - -static int zpci_refresh_global(struct zpci_dev *zdev) -{ - return zpci_refresh_trans((u64) zdev->fh << 32, zdev->start_dma, - zdev->iommu_pages * PAGE_SIZE); -} - -unsigned long *dma_alloc_cpu_table(void) -{ - unsigned long *table, *entry; - - table = kmem_cache_alloc(dma_region_table_cache, GFP_ATOMIC); - if (!table) - return NULL; - - for (entry = table; entry < table + ZPCI_TABLE_ENTRIES; entry++) - *entry = ZPCI_TABLE_INVALID; - return table; -} - -static void dma_free_cpu_table(void *table) -{ - kmem_cache_free(dma_region_table_cache, table); -} - -static unsigned long *dma_alloc_page_table(void) -{ - unsigned long *table, *entry; - - table = kmem_cache_alloc(dma_page_table_cache, GFP_ATOMIC); - if (!table) - return NULL; - - for (entry = table; entry < table + ZPCI_PT_ENTRIES; entry++) - *entry = ZPCI_PTE_INVALID; - return table; -} - -static void dma_free_page_table(void *table) -{ - kmem_cache_free(dma_page_table_cache, table); -} - -static unsigned long *dma_get_seg_table_origin(unsigned long *rtep) -{ - unsigned long old_rte, rte; - unsigned long *sto; - - rte = READ_ONCE(*rtep); - if (reg_entry_isvalid(rte)) { - sto = get_rt_sto(rte); - } else { - sto = dma_alloc_cpu_table(); - if (!sto) - return NULL; - - set_rt_sto(&rte, virt_to_phys(sto)); - validate_rt_entry(&rte); - entry_clr_protected(&rte); - - old_rte = cmpxchg(rtep, ZPCI_TABLE_INVALID, rte); - if (old_rte != ZPCI_TABLE_INVALID) { - /* Somone else was faster, use theirs */ - dma_free_cpu_table(sto); - sto = get_rt_sto(old_rte); - } - } - return sto; -} - -static unsigned long *dma_get_page_table_origin(unsigned long *step) -{ - unsigned long old_ste, ste; - unsigned long *pto; - - ste = READ_ONCE(*step); - if (reg_entry_isvalid(ste)) { - pto = 
get_st_pto(ste); - } else { - pto = dma_alloc_page_table(); - if (!pto) - return NULL; - set_st_pto(&ste, virt_to_phys(pto)); - validate_st_entry(&ste); - entry_clr_protected(&ste); - - old_ste = cmpxchg(step, ZPCI_TABLE_INVALID, ste); - if (old_ste != ZPCI_TABLE_INVALID) { - /* Somone else was faster, use theirs */ - dma_free_page_table(pto); - pto = get_st_pto(old_ste); - } - } - return pto; -} - -unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr) -{ - unsigned long *sto, *pto; - unsigned int rtx, sx, px; - - rtx = calc_rtx(dma_addr); - sto = dma_get_seg_table_origin(&rto[rtx]); - if (!sto) - return NULL; - - sx = calc_sx(dma_addr); - pto = dma_get_page_table_origin(&sto[sx]); - if (!pto) - return NULL; - - px = calc_px(dma_addr); - return &pto[px]; -} - -void dma_update_cpu_trans(unsigned long *ptep, phys_addr_t page_addr, int flags) -{ - unsigned long pte; - - pte = READ_ONCE(*ptep); - if (flags & ZPCI_PTE_INVALID) { - invalidate_pt_entry(&pte); - } else { - set_pt_pfaa(&pte, page_addr); - validate_pt_entry(&pte); - } - - if (flags & ZPCI_TABLE_PROTECTED) - entry_set_protected(&pte); - else - entry_clr_protected(&pte); - - xchg(ptep, pte); -} - -static int __dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa, - dma_addr_t dma_addr, size_t size, int flags) -{ - unsigned int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; - phys_addr_t page_addr = (pa & PAGE_MASK); - unsigned long *entry; - int i, rc = 0; - - if (!nr_pages) - return -EINVAL; - - if (!zdev->dma_table) - return -EINVAL; - - for (i = 0; i < nr_pages; i++) { - entry = dma_walk_cpu_trans(zdev->dma_table, dma_addr); - if (!entry) { - rc = -ENOMEM; - goto undo_cpu_trans; - } - dma_update_cpu_trans(entry, page_addr, flags); - page_addr += PAGE_SIZE; - dma_addr += PAGE_SIZE; - } - -undo_cpu_trans: - if (rc && ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID)) { - flags = ZPCI_PTE_INVALID; - while (i-- > 0) { - page_addr -= PAGE_SIZE; - dma_addr -= PAGE_SIZE; - entry = dma_walk_cpu_trans(zdev->dma_table, dma_addr); - if (!entry) - break; - dma_update_cpu_trans(entry, page_addr, flags); - } - } - return rc; -} - -static int __dma_purge_tlb(struct zpci_dev *zdev, dma_addr_t dma_addr, - size_t size, int flags) -{ - unsigned long irqflags; - int ret; - - /* - * With zdev->tlb_refresh == 0, rpcit is not required to establish new - * translations when previously invalid translation-table entries are - * validated. With lazy unmap, rpcit is skipped for previously valid - * entries, but a global rpcit is then required before any address can - * be re-used, i.e. after each iommu bitmap wrap-around. 
- */ - if ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID) { - if (!zdev->tlb_refresh) - return 0; - } else { - if (!s390_iommu_strict) - return 0; - } - - ret = zpci_refresh_trans((u64) zdev->fh << 32, dma_addr, - PAGE_ALIGN(size)); - if (ret == -ENOMEM && !s390_iommu_strict) { - /* enable the hypervisor to free some resources */ - if (zpci_refresh_global(zdev)) - goto out; - - spin_lock_irqsave(&zdev->iommu_bitmap_lock, irqflags); - bitmap_andnot(zdev->iommu_bitmap, zdev->iommu_bitmap, - zdev->lazy_bitmap, zdev->iommu_pages); - bitmap_zero(zdev->lazy_bitmap, zdev->iommu_pages); - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, irqflags); - ret = 0; - } -out: - return ret; -} - -static int dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa, - dma_addr_t dma_addr, size_t size, int flags) -{ - int rc; - - rc = __dma_update_trans(zdev, pa, dma_addr, size, flags); - if (rc) - return rc; - - rc = __dma_purge_tlb(zdev, dma_addr, size, flags); - if (rc && ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID)) - __dma_update_trans(zdev, pa, dma_addr, size, ZPCI_PTE_INVALID); - - return rc; -} - -void dma_free_seg_table(unsigned long entry) -{ - unsigned long *sto = get_rt_sto(entry); - int sx; - - for (sx = 0; sx < ZPCI_TABLE_ENTRIES; sx++) - if (reg_entry_isvalid(sto[sx])) - dma_free_page_table(get_st_pto(sto[sx])); - - dma_free_cpu_table(sto); -} - -void dma_cleanup_tables(unsigned long *table) -{ - int rtx; - - if (!table) - return; - - for (rtx = 0; rtx < ZPCI_TABLE_ENTRIES; rtx++) - if (reg_entry_isvalid(table[rtx])) - dma_free_seg_table(table[rtx]); - - dma_free_cpu_table(table); -} - -static unsigned long __dma_alloc_iommu(struct device *dev, - unsigned long start, int size) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - - return iommu_area_alloc(zdev->iommu_bitmap, zdev->iommu_pages, - start, size, zdev->start_dma >> PAGE_SHIFT, - dma_get_seg_boundary_nr_pages(dev, PAGE_SHIFT), - 0); -} - -static dma_addr_t dma_alloc_address(struct device *dev, int size) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - unsigned long offset, flags; - - spin_lock_irqsave(&zdev->iommu_bitmap_lock, flags); - offset = __dma_alloc_iommu(dev, zdev->next_bit, size); - if (offset == -1) { - if (!s390_iommu_strict) { - /* global flush before DMA addresses are reused */ - if (zpci_refresh_global(zdev)) - goto out_error; - - bitmap_andnot(zdev->iommu_bitmap, zdev->iommu_bitmap, - zdev->lazy_bitmap, zdev->iommu_pages); - bitmap_zero(zdev->lazy_bitmap, zdev->iommu_pages); - } - /* wrap-around */ - offset = __dma_alloc_iommu(dev, 0, size); - if (offset == -1) - goto out_error; - } - zdev->next_bit = offset + size; - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags); - - return zdev->start_dma + offset * PAGE_SIZE; - -out_error: - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags); - return DMA_MAPPING_ERROR; -} - -static void dma_free_address(struct device *dev, dma_addr_t dma_addr, int size) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - unsigned long flags, offset; - - offset = (dma_addr - zdev->start_dma) >> PAGE_SHIFT; - - spin_lock_irqsave(&zdev->iommu_bitmap_lock, flags); - if (!zdev->iommu_bitmap) - goto out; - - if (s390_iommu_strict) - bitmap_clear(zdev->iommu_bitmap, offset, size); - else - bitmap_set(zdev->lazy_bitmap, offset, size); - -out: - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags); -} - -static inline void zpci_err_dma(unsigned long rc, unsigned long addr) -{ - struct { - unsigned long rc; - unsigned long addr; - } __packed data = {rc, addr}; - - 
zpci_err_hex(&data, sizeof(data)); -} - -static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction direction, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - unsigned long pa = page_to_phys(page) + offset; - int flags = ZPCI_PTE_VALID; - unsigned long nr_pages; - dma_addr_t dma_addr; - int ret; - - /* This rounds up number of pages based on size and offset */ - nr_pages = iommu_num_pages(pa, size, PAGE_SIZE); - dma_addr = dma_alloc_address(dev, nr_pages); - if (dma_addr == DMA_MAPPING_ERROR) { - ret = -ENOSPC; - goto out_err; - } - - /* Use rounded up size */ - size = nr_pages * PAGE_SIZE; - - if (direction == DMA_NONE || direction == DMA_TO_DEVICE) - flags |= ZPCI_TABLE_PROTECTED; - - ret = dma_update_trans(zdev, pa, dma_addr, size, flags); - if (ret) - goto out_free; - - atomic64_add(nr_pages, &zdev->mapped_pages); - return dma_addr + (offset & ~PAGE_MASK); - -out_free: - dma_free_address(dev, dma_addr, nr_pages); -out_err: - zpci_err("map error:\n"); - zpci_err_dma(ret, pa); - return DMA_MAPPING_ERROR; -} - -static void s390_dma_unmap_pages(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - int npages, ret; - - npages = iommu_num_pages(dma_addr, size, PAGE_SIZE); - dma_addr = dma_addr & PAGE_MASK; - ret = dma_update_trans(zdev, 0, dma_addr, npages * PAGE_SIZE, - ZPCI_PTE_INVALID); - if (ret) { - zpci_err("unmap error:\n"); - zpci_err_dma(ret, dma_addr); - return; - } - - atomic64_add(npages, &zdev->unmapped_pages); - dma_free_address(dev, dma_addr, npages); -} - -static void *s390_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - struct page *page; - phys_addr_t pa; - dma_addr_t map; - - size = PAGE_ALIGN(size); - page = alloc_pages(flag | __GFP_ZERO, get_order(size)); - if (!page) - return NULL; - - pa = page_to_phys(page); - map = s390_dma_map_pages(dev, page, 0, size, DMA_BIDIRECTIONAL, 0); - if (dma_mapping_error(dev, map)) { - __free_pages(page, get_order(size)); - return NULL; - } - - atomic64_add(size / PAGE_SIZE, &zdev->allocated_pages); - if (dma_handle) - *dma_handle = map; - return phys_to_virt(pa); -} - -static void s390_dma_free(struct device *dev, size_t size, - void *vaddr, dma_addr_t dma_handle, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - - size = PAGE_ALIGN(size); - atomic64_sub(size / PAGE_SIZE, &zdev->allocated_pages); - s390_dma_unmap_pages(dev, dma_handle, size, DMA_BIDIRECTIONAL, 0); - free_pages((unsigned long)vaddr, get_order(size)); -} - -/* Map a segment into a contiguous dma address area */ -static int __s390_dma_map_sg(struct device *dev, struct scatterlist *sg, - size_t size, dma_addr_t *handle, - enum dma_data_direction dir) -{ - unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - dma_addr_t dma_addr_base, dma_addr; - int flags = ZPCI_PTE_VALID; - struct scatterlist *s; - phys_addr_t pa = 0; - int ret; - - dma_addr_base = dma_alloc_address(dev, nr_pages); - if (dma_addr_base == DMA_MAPPING_ERROR) - return -ENOMEM; - - dma_addr = dma_addr_base; - if (dir == DMA_NONE || dir == DMA_TO_DEVICE) - flags |= ZPCI_TABLE_PROTECTED; - - for (s = sg; dma_addr < dma_addr_base + size; s = sg_next(s)) { - pa = page_to_phys(sg_page(s)); - ret 
= __dma_update_trans(zdev, pa, dma_addr, - s->offset + s->length, flags); - if (ret) - goto unmap; - - dma_addr += s->offset + s->length; - } - ret = __dma_purge_tlb(zdev, dma_addr_base, size, flags); - if (ret) - goto unmap; - - *handle = dma_addr_base; - atomic64_add(nr_pages, &zdev->mapped_pages); - - return ret; - -unmap: - dma_update_trans(zdev, 0, dma_addr_base, dma_addr - dma_addr_base, - ZPCI_PTE_INVALID); - dma_free_address(dev, dma_addr_base, nr_pages); - zpci_err("map error:\n"); - zpci_err_dma(ret, pa); - return ret; -} - -static int s390_dma_map_sg(struct device *dev, struct scatterlist *sg, - int nr_elements, enum dma_data_direction dir, - unsigned long attrs) -{ - struct scatterlist *s = sg, *start = sg, *dma = sg; - unsigned int max = dma_get_max_seg_size(dev); - unsigned int size = s->offset + s->length; - unsigned int offset = s->offset; - int count = 0, i, ret; - - for (i = 1; i < nr_elements; i++) { - s = sg_next(s); - - s->dma_length = 0; - - if (s->offset || (size & ~PAGE_MASK) || - size + s->length > max) { - ret = __s390_dma_map_sg(dev, start, size, - &dma->dma_address, dir); - if (ret) - goto unmap; - - dma->dma_address += offset; - dma->dma_length = size - offset; - - size = offset = s->offset; - start = s; - dma = sg_next(dma); - count++; - } - size += s->length; - } - ret = __s390_dma_map_sg(dev, start, size, &dma->dma_address, dir); - if (ret) - goto unmap; - - dma->dma_address += offset; - dma->dma_length = size - offset; - - return count + 1; -unmap: - for_each_sg(sg, s, count, i) - s390_dma_unmap_pages(dev, sg_dma_address(s), sg_dma_len(s), - dir, attrs); - - return ret; -} - -static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg, - int nr_elements, enum dma_data_direction dir, - unsigned long attrs) -{ - struct scatterlist *s; - int i; - - for_each_sg(sg, s, nr_elements, i) { - if (s->dma_length) - s390_dma_unmap_pages(dev, s->dma_address, s->dma_length, - dir, attrs); - s->dma_address = 0; - s->dma_length = 0; - } -} - -int zpci_dma_init_device(struct zpci_dev *zdev) -{ - u8 status; - int rc; - - /* - * At this point, if the device is part of an IOMMU domain, this would - * be a strong hint towards a bug in the IOMMU API (common) code and/or - * simultaneous access via IOMMU and DMA API. So let's issue a warning. - */ - WARN_ON(zdev->s390_domain); - - spin_lock_init(&zdev->iommu_bitmap_lock); - - zdev->dma_table = dma_alloc_cpu_table(); - if (!zdev->dma_table) { - rc = -ENOMEM; - goto out; - } - - /* - * Restrict the iommu bitmap size to the minimum of the following: - * - s390_iommu_aperture which defaults to high_memory - * - 3-level pagetable address limit minus start_dma offset - * - DMA address range allowed by the hardware (clp query pci fn) - * - * Also set zdev->end_dma to the actual end address of the usable - * range, instead of the theoretical maximum as reported by hardware. - * - * This limits the number of concurrently usable DMA mappings since - * for each DMA mapped memory address we need a DMA address including - * extra DMA addresses for multiple mappings of the same memory address. 
- */ - zdev->start_dma = PAGE_ALIGN(zdev->start_dma); - zdev->iommu_size = min3(s390_iommu_aperture, - ZPCI_TABLE_SIZE_RT - zdev->start_dma, - zdev->end_dma - zdev->start_dma + 1); - zdev->end_dma = zdev->start_dma + zdev->iommu_size - 1; - zdev->iommu_pages = zdev->iommu_size >> PAGE_SHIFT; - zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8); - if (!zdev->iommu_bitmap) { - rc = -ENOMEM; - goto free_dma_table; - } - if (!s390_iommu_strict) { - zdev->lazy_bitmap = vzalloc(zdev->iommu_pages / 8); - if (!zdev->lazy_bitmap) { - rc = -ENOMEM; - goto free_bitmap; - } - - } - if (zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, - virt_to_phys(zdev->dma_table), &status)) { - rc = -EIO; - goto free_bitmap; - } - - return 0; -free_bitmap: - vfree(zdev->iommu_bitmap); - zdev->iommu_bitmap = NULL; - vfree(zdev->lazy_bitmap); - zdev->lazy_bitmap = NULL; -free_dma_table: - dma_free_cpu_table(zdev->dma_table); - zdev->dma_table = NULL; -out: - return rc; -} - -int zpci_dma_exit_device(struct zpci_dev *zdev) -{ - int cc = 0; - - /* - * At this point, if the device is part of an IOMMU domain, this would - * be a strong hint towards a bug in the IOMMU API (common) code and/or - * simultaneous access via IOMMU and DMA API. So let's issue a warning. - */ - WARN_ON(zdev->s390_domain); - if (zdev_enabled(zdev)) - cc = zpci_unregister_ioat(zdev, 0); - /* - * cc == 3 indicates the function is gone already. This can happen - * if the function was deconfigured/disabled suddenly and we have not - * received a new handle yet. - */ - if (cc && cc != 3) - return -EIO; - - dma_cleanup_tables(zdev->dma_table); - zdev->dma_table = NULL; - vfree(zdev->iommu_bitmap); - zdev->iommu_bitmap = NULL; - vfree(zdev->lazy_bitmap); - zdev->lazy_bitmap = NULL; - zdev->next_bit = 0; - return 0; -} - -static int __init dma_alloc_cpu_table_caches(void) -{ - dma_region_table_cache = kmem_cache_create("PCI_DMA_region_tables", - ZPCI_TABLE_SIZE, ZPCI_TABLE_ALIGN, - 0, NULL); - if (!dma_region_table_cache) - return -ENOMEM; - - dma_page_table_cache = kmem_cache_create("PCI_DMA_page_tables", - ZPCI_PT_SIZE, ZPCI_PT_ALIGN, - 0, NULL); - if (!dma_page_table_cache) { - kmem_cache_destroy(dma_region_table_cache); - return -ENOMEM; - } - return 0; -} - -int __init zpci_dma_init(void) -{ - s390_iommu_aperture = (u64)virt_to_phys(high_memory); - if (!s390_iommu_aperture_factor) - s390_iommu_aperture = ULONG_MAX; - else - s390_iommu_aperture *= s390_iommu_aperture_factor; - - return dma_alloc_cpu_table_caches(); -} - -void zpci_dma_exit(void) -{ - kmem_cache_destroy(dma_page_table_cache); - kmem_cache_destroy(dma_region_table_cache); -} - -const struct dma_map_ops s390_pci_dma_ops = { - .alloc = s390_dma_alloc, - .free = s390_dma_free, - .map_sg = s390_dma_map_sg, - .unmap_sg = s390_dma_unmap_sg, - .map_page = s390_dma_map_pages, - .unmap_page = s390_dma_unmap_pages, - .mmap = dma_common_mmap, - .get_sgtable = dma_common_get_sgtable, - .alloc_pages = dma_common_alloc_pages, - .free_pages = dma_common_free_pages, - /* dma_supported is unconditionally true without a callback */ -}; -EXPORT_SYMBOL_GPL(s390_pci_dma_ops); - -static int __init s390_iommu_setup(char *str) -{ - if (!strcmp(str, "strict")) - s390_iommu_strict = 1; - return 1; -} - -__setup("s390_iommu=", s390_iommu_setup); - -static int __init s390_iommu_aperture_setup(char *str) -{ - if (kstrtou32(str, 10, &s390_iommu_aperture_factor)) - s390_iommu_aperture_factor = 1; - return 1; -} - -__setup("s390_iommu_aperture=", s390_iommu_aperture_setup); diff --git 
a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c index 4ef5a6a1d618..4d9773ef9e0a 100644 --- a/arch/s390/pci/pci_event.c +++ b/arch/s390/pci/pci_event.c @@ -313,8 +313,6 @@ static void zpci_event_hard_deconfigured(struct zpci_dev *zdev, u32 fh) /* Even though the device is already gone we still * need to free zPCI resources as part of the disable. */ - if (zdev->dma_table) - zpci_dma_exit_device(zdev); if (zdev_enabled(zdev)) zpci_disable_device(zdev); zdev->state = ZPCI_FN_STATE_STANDBY; diff --git a/arch/s390/pci/pci_sysfs.c b/arch/s390/pci/pci_sysfs.c index cae280e5c047..8a7abac51816 100644 --- a/arch/s390/pci/pci_sysfs.c +++ b/arch/s390/pci/pci_sysfs.c @@ -56,6 +56,7 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr, struct pci_dev *pdev = to_pci_dev(dev); struct zpci_dev *zdev = to_zpci(pdev); int ret = 0; + u8 status; /* Can't use device_remove_self() here as that would lead us to lock * the pci_rescan_remove_lock while holding the device' kernfs lock. @@ -82,12 +83,6 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr, pci_lock_rescan_remove(); if (pci_dev_is_added(pdev)) { pci_stop_and_remove_bus_device(pdev); - if (zdev->dma_table) { - ret = zpci_dma_exit_device(zdev); - if (ret) - goto out; - } - if (zdev_enabled(zdev)) { ret = zpci_disable_device(zdev); /* @@ -105,14 +100,16 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr, ret = zpci_enable_device(zdev); if (ret) goto out; - ret = zpci_dma_init_device(zdev); - if (ret) { - zpci_disable_device(zdev); - goto out; + + if (zdev->dma_table) { + ret = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, + virt_to_phys(zdev->dma_table), &status); + if (ret) + zpci_disable_device(zdev); } - pci_rescan_bus(zdev->zbus->bus); } out: + pci_rescan_bus(zdev->zbus->bus); pci_unlock_rescan_remove(); if (kn) sysfs_unbreak_active_protection(kn); diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index dc5f7a156ff5..804fb8f42d61 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -93,7 +93,7 @@ config IOMMU_DEBUGFS choice prompt "IOMMU default domain type" depends on IOMMU_API - default IOMMU_DEFAULT_DMA_LAZY if X86 || IA64 + default IOMMU_DEFAULT_DMA_LAZY if X86 || IA64 || S390 default IOMMU_DEFAULT_DMA_STRICT help Choose the type of IOMMU domain used to manage DMA API usage by @@ -412,6 +412,7 @@ config ARM_SMMU_V3_SVA config S390_IOMMU def_bool y if S390 && PCI depends on S390 && PCI + select IOMMU_DMA select IOMMU_API help Support for the IOMMU API for s390 PCI devices. 
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c index 17738c9f24ef..c2b8a7b96b8e 100644 --- a/drivers/iommu/s390-iommu.c +++ b/drivers/iommu/s390-iommu.c @@ -14,15 +14,299 @@ #include #include +#include "dma-iommu.h" + static const struct iommu_ops s390_iommu_ops; +static struct kmem_cache *dma_region_table_cache; +static struct kmem_cache *dma_page_table_cache; + +static u64 s390_iommu_aperture; +static u32 s390_iommu_aperture_factor = 1; + struct s390_domain { struct iommu_domain domain; struct list_head devices; + struct zpci_iommu_ctrs ctrs; unsigned long *dma_table; spinlock_t list_lock; }; +static inline unsigned int calc_rtx(dma_addr_t ptr) +{ + return ((unsigned long)ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK; +} + +static inline unsigned int calc_sx(dma_addr_t ptr) +{ + return ((unsigned long)ptr >> ZPCI_ST_SHIFT) & ZPCI_INDEX_MASK; +} + +static inline unsigned int calc_px(dma_addr_t ptr) +{ + return ((unsigned long)ptr >> PAGE_SHIFT) & ZPCI_PT_MASK; +} + +static inline void set_pt_pfaa(unsigned long *entry, phys_addr_t pfaa) +{ + *entry &= ZPCI_PTE_FLAG_MASK; + *entry |= (pfaa & ZPCI_PTE_ADDR_MASK); +} + +static inline void set_rt_sto(unsigned long *entry, phys_addr_t sto) +{ + *entry &= ZPCI_RTE_FLAG_MASK; + *entry |= (sto & ZPCI_RTE_ADDR_MASK); + *entry |= ZPCI_TABLE_TYPE_RTX; +} + +static inline void set_st_pto(unsigned long *entry, phys_addr_t pto) +{ + *entry &= ZPCI_STE_FLAG_MASK; + *entry |= (pto & ZPCI_STE_ADDR_MASK); + *entry |= ZPCI_TABLE_TYPE_SX; +} + +static inline void validate_rt_entry(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_VALID_MASK; + *entry &= ~ZPCI_TABLE_OFFSET_MASK; + *entry |= ZPCI_TABLE_VALID; + *entry |= ZPCI_TABLE_LEN_RTX; +} + +static inline void validate_st_entry(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_VALID_MASK; + *entry |= ZPCI_TABLE_VALID; +} + +static inline void invalidate_pt_entry(unsigned long *entry) +{ + WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_INVALID); + *entry &= ~ZPCI_PTE_VALID_MASK; + *entry |= ZPCI_PTE_INVALID; +} + +static inline void validate_pt_entry(unsigned long *entry) +{ + WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID); + *entry &= ~ZPCI_PTE_VALID_MASK; + *entry |= ZPCI_PTE_VALID; +} + +static inline void entry_set_protected(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_PROT_MASK; + *entry |= ZPCI_TABLE_PROTECTED; +} + +static inline void entry_clr_protected(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_PROT_MASK; + *entry |= ZPCI_TABLE_UNPROTECTED; +} + +static inline int reg_entry_isvalid(unsigned long entry) +{ + return (entry & ZPCI_TABLE_VALID_MASK) == ZPCI_TABLE_VALID; +} + +static inline int pt_entry_isvalid(unsigned long entry) +{ + return (entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID; +} + +static inline unsigned long *get_rt_sto(unsigned long entry) +{ + if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX) + return phys_to_virt(entry & ZPCI_RTE_ADDR_MASK); + else + return NULL; +} + +static inline unsigned long *get_st_pto(unsigned long entry) +{ + if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX) + return phys_to_virt(entry & ZPCI_STE_ADDR_MASK); + else + return NULL; +} + +static int __init dma_alloc_cpu_table_caches(void) +{ + dma_region_table_cache = kmem_cache_create("PCI_DMA_region_tables", + ZPCI_TABLE_SIZE, + ZPCI_TABLE_ALIGN, + 0, NULL); + if (!dma_region_table_cache) + return -ENOMEM; + + dma_page_table_cache = kmem_cache_create("PCI_DMA_page_tables", + ZPCI_PT_SIZE, + ZPCI_PT_ALIGN, + 0, NULL); + if (!dma_page_table_cache) 
{ + kmem_cache_destroy(dma_region_table_cache); + return -ENOMEM; + } + return 0; +} + +static unsigned long *dma_alloc_cpu_table(void) +{ + unsigned long *table, *entry; + + table = kmem_cache_alloc(dma_region_table_cache, GFP_ATOMIC); + if (!table) + return NULL; + + for (entry = table; entry < table + ZPCI_TABLE_ENTRIES; entry++) + *entry = ZPCI_TABLE_INVALID; + return table; +} + +static void dma_free_cpu_table(void *table) +{ + kmem_cache_free(dma_region_table_cache, table); +} + +static void dma_free_page_table(void *table) +{ + kmem_cache_free(dma_page_table_cache, table); +} + +static void dma_free_seg_table(unsigned long entry) +{ + unsigned long *sto = get_rt_sto(entry); + int sx; + + for (sx = 0; sx < ZPCI_TABLE_ENTRIES; sx++) + if (reg_entry_isvalid(sto[sx])) + dma_free_page_table(get_st_pto(sto[sx])); + + dma_free_cpu_table(sto); +} + +static void dma_cleanup_tables(unsigned long *table) +{ + int rtx; + + if (!table) + return; + + for (rtx = 0; rtx < ZPCI_TABLE_ENTRIES; rtx++) + if (reg_entry_isvalid(table[rtx])) + dma_free_seg_table(table[rtx]); + + dma_free_cpu_table(table); +} + +static unsigned long *dma_alloc_page_table(void) +{ + unsigned long *table, *entry; + + table = kmem_cache_alloc(dma_page_table_cache, GFP_ATOMIC); + if (!table) + return NULL; + + for (entry = table; entry < table + ZPCI_PT_ENTRIES; entry++) + *entry = ZPCI_PTE_INVALID; + return table; +} + +static unsigned long *dma_get_seg_table_origin(unsigned long *rtep) +{ + unsigned long old_rte, rte; + unsigned long *sto; + + rte = READ_ONCE(*rtep); + if (reg_entry_isvalid(rte)) { + sto = get_rt_sto(rte); + } else { + sto = dma_alloc_cpu_table(); + if (!sto) + return NULL; + + set_rt_sto(&rte, virt_to_phys(sto)); + validate_rt_entry(&rte); + entry_clr_protected(&rte); + + old_rte = cmpxchg(rtep, ZPCI_TABLE_INVALID, rte); + if (old_rte != ZPCI_TABLE_INVALID) { + /* Somone else was faster, use theirs */ + dma_free_cpu_table(sto); + sto = get_rt_sto(old_rte); + } + } + return sto; +} + +static unsigned long *dma_get_page_table_origin(unsigned long *step) +{ + unsigned long old_ste, ste; + unsigned long *pto; + + ste = READ_ONCE(*step); + if (reg_entry_isvalid(ste)) { + pto = get_st_pto(ste); + } else { + pto = dma_alloc_page_table(); + if (!pto) + return NULL; + set_st_pto(&ste, virt_to_phys(pto)); + validate_st_entry(&ste); + entry_clr_protected(&ste); + + old_ste = cmpxchg(step, ZPCI_TABLE_INVALID, ste); + if (old_ste != ZPCI_TABLE_INVALID) { + /* Somone else was faster, use theirs */ + dma_free_page_table(pto); + pto = get_st_pto(old_ste); + } + } + return pto; +} + +static unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr) +{ + unsigned long *sto, *pto; + unsigned int rtx, sx, px; + + rtx = calc_rtx(dma_addr); + sto = dma_get_seg_table_origin(&rto[rtx]); + if (!sto) + return NULL; + + sx = calc_sx(dma_addr); + pto = dma_get_page_table_origin(&sto[sx]); + if (!pto) + return NULL; + + px = calc_px(dma_addr); + return &pto[px]; +} + +static void dma_update_cpu_trans(unsigned long *ptep, phys_addr_t page_addr, int flags) +{ + unsigned long pte; + + pte = READ_ONCE(*ptep); + if (flags & ZPCI_PTE_INVALID) { + invalidate_pt_entry(&pte); + } else { + set_pt_pfaa(&pte, page_addr); + validate_pt_entry(&pte); + } + + if (flags & ZPCI_TABLE_PROTECTED) + entry_set_protected(&pte); + else + entry_clr_protected(&pte); + + xchg(ptep, pte); +} + static struct s390_domain *to_s390_domain(struct iommu_domain *dom) { return container_of(dom, struct s390_domain, domain); @@ -44,9 +328,14 @@ static 
struct iommu_domain *s390_domain_alloc(unsigned domain_type) { struct s390_domain *s390_domain; - if (domain_type != IOMMU_DOMAIN_UNMANAGED) + switch (domain_type) { + case IOMMU_DOMAIN_DMA: + case IOMMU_DOMAIN_DMA_FQ: + case IOMMU_DOMAIN_UNMANAGED: + break; + default: return NULL; - + } s390_domain = kzalloc(sizeof(*s390_domain), GFP_KERNEL); if (!s390_domain) return NULL; @@ -74,14 +363,18 @@ static void s390_domain_free(struct iommu_domain *domain) WARN_ON(!list_empty(&s390_domain->devices)); rcu_read_unlock(); dma_cleanup_tables(s390_domain->dma_table); + s390_domain->dma_table = NULL; kfree(s390_domain); } -static void __s390_iommu_detach_device(struct zpci_dev *zdev) +static void s390_iommu_detach_device(struct iommu_domain *domain, + struct device *dev) { - struct s390_domain *s390_domain = zdev->s390_domain; + struct s390_domain *s390_domain = to_s390_domain(domain); + struct zpci_dev *zdev = to_zpci_dev(dev); unsigned long flags; + WARN_ON(zdev->s390_domain != to_s390_domain(domain)); if (!s390_domain) return; @@ -111,9 +404,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain, return -EINVAL; if (zdev->s390_domain) - __s390_iommu_detach_device(zdev); - else if (zdev->dma_table) - zpci_dma_exit_device(zdev); + s390_iommu_detach_device(&zdev->s390_domain->domain, dev); cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, virt_to_phys(s390_domain->dma_table), &status); @@ -135,17 +426,6 @@ static int s390_iommu_attach_device(struct iommu_domain *domain, return 0; } -static void s390_iommu_detach_device(struct iommu_domain *domain, - struct device *dev) -{ - struct zpci_dev *zdev = to_zpci_dev(dev); - - WARN_ON(zdev->s390_domain != to_s390_domain(domain)); - - __s390_iommu_detach_device(zdev); - zpci_dma_init_device(zdev); -} - static void s390_iommu_get_resv_regions(struct device *dev, struct list_head *list) { @@ -198,7 +478,7 @@ static void s390_iommu_release_device(struct device *dev) * to the device, but keep it attached to other devices in the group. 
*/ if (zdev) - __s390_iommu_detach_device(zdev); + s390_iommu_detach_device(&zdev->s390_domain->domain, dev); } static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain) @@ -209,6 +489,7 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain) rcu_read_lock(); list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { + atomic64_inc(&s390_domain->ctrs.global_rpcits); rc = zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma, zdev->end_dma - zdev->start_dma + 1); if (rc) @@ -231,6 +512,7 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain, rcu_read_lock(); list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { + atomic64_inc(&s390_domain->ctrs.sync_rpcits); rc = zpci_refresh_trans((u64)zdev->fh << 32, gather->start, size); if (rc) @@ -250,6 +532,7 @@ static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain, list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { if (!zdev->tlb_refresh) continue; + atomic64_inc(&s390_domain->ctrs.sync_map_rpcits); rc = zpci_refresh_trans((u64)zdev->fh << 32, iova, size); if (rc) @@ -332,16 +615,15 @@ static int s390_iommu_map_pages(struct iommu_domain *domain, if (!IS_ALIGNED(iova | paddr, pgsize)) return -EINVAL; - if (!(prot & IOMMU_READ)) - return -EINVAL; - if (!(prot & IOMMU_WRITE)) flags |= ZPCI_TABLE_PROTECTED; rc = s390_iommu_validate_trans(s390_domain, paddr, iova, - pgcount, flags); - if (!rc) + pgcount, flags); + if (!rc) { *mapped = size; + atomic64_add(pgcount, &s390_domain->ctrs.mapped_pages); + } return rc; } @@ -397,12 +679,29 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain, return 0; iommu_iotlb_gather_add_range(gather, iova, size); + atomic64_add(pgcount, &s390_domain->ctrs.unmapped_pages); return size; } +static void s390_iommu_probe_finalize(struct device *dev) +{ + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + + iommu_dma_forcedac = true; + iommu_setup_dma_ops(dev, domain->geometry.aperture_start, domain->geometry.aperture_end); +} + +struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev) +{ + if (!zdev && !zdev->s390_domain) + return NULL; + return &zdev->s390_domain->ctrs; +} + int zpci_init_iommu(struct zpci_dev *zdev) { + u64 aperture_size; int rc = 0; rc = iommu_device_sysfs_add(&zdev->iommu_dev, NULL, NULL, @@ -414,6 +713,12 @@ int zpci_init_iommu(struct zpci_dev *zdev) if (rc) goto out_sysfs; + zdev->start_dma = PAGE_ALIGN(zdev->start_dma); + aperture_size = min3(s390_iommu_aperture, + ZPCI_TABLE_SIZE_RT - zdev->start_dma, + zdev->end_dma - zdev->start_dma + 1); + zdev->end_dma = zdev->start_dma + aperture_size - 1; + return 0; out_sysfs: @@ -429,10 +734,49 @@ void zpci_destroy_iommu(struct zpci_dev *zdev) iommu_device_sysfs_remove(&zdev->iommu_dev); } +static int __init s390_iommu_setup(char *str) +{ + if (!strcmp(str, "strict")) { + pr_warn("s390_iommu=strict deprecated; use iommu.strict=1 instead\n"); + iommu_set_dma_strict(); + } + return 1; +} + +__setup("s390_iommu=", s390_iommu_setup); + +static int __init s390_iommu_aperture_setup(char *str) +{ + if (kstrtou32(str, 10, &s390_iommu_aperture_factor)) + s390_iommu_aperture_factor = 1; + return 1; +} + +__setup("s390_iommu_aperture=", s390_iommu_aperture_setup); + +static int __init s390_iommu_init(void) +{ + int rc; + + s390_iommu_aperture = (u64)virt_to_phys(high_memory); + if (!s390_iommu_aperture_factor) + s390_iommu_aperture = ULONG_MAX; + else + s390_iommu_aperture *= s390_iommu_aperture_factor; + + rc = dma_alloc_cpu_table_caches(); + if (rc) 
+ return rc; + + return rc; +} +subsys_initcall(s390_iommu_init); + static const struct iommu_ops s390_iommu_ops = { .capable = s390_iommu_capable, .domain_alloc = s390_domain_alloc, .probe_device = s390_iommu_probe_device, + .probe_finalize = s390_iommu_probe_finalize, .release_device = s390_iommu_release_device, .device_group = generic_device_group, .pgsize_bitmap = SZ_4K,
From patchwork Wed Oct 19 14:44:33 2022 X-Patchwork-Submitter: Niklas Schnelle X-Patchwork-Id: 5611
From: Niklas Schnelle To: Matthew Rosato , iommu@lists.linux.dev, Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe Cc: Gerd Bayer ,
Pierre Morel , linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, linux-kernel@vger.kernel.org, Wenjia Zhang , Julian Ruess
Subject: [RFC 4/6] iommu/dma: Prepare for multiple flush queue implementations
Date: Wed, 19 Oct 2022 16:44:33 +0200 Message-Id: <20221019144435.369902-5-schnelle@linux.ibm.com> In-Reply-To: <20221019144435.369902-1-schnelle@linux.ibm.com> References: <20221019144435.369902-1-schnelle@linux.ibm.com>

The dma-iommu code currently only supports a per-CPU, highly parallelized flush queue that depends on relatively cheap global flushes. Especially in virtualized environments these assumptions may not hold true, which in the past led to the need to resort to strict mode. Instead, prepare the dma-iommu code for alternative flushing schemes that optimize for different needs.
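To make the trade-off concrete, here is a minimal, single-threaded sketch of the deferred-freeing ("flush queue") idea that the existing per-CPU implementation builds on; all names (sketch_fq_*, SKETCH_FQ_SIZE) are invented for illustration and this is not the dma-iommu API:

/*
 * Unmapped IOVA ranges are parked in a small ring and only handed back
 * to the allocator after a single IOTLB flush covers all of them.
 */
#include <stdio.h>

#define SKETCH_FQ_SIZE 8 /* the kernel uses 256 entries per CPU */

struct sketch_fq_entry {
    unsigned long iova_pfn;
    unsigned long pages;
};

struct sketch_fq {
    struct sketch_fq_entry entries[SKETCH_FQ_SIZE];
    unsigned int head, tail;
};

static void sketch_flush_iotlb(void)
{
    /* stands in for domain->ops->flush_iotlb_all() */
    printf("IOTLB flush\n");
}

static void sketch_fq_drain(struct sketch_fq *fq)
{
    while (fq->head != fq->tail) {
        struct sketch_fq_entry *e = &fq->entries[fq->head];

        printf("free IOVA pfn %lu (%lu pages)\n", e->iova_pfn, e->pages);
        fq->head = (fq->head + 1) % SKETCH_FQ_SIZE;
    }
}

static void sketch_fq_defer(struct sketch_fq *fq,
                            unsigned long pfn, unsigned long pages)
{
    /* when the ring is full, flush once and recycle everything queued */
    if (((fq->tail + 1) % SKETCH_FQ_SIZE) == fq->head) {
        sketch_flush_iotlb();
        sketch_fq_drain(fq);
    }
    fq->entries[fq->tail].iova_pfn = pfn;
    fq->entries[fq->tail].pages = pages;
    fq->tail = (fq->tail + 1) % SKETCH_FQ_SIZE;
}

int main(void)
{
    struct sketch_fq fq = { .head = 0, .tail = 0 };
    unsigned long pfn;

    for (pfn = 0; pfn < 20; pfn++)
        sketch_fq_defer(&fq, pfn, 1);
    sketch_flush_iotlb();
    sketch_fq_drain(&fq);
    return 0;
}

The real code in the diff below additionally keeps one such ring per CPU under a spinlock, only frees entries once a flush that covers them has completed (fq_flush_start_cnt/fq_flush_finish_cnt), and arms a 10 ms timer so queued entries are flushed even on an idle queue.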
Signed-off-by: Niklas Schnelle --- drivers/iommu/dma-iommu.c | 164 ++++++++++++++++++++++---------------- 1 file changed, 97 insertions(+), 67 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 9297b741f5e8..77d969f5aad7 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -44,12 +44,13 @@ enum iommu_dma_cookie_type { struct iommu_dma_cookie { enum iommu_dma_cookie_type type; + union { /* Full allocator for IOMMU_DMA_IOVA_COOKIE */ struct { - struct iova_domain iovad; - - struct iova_fq __percpu *fq; /* Flush queue */ + struct iova_domain iovad; + /* Flush queue */ + struct iova_percpu __percpu *percpu_fq; /* Number of TLB flushes that have been started */ atomic64_t fq_flush_start_cnt; /* Number of TLB flushes that have been finished */ @@ -83,13 +84,13 @@ static int __init iommu_dma_forcedac_setup(char *str) early_param("iommu.forcedac", iommu_dma_forcedac_setup); /* Number of entries per flush queue */ -#define IOVA_FQ_SIZE 256 +#define IOVA_PERCPU_SIZE 256 /* Timeout (in ms) after which entries are flushed from the queue */ -#define IOVA_FQ_TIMEOUT 10 +#define IOVA_PERCPU_TIMEOUT 10 /* Flush queue entry for deferred flushing */ -struct iova_fq_entry { +struct iova_percpu_entry { unsigned long iova_pfn; unsigned long pages; struct list_head freelist; @@ -97,40 +98,40 @@ struct iova_fq_entry { }; /* Per-CPU flush queue structure */ -struct iova_fq { - struct iova_fq_entry entries[IOVA_FQ_SIZE]; +struct iova_percpu { + struct iova_percpu_entry entries[IOVA_PERCPU_SIZE]; unsigned int head, tail; spinlock_t lock; }; -#define fq_ring_for_each(i, fq) \ - for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_FQ_SIZE) +#define ring_for_each_percpu(i, fq) \ + for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_PERCPU_SIZE) -static inline bool fq_full(struct iova_fq *fq) +static inline bool is_full_percpu(struct iova_percpu *fq) { assert_spin_locked(&fq->lock); - return (((fq->tail + 1) % IOVA_FQ_SIZE) == fq->head); + return (((fq->tail + 1) % IOVA_PERCPU_SIZE) == fq->head); } -static inline unsigned int fq_ring_add(struct iova_fq *fq) +static inline unsigned int ring_add_percpu(struct iova_percpu *fq) { unsigned int idx = fq->tail; assert_spin_locked(&fq->lock); - fq->tail = (idx + 1) % IOVA_FQ_SIZE; + fq->tail = (idx + 1) % IOVA_PERCPU_SIZE; return idx; } -static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq) +static void ring_free_percpu(struct iommu_dma_cookie *cookie, struct iova_percpu *fq) { u64 counter = atomic64_read(&cookie->fq_flush_finish_cnt); unsigned int idx; assert_spin_locked(&fq->lock); - fq_ring_for_each(idx, fq) { + ring_for_each_percpu(idx, fq) { if (fq->entries[idx].counter >= counter) break; @@ -140,69 +141,66 @@ static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq) fq->entries[idx].iova_pfn, fq->entries[idx].pages); - fq->head = (fq->head + 1) % IOVA_FQ_SIZE; + fq->head = (fq->head + 1) % IOVA_PERCPU_SIZE; } } -static void fq_flush_iotlb(struct iommu_dma_cookie *cookie) +static void flush_iotlb_percpu(struct iommu_dma_cookie *cookie) { atomic64_inc(&cookie->fq_flush_start_cnt); cookie->fq_domain->ops->flush_iotlb_all(cookie->fq_domain); atomic64_inc(&cookie->fq_flush_finish_cnt); } -static void fq_flush_timeout(struct timer_list *t) +static void flush_percpu(struct iommu_dma_cookie *cookie) { - struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer); int cpu; - atomic_set(&cookie->fq_timer_on, 0); - fq_flush_iotlb(cookie); + 
flush_iotlb_percpu(cookie); for_each_possible_cpu(cpu) { unsigned long flags; - struct iova_fq *fq; + struct iova_percpu *fq; - fq = per_cpu_ptr(cookie->fq, cpu); + fq = per_cpu_ptr(cookie->percpu_fq, cpu); spin_lock_irqsave(&fq->lock, flags); - fq_ring_free(cookie, fq); + ring_free_percpu(cookie, fq); spin_unlock_irqrestore(&fq->lock, flags); } } -static void queue_iova(struct iommu_dma_cookie *cookie, - unsigned long pfn, unsigned long pages, - struct list_head *freelist) +static void fq_flush_timeout(struct timer_list *t) { - struct iova_fq *fq; + struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer); + + atomic_set(&cookie->fq_timer_on, 0); + flush_percpu(cookie); +} + +static void queue_iova_percpu(struct iommu_dma_cookie *cookie, + unsigned long pfn, unsigned long pages, + struct list_head *freelist) +{ + struct iova_percpu *fq; unsigned long flags; unsigned int idx; - /* - * Order against the IOMMU driver's pagetable update from unmapping - * @pte, to guarantee that fq_flush_iotlb() observes that if called - * from a different CPU before we release the lock below. Full barrier - * so it also pairs with iommu_dma_init_fq() to avoid seeing partially - * written fq state here. - */ - smp_mb(); - - fq = raw_cpu_ptr(cookie->fq); + fq = raw_cpu_ptr(cookie->percpu_fq); spin_lock_irqsave(&fq->lock, flags); /* * First remove all entries from the flush queue that have already been - * flushed out on another CPU. This makes the fq_full() check below less + * flushed out on another CPU. This makes the fullness check below less * likely to be true. */ - fq_ring_free(cookie, fq); + ring_free_percpu(cookie, fq); - if (fq_full(fq)) { - fq_flush_iotlb(cookie); - fq_ring_free(cookie, fq); + if (is_full_percpu(fq)) { + flush_iotlb_percpu(cookie); + ring_free_percpu(cookie, fq); } - idx = fq_ring_add(fq); + idx = ring_add_percpu(fq); fq->entries[idx].iova_pfn = pfn; fq->entries[idx].pages = pages; @@ -210,65 +208,97 @@ static void queue_iova(struct iommu_dma_cookie *cookie, list_splice(freelist, &fq->entries[idx].freelist); spin_unlock_irqrestore(&fq->lock, flags); +} + +static void queue_iova(struct iommu_dma_cookie *cookie, + unsigned long pfn, unsigned long pages, + struct list_head *freelist) +{ + /* + * Order against the IOMMU driver's pagetable update from unmapping + * @pte, to guarantee that flush_iotlb_percpu() observes that if called + * from a different CPU before we release the lock below. Full barrier + * so it also pairs with iommu_dma_init_fq() to avoid seeing + * partially written queue state here. + */ + smp_mb(); + + queue_iova_percpu(cookie, pfn, pages, freelist); /* Avoid false sharing as much as possible. 
*/ if (!atomic_read(&cookie->fq_timer_on) && !atomic_xchg(&cookie->fq_timer_on, 1)) mod_timer(&cookie->fq_timer, - jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT)); + jiffies + msecs_to_jiffies(IOVA_PERCPU_TIMEOUT)); } -static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie) +static void iommu_dma_free_percpu(struct iommu_dma_cookie *cookie) { int cpu, idx; - if (!cookie->fq) + if (!cookie->percpu_fq) return; - del_timer_sync(&cookie->fq_timer); - /* The IOVAs will be torn down separately, so just free our queued pages */ for_each_possible_cpu(cpu) { - struct iova_fq *fq = per_cpu_ptr(cookie->fq, cpu); + struct iova_percpu *fq = per_cpu_ptr(cookie->percpu_fq, cpu); - fq_ring_for_each(idx, fq) + ring_for_each_percpu(idx, fq) put_pages_list(&fq->entries[idx].freelist); } - free_percpu(cookie->fq); + free_percpu(cookie->percpu_fq); } -/* sysfs updates are serialised by the mutex of the group owning @domain */ -int iommu_dma_init_fq(struct iommu_domain *domain) +static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie) { - struct iommu_dma_cookie *cookie = domain->iova_cookie; - struct iova_fq __percpu *queue; - int i, cpu; + del_timer_sync(&cookie->fq_timer); + /* The IOVAs will be torn down separately, so just free our queued pages */ + iommu_dma_free_percpu(cookie); +} - if (cookie->fq_domain) - return 0; +static int iommu_dma_init_percpu(struct iommu_dma_cookie *cookie) +{ + struct iova_percpu __percpu *queue; + int i, cpu; atomic64_set(&cookie->fq_flush_start_cnt, 0); atomic64_set(&cookie->fq_flush_finish_cnt, 0); - queue = alloc_percpu(struct iova_fq); - if (!queue) { - pr_warn("iova flush queue initialization failed\n"); + queue = alloc_percpu(struct iova_percpu); + if (!queue) return -ENOMEM; - } for_each_possible_cpu(cpu) { - struct iova_fq *fq = per_cpu_ptr(queue, cpu); + struct iova_percpu *fq = per_cpu_ptr(queue, cpu); fq->head = 0; fq->tail = 0; spin_lock_init(&fq->lock); - for (i = 0; i < IOVA_FQ_SIZE; i++) + for (i = 0; i < IOVA_PERCPU_SIZE; i++) INIT_LIST_HEAD(&fq->entries[i].freelist); } - cookie->fq = queue; + cookie->percpu_fq = queue; + + return 0; +} + +/* sysfs updates are serialised by the mutex of the group owning @domain */ +int iommu_dma_init_fq(struct iommu_domain *domain) +{ + struct iommu_dma_cookie *cookie = domain->iova_cookie; + int rc; + + if (cookie->fq_domain) + return 0; + + rc = iommu_dma_init_percpu(cookie); + if (rc) { + pr_warn("iova flush queue initialization failed\n"); + return rc; + } timer_setup(&cookie->fq_timer, fq_flush_timeout, 0); atomic_set(&cookie->fq_timer_on, 0); From patchwork Wed Oct 19 14:44:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Niklas Schnelle X-Patchwork-Id: 5608 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4ac7:0:0:0:0:0 with SMTP id y7csp370473wrs; Wed, 19 Oct 2022 07:54:33 -0700 (PDT) X-Google-Smtp-Source: AMsMyM7iu9y3HGcofXee4cCp1YgqpE+FsiPgbg1rViIb4WXOKKJ1VPXZvlKRXAnjX5FPrrsJ/3OC X-Received: by 2002:a17:902:e805:b0:185:52a8:14c2 with SMTP id u5-20020a170902e80500b0018552a814c2mr9033701plg.46.1666191273555; Wed, 19 Oct 2022 07:54:33 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666191273; cv=none; d=google.com; s=arc-20160816; b=kuOjrOZW+2n8i+YMt1fOv5D5SP08tg6ZDt6dFy/GCnWgGDC0CadjF5bJdEOIXOij+i sOO+QVnl9BWCHUGnqT7wxfU/ZpnaiaqCy8OznxrWJfcT+wwzo4/8s9RM7SJDWjvqQf8+ FnVe8gGWDPn3itdkadWvvLNU1TK2Ou6acaA5y35uvJxg9aYtN+/hTfSBw1CzjmJbk9W3 hB1omu8kgZEqx7B4qLhVkVhg4o+Hfkq5Ej7TFKWDmoRQFmFdwhi7H+5uc9p++wiwjtTq 
From: Niklas Schnelle To: Matthew Rosato , iommu@lists.linux.dev, Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe Cc: Gerd Bayer , Pierre Morel , linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, linux-kernel@vger.kernel.org, Wenjia Zhang , Julian Ruess
Subject: [RFC 5/6] iommu/dma: Add simple batching flush queue implementation
Date: Wed, 19 Oct 2022 16:44:34 +0200 Message-Id: <20221019144435.369902-6-schnelle@linux.ibm.com> In-Reply-To: <20221019144435.369902-1-schnelle@linux.ibm.com> References: <20221019144435.369902-1-schnelle@linux.ibm.com>

Having prepared dma-iommu for alternative flush queue implementations, we add a simple per-domain flush queue that optimizes for scenarios where global IOTLB flushes are used but are quite expensive and poorly parallelized. This is, for example, the case when IOTLB flushes are used to trigger updates to an underlying hypervisor's shadow tables; the scheme approximates the global flushing previously in use on s390. This is achieved by a per-domain global flush queue that allows queuing a much larger number of lazily freed IOVAs than the per-CPU flush queues. While using a single queue reduces parallelism, this is not a problem when global IOTLB flushes are synchronized in the hypervisor anyway.
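As a rough, single-threaded illustration of this batching scheme (all names and sizes below are invented for the sketch; the actual implementation in the diff further down differs, e.g. it takes a spinlock and also flushes once roughly 7/8 of the aperture is queued):

/* Sketch of a single, large, per-domain batching queue. */
#include <stdio.h>

#define SKETCH_SQ_SIZE 16 /* the patch uses 32768 entries */

struct sketch_sq_entry {
    unsigned long iova_pfn;
    unsigned long pages;
};

struct sketch_sq {
    unsigned int tail;
    unsigned long total_pages;
    struct sketch_sq_entry entries[SKETCH_SQ_SIZE];
};

static void sketch_global_flush(void)
{
    /* stands in for the expensive global IOTLB flush (RPCIT on s390) */
    printf("global IOTLB flush\n");
}

static void sketch_sq_flush(struct sketch_sq *sq)
{
    unsigned int i;

    sketch_global_flush();
    for (i = 0; i < sq->tail; i++)
        printf("free IOVA pfn %lu (%lu pages)\n",
               sq->entries[i].iova_pfn, sq->entries[i].pages);
    sq->tail = 0;
    sq->total_pages = 0;
}

static void sketch_sq_queue(struct sketch_sq *sq,
                            unsigned long pfn, unsigned long pages)
{
    /* one flush covers everything queued so far */
    if (sq->tail >= SKETCH_SQ_SIZE)
        sketch_sq_flush(sq);
    sq->entries[sq->tail].iova_pfn = pfn;
    sq->entries[sq->tail].pages = pages;
    sq->tail++;
    sq->total_pages += pages; /* the patch uses this to cap queued pages */
}

int main(void)
{
    struct sketch_sq sq = { .tail = 0, .total_pages = 0 };
    unsigned long pfn;

    for (pfn = 0; pfn < 40; pfn++)
        sketch_sq_queue(&sq, pfn, 1);
    sketch_sq_flush(&sq); /* final drain, in place of the timeout */
    return 0;
}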
While this flush queue allows queuing a large number of IOVAs we do limit the time a freed IOVA remains accessible by hardware to 1 second using a timeout that is reset on flush. Enable this new flush queue implementation by default on s390 systems which use IOTLB flushes to trigger shadowing namely z/VM and KVM guests. Link: https://lore.kernel.org/linux-iommu/3e402947-61f9-b7e8-1414-fde006257b6f@arm.com/ Signed-off-by: Niklas Schnelle --- drivers/iommu/dma-iommu.c | 157 +++++++++++++++++++++++++++++++++++-- drivers/iommu/iommu.c | 19 +++-- drivers/iommu/s390-iommu.c | 11 +++ include/linux/iommu.h | 6 ++ 4 files changed, 180 insertions(+), 13 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 77d969f5aad7..427fb84f50c3 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -49,8 +49,13 @@ struct iommu_dma_cookie { /* Full allocator for IOMMU_DMA_IOVA_COOKIE */ struct { struct iova_domain iovad; - /* Flush queue */ - struct iova_percpu __percpu *percpu_fq; + /* Flush queues */ + union { + struct iova_percpu __percpu *percpu_fq; + struct iova_simple *simple_fq; + }; + /* Queue timeout in milliseconds */ + unsigned int fq_timeout; /* Number of TLB flushes that have been started */ atomic64_t fq_flush_start_cnt; /* Number of TLB flushes that have been finished */ @@ -104,6 +109,119 @@ struct iova_percpu { spinlock_t lock; }; +/* Simplified batched flush queue for expensive IOTLB flushes */ +#define IOVA_SIMPLE_SIZE 32768 +/* Maximum time in milliseconds an IOVA can remain lazily freed */ +#define IOVA_SIMPLE_TIMEOUT 1000 + +struct iova_simple_entry { + unsigned long iova_pfn; + unsigned long pages; +}; + +struct iova_simple { + /* Unlike iova_percpu we use a single queue lock */ + spinlock_t lock; + unsigned int tail; + unsigned long total_pages; + struct list_head freelist; + struct iova_simple_entry entries[]; +}; + +static bool is_full_simple(struct iommu_dma_cookie *cookie) +{ + struct iommu_domain *fq_domain = cookie->fq_domain; + struct iova_domain *iovad = &cookie->iovad; + struct iova_simple *sq = cookie->simple_fq; + unsigned long aperture_pages; + + assert_spin_locked(&sq->lock); + + /* If more than 7/8 the aperture is batched let's flush */ + aperture_pages = ((fq_domain->geometry.aperture_end + 1) - + fq_domain->geometry.aperture_start) >> iova_shift(iovad); + aperture_pages -= aperture_pages >> 3; + + return (sq->tail >= IOVA_SIMPLE_SIZE || + sq->total_pages >= aperture_pages); +} + +static void flush_simple(struct iommu_dma_cookie *cookie) +{ + struct iova_simple *sq = cookie->simple_fq; + unsigned int i; + + assert_spin_locked(&sq->lock); + /* We're flushing so postpone timeout */ + mod_timer(&cookie->fq_timer, + jiffies + msecs_to_jiffies(cookie->fq_timeout)); + cookie->fq_domain->ops->flush_iotlb_all(cookie->fq_domain); + + put_pages_list(&sq->freelist); + for (i = 0; i < sq->tail; i++) { + free_iova_fast(&cookie->iovad, + sq->entries[i].iova_pfn, + sq->entries[i].pages); + } + sq->tail = 0; + sq->total_pages = 0; +} + +static void flush_simple_lock(struct iommu_dma_cookie *cookie) +{ + unsigned long flags; + + spin_lock_irqsave(&cookie->simple_fq->lock, flags); + flush_simple(cookie); + spin_unlock_irqrestore(&cookie->simple_fq->lock, flags); +} + +static void queue_iova_simple(struct iommu_dma_cookie *cookie, + unsigned long pfn, unsigned long pages, + struct list_head *freelist) +{ + struct iova_simple *sq = cookie->simple_fq; + unsigned long flags; + unsigned int idx; + + spin_lock_irqsave(&sq->lock, flags); + if 
(is_full_simple(cookie)) + flush_simple(cookie); + idx = sq->tail++; + + sq->entries[idx].iova_pfn = pfn; + sq->entries[idx].pages = pages; + list_splice(freelist, &sq->freelist); + sq->total_pages += pages; + spin_unlock_irqrestore(&sq->lock, flags); +} + +static int iommu_dma_init_simple(struct iommu_dma_cookie *cookie) +{ + struct iova_simple *queue; + + queue = vzalloc(sizeof(*queue) + + IOVA_SIMPLE_SIZE * sizeof(struct iova_simple_entry)); + if (!queue) + return -ENOMEM; + + INIT_LIST_HEAD(&queue->freelist); + cookie->fq_timeout = IOVA_SIMPLE_TIMEOUT; + cookie->simple_fq = queue; + + return 0; +} + +static void iommu_dma_free_simple(struct iommu_dma_cookie *cookie) +{ + if (!cookie->simple_fq) + return; + + put_pages_list(&cookie->simple_fq->freelist); + vfree(cookie->simple_fq); + cookie->simple_fq = NULL; +} + #define ring_for_each_percpu(i, fq) \ for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_PERCPU_SIZE) @@ -169,12 +287,23 @@ static void flush_percpu(struct iommu_dma_cookie *cookie) } } +static void iommu_dma_flush_fq(struct iommu_dma_cookie *cookie) +{ + if (!cookie->fq_domain) + return; + + if (cookie->fq_domain->type == IOMMU_DOMAIN_DMA_FQ) + flush_percpu(cookie); + else + flush_simple_lock(cookie); +} + static void fq_flush_timeout(struct timer_list *t) { struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer); atomic_set(&cookie->fq_timer_on, 0); - flush_percpu(cookie); + iommu_dma_flush_fq(cookie); } static void queue_iova_percpu(struct iommu_dma_cookie *cookie, @@ -223,13 +352,16 @@ static void queue_iova(struct iommu_dma_cookie *cookie, */ smp_mb(); - queue_iova_percpu(cookie, pfn, pages, freelist); + if (cookie->fq_domain->type == IOMMU_DOMAIN_DMA_FQ) + queue_iova_percpu(cookie, pfn, pages, freelist); + else + queue_iova_simple(cookie, pfn, pages, freelist); /* Avoid false sharing as much as possible. 
*/ if (!atomic_read(&cookie->fq_timer_on) && !atomic_xchg(&cookie->fq_timer_on, 1)) mod_timer(&cookie->fq_timer, - jiffies + msecs_to_jiffies(IOVA_PERCPU_TIMEOUT)); + jiffies + msecs_to_jiffies(cookie->fq_timeout)); } static void iommu_dma_free_percpu(struct iommu_dma_cookie *cookie) @@ -253,7 +385,10 @@ static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie) { del_timer_sync(&cookie->fq_timer); /* The IOVAs will be torn down separately, so just free our queued pages */ - iommu_dma_free_percpu(cookie); + if (cookie->fq_domain->type == IOMMU_DOMAIN_DMA_FQ) + iommu_dma_free_percpu(cookie); + else + iommu_dma_free_simple(cookie); } static int iommu_dma_init_percpu(struct iommu_dma_cookie *cookie) @@ -280,6 +415,7 @@ static int iommu_dma_init_percpu(struct iommu_dma_cookie *cookie) INIT_LIST_HEAD(&fq->entries[i].freelist); } + cookie->fq_timeout = IOVA_PERCPU_TIMEOUT; cookie->percpu_fq = queue; return 0; @@ -294,7 +430,10 @@ int iommu_dma_init_fq(struct iommu_domain *domain) if (cookie->fq_domain) return 0; - rc = iommu_dma_init_percpu(cookie); + if (domain->type == IOMMU_DOMAIN_DMA_FQ) + rc = iommu_dma_init_percpu(cookie); + else + rc = iommu_dma_init_simple(cookie); if (rc) { pr_warn("iova flush queue initialization failed\n"); return rc; @@ -613,7 +752,9 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, goto done_unlock; /* If the FQ fails we can simply fall back to strict mode */ - if (domain->type == IOMMU_DOMAIN_DMA_FQ && iommu_dma_init_fq(domain)) + if ((domain->type == IOMMU_DOMAIN_DMA_FQ || + domain->type == IOMMU_DOMAIN_DMA_SQ) && + iommu_dma_init_fq(domain)) domain->type = IOMMU_DOMAIN_DMA; ret = iova_reserve_iommu_regions(dev, domain); diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 4893c2429ca5..2b3a12799702 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -140,6 +140,7 @@ static const char *iommu_domain_type_str(unsigned int t) return "Unmanaged"; case IOMMU_DOMAIN_DMA: case IOMMU_DOMAIN_DMA_FQ: + case IOMMU_DOMAIN_DMA_SQ: return "Translated"; default: return "Unknown"; @@ -437,7 +438,8 @@ early_param("iommu.strict", iommu_dma_setup); void iommu_set_dma_strict(void) { iommu_dma_strict = true; - if (iommu_def_domain_type == IOMMU_DOMAIN_DMA_FQ) + if (iommu_def_domain_type == IOMMU_DOMAIN_DMA_FQ || + iommu_def_domain_type == IOMMU_DOMAIN_DMA_SQ) iommu_def_domain_type = IOMMU_DOMAIN_DMA; } @@ -638,6 +640,9 @@ static ssize_t iommu_group_show_type(struct iommu_group *group, case IOMMU_DOMAIN_DMA_FQ: type = "DMA-FQ\n"; break; + case IOMMU_DOMAIN_DMA_SQ: + type = "DMA-SQ\n"; + break; } } mutex_unlock(&group->mutex); @@ -2908,10 +2913,11 @@ static int iommu_change_dev_def_domain(struct iommu_group *group, } /* We can bring up a flush queue without tearing down the domain */ - if (type == IOMMU_DOMAIN_DMA_FQ && prev_dom->type == IOMMU_DOMAIN_DMA) { + if ((type == IOMMU_DOMAIN_DMA_FQ || type == IOMMU_DOMAIN_DMA_SQ) && + prev_dom->type == IOMMU_DOMAIN_DMA) { ret = iommu_dma_init_fq(prev_dom); if (!ret) - prev_dom->type = IOMMU_DOMAIN_DMA_FQ; + prev_dom->type = type; goto out; } @@ -2982,6 +2988,8 @@ static ssize_t iommu_group_store_type(struct iommu_group *group, req_type = IOMMU_DOMAIN_DMA; else if (sysfs_streq(buf, "DMA-FQ")) req_type = IOMMU_DOMAIN_DMA_FQ; + else if (sysfs_streq(buf, "DMA-SQ")) + req_type = IOMMU_DOMAIN_DMA_SQ; else if (sysfs_streq(buf, "auto")) req_type = 0; else @@ -3033,8 +3041,9 @@ static ssize_t iommu_group_store_type(struct iommu_group *group, /* Check if the device in the group still has a 
driver bound to it */ device_lock(dev); - if (device_is_bound(dev) && !(req_type == IOMMU_DOMAIN_DMA_FQ && - group->default_domain->type == IOMMU_DOMAIN_DMA)) { + if (device_is_bound(dev) && + !((req_type == IOMMU_DOMAIN_DMA_FQ || req_type == IOMMU_DOMAIN_DMA_SQ) && + group->default_domain->type == IOMMU_DOMAIN_DMA)) { pr_err_ratelimited("Device is still bound to driver\n"); ret = -EBUSY; goto out; diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c index c2b8a7b96b8e..506f8b92931f 100644 --- a/drivers/iommu/s390-iommu.c +++ b/drivers/iommu/s390-iommu.c @@ -324,6 +324,15 @@ static bool s390_iommu_capable(struct device *dev, enum iommu_cap cap) } } +static int s390_iommu_def_domain_type(struct device *dev) +{ + struct zpci_dev *zdev = to_zpci_dev(dev); + + if (zdev->tlb_refresh) + return IOMMU_DOMAIN_DMA_SQ; + return IOMMU_DOMAIN_DMA_FQ; +} + static struct iommu_domain *s390_domain_alloc(unsigned domain_type) { struct s390_domain *s390_domain; @@ -331,6 +340,7 @@ static struct iommu_domain *s390_domain_alloc(unsigned domain_type) switch (domain_type) { case IOMMU_DOMAIN_DMA: case IOMMU_DOMAIN_DMA_FQ: + case IOMMU_DOMAIN_DMA_SQ: case IOMMU_DOMAIN_UNMANAGED: break; default: @@ -774,6 +784,7 @@ subsys_initcall(s390_iommu_init); static const struct iommu_ops s390_iommu_ops = { .capable = s390_iommu_capable, + .def_domain_type = s390_iommu_def_domain_type, .domain_alloc = s390_domain_alloc, .probe_device = s390_iommu_probe_device, .probe_finalize = s390_iommu_probe_finalize, diff --git a/include/linux/iommu.h b/include/linux/iommu.h index a325532aeab5..6c3fe62ec0df 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -63,6 +63,7 @@ struct iommu_domain_geometry { implementation */ #define __IOMMU_DOMAIN_PT (1U << 2) /* Domain is identity mapped */ #define __IOMMU_DOMAIN_DMA_FQ (1U << 3) /* DMA-API uses flush queue */ +#define __IOMMU_DOMAIN_DMA_SQ (1U << 4) /* DMA-API uses max fill queue */ /* * This are the possible domain-types @@ -77,6 +78,8 @@ struct iommu_domain_geometry { * certain optimizations for these domains * IOMMU_DOMAIN_DMA_FQ - As above, but definitely using batched TLB * invalidation. + * IOMMU_DOMAIN_DMA_SQ - As above, but batched invalidations are only + * flushed when running out of queue space. 
 */ #define IOMMU_DOMAIN_BLOCKED (0U) #define IOMMU_DOMAIN_IDENTITY (__IOMMU_DOMAIN_PT) @@ -86,6 +89,9 @@ struct iommu_domain_geometry { #define IOMMU_DOMAIN_DMA_FQ (__IOMMU_DOMAIN_PAGING | \ __IOMMU_DOMAIN_DMA_API | \ __IOMMU_DOMAIN_DMA_FQ) +#define IOMMU_DOMAIN_DMA_SQ (__IOMMU_DOMAIN_PAGING | \ + __IOMMU_DOMAIN_DMA_API | \ + __IOMMU_DOMAIN_DMA_SQ) struct iommu_domain { unsigned type;
From patchwork Wed Oct 19 14:44:35 2022 X-Patchwork-Submitter: Niklas Schnelle X-Patchwork-Id: 5612
From: Niklas Schnelle To: Matthew Rosato , iommu@lists.linux.dev, Joerg Roedel , Will Deacon , Robin Murphy , Jason Gunthorpe Cc: Gerd Bayer , Pierre Morel , linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, linux-kernel@vger.kernel.org, Wenjia Zhang , Julian Ruess
Subject: [RFC 6/6] iommu/s390: flush queued IOVAs on RPCIT out of resource indication
Date: Wed, 19 Oct 2022 16:44:35 +0200 Message-Id: <20221019144435.369902-7-schnelle@linux.ibm.com> In-Reply-To: <20221019144435.369902-1-schnelle@linux.ibm.com> References: <20221019144435.369902-1-schnelle@linux.ibm.com>

When RPCIT indicates that the underlying hypervisor has run out of resources, this often means that its IOVA space is exhausted and IOVAs need to be freed before new ones can be created. By triggering a flush of the IOVA queue we can get the queued IOVAs freed and also get the new mapping established during the global flush.
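A hedged sketch of the recovery pattern this adds (hyp_refresh() and drain_flush_queue() are invented stand-ins; in the diff below the real calls are zpci_refresh_trans() and iommu_dma_flush_fq()):

/*
 * When the hypervisor reports it ran out of resources, drain the
 * lazily freed IOVAs; the global flush done while draining also
 * establishes the new mappings, so the request needs no retry.
 */
#include <stdio.h>
#include <errno.h>

static int hyp_refresh(unsigned long start, unsigned long size)
{
    /* stands in for the RPCIT-backed refresh; pretend it ran out */
    (void)start; (void)size;
    return -ENOMEM;
}

static void drain_flush_queue(void)
{
    /* stands in for iommu_dma_flush_fq(): flush IOTLB, free queued IOVAs */
    printf("drain flush queue, global flush re-establishes mappings\n");
}

static int sketch_refresh_range(unsigned long start, unsigned long size)
{
    int rc = hyp_refresh(start, size);

    if (rc == -ENOMEM) {
        /* hypervisor IOVA space exhausted: free what we queued */
        drain_flush_queue();
        rc = 0;
    }
    return rc;
}

int main(void)
{
    return sketch_refresh_range(0x1000, 0x1000);
}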
Signed-off-by: Niklas Schnelle --- drivers/iommu/dma-iommu.c | 2 +- drivers/iommu/dma-iommu.h | 1 + drivers/iommu/s390-iommu.c | 12 ++++++++++++ 3 files changed, 14 insertions(+), 1 deletion(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 427fb84f50c3..4853f98f3305 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -287,7 +287,7 @@ static void flush_percpu(struct iommu_dma_cookie *cookie) } } -static void iommu_dma_flush_fq(struct iommu_dma_cookie *cookie) +void iommu_dma_flush_fq(struct iommu_dma_cookie *cookie) { if (!cookie->fq_domain) return; diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h index 942790009292..cac06030aa26 100644 --- a/drivers/iommu/dma-iommu.h +++ b/drivers/iommu/dma-iommu.h @@ -13,6 +13,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain); void iommu_put_dma_cookie(struct iommu_domain *domain); int iommu_dma_init_fq(struct iommu_domain *domain); +void iommu_dma_flush_fq(struct iommu_dma_cookie *cookie); void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list); diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c index 506f8b92931f..270662584f96 100644 --- a/drivers/iommu/s390-iommu.c +++ b/drivers/iommu/s390-iommu.c @@ -502,6 +502,10 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain) atomic64_inc(&s390_domain->ctrs.global_rpcits); rc = zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma, zdev->end_dma - zdev->start_dma + 1); + if (rc == -ENOMEM) { + iommu_dma_flush_fq(domain->iova_cookie); + rc = 0; + } if (rc) break; } @@ -525,6 +529,10 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain, atomic64_inc(&s390_domain->ctrs.sync_rpcits); rc = zpci_refresh_trans((u64)zdev->fh << 32, gather->start, size); + if (rc == -ENOMEM) { + iommu_dma_flush_fq(domain->iova_cookie); + rc = 0; + } if (rc) break; } @@ -545,6 +553,10 @@ static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain, atomic64_inc(&s390_domain->ctrs.sync_map_rpcits); rc = zpci_refresh_trans((u64)zdev->fh << 32, iova, size); + if (rc == -ENOMEM) { + iommu_dma_flush_fq(domain->iova_cookie); + rc = 0; + } if (rc) break; }
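A usage note, not part of the patch itself: since iommu_group_store_type() above now also parses the string "DMA-SQ", a group's default domain type can presumably be switched at runtime through the existing iommu group sysfs attribute, e.g. echo DMA-SQ > /sys/kernel/iommu_groups/<N>/type, under the same restrictions as DMA-FQ (the switch from DMA to a flush-queue type is permitted even while a driver is bound, as handled in iommu_change_dev_def_domain() above). The sysfs path is assumed from the existing DMA/DMA-FQ handling rather than stated in this series.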