From patchwork Tue Jan 24 12:50:31 2023
X-Patchwork-Submitter: Niklas Schnelle
X-Patchwork-Id: 47692
From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Wenjia Zhang
Cc: Matthew Rosato, Gerd Bayer, Pierre Morel, iommu@lists.linux.dev,
    linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com,
    gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
    svens@linux.ibm.com, linux-kernel@vger.kernel.org, Julian Ruess
Subject: [PATCH v5 1/7] s390/ism: Set DMA coherent mask
Date: Tue, 24 Jan 2023 13:50:31 +0100
Message-Id: <20230124125037.3201345-2-schnelle@linux.ibm.com>
In-Reply-To: <20230124125037.3201345-1-schnelle@linux.ibm.com>
References: <20230124125037.3201345-1-schnelle@linux.ibm.com>

A future change will convert the DMA API implementation from the
architecture-specific arch/s390/pci/pci_dma.c to the common code in
drivers/iommu/dma-iommu.c, which utilizes the same IOMMU hardware
through the s390-iommu driver. Unlike the s390-specific DMA API, this
requires devices to correctly set the coherent mask to be allowed to
use IOVAs >2^32 in dma_alloc_coherent(). This was however not done for
ISM devices. ISM requires such addresses, since currently the DMA
aperture for PCI devices starts at 2^32 and all calls to
dma_alloc_coherent() would thus fail.
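To illustrate the distinction (a minimal, hypothetical probe sketch,
not taken from this series): dma_set_mask() only raises the streaming
DMA mask, while dma_set_mask_and_coherent() also raises the coherent
mask that dma_alloc_coherent() checks against.

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int rc;

	/*
	 * Raise both the streaming and the coherent DMA mask to 64 bit.
	 * With dma_set_mask() alone, dev->coherent_dma_mask would stay
	 * at its 32-bit default and dma_alloc_coherent() could not hand
	 * out IOVAs above 2^32.
	 */
	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (rc)
		return rc;

	/* ... device specific setup ... */
	return 0;
}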
Reviewed-by: Alexandra Winter
Reviewed-by: Matthew Rosato
Signed-off-by: Niklas Schnelle
---
 drivers/s390/net/ism_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
index dfd401d9e362..aba03b613296 100644
--- a/drivers/s390/net/ism_drv.c
+++ b/drivers/s390/net/ism_drv.c
@@ -557,7 +557,7 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (ret)
 		goto err_disable;
 
-	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
 	if (ret)
 		goto err_resource;
 

From patchwork Tue Jan 24 12:50:32 2023
X-Patchwork-Submitter: Niklas Schnelle
X-Patchwork-Id: 47697
From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Wenjia Zhang
Cc: Matthew Rosato, Gerd Bayer, Pierre Morel, iommu@lists.linux.dev,
    linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com,
    gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
    svens@linux.ibm.com, linux-kernel@vger.kernel.org, Julian Ruess
Subject: [PATCH v5 2/7] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
Date: Tue, 24 Jan 2023 13:50:32 +0100
Message-Id: <20230124125037.3201345-3-schnelle@linux.ibm.com>
In-Reply-To: <20230124125037.3201345-1-schnelle@linux.ibm.com>
References: <20230124125037.3201345-1-schnelle@linux.ibm.com>

On s390, when using a paging hypervisor, .iotlb_sync_map is used to
sync mappings by letting the hypervisor inspect the synced IOVA range
and update a shadow table. This however means that .iotlb_sync_map can
fail, as the hypervisor may run out of resources while doing the sync.
This can be due to the hypervisor being unable to pin guest pages, a
limit on mapped addresses such as vfio_iommu_type1.dma_entry_limit, or
a lack of other resources. Either way, such a failure to sync a mapping
should result in a DMA_MAPPING_ERROR.

Now, especially when running with batched IOTLB flushes for unmap, it
may be that some IOVAs have already been invalidated but not yet synced
via .iotlb_sync_map. Thus, if the hypervisor indicates running out of
resources, first do a global flush allowing the hypervisor to free
resources associated with these mappings as well, retry creating the
new mappings, and only if that also fails report the error to callers.
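Condensed, the per-device recovery flow described above looks roughly
as follows (a simplified sketch of the s390-iommu change below; the
real implementation iterates over all attached devices under RCU):

static int sync_map_one(struct zpci_dev *zdev, unsigned long iova,
			size_t size)
{
	int ret;

	/* Sync the newly mapped IOVA range into the shadow table */
	ret = zpci_refresh_trans((u64)zdev->fh << 32, iova, size);
	/*
	 * On -ENOMEM fall back to a global flush over the whole DMA
	 * aperture: it lets the hypervisor discover the already
	 * invalidated entries, freeing IOVAs and unpinning pages,
	 * while also re-syncing the new mapping.
	 */
	if (ret == -ENOMEM)
		ret = zpci_refresh_all(zdev);
	return ret;
}

Any remaining failure then propagates through .iotlb_sync_map and
iommu_map() back to the DMA API, where callers observe it as a regular
mapping error via dma_mapping_error().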
Reviewed-by: Lu Baolu
Reviewed-by: Matthew Rosato
Signed-off-by: Niklas Schnelle
---
v3 -> v4:
- Adapted signature of the .iotlb_sync_map op for the sun50i IOMMU
  driver added in v6.2-rc1 (kernel test robot)

 drivers/iommu/amd/iommu.c    |  5 +++--
 drivers/iommu/apple-dart.c   |  5 +++--
 drivers/iommu/intel/iommu.c  |  5 +++--
 drivers/iommu/iommu.c        | 20 ++++++++++++++++----
 drivers/iommu/msm_iommu.c    |  5 +++--
 drivers/iommu/mtk_iommu.c    |  5 +++--
 drivers/iommu/s390-iommu.c   | 29 ++++++++++++++++++++++++-----
 drivers/iommu/sprd-iommu.c   |  5 +++--
 drivers/iommu/sun50i-iommu.c |  4 +++-
 drivers/iommu/tegra-gart.c   |  5 +++--
 include/linux/iommu.h        |  4 ++--
 11 files changed, 66 insertions(+), 26 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index cbeaab55c0db..3df7d20e0e52 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2180,14 +2180,15 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
 	return ret;
 }
 
-static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
-				     unsigned long iova, size_t size)
+static int amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
+				    unsigned long iova, size_t size)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 
 	if (ops->map_pages)
 		domain_flush_np_cache(domain, iova, size);
+	return 0;
 }
 
 static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index 4f4a323be0d0..4a76f4d95459 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -344,10 +344,11 @@ static void apple_dart_iotlb_sync(struct iommu_domain *domain,
 	apple_dart_domain_flush_tlb(to_dart_domain(domain));
 }
 
-static void apple_dart_iotlb_sync_map(struct iommu_domain *domain,
-				      unsigned long iova, size_t size)
+static int apple_dart_iotlb_sync_map(struct iommu_domain *domain,
+				     unsigned long iova, size_t size)
 {
 	apple_dart_domain_flush_tlb(to_dart_domain(domain));
+	return 0;
 }
 
 static phys_addr_t apple_dart_iova_to_phys(struct iommu_domain *domain,
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 59df7e42fd53..3b36a544c8fa 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4725,8 +4725,8 @@ static bool risky_device(struct pci_dev *pdev)
 	return false;
 }
 
-static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
-				       unsigned long iova, size_t size)
+static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
+				      unsigned long iova, size_t size)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	unsigned long pages = aligned_nrpages(iova, size);
@@ -4736,6 +4736,7 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
 
 	xa_for_each(&dmar_domain->iommu_array, i, info)
 		__mapping_notify_one(info->iommu, dmar_domain, pfn, pages);
+	return 0;
 }
 
 static void intel_iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 5f6a85aea501..5565e510f7d2 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2367,8 +2367,17 @@ static int _iommu_map(struct iommu_domain *domain, unsigned long iova,
 	int ret;
 
 	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
-	if (ret == 0 && ops->iotlb_sync_map)
-		ops->iotlb_sync_map(domain, iova, size);
+	if (ret == 0 && ops->iotlb_sync_map) {
+		ret = ops->iotlb_sync_map(domain, iova, size);
+		if (ret)
+			goto out_err;
+	}
+
+	return ret;
+
+out_err:
+	/* undo mappings already done */
+	iommu_unmap(domain, iova, size);
 
 	return ret;
 }
 
@@ -2516,8 +2525,11 @@ static ssize_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		sg = sg_next(sg);
 	}
 
-	if (ops->iotlb_sync_map)
-		ops->iotlb_sync_map(domain, iova, mapped);
+	if (ops->iotlb_sync_map) {
+		ret = ops->iotlb_sync_map(domain, iova, mapped);
+		if (ret)
+			goto out_err;
+	}
 
 	return mapped;
 
 out_err:
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index c60624910872..62fc52765554 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -486,12 +486,13 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return ret;
 }
 
-static void msm_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
-			       size_t size)
+static int msm_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+			      size_t size)
 {
 	struct msm_priv *priv = to_msm_priv(domain);
 
 	__flush_iotlb_range(iova, size, SZ_4K, false, priv);
+	return 0;
 }
 
 static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 2badd6acfb23..76d413aef1ef 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -758,12 +758,13 @@ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
 	mtk_iommu_tlb_flush_range_sync(gather->start, length, dom->bank);
 }
 
-static void mtk_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
-			       size_t size)
+static int mtk_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+			      size_t size)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 
 	mtk_iommu_tlb_flush_range_sync(iova, size, dom->bank);
+	return 0;
 }
 
 static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index ed33c6cce083..4dfa557270f4 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -210,6 +210,14 @@ static void s390_iommu_release_device(struct device *dev)
 		__s390_iommu_detach_device(zdev);
 }
 
+
+static int zpci_refresh_all(struct zpci_dev *zdev)
+{
+	return zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
+				  zdev->end_dma - zdev->start_dma + 1);
+
+}
+
 static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
@@ -217,8 +225,7 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
-		zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
-				   zdev->end_dma - zdev->start_dma + 1);
+		zpci_refresh_all(zdev);
 	}
 	rcu_read_unlock();
 }
@@ -242,20 +249,32 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain,
 	rcu_read_unlock();
 }
 
-static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
+static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
 				      unsigned long iova, size_t size)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
 	struct zpci_dev *zdev;
+	int ret = 0;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
 		if (!zdev->tlb_refresh)
 			continue;
-		zpci_refresh_trans((u64)zdev->fh << 32,
-				   iova, size);
+		ret = zpci_refresh_trans((u64)zdev->fh << 32,
+					 iova, size);
+		/*
+		 * let the hypervisor discover invalidated entries
+		 * allowing it to free IOVAs and unpin pages
+		 */
+		if (ret == -ENOMEM) {
+			ret = zpci_refresh_all(zdev);
+			if (ret)
+				break;
+		}
 	}
 	rcu_read_unlock();
+
+	return ret;
 }
 
 static int s390_iommu_validate_trans(struct s390_domain *s390_domain,
diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
index 219bfa11f7f4..9e590829992c 100644
--- a/drivers/iommu/sprd-iommu.c
+++ b/drivers/iommu/sprd-iommu.c
@@ -330,8 +330,8 @@ static size_t sprd_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	return size;
 }
 
-static void sprd_iommu_sync_map(struct iommu_domain *domain,
-				unsigned long iova, size_t size)
+static int sprd_iommu_sync_map(struct iommu_domain *domain,
+			       unsigned long iova, size_t size)
 {
 	struct sprd_iommu_domain *dom = to_sprd_domain(domain);
 	unsigned int reg;
@@ -343,6 +343,7 @@ static void sprd_iommu_sync_map(struct iommu_domain *domain,
 
 	/* clear IOMMU TLB buffer after page table updated */
 	sprd_iommu_write(dom->sdev, reg, 0xffffffff);
+	return 0;
 }
 
 static void sprd_iommu_sync(struct iommu_domain *domain,
diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
index 5b585eace3d4..9d2ff7a95624 100644
--- a/drivers/iommu/sun50i-iommu.c
+++ b/drivers/iommu/sun50i-iommu.c
@@ -402,7 +402,7 @@ static void sun50i_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&iommu->iommu_lock, flags);
 }
 
-static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
+static int sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
 					unsigned long iova, size_t size)
 {
 	struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
@@ -412,6 +412,8 @@ static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
 	spin_lock_irqsave(&iommu->iommu_lock, flags);
 	sun50i_iommu_zap_range(iommu, iova, size);
 	spin_unlock_irqrestore(&iommu->iommu_lock, flags);
+
+	return 0;
 }
 
 static void sun50i_iommu_iotlb_sync(struct iommu_domain *domain,
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index ed53279d1106..a59966290e46 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -252,10 +252,11 @@ static int gart_iommu_of_xlate(struct device *dev,
 	return 0;
 }
 
-static void gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
-				size_t size)
+static int gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+			       size_t size)
 {
 	FLUSH_GART_REGS(gart_handle);
+	return 0;
 }
 
 static void gart_iommu_sync(struct iommu_domain *domain,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 46e1347bfa22..e7f76599f09e 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -332,8 +332,8 @@ struct iommu_domain_ops {
 			       struct iommu_iotlb_gather *iotlb_gather);
 
 	void (*flush_iotlb_all)(struct iommu_domain *domain);
-	void (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
-			       size_t size);
+	int (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
+			      size_t size);
 	void (*iotlb_sync)(struct iommu_domain *domain,
 			   struct iommu_iotlb_gather *iotlb_gather);
 

From patchwork Tue Jan 24 12:50:33 2023
X-Patchwork-Submitter: Niklas Schnelle
X-Patchwork-Id: 47698
From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Wenjia Zhang
Cc: Matthew Rosato, Gerd Bayer, Pierre Morel, iommu@lists.linux.dev,
    linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com,
    gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
    svens@linux.ibm.com, linux-kernel@vger.kernel.org, Julian Ruess
Subject: [PATCH v5 3/7] s390/pci: prepare is_passed_through() for dma-iommu
Date: Tue, 24 Jan 2023 13:50:33 +0100
Message-Id: <20230124125037.3201345-4-schnelle@linux.ibm.com>
In-Reply-To: <20230124125037.3201345-1-schnelle@linux.ibm.com>
References: <20230124125037.3201345-1-schnelle@linux.ibm.com>

With the IOMMU always controlled through the IOMMU driver, testing for
zdev->s390_domain is no longer a valid indication that the device is
passed through. Instead, test whether zdev->kzdev is set.
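For illustration, a hypothetical host-side handler then only needs the
struct pci_dev, matching how the error-recovery paths in the hunk below
use the helper (foo_handle_error_event() is not part of this patch and
assumes is_passed_through() from the hunk below is visible):

#include <linux/pci.h>
#include <linux/printk.h>

static void foo_handle_error_event(struct pci_dev *pdev)
{
	/*
	 * For pass-through devices the host must not attempt recovery
	 * itself; the error event is forwarded to the guest instead.
	 */
	if (is_passed_through(pdev)) {
		pr_info("%s: deferring recovery to the guest\n",
			pci_name(pdev));
		return;
	}

	/* ... host-side recovery ... */
}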
Reviewed-by: Pierre Morel
Reviewed-by: Matthew Rosato
Signed-off-by: Niklas Schnelle
---
 arch/s390/pci/pci_event.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
index b9324ca2eb94..4ef5a6a1d618 100644
--- a/arch/s390/pci/pci_event.c
+++ b/arch/s390/pci/pci_event.c
@@ -59,9 +59,16 @@ static inline bool ers_result_indicates_abort(pci_ers_result_t ers_res)
 	}
 }
 
-static bool is_passed_through(struct zpci_dev *zdev)
+static bool is_passed_through(struct pci_dev *pdev)
 {
-	return zdev->s390_domain;
+	struct zpci_dev *zdev = to_zpci(pdev);
+	bool ret;
+
+	mutex_lock(&zdev->kzdev_lock);
+	ret = !!zdev->kzdev;
+	mutex_unlock(&zdev->kzdev_lock);
+
+	return ret;
 }
 
 static bool is_driver_supported(struct pci_driver *driver)
@@ -176,7 +183,7 @@ static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
 	}
 	pdev->error_state = pci_channel_io_frozen;
 
-	if (is_passed_through(to_zpci(pdev))) {
+	if (is_passed_through(pdev)) {
 		pr_info("%s: Cannot be recovered in the host because it is a pass-through device\n",
 			pci_name(pdev));
 		goto out_unlock;
@@ -239,7 +246,7 @@ static void zpci_event_io_failure(struct pci_dev *pdev, pci_channel_state_t es)
 	 * we will inject the error event and let the guest recover the device
 	 * itself.
 	 */
-	if (is_passed_through(to_zpci(pdev)))
+	if (is_passed_through(pdev))
 		goto out;
 
 	driver = to_pci_driver(pdev->dev.driver);
 	if (driver && driver->err_handler && driver->err_handler->error_detected)

From patchwork Tue Jan 24 12:50:34 2023
X-Patchwork-Submitter: Niklas Schnelle
X-Patchwork-Id: 47696
From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Wenjia Zhang
Cc: Matthew Rosato, Gerd Bayer, Pierre Morel,
    iommu@lists.linux.dev, linux-s390@vger.kernel.org, borntraeger@linux.ibm.com,
    hca@linux.ibm.com, gor@linux.ibm.com, gerald.schaefer@linux.ibm.com,
    agordeev@linux.ibm.com, svens@linux.ibm.com, linux-kernel@vger.kernel.org,
    Julian Ruess
Subject: [PATCH v5 4/7] s390/pci: Use dma-iommu layer
Date: Tue, 24 Jan 2023 13:50:34 +0100
Message-Id: <20230124125037.3201345-5-schnelle@linux.ibm.com>
In-Reply-To: <20230124125037.3201345-1-schnelle@linux.ibm.com>
References: <20230124125037.3201345-1-schnelle@linux.ibm.com>

While s390 already has a standard IOMMU driver and previous changes
have added I/O TLB flushing operations, this driver is currently only
used for user-space PCI access such as vfio-pci. For the DMA API, s390
instead utilizes its own implementation in arch/s390/pci/pci_dma.c
which drives the same hardware and shares some code, but requires a
complex and fragile hand-over between DMA API and IOMMU API use of a
device and, despite the code sharing, still leads to significant
duplication and maintenance effort. Let's utilize the common code DMA
API implementation from drivers/iommu/dma-iommu.c instead, allowing us
to get rid of arch/s390/pci/pci_dma.c.

Signed-off-by: Niklas Schnelle
---
v4 -> v5:
- Replaced a missed check for IOMMU_DOMAIN_DMA_FQ with the new generic
  __IOMMU_DOMAIN_DMA_LAZY

 .../admin-guide/kernel-parameters.txt |   9 +-
 arch/s390/include/asm/pci.h           |   7 -
 arch/s390/include/asm/pci_clp.h       |   3 +
 arch/s390/include/asm/pci_dma.h       | 120 +--
 arch/s390/pci/Makefile                |   2 +-
 arch/s390/pci/pci.c                   |  22 +-
 arch/s390/pci/pci_bus.c               |   5 -
 arch/s390/pci/pci_debug.c             |  12 +-
 arch/s390/pci/pci_dma.c               | 732 ------------------
 arch/s390/pci/pci_event.c             |   2 -
 arch/s390/pci/pci_sysfs.c             |  19 +-
 drivers/iommu/Kconfig                 |   4 +-
 drivers/iommu/s390-iommu.c            | 389 +++++++++-
 13 files changed, 408 insertions(+), 918 deletions(-)
 delete mode 100644 arch/s390/pci/pci_dma.c
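The payoff is that a PCI driver on s390 then takes exactly the same
streaming DMA path as on other IOMMU-backed architectures; a minimal,
hypothetical example with no arch-specific dma_ops involved anymore:

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int foo_dma_example(struct pci_dev *pdev, struct page *page)
{
	dma_addr_t handle;

	/*
	 * Routed through drivers/iommu/dma-iommu.c, which allocates the
	 * IOVA and programs the s390 IOMMU translation tables via the
	 * s390-iommu driver (previously done by arch/s390/pci/pci_dma.c).
	 */
	handle = dma_map_page(&pdev->dev, page, 0, PAGE_SIZE,
			      DMA_BIDIRECTIONAL);
	if (dma_mapping_error(&pdev->dev, handle))
		return -ENOMEM;

	/* ... hand 'handle' to the device ... */

	dma_unmap_page(&pdev->dev, handle, PAGE_SIZE, DMA_BIDIRECTIONAL);
	return 0;
}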
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6cfa6e3996cf..1b61d2b690c0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2167,7 +2167,7 @@
 			forcing Dual Address Cycle for PCI cards supporting
 			greater than 32-bit addressing.
 
-	iommu.strict=	[ARM64, X86] Configure TLB invalidation behaviour
+	iommu.strict=	[ARM64, X86, S390] Configure TLB invalidation behaviour
 			Format: { "0" | "1" }
 			0 - Lazy mode.
 			  Request that DMA unmap operations use deferred
@@ -5418,9 +5418,10 @@
 	s390_iommu=	[HW,S390]
 			Set s390 IOTLB flushing mode
 		strict
-			With strict flushing every unmap operation will result in
-			an IOTLB flush. Default is lazy flushing before reuse,
-			which is faster.
+			With strict flushing every unmap operation will result
+			in an IOTLB flush. Default is lazy flushing before
+			reuse, which is faster. Deprecated, equivalent to
+			iommu.strict=1.
 
 	s390_iommu_aperture=	[KNL,S390]
 			Specifies the size of the per device DMA address space
diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
index b248694e0024..3f74f1cf37df 100644
--- a/arch/s390/include/asm/pci.h
+++ b/arch/s390/include/asm/pci.h
@@ -159,13 +159,6 @@ struct zpci_dev {
 	unsigned long	*dma_table;
 	int		tlb_refresh;
 
-	spinlock_t	iommu_bitmap_lock;
-	unsigned long	*iommu_bitmap;
-	unsigned long	*lazy_bitmap;
-	unsigned long	iommu_size;
-	unsigned long	iommu_pages;
-	unsigned int	next_bit;
-
 	struct iommu_device iommu_dev;  /* IOMMU core handle */
 
 	char res_name[16];
diff --git a/arch/s390/include/asm/pci_clp.h b/arch/s390/include/asm/pci_clp.h
index d6189ed14f84..f0c677ddd270 100644
--- a/arch/s390/include/asm/pci_clp.h
+++ b/arch/s390/include/asm/pci_clp.h
@@ -50,6 +50,9 @@ struct clp_fh_list_entry {
 #define CLP_UTIL_STR_LEN	64
 #define CLP_PFIP_NR_SEGMENTS	4
 
+/* PCI function type numbers */
+#define PCI_FUNC_TYPE_ISM	0x5	/* ISM device */
+
 extern bool zpci_unique_uid;
 
 struct clp_rsp_slpc_pci {
diff --git a/arch/s390/include/asm/pci_dma.h b/arch/s390/include/asm/pci_dma.h
index 91e63426bdc5..42d7cc4262ca 100644
--- a/arch/s390/include/asm/pci_dma.h
+++ b/arch/s390/include/asm/pci_dma.h
@@ -82,116 +82,16 @@ enum zpci_ioat_dtype {
 #define ZPCI_TABLE_VALID_MASK		0x20
 #define ZPCI_TABLE_PROT_MASK		0x200
 
-static inline unsigned int calc_rtx(dma_addr_t ptr)
-{
-	return ((unsigned long) ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK;
-}
-
-static inline unsigned int calc_sx(dma_addr_t ptr)
-{
-	return ((unsigned long) ptr >> ZPCI_ST_SHIFT) & ZPCI_INDEX_MASK;
-}
-
-static inline unsigned int calc_px(dma_addr_t ptr)
-{
-	return ((unsigned long) ptr >> PAGE_SHIFT) & ZPCI_PT_MASK;
-}
-
-static inline void set_pt_pfaa(unsigned long *entry, phys_addr_t pfaa)
-{
-	*entry &= ZPCI_PTE_FLAG_MASK;
-	*entry |= (pfaa & ZPCI_PTE_ADDR_MASK);
-}
-
-static inline void set_rt_sto(unsigned long *entry, phys_addr_t sto)
-{
-	*entry &= ZPCI_RTE_FLAG_MASK;
-	*entry |= (sto & ZPCI_RTE_ADDR_MASK);
-	*entry |= ZPCI_TABLE_TYPE_RTX;
-}
-
-static inline void set_st_pto(unsigned long *entry, phys_addr_t pto)
-{
-	*entry &= ZPCI_STE_FLAG_MASK;
-	*entry |= (pto & ZPCI_STE_ADDR_MASK);
-	*entry |= ZPCI_TABLE_TYPE_SX;
-}
-
-static inline void validate_rt_entry(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_VALID_MASK;
-	*entry &= ~ZPCI_TABLE_OFFSET_MASK;
-	*entry |= ZPCI_TABLE_VALID;
-	*entry |= ZPCI_TABLE_LEN_RTX;
-}
-
-static inline void validate_st_entry(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_VALID_MASK;
-	*entry |= ZPCI_TABLE_VALID;
-}
-
-static inline void invalidate_pt_entry(unsigned long *entry)
-{
-	WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_INVALID);
-	*entry &= ~ZPCI_PTE_VALID_MASK;
-	*entry |= ZPCI_PTE_INVALID;
-}
-
-static inline void validate_pt_entry(unsigned long *entry)
-{
-	WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID);
-	*entry &= ~ZPCI_PTE_VALID_MASK;
-	*entry |= ZPCI_PTE_VALID;
-}
-
-static inline void entry_set_protected(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_PROT_MASK;
-	*entry |= ZPCI_TABLE_PROTECTED;
-}
-
-static inline void entry_clr_protected(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_PROT_MASK;
-	*entry |= ZPCI_TABLE_UNPROTECTED;
-}
-
-static inline int reg_entry_isvalid(unsigned long entry)
-{
-	return (entry & ZPCI_TABLE_VALID_MASK) == ZPCI_TABLE_VALID;
-}
-
-static inline int pt_entry_isvalid(unsigned long entry)
-{
-	return (entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID;
-}
-
-static inline unsigned long *get_rt_sto(unsigned long entry)
-{
-	if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX)
-		return phys_to_virt(entry & ZPCI_RTE_ADDR_MASK);
-	else
-		return NULL;
-
-}
-
-static inline unsigned long *get_st_pto(unsigned long entry)
-{
-	if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX)
-		return phys_to_virt(entry & ZPCI_STE_ADDR_MASK);
-	else
-		return NULL;
-}
-
-/* Prototypes */
-void dma_free_seg_table(unsigned long);
-unsigned long *dma_alloc_cpu_table(void);
-void dma_cleanup_tables(unsigned long *);
-unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr);
-void dma_update_cpu_trans(unsigned long *entry, phys_addr_t page_addr, int flags);
-
-extern const struct dma_map_ops s390_pci_dma_ops;
+struct zpci_iommu_ctrs {
+	atomic64_t		mapped_pages;
+	atomic64_t		unmapped_pages;
+	atomic64_t		global_rpcits;
+	atomic64_t		sync_map_rpcits;
+	atomic64_t		sync_rpcits;
+};
+
+struct zpci_dev;
+
+struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev);
 
 #endif
diff --git a/arch/s390/pci/Makefile b/arch/s390/pci/Makefile
index 5ae31ca9dd44..0547a10406e7 100644
--- a/arch/s390/pci/Makefile
+++ b/arch/s390/pci/Makefile
@@ -3,7 +3,7 @@
 # Makefile for the s390 PCI subsystem.
 #
-obj-$(CONFIG_PCI)	+= pci.o pci_irq.o pci_dma.o pci_clp.o pci_sysfs.o \
+obj-$(CONFIG_PCI)	+= pci.o pci_irq.o pci_clp.o pci_sysfs.o \
 			   pci_event.o pci_debug.o pci_insn.o pci_mmio.o \
 			   pci_bus.o pci_kvm_hook.o
 obj-$(CONFIG_PCI_IOV)	+= pci_iov.o
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index ef38b1514c77..ec4b41839c22 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -124,7 +124,11 @@ int zpci_register_ioat(struct zpci_dev *zdev, u8 dmaas,
 
 	WARN_ON_ONCE(iota & 0x3fff);
 	fib.pba = base;
-	fib.pal = limit;
+	/* Work around off by one in ISM virt device */
+	if (zdev->pft == PCI_FUNC_TYPE_ISM && limit > base)
+		fib.pal = limit + (1 << 12);
+	else
+		fib.pal = limit;
 	fib.iota = iota | ZPCI_IOTA_RTTO_FLAG;
 	fib.gd = zdev->gisa;
 	cc = zpci_mod_fc(req, &fib, status);
@@ -615,7 +619,6 @@ int pcibios_device_add(struct pci_dev *pdev)
 	pdev->no_vf_scan = 1;
 
 	pdev->dev.groups = zpci_attr_groups;
-	pdev->dev.dma_ops = &s390_pci_dma_ops;
 	zpci_map_resources(pdev);
 
 	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
@@ -789,8 +792,6 @@ int zpci_hot_reset_device(struct zpci_dev *zdev)
 	if (zdev->dma_table)
 		rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
 					virt_to_phys(zdev->dma_table), &status);
-	else
-		rc = zpci_dma_init_device(zdev);
 	if (rc) {
 		zpci_disable_device(zdev);
 		return rc;
@@ -915,11 +916,6 @@ int zpci_deconfigure_device(struct zpci_dev *zdev)
 	if (zdev->zbus->bus)
 		zpci_bus_remove_device(zdev, false);
 
-	if (zdev->dma_table) {
-		rc = zpci_dma_exit_device(zdev);
-		if (rc)
-			return rc;
-	}
 	if (zdev_enabled(zdev)) {
 		rc = zpci_disable_device(zdev);
 		if (rc)
@@ -968,8 +964,6 @@ void zpci_release_device(struct kref *kref)
 	if (zdev->zbus->bus)
 		zpci_bus_remove_device(zdev, false);
 
-	if (zdev->dma_table)
-		zpci_dma_exit_device(zdev);
 	if (zdev_enabled(zdev))
 		zpci_disable_device(zdev);
 
@@ -1159,10 +1153,6 @@ static int __init pci_base_init(void)
 	if (rc)
 		goto out_irq;
 
-	rc = zpci_dma_init();
-	if (rc)
-		goto out_dma;
-
 	rc = clp_scan_pci_devices();
 	if (rc)
 		goto out_find;
@@ -1172,8 +1162,6 @@ static int __init pci_base_init(void)
 	return 0;
 
 out_find:
-	zpci_dma_exit();
-out_dma:
 	zpci_irq_exit();
 out_irq:
 	zpci_mem_exit();
diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
index 6a8da1b742ae..b15ad15999f8 100644
--- a/arch/s390/pci/pci_bus.c
+++ b/arch/s390/pci/pci_bus.c
@@ -49,11 +49,6 @@ static int zpci_bus_prepare_device(struct zpci_dev *zdev)
 		rc = zpci_enable_device(zdev);
 		if (rc)
 			return rc;
-		rc = zpci_dma_init_device(zdev);
-		if (rc) {
-			zpci_disable_device(zdev);
-			return rc;
-		}
 	}
 
 	if (!zdev->has_resources) {
diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index ca6bd98eec13..6dde2263c79d 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -53,9 +53,11 @@ static char *pci_fmt3_names[] = {
 };
 
 static char *pci_sw_names[] = {
-	"Allocated pages",
 	"Mapped pages",
 	"Unmapped pages",
+	"Global RPCITs",
+	"Sync Map RPCITs",
+	"Sync RPCITs",
 };
 
 static void pci_fmb_show(struct seq_file *m, char *name[], int length,
@@ -69,10 +71,14 @@ static void pci_fmb_show(struct seq_file *m, char *name[], int length,
 
 static void pci_sw_counter_show(struct seq_file *m)
 {
-	struct zpci_dev *zdev = m->private;
-	atomic64_t *counter = &zdev->allocated_pages;
+	struct zpci_iommu_ctrs *ctrs = zpci_get_iommu_ctrs(m->private);
+	atomic64_t *counter;
 	int i;
 
+	if (!ctrs)
+		return;
+
+	counter = &ctrs->mapped_pages;
 	for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
 		seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
 			   atomic64_read(counter));
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
deleted file mode 100644
index ea478d11fbd1..000000000000
--- a/arch/s390/pci/pci_dma.c
+++ /dev/null
@@ -1,732 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright IBM Corp. 2012
- *
- * Author(s):
- *   Jan Glauber
- */
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-static struct kmem_cache *dma_region_table_cache;
-static struct kmem_cache *dma_page_table_cache;
-static int s390_iommu_strict;
-static u64 s390_iommu_aperture;
-static u32 s390_iommu_aperture_factor = 1;
-
-static int zpci_refresh_global(struct zpci_dev *zdev)
-{
-	return zpci_refresh_trans((u64) zdev->fh << 32, zdev->start_dma,
-				  zdev->iommu_pages * PAGE_SIZE);
-}
-
-unsigned long *dma_alloc_cpu_table(void)
-{
-	unsigned long *table, *entry;
-
-	table = kmem_cache_alloc(dma_region_table_cache, GFP_ATOMIC);
-	if (!table)
-		return NULL;
-
-	for (entry = table; entry < table + ZPCI_TABLE_ENTRIES; entry++)
-		*entry = ZPCI_TABLE_INVALID;
-	return table;
-}
-
-static void dma_free_cpu_table(void *table)
-{
-	kmem_cache_free(dma_region_table_cache, table);
-}
-
-static unsigned long *dma_alloc_page_table(void)
-{
-	unsigned long *table, *entry;
-
-	table = kmem_cache_alloc(dma_page_table_cache, GFP_ATOMIC);
-	if (!table)
-		return NULL;
-
-	for (entry = table; entry < table + ZPCI_PT_ENTRIES; entry++)
-		*entry = ZPCI_PTE_INVALID;
-	return table;
-}
-
-static void dma_free_page_table(void *table)
-{
-	kmem_cache_free(dma_page_table_cache, table);
-}
-
-static unsigned long *dma_get_seg_table_origin(unsigned long *rtep)
-{
-	unsigned long old_rte, rte;
-	unsigned long *sto;
-
-	rte = READ_ONCE(*rtep);
-	if (reg_entry_isvalid(rte)) {
-		sto = get_rt_sto(rte);
-	} else {
-		sto = dma_alloc_cpu_table();
-		if (!sto)
-			return NULL;
-
-		set_rt_sto(&rte, virt_to_phys(sto));
-		validate_rt_entry(&rte);
-		entry_clr_protected(&rte);
-
-		old_rte = cmpxchg(rtep, ZPCI_TABLE_INVALID, rte);
-		if (old_rte != ZPCI_TABLE_INVALID) {
-			/* Somone else was faster, use theirs */
-			dma_free_cpu_table(sto);
-			sto = get_rt_sto(old_rte);
-		}
-	}
-	return sto;
-}
-
-static unsigned long *dma_get_page_table_origin(unsigned long *step)
-{
-	unsigned long old_ste, ste;
-	unsigned long *pto;
-
-	ste = READ_ONCE(*step);
-	if (reg_entry_isvalid(ste)) {
-		pto = get_st_pto(ste);
-	} else {
-		pto = dma_alloc_page_table();
-		if (!pto)
-			return NULL;
-		set_st_pto(&ste, virt_to_phys(pto));
-		validate_st_entry(&ste);
-		entry_clr_protected(&ste);
-
-		old_ste = cmpxchg(step, ZPCI_TABLE_INVALID, ste);
-		if (old_ste != ZPCI_TABLE_INVALID) {
-			/* Somone else was faster, use theirs */
-			dma_free_page_table(pto);
-			pto = get_st_pto(old_ste);
-		}
-	}
-	return pto;
-}
-
-unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr)
-{
-	unsigned long *sto, *pto;
-	unsigned int rtx, sx, px;
-
-	rtx = calc_rtx(dma_addr);
-	sto = dma_get_seg_table_origin(&rto[rtx]);
-	if (!sto)
-		return NULL;
-
-	sx = calc_sx(dma_addr);
-	pto = dma_get_page_table_origin(&sto[sx]);
-	if (!pto)
-		return NULL;
-
-	px = calc_px(dma_addr);
-	return &pto[px];
-}
-
-void dma_update_cpu_trans(unsigned long *ptep, phys_addr_t page_addr, int flags)
-{
-	unsigned long pte;
-
-	pte = READ_ONCE(*ptep);
-	if (flags & ZPCI_PTE_INVALID) {
-		invalidate_pt_entry(&pte);
-	} else {
-		set_pt_pfaa(&pte, page_addr);
-		validate_pt_entry(&pte);
-	}
-
-	if (flags & ZPCI_TABLE_PROTECTED)
-		entry_set_protected(&pte);
-	else
-		entry_clr_protected(&pte);
-
-	xchg(ptep, pte);
-}
-
zpci_dev *zdev, phys_addr_t pa, - dma_addr_t dma_addr, size_t size, int flags) -{ - unsigned int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; - phys_addr_t page_addr = (pa & PAGE_MASK); - unsigned long *entry; - int i, rc = 0; - - if (!nr_pages) - return -EINVAL; - - if (!zdev->dma_table) - return -EINVAL; - - for (i = 0; i < nr_pages; i++) { - entry = dma_walk_cpu_trans(zdev->dma_table, dma_addr); - if (!entry) { - rc = -ENOMEM; - goto undo_cpu_trans; - } - dma_update_cpu_trans(entry, page_addr, flags); - page_addr += PAGE_SIZE; - dma_addr += PAGE_SIZE; - } - -undo_cpu_trans: - if (rc && ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID)) { - flags = ZPCI_PTE_INVALID; - while (i-- > 0) { - page_addr -= PAGE_SIZE; - dma_addr -= PAGE_SIZE; - entry = dma_walk_cpu_trans(zdev->dma_table, dma_addr); - if (!entry) - break; - dma_update_cpu_trans(entry, page_addr, flags); - } - } - return rc; -} - -static int __dma_purge_tlb(struct zpci_dev *zdev, dma_addr_t dma_addr, - size_t size, int flags) -{ - unsigned long irqflags; - int ret; - - /* - * With zdev->tlb_refresh == 0, rpcit is not required to establish new - * translations when previously invalid translation-table entries are - * validated. With lazy unmap, rpcit is skipped for previously valid - * entries, but a global rpcit is then required before any address can - * be re-used, i.e. after each iommu bitmap wrap-around. - */ - if ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID) { - if (!zdev->tlb_refresh) - return 0; - } else { - if (!s390_iommu_strict) - return 0; - } - - ret = zpci_refresh_trans((u64) zdev->fh << 32, dma_addr, - PAGE_ALIGN(size)); - if (ret == -ENOMEM && !s390_iommu_strict) { - /* enable the hypervisor to free some resources */ - if (zpci_refresh_global(zdev)) - goto out; - - spin_lock_irqsave(&zdev->iommu_bitmap_lock, irqflags); - bitmap_andnot(zdev->iommu_bitmap, zdev->iommu_bitmap, - zdev->lazy_bitmap, zdev->iommu_pages); - bitmap_zero(zdev->lazy_bitmap, zdev->iommu_pages); - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, irqflags); - ret = 0; - } -out: - return ret; -} - -static int dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa, - dma_addr_t dma_addr, size_t size, int flags) -{ - int rc; - - rc = __dma_update_trans(zdev, pa, dma_addr, size, flags); - if (rc) - return rc; - - rc = __dma_purge_tlb(zdev, dma_addr, size, flags); - if (rc && ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID)) - __dma_update_trans(zdev, pa, dma_addr, size, ZPCI_PTE_INVALID); - - return rc; -} - -void dma_free_seg_table(unsigned long entry) -{ - unsigned long *sto = get_rt_sto(entry); - int sx; - - for (sx = 0; sx < ZPCI_TABLE_ENTRIES; sx++) - if (reg_entry_isvalid(sto[sx])) - dma_free_page_table(get_st_pto(sto[sx])); - - dma_free_cpu_table(sto); -} - -void dma_cleanup_tables(unsigned long *table) -{ - int rtx; - - if (!table) - return; - - for (rtx = 0; rtx < ZPCI_TABLE_ENTRIES; rtx++) - if (reg_entry_isvalid(table[rtx])) - dma_free_seg_table(table[rtx]); - - dma_free_cpu_table(table); -} - -static unsigned long __dma_alloc_iommu(struct device *dev, - unsigned long start, int size) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - - return iommu_area_alloc(zdev->iommu_bitmap, zdev->iommu_pages, - start, size, zdev->start_dma >> PAGE_SHIFT, - dma_get_seg_boundary_nr_pages(dev, PAGE_SHIFT), - 0); -} - -static dma_addr_t dma_alloc_address(struct device *dev, int size) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - unsigned long offset, flags; - - spin_lock_irqsave(&zdev->iommu_bitmap_lock, flags); - offset = 
__dma_alloc_iommu(dev, zdev->next_bit, size); - if (offset == -1) { - if (!s390_iommu_strict) { - /* global flush before DMA addresses are reused */ - if (zpci_refresh_global(zdev)) - goto out_error; - - bitmap_andnot(zdev->iommu_bitmap, zdev->iommu_bitmap, - zdev->lazy_bitmap, zdev->iommu_pages); - bitmap_zero(zdev->lazy_bitmap, zdev->iommu_pages); - } - /* wrap-around */ - offset = __dma_alloc_iommu(dev, 0, size); - if (offset == -1) - goto out_error; - } - zdev->next_bit = offset + size; - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags); - - return zdev->start_dma + offset * PAGE_SIZE; - -out_error: - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags); - return DMA_MAPPING_ERROR; -} - -static void dma_free_address(struct device *dev, dma_addr_t dma_addr, int size) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - unsigned long flags, offset; - - offset = (dma_addr - zdev->start_dma) >> PAGE_SHIFT; - - spin_lock_irqsave(&zdev->iommu_bitmap_lock, flags); - if (!zdev->iommu_bitmap) - goto out; - - if (s390_iommu_strict) - bitmap_clear(zdev->iommu_bitmap, offset, size); - else - bitmap_set(zdev->lazy_bitmap, offset, size); - -out: - spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags); -} - -static inline void zpci_err_dma(unsigned long rc, unsigned long addr) -{ - struct { - unsigned long rc; - unsigned long addr; - } __packed data = {rc, addr}; - - zpci_err_hex(&data, sizeof(data)); -} - -static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction direction, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - unsigned long pa = page_to_phys(page) + offset; - int flags = ZPCI_PTE_VALID; - unsigned long nr_pages; - dma_addr_t dma_addr; - int ret; - - /* This rounds up number of pages based on size and offset */ - nr_pages = iommu_num_pages(pa, size, PAGE_SIZE); - dma_addr = dma_alloc_address(dev, nr_pages); - if (dma_addr == DMA_MAPPING_ERROR) { - ret = -ENOSPC; - goto out_err; - } - - /* Use rounded up size */ - size = nr_pages * PAGE_SIZE; - - if (direction == DMA_NONE || direction == DMA_TO_DEVICE) - flags |= ZPCI_TABLE_PROTECTED; - - ret = dma_update_trans(zdev, pa, dma_addr, size, flags); - if (ret) - goto out_free; - - atomic64_add(nr_pages, &zdev->mapped_pages); - return dma_addr + (offset & ~PAGE_MASK); - -out_free: - dma_free_address(dev, dma_addr, nr_pages); -out_err: - zpci_err("map error:\n"); - zpci_err_dma(ret, pa); - return DMA_MAPPING_ERROR; -} - -static void s390_dma_unmap_pages(struct device *dev, dma_addr_t dma_addr, - size_t size, enum dma_data_direction direction, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - int npages, ret; - - npages = iommu_num_pages(dma_addr, size, PAGE_SIZE); - dma_addr = dma_addr & PAGE_MASK; - ret = dma_update_trans(zdev, 0, dma_addr, npages * PAGE_SIZE, - ZPCI_PTE_INVALID); - if (ret) { - zpci_err("unmap error:\n"); - zpci_err_dma(ret, dma_addr); - return; - } - - atomic64_add(npages, &zdev->unmapped_pages); - dma_free_address(dev, dma_addr, npages); -} - -static void *s390_dma_alloc(struct device *dev, size_t size, - dma_addr_t *dma_handle, gfp_t flag, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - struct page *page; - phys_addr_t pa; - dma_addr_t map; - - size = PAGE_ALIGN(size); - page = alloc_pages(flag | __GFP_ZERO, get_order(size)); - if (!page) - return NULL; - - pa = page_to_phys(page); - map = s390_dma_map_pages(dev, page, 0, size, 
DMA_BIDIRECTIONAL, 0); - if (dma_mapping_error(dev, map)) { - __free_pages(page, get_order(size)); - return NULL; - } - - atomic64_add(size / PAGE_SIZE, &zdev->allocated_pages); - if (dma_handle) - *dma_handle = map; - return phys_to_virt(pa); -} - -static void s390_dma_free(struct device *dev, size_t size, - void *vaddr, dma_addr_t dma_handle, - unsigned long attrs) -{ - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - - size = PAGE_ALIGN(size); - atomic64_sub(size / PAGE_SIZE, &zdev->allocated_pages); - s390_dma_unmap_pages(dev, dma_handle, size, DMA_BIDIRECTIONAL, 0); - free_pages((unsigned long)vaddr, get_order(size)); -} - -/* Map a segment into a contiguous dma address area */ -static int __s390_dma_map_sg(struct device *dev, struct scatterlist *sg, - size_t size, dma_addr_t *handle, - enum dma_data_direction dir) -{ - unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; - struct zpci_dev *zdev = to_zpci(to_pci_dev(dev)); - dma_addr_t dma_addr_base, dma_addr; - int flags = ZPCI_PTE_VALID; - struct scatterlist *s; - phys_addr_t pa = 0; - int ret; - - dma_addr_base = dma_alloc_address(dev, nr_pages); - if (dma_addr_base == DMA_MAPPING_ERROR) - return -ENOMEM; - - dma_addr = dma_addr_base; - if (dir == DMA_NONE || dir == DMA_TO_DEVICE) - flags |= ZPCI_TABLE_PROTECTED; - - for (s = sg; dma_addr < dma_addr_base + size; s = sg_next(s)) { - pa = page_to_phys(sg_page(s)); - ret = __dma_update_trans(zdev, pa, dma_addr, - s->offset + s->length, flags); - if (ret) - goto unmap; - - dma_addr += s->offset + s->length; - } - ret = __dma_purge_tlb(zdev, dma_addr_base, size, flags); - if (ret) - goto unmap; - - *handle = dma_addr_base; - atomic64_add(nr_pages, &zdev->mapped_pages); - - return ret; - -unmap: - dma_update_trans(zdev, 0, dma_addr_base, dma_addr - dma_addr_base, - ZPCI_PTE_INVALID); - dma_free_address(dev, dma_addr_base, nr_pages); - zpci_err("map error:\n"); - zpci_err_dma(ret, pa); - return ret; -} - -static int s390_dma_map_sg(struct device *dev, struct scatterlist *sg, - int nr_elements, enum dma_data_direction dir, - unsigned long attrs) -{ - struct scatterlist *s = sg, *start = sg, *dma = sg; - unsigned int max = dma_get_max_seg_size(dev); - unsigned int size = s->offset + s->length; - unsigned int offset = s->offset; - int count = 0, i, ret; - - for (i = 1; i < nr_elements; i++) { - s = sg_next(s); - - s->dma_length = 0; - - if (s->offset || (size & ~PAGE_MASK) || - size + s->length > max) { - ret = __s390_dma_map_sg(dev, start, size, - &dma->dma_address, dir); - if (ret) - goto unmap; - - dma->dma_address += offset; - dma->dma_length = size - offset; - - size = offset = s->offset; - start = s; - dma = sg_next(dma); - count++; - } - size += s->length; - } - ret = __s390_dma_map_sg(dev, start, size, &dma->dma_address, dir); - if (ret) - goto unmap; - - dma->dma_address += offset; - dma->dma_length = size - offset; - - return count + 1; -unmap: - for_each_sg(sg, s, count, i) - s390_dma_unmap_pages(dev, sg_dma_address(s), sg_dma_len(s), - dir, attrs); - - return ret; -} - -static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg, - int nr_elements, enum dma_data_direction dir, - unsigned long attrs) -{ - struct scatterlist *s; - int i; - - for_each_sg(sg, s, nr_elements, i) { - if (s->dma_length) - s390_dma_unmap_pages(dev, s->dma_address, s->dma_length, - dir, attrs); - s->dma_address = 0; - s->dma_length = 0; - } -} - -int zpci_dma_init_device(struct zpci_dev *zdev) -{ - u8 status; - int rc; - - /* - * At this point, if the device is part of an IOMMU domain, 
this would - * be a strong hint towards a bug in the IOMMU API (common) code and/or - * simultaneous access via IOMMU and DMA API. So let's issue a warning. - */ - WARN_ON(zdev->s390_domain); - - spin_lock_init(&zdev->iommu_bitmap_lock); - - zdev->dma_table = dma_alloc_cpu_table(); - if (!zdev->dma_table) { - rc = -ENOMEM; - goto out; - } - - /* - * Restrict the iommu bitmap size to the minimum of the following: - * - s390_iommu_aperture which defaults to high_memory - * - 3-level pagetable address limit minus start_dma offset - * - DMA address range allowed by the hardware (clp query pci fn) - * - * Also set zdev->end_dma to the actual end address of the usable - * range, instead of the theoretical maximum as reported by hardware. - * - * This limits the number of concurrently usable DMA mappings since - * for each DMA mapped memory address we need a DMA address including - * extra DMA addresses for multiple mappings of the same memory address. - */ - zdev->start_dma = PAGE_ALIGN(zdev->start_dma); - zdev->iommu_size = min3(s390_iommu_aperture, - ZPCI_TABLE_SIZE_RT - zdev->start_dma, - zdev->end_dma - zdev->start_dma + 1); - zdev->end_dma = zdev->start_dma + zdev->iommu_size - 1; - zdev->iommu_pages = zdev->iommu_size >> PAGE_SHIFT; - zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8); - if (!zdev->iommu_bitmap) { - rc = -ENOMEM; - goto free_dma_table; - } - if (!s390_iommu_strict) { - zdev->lazy_bitmap = vzalloc(zdev->iommu_pages / 8); - if (!zdev->lazy_bitmap) { - rc = -ENOMEM; - goto free_bitmap; - } - - } - if (zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, - virt_to_phys(zdev->dma_table), &status)) { - rc = -EIO; - goto free_bitmap; - } - - return 0; -free_bitmap: - vfree(zdev->iommu_bitmap); - zdev->iommu_bitmap = NULL; - vfree(zdev->lazy_bitmap); - zdev->lazy_bitmap = NULL; -free_dma_table: - dma_free_cpu_table(zdev->dma_table); - zdev->dma_table = NULL; -out: - return rc; -} - -int zpci_dma_exit_device(struct zpci_dev *zdev) -{ - int cc = 0; - - /* - * At this point, if the device is part of an IOMMU domain, this would - * be a strong hint towards a bug in the IOMMU API (common) code and/or - * simultaneous access via IOMMU and DMA API. So let's issue a warning. - */ - WARN_ON(zdev->s390_domain); - if (zdev_enabled(zdev)) - cc = zpci_unregister_ioat(zdev, 0); - /* - * cc == 3 indicates the function is gone already. This can happen - * if the function was deconfigured/disabled suddenly and we have not - * received a new handle yet. 
- */ - if (cc && cc != 3) - return -EIO; - - dma_cleanup_tables(zdev->dma_table); - zdev->dma_table = NULL; - vfree(zdev->iommu_bitmap); - zdev->iommu_bitmap = NULL; - vfree(zdev->lazy_bitmap); - zdev->lazy_bitmap = NULL; - zdev->next_bit = 0; - return 0; -} - -static int __init dma_alloc_cpu_table_caches(void) -{ - dma_region_table_cache = kmem_cache_create("PCI_DMA_region_tables", - ZPCI_TABLE_SIZE, ZPCI_TABLE_ALIGN, - 0, NULL); - if (!dma_region_table_cache) - return -ENOMEM; - - dma_page_table_cache = kmem_cache_create("PCI_DMA_page_tables", - ZPCI_PT_SIZE, ZPCI_PT_ALIGN, - 0, NULL); - if (!dma_page_table_cache) { - kmem_cache_destroy(dma_region_table_cache); - return -ENOMEM; - } - return 0; -} - -int __init zpci_dma_init(void) -{ - s390_iommu_aperture = (u64)virt_to_phys(high_memory); - if (!s390_iommu_aperture_factor) - s390_iommu_aperture = ULONG_MAX; - else - s390_iommu_aperture *= s390_iommu_aperture_factor; - - return dma_alloc_cpu_table_caches(); -} - -void zpci_dma_exit(void) -{ - kmem_cache_destroy(dma_page_table_cache); - kmem_cache_destroy(dma_region_table_cache); -} - -const struct dma_map_ops s390_pci_dma_ops = { - .alloc = s390_dma_alloc, - .free = s390_dma_free, - .map_sg = s390_dma_map_sg, - .unmap_sg = s390_dma_unmap_sg, - .map_page = s390_dma_map_pages, - .unmap_page = s390_dma_unmap_pages, - .mmap = dma_common_mmap, - .get_sgtable = dma_common_get_sgtable, - .alloc_pages = dma_common_alloc_pages, - .free_pages = dma_common_free_pages, - /* dma_supported is unconditionally true without a callback */ -}; -EXPORT_SYMBOL_GPL(s390_pci_dma_ops); - -static int __init s390_iommu_setup(char *str) -{ - if (!strcmp(str, "strict")) - s390_iommu_strict = 1; - return 1; -} - -__setup("s390_iommu=", s390_iommu_setup); - -static int __init s390_iommu_aperture_setup(char *str) -{ - if (kstrtou32(str, 10, &s390_iommu_aperture_factor)) - s390_iommu_aperture_factor = 1; - return 1; -} - -__setup("s390_iommu_aperture=", s390_iommu_aperture_setup); diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c index 4ef5a6a1d618..4d9773ef9e0a 100644 --- a/arch/s390/pci/pci_event.c +++ b/arch/s390/pci/pci_event.c @@ -313,8 +313,6 @@ static void zpci_event_hard_deconfigured(struct zpci_dev *zdev, u32 fh) /* Even though the device is already gone we still * need to free zPCI resources as part of the disable. */ - if (zdev->dma_table) - zpci_dma_exit_device(zdev); if (zdev_enabled(zdev)) zpci_disable_device(zdev); zdev->state = ZPCI_FN_STATE_STANDBY; diff --git a/arch/s390/pci/pci_sysfs.c b/arch/s390/pci/pci_sysfs.c index cae280e5c047..8a7abac51816 100644 --- a/arch/s390/pci/pci_sysfs.c +++ b/arch/s390/pci/pci_sysfs.c @@ -56,6 +56,7 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr, struct pci_dev *pdev = to_pci_dev(dev); struct zpci_dev *zdev = to_zpci(pdev); int ret = 0; + u8 status; /* Can't use device_remove_self() here as that would lead us to lock * the pci_rescan_remove_lock while holding the device' kernfs lock. 
@@ -82,12 +83,6 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr, pci_lock_rescan_remove(); if (pci_dev_is_added(pdev)) { pci_stop_and_remove_bus_device(pdev); - if (zdev->dma_table) { - ret = zpci_dma_exit_device(zdev); - if (ret) - goto out; - } - if (zdev_enabled(zdev)) { ret = zpci_disable_device(zdev); /* @@ -105,14 +100,16 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr, ret = zpci_enable_device(zdev); if (ret) goto out; - ret = zpci_dma_init_device(zdev); - if (ret) { - zpci_disable_device(zdev); - goto out; + + if (zdev->dma_table) { + ret = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, + virt_to_phys(zdev->dma_table), &status); + if (ret) + zpci_disable_device(zdev); } - pci_rescan_bus(zdev->zbus->bus); } out: + pci_rescan_bus(zdev->zbus->bus); pci_unlock_rescan_remove(); if (kn) sysfs_unbreak_active_protection(kn); diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index 79707685d54a..c89ecd981448 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -93,7 +93,7 @@ config IOMMU_DEBUGFS choice prompt "IOMMU default domain type" depends on IOMMU_API - default IOMMU_DEFAULT_DMA_LAZY if X86 || IA64 + default IOMMU_DEFAULT_DMA_LAZY if X86 || IA64 || S390 default IOMMU_DEFAULT_DMA_STRICT help Choose the type of IOMMU domain used to manage DMA API usage by @@ -148,7 +148,7 @@ config OF_IOMMU # IOMMU-agnostic DMA-mapping layer config IOMMU_DMA - def_bool ARM64 || IA64 || X86 + def_bool ARM64 || IA64 || X86 || S390 select DMA_OPS select IOMMU_API select IOMMU_IOVA diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c index 4dfa557270f4..73144ea0adfc 100644 --- a/drivers/iommu/s390-iommu.c +++ b/drivers/iommu/s390-iommu.c @@ -14,16 +14,300 @@ #include #include +#include "dma-iommu.h" + static const struct iommu_ops s390_iommu_ops; +static struct kmem_cache *dma_region_table_cache; +static struct kmem_cache *dma_page_table_cache; + +static u64 s390_iommu_aperture; +static u32 s390_iommu_aperture_factor = 1; + struct s390_domain { struct iommu_domain domain; struct list_head devices; + struct zpci_iommu_ctrs ctrs; unsigned long *dma_table; spinlock_t list_lock; struct rcu_head rcu; }; +static inline unsigned int calc_rtx(dma_addr_t ptr) +{ + return ((unsigned long)ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK; +} + +static inline unsigned int calc_sx(dma_addr_t ptr) +{ + return ((unsigned long)ptr >> ZPCI_ST_SHIFT) & ZPCI_INDEX_MASK; +} + +static inline unsigned int calc_px(dma_addr_t ptr) +{ + return ((unsigned long)ptr >> PAGE_SHIFT) & ZPCI_PT_MASK; +} + +static inline void set_pt_pfaa(unsigned long *entry, phys_addr_t pfaa) +{ + *entry &= ZPCI_PTE_FLAG_MASK; + *entry |= (pfaa & ZPCI_PTE_ADDR_MASK); +} + +static inline void set_rt_sto(unsigned long *entry, phys_addr_t sto) +{ + *entry &= ZPCI_RTE_FLAG_MASK; + *entry |= (sto & ZPCI_RTE_ADDR_MASK); + *entry |= ZPCI_TABLE_TYPE_RTX; +} + +static inline void set_st_pto(unsigned long *entry, phys_addr_t pto) +{ + *entry &= ZPCI_STE_FLAG_MASK; + *entry |= (pto & ZPCI_STE_ADDR_MASK); + *entry |= ZPCI_TABLE_TYPE_SX; +} + +static inline void validate_rt_entry(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_VALID_MASK; + *entry &= ~ZPCI_TABLE_OFFSET_MASK; + *entry |= ZPCI_TABLE_VALID; + *entry |= ZPCI_TABLE_LEN_RTX; +} + +static inline void validate_st_entry(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_VALID_MASK; + *entry |= ZPCI_TABLE_VALID; +} + +static inline void invalidate_pt_entry(unsigned long *entry) +{ + WARN_ON_ONCE((*entry & 
ZPCI_PTE_VALID_MASK) == ZPCI_PTE_INVALID); + *entry &= ~ZPCI_PTE_VALID_MASK; + *entry |= ZPCI_PTE_INVALID; +} + +static inline void validate_pt_entry(unsigned long *entry) +{ + WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID); + *entry &= ~ZPCI_PTE_VALID_MASK; + *entry |= ZPCI_PTE_VALID; +} + +static inline void entry_set_protected(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_PROT_MASK; + *entry |= ZPCI_TABLE_PROTECTED; +} + +static inline void entry_clr_protected(unsigned long *entry) +{ + *entry &= ~ZPCI_TABLE_PROT_MASK; + *entry |= ZPCI_TABLE_UNPROTECTED; +} + +static inline int reg_entry_isvalid(unsigned long entry) +{ + return (entry & ZPCI_TABLE_VALID_MASK) == ZPCI_TABLE_VALID; +} + +static inline int pt_entry_isvalid(unsigned long entry) +{ + return (entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID; +} + +static inline unsigned long *get_rt_sto(unsigned long entry) +{ + if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX) + return phys_to_virt(entry & ZPCI_RTE_ADDR_MASK); + else + return NULL; +} + +static inline unsigned long *get_st_pto(unsigned long entry) +{ + if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX) + return phys_to_virt(entry & ZPCI_STE_ADDR_MASK); + else + return NULL; +} + +static int __init dma_alloc_cpu_table_caches(void) +{ + dma_region_table_cache = kmem_cache_create("PCI_DMA_region_tables", + ZPCI_TABLE_SIZE, + ZPCI_TABLE_ALIGN, + 0, NULL); + if (!dma_region_table_cache) + return -ENOMEM; + + dma_page_table_cache = kmem_cache_create("PCI_DMA_page_tables", + ZPCI_PT_SIZE, + ZPCI_PT_ALIGN, + 0, NULL); + if (!dma_page_table_cache) { + kmem_cache_destroy(dma_region_table_cache); + return -ENOMEM; + } + return 0; +} + +static unsigned long *dma_alloc_cpu_table(void) +{ + unsigned long *table, *entry; + + table = kmem_cache_alloc(dma_region_table_cache, GFP_ATOMIC); + if (!table) + return NULL; + + for (entry = table; entry < table + ZPCI_TABLE_ENTRIES; entry++) + *entry = ZPCI_TABLE_INVALID; + return table; +} + +static void dma_free_cpu_table(void *table) +{ + kmem_cache_free(dma_region_table_cache, table); +} + +static void dma_free_page_table(void *table) +{ + kmem_cache_free(dma_page_table_cache, table); +} + +static void dma_free_seg_table(unsigned long entry) +{ + unsigned long *sto = get_rt_sto(entry); + int sx; + + for (sx = 0; sx < ZPCI_TABLE_ENTRIES; sx++) + if (reg_entry_isvalid(sto[sx])) + dma_free_page_table(get_st_pto(sto[sx])); + + dma_free_cpu_table(sto); +} + +static void dma_cleanup_tables(unsigned long *table) +{ + int rtx; + + if (!table) + return; + + for (rtx = 0; rtx < ZPCI_TABLE_ENTRIES; rtx++) + if (reg_entry_isvalid(table[rtx])) + dma_free_seg_table(table[rtx]); + + dma_free_cpu_table(table); +} + +static unsigned long *dma_alloc_page_table(void) +{ + unsigned long *table, *entry; + + table = kmem_cache_alloc(dma_page_table_cache, GFP_ATOMIC); + if (!table) + return NULL; + + for (entry = table; entry < table + ZPCI_PT_ENTRIES; entry++) + *entry = ZPCI_PTE_INVALID; + return table; +} + +static unsigned long *dma_get_seg_table_origin(unsigned long *rtep) +{ + unsigned long old_rte, rte; + unsigned long *sto; + + rte = READ_ONCE(*rtep); + if (reg_entry_isvalid(rte)) { + sto = get_rt_sto(rte); + } else { + sto = dma_alloc_cpu_table(); + if (!sto) + return NULL; + + set_rt_sto(&rte, virt_to_phys(sto)); + validate_rt_entry(&rte); + entry_clr_protected(&rte); + + old_rte = cmpxchg(rtep, ZPCI_TABLE_INVALID, rte); + if (old_rte != ZPCI_TABLE_INVALID) { + /* Somone else was faster, use theirs */ + 
dma_free_cpu_table(sto); + sto = get_rt_sto(old_rte); + } + } + return sto; +} + +static unsigned long *dma_get_page_table_origin(unsigned long *step) +{ + unsigned long old_ste, ste; + unsigned long *pto; + + ste = READ_ONCE(*step); + if (reg_entry_isvalid(ste)) { + pto = get_st_pto(ste); + } else { + pto = dma_alloc_page_table(); + if (!pto) + return NULL; + set_st_pto(&ste, virt_to_phys(pto)); + validate_st_entry(&ste); + entry_clr_protected(&ste); + + old_ste = cmpxchg(step, ZPCI_TABLE_INVALID, ste); + if (old_ste != ZPCI_TABLE_INVALID) { + /* Somone else was faster, use theirs */ + dma_free_page_table(pto); + pto = get_st_pto(old_ste); + } + } + return pto; +} + +static unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr) +{ + unsigned long *sto, *pto; + unsigned int rtx, sx, px; + + rtx = calc_rtx(dma_addr); + sto = dma_get_seg_table_origin(&rto[rtx]); + if (!sto) + return NULL; + + sx = calc_sx(dma_addr); + pto = dma_get_page_table_origin(&sto[sx]); + if (!pto) + return NULL; + + px = calc_px(dma_addr); + return &pto[px]; +} + +static void dma_update_cpu_trans(unsigned long *ptep, phys_addr_t page_addr, int flags) +{ + unsigned long pte; + + pte = READ_ONCE(*ptep); + if (flags & ZPCI_PTE_INVALID) { + invalidate_pt_entry(&pte); + } else { + set_pt_pfaa(&pte, page_addr); + validate_pt_entry(&pte); + } + + if (flags & ZPCI_TABLE_PROTECTED) + entry_set_protected(&pte); + else + entry_clr_protected(&pte); + + xchg(ptep, pte); +} + static struct s390_domain *to_s390_domain(struct iommu_domain *dom) { return container_of(dom, struct s390_domain, domain); @@ -45,9 +329,14 @@ static struct iommu_domain *s390_domain_alloc(unsigned domain_type) { struct s390_domain *s390_domain; - if (domain_type != IOMMU_DOMAIN_UNMANAGED) + switch (domain_type) { + case IOMMU_DOMAIN_DMA: + case IOMMU_DOMAIN_DMA_FQ: + case IOMMU_DOMAIN_UNMANAGED: + break; + default: return NULL; - + } s390_domain = kzalloc(sizeof(*s390_domain), GFP_KERNEL); if (!s390_domain) return NULL; @@ -86,11 +375,14 @@ static void s390_domain_free(struct iommu_domain *domain) call_rcu(&s390_domain->rcu, s390_iommu_rcu_free_domain); } -static void __s390_iommu_detach_device(struct zpci_dev *zdev) +static void s390_iommu_detach_device(struct iommu_domain *domain, + struct device *dev) { - struct s390_domain *s390_domain = zdev->s390_domain; + struct s390_domain *s390_domain = to_s390_domain(domain); + struct zpci_dev *zdev = to_zpci_dev(dev); unsigned long flags; + WARN_ON(zdev->s390_domain != to_s390_domain(domain)); if (!s390_domain) return; @@ -120,9 +412,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain, return -EINVAL; if (zdev->s390_domain) - __s390_iommu_detach_device(zdev); - else if (zdev->dma_table) - zpci_dma_exit_device(zdev); + s390_iommu_detach_device(&zdev->s390_domain->domain, dev); cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, virt_to_phys(s390_domain->dma_table), &status); @@ -144,17 +434,6 @@ static int s390_iommu_attach_device(struct iommu_domain *domain, return 0; } -static void s390_iommu_detach_device(struct iommu_domain *domain, - struct device *dev) -{ - struct zpci_dev *zdev = to_zpci_dev(dev); - - WARN_ON(zdev->s390_domain != to_s390_domain(domain)); - - __s390_iommu_detach_device(zdev); - zpci_dma_init_device(zdev); -} - static void s390_iommu_get_resv_regions(struct device *dev, struct list_head *list) { @@ -207,7 +486,7 @@ static void s390_iommu_release_device(struct device *dev) * to the device, but keep it attached to other devices in the group. 
*/ if (zdev) - __s390_iommu_detach_device(zdev); + s390_iommu_detach_device(&zdev->s390_domain->domain, dev); }
@@ -225,6 +504,7 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain) rcu_read_lock(); list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { + atomic64_inc(&s390_domain->ctrs.global_rpcits); zpci_refresh_all(zdev); } rcu_read_unlock();
@@ -243,6 +523,7 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain, rcu_read_lock(); list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { + atomic64_inc(&s390_domain->ctrs.sync_rpcits); zpci_refresh_trans((u64)zdev->fh << 32, gather->start, size); }
@@ -260,6 +541,7 @@ static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain, list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { if (!zdev->tlb_refresh) continue; + atomic64_inc(&s390_domain->ctrs.sync_map_rpcits); ret = zpci_refresh_trans((u64)zdev->fh << 32, iova, size); /*
@@ -351,16 +633,15 @@ static int s390_iommu_map_pages(struct iommu_domain *domain, if (!IS_ALIGNED(iova | paddr, pgsize)) return -EINVAL; - if (!(prot & IOMMU_READ)) - return -EINVAL; - if (!(prot & IOMMU_WRITE)) flags |= ZPCI_TABLE_PROTECTED; rc = s390_iommu_validate_trans(s390_domain, paddr, iova, - pgcount, flags); - if (!rc) + pgcount, flags); + if (!rc) { *mapped = size; + atomic64_add(pgcount, &s390_domain->ctrs.mapped_pages); + } return rc; }
@@ -416,12 +697,27 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain, return 0; iommu_iotlb_gather_add_range(gather, iova, size); + atomic64_add(pgcount, &s390_domain->ctrs.unmapped_pages); return size; } +static void s390_iommu_probe_finalize(struct device *dev) +{ + iommu_dma_forcedac = true; + iommu_setup_dma_ops(dev, 0, U64_MAX); +} + +struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev) +{ + if (!zdev || !zdev->s390_domain) + return NULL; + return &zdev->s390_domain->ctrs; +} + int zpci_init_iommu(struct zpci_dev *zdev) { + u64 aperture_size; int rc = 0; rc = iommu_device_sysfs_add(&zdev->iommu_dev, NULL, NULL,
@@ -433,6 +729,12 @@ int zpci_init_iommu(struct zpci_dev *zdev) if (rc) goto out_sysfs; + zdev->start_dma = PAGE_ALIGN(zdev->start_dma); + aperture_size = min3(s390_iommu_aperture, + ZPCI_TABLE_SIZE_RT - zdev->start_dma, + zdev->end_dma - zdev->start_dma + 1); + zdev->end_dma = zdev->start_dma + aperture_size - 1; + return 0; out_sysfs:
@@ -448,10 +750,49 @@ void zpci_destroy_iommu(struct zpci_dev *zdev) iommu_device_sysfs_remove(&zdev->iommu_dev); } +static int __init s390_iommu_setup(char *str) +{ + if (!strcmp(str, "strict")) { + pr_warn("s390_iommu=strict deprecated; use iommu.strict=1 instead\n"); + iommu_set_dma_strict(); + } + return 1; +} + +__setup("s390_iommu=", s390_iommu_setup); + +static int __init s390_iommu_aperture_setup(char *str) +{ + if (kstrtou32(str, 10, &s390_iommu_aperture_factor)) + s390_iommu_aperture_factor = 1; + return 1; +} + +__setup("s390_iommu_aperture=", s390_iommu_aperture_setup); + +static int __init s390_iommu_init(void) +{ + int rc; + + s390_iommu_aperture = (u64)virt_to_phys(high_memory); + if (!s390_iommu_aperture_factor) + s390_iommu_aperture = ULONG_MAX; + else + s390_iommu_aperture *= s390_iommu_aperture_factor; + + rc = dma_alloc_cpu_table_caches(); + if (rc) + return rc; + + return rc; +} +subsys_initcall(s390_iommu_init); + static const struct iommu_ops s390_iommu_ops = { .capable = s390_iommu_capable, .domain_alloc = s390_domain_alloc, .probe_device = s390_iommu_probe_device, .probe_finalize = s390_iommu_probe_finalize, .release_device = s390_iommu_release_device, .device_group = generic_device_group, .pgsize_bitmap = SZ_4K,
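A note on the counter plumbing above: pci_sw_counter_show() in pci_debug.c starts at &ctrs->mapped_pages and simply post-increments through adjacent atomic64_t fields, one per entry of pci_sw_names[]. A minimal sketch of the struct zpci_iommu_ctrs layout this walk assumes (the authoritative definition lives in an arch/s390 header outside the quoted diff; the field order here is inferred from the counter names):

/*
 * Sketch only: the debugfs loop relies on these counters being
 * contiguous atomic64_t fields in the same order as pci_sw_names[],
 * since it advances with a plain counter++ over the struct.
 */
struct zpci_iommu_ctrs {
	atomic64_t	mapped_pages;
	atomic64_t	unmapped_pages;
	atomic64_t	global_rpcits;
	atomic64_t	sync_map_rpcits;
	atomic64_t	sync_rpcits;
};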
From patchwork Tue Jan 24 12:50:35 2023
X-Patchwork-Submitter: Niklas Schnelle
X-Patchwork-Id: 47694
From: Niklas Schnelle
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Wenjia Zhang
Cc: Matthew Rosato, Gerd Bayer, Pierre Morel, iommu@lists.linux.dev,
linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, linux-kernel@vger.kernel.org, Julian Ruess
Subject: [PATCH v5 5/7] iommu/dma: Allow a single FQ in addition to per-CPU FQs
Date: Tue, 24 Jan 2023 13:50:35 +0100
Message-Id: <20230124125037.3201345-6-schnelle@linux.ibm.com>
In-Reply-To: <20230124125037.3201345-1-schnelle@linux.ibm.com>
References: <20230124125037.3201345-1-schnelle@linux.ibm.com>

In some virtualized environments, including s390 paged memory guests, IOTLB flushes are used to update IOMMU shadow tables. Due to this, they are much more expensive than in typical bare metal environments or non-paged s390 guests. In addition, they may parallelize more poorly in virtualized environments. This changes the trade-off for flushing IOVAs such that minimizing the number of IOTLB flushes trumps any benefit of cheaper queuing operations or increased parallelism.

In this scenario per-CPU flush queues pose several problems. Firstly, per-CPU memory is often quite limited, prohibiting larger queues. Secondly, collecting IOVAs per-CPU but flushing via a global timeout reduces the number of IOVAs flushed for each timeout, especially on s390 where PCI interrupts may not be bound to a specific CPU.

Thus, let's introduce a single flush queue mode IOMMU_DOMAIN_DMA_SQ that reuses the same queue logic but only allocates a single global queue, allowing larger batches of IOVAs to be freed at once and with larger timeouts. This allows the common IOVA flushing code to more closely resemble the global flush behavior used by s390's previous internal DMA API implementation.

As we now support two different variants of flush queues, rename the existing __IOMMU_DOMAIN_DMA_FQ to __IOMMU_DOMAIN_DMA_LAZY to indicate the general case of having a flush queue, and introduce separate __IOMMU_DOMAIN_DMA_PERCPU_Q and __IOMMU_DOMAIN_DMA_SINGLE_Q bits to indicate the two queue variants.

Link: https://lore.kernel.org/linux-iommu/3e402947-61f9-b7e8-1414-fde006257b6f@arm.com/
Signed-off-by: Niklas Schnelle
---
v4 -> v5:
- Fixed iommu_group_store_type() mistakenly initializing DMA-SQ instead of DMA-FQ. This was caused by iommu_dma_init_fq() being called before domain->type is set; instead, pass the type as a parameter.
This also closes a window where domain->type is still DMA while the FQ is already used. (Gerd) v2 -> v3: - Rename __IOMMU_DOMAIN_DMA_FQ to __IOMMU_DOMAIN_DMA_LAZY to make it more clear that this bit indicates flush queue use independent of the exact queuing strategy drivers/iommu/dma-iommu.c | 155 ++++++++++++++++++++++++++++--------- drivers/iommu/dma-iommu.h | 4 +- drivers/iommu/iommu.c | 16 ++-- drivers/iommu/s390-iommu.c | 1 + include/linux/iommu.h | 14 +++- 5 files changed, 142 insertions(+), 48 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index f798c44e0903..d13ca6db0012 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -48,8 +48,11 @@ struct iommu_dma_cookie { /* Full allocator for IOMMU_DMA_IOVA_COOKIE */ struct { struct iova_domain iovad; - - struct iova_fq __percpu *fq; /* Flush queue */ + /* Flush queue */ + union { + struct iova_fq *single_fq; + struct iova_fq __percpu *percpu_fq; + }; /* Number of TLB flushes that have been started */ atomic64_t fq_flush_start_cnt; /* Number of TLB flushes that have been finished */ @@ -151,25 +154,44 @@ static void fq_flush_iotlb(struct iommu_dma_cookie *cookie) atomic64_inc(&cookie->fq_flush_finish_cnt); } -static void fq_flush_timeout(struct timer_list *t) +static void fq_flush_percpu(struct iommu_dma_cookie *cookie) { - struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer); int cpu; - atomic_set(&cookie->fq_timer_on, 0); - fq_flush_iotlb(cookie); - for_each_possible_cpu(cpu) { unsigned long flags; struct iova_fq *fq; - fq = per_cpu_ptr(cookie->fq, cpu); + fq = per_cpu_ptr(cookie->percpu_fq, cpu); spin_lock_irqsave(&fq->lock, flags); fq_ring_free(cookie, fq); spin_unlock_irqrestore(&fq->lock, flags); } } +static void fq_flush_single(struct iommu_dma_cookie *cookie) +{ + struct iova_fq *fq = cookie->single_fq; + unsigned long flags; + + spin_lock_irqsave(&fq->lock, flags); + fq_ring_free(cookie, fq); + spin_unlock_irqrestore(&fq->lock, flags); +} + +static void fq_flush_timeout(struct timer_list *t) +{ + struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer); + + atomic_set(&cookie->fq_timer_on, 0); + fq_flush_iotlb(cookie); + + if (cookie->fq_domain->type == IOMMU_DOMAIN_DMA_FQ) + fq_flush_percpu(cookie); + else + fq_flush_single(cookie); +} + static void queue_iova(struct iommu_dma_cookie *cookie, unsigned long pfn, unsigned long pages, struct list_head *freelist) @@ -187,7 +209,11 @@ static void queue_iova(struct iommu_dma_cookie *cookie, */ smp_mb(); - fq = raw_cpu_ptr(cookie->fq); + if (cookie->fq_domain->type == IOMMU_DOMAIN_DMA_FQ) + fq = raw_cpu_ptr(cookie->percpu_fq); + else + fq = cookie->single_fq; + spin_lock_irqsave(&fq->lock, flags); /* @@ -218,31 +244,91 @@ static void queue_iova(struct iommu_dma_cookie *cookie, jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT)); } -static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie) +static void iommu_dma_free_fq_single(struct iova_fq *fq) { - int cpu, idx; + int idx; - if (!cookie->fq) + if (!fq) return; + fq_ring_for_each(idx, fq) + put_pages_list(&fq->entries[idx].freelist); + vfree(fq); +} + +static void iommu_dma_free_fq_percpu(struct iova_fq __percpu *percpu_fq) +{ + int cpu, idx; - del_timer_sync(&cookie->fq_timer); /* The IOVAs will be torn down separately, so just free our queued pages */ for_each_possible_cpu(cpu) { - struct iova_fq *fq = per_cpu_ptr(cookie->fq, cpu); + struct iova_fq *fq = per_cpu_ptr(percpu_fq, cpu); fq_ring_for_each(idx, fq) put_pages_list(&fq->entries[idx].freelist); } - 
free_percpu(cookie->fq); + free_percpu(percpu_fq); +} + +static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie) +{ + if (!cookie->fq_domain) + return; + + del_timer_sync(&cookie->fq_timer); + if (cookie->fq_domain->type == IOMMU_DOMAIN_DMA_FQ) + iommu_dma_free_fq_percpu(cookie->percpu_fq); + else + iommu_dma_free_fq_single(cookie->single_fq); +} + + +static void iommu_dma_init_one_fq(struct iova_fq *fq) +{ + int i; + + fq->head = 0; + fq->tail = 0; + + spin_lock_init(&fq->lock); + + for (i = 0; i < IOVA_FQ_SIZE; i++) + INIT_LIST_HEAD(&fq->entries[i].freelist); +} + +static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie) +{ + struct iova_fq *queue; + + queue = vzalloc(sizeof(*queue)); + if (!queue) + return -ENOMEM; + iommu_dma_init_one_fq(queue); + cookie->single_fq = queue; + + return 0; +} + +static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie) +{ + struct iova_fq __percpu *queue; + int cpu; + + queue = alloc_percpu(struct iova_fq); + if (!queue) + return -ENOMEM; + + for_each_possible_cpu(cpu) + iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu)); + cookie->percpu_fq = queue; + return 0; } /* sysfs updates are serialised by the mutex of the group owning @domain */ -int iommu_dma_init_fq(struct iommu_domain *domain) +int iommu_dma_init_fq(struct iommu_domain *domain, int type) { struct iommu_dma_cookie *cookie = domain->iova_cookie; - struct iova_fq __percpu *queue; - int i, cpu; + int rc; if (cookie->fq_domain) return 0; @@ -250,26 +336,19 @@ int iommu_dma_init_fq(struct iommu_domain *domain) atomic64_set(&cookie->fq_flush_start_cnt, 0); atomic64_set(&cookie->fq_flush_finish_cnt, 0); - queue = alloc_percpu(struct iova_fq); - if (!queue) { - pr_warn("iova flush queue initialization failed\n"); - return -ENOMEM; - } - - for_each_possible_cpu(cpu) { - struct iova_fq *fq = per_cpu_ptr(queue, cpu); - - fq->head = 0; - fq->tail = 0; - - spin_lock_init(&fq->lock); + if (type == IOMMU_DOMAIN_DMA_FQ) + rc = iommu_dma_init_fq_percpu(cookie); + else + rc = iommu_dma_init_fq_single(cookie); - for (i = 0; i < IOVA_FQ_SIZE; i++) - INIT_LIST_HEAD(&fq->entries[i].freelist); + if (rc) { + pr_warn("iova flush queue initialization failed\n"); + /* fall back to strict mode */ + domain->type = IOMMU_DOMAIN_DMA; + return rc; } - cookie->fq = queue; - + domain->type = type; timer_setup(&cookie->fq_timer, fq_flush_timeout, 0); atomic_set(&cookie->fq_timer_on, 0); /* @@ -582,9 +661,9 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, if (ret) goto done_unlock; - /* If the FQ fails we can simply fall back to strict mode */ - if (domain->type == IOMMU_DOMAIN_DMA_FQ && iommu_dma_init_fq(domain)) - domain->type = IOMMU_DOMAIN_DMA; + /* If the FQ fails we fall back to strict mode */ + if (domain->type & __IOMMU_DOMAIN_DMA_LAZY) + iommu_dma_init_fq(domain, domain->type); ret = iova_reserve_iommu_regions(dev, domain); diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h index 942790009292..2e037fc6074c 100644 --- a/drivers/iommu/dma-iommu.h +++ b/drivers/iommu/dma-iommu.h @@ -12,7 +12,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain); void iommu_put_dma_cookie(struct iommu_domain *domain); -int iommu_dma_init_fq(struct iommu_domain *domain); +int iommu_dma_init_fq(struct iommu_domain *domain, int type); void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list); @@ -20,7 +20,7 @@ extern bool iommu_dma_forcedac; #else /* CONFIG_IOMMU_DMA */ -static inline int iommu_dma_init_fq(struct iommu_domain *domain) +static 
inline int iommu_dma_init_fq(struct iommu_domain *domain, int type) { return -EINVAL; } diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 5565e510f7d2..b58c4313851b 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -145,6 +145,7 @@ static const char *iommu_domain_type_str(unsigned int t) return "Unmanaged"; case IOMMU_DOMAIN_DMA: case IOMMU_DOMAIN_DMA_FQ: + case IOMMU_DOMAIN_DMA_SQ: return "Translated"; default: return "Unknown"; @@ -477,7 +478,7 @@ early_param("iommu.strict", iommu_dma_setup); void iommu_set_dma_strict(void) { iommu_dma_strict = true; - if (iommu_def_domain_type == IOMMU_DOMAIN_DMA_FQ) + if (!!(iommu_def_domain_type & __IOMMU_DOMAIN_DMA_LAZY)) iommu_def_domain_type = IOMMU_DOMAIN_DMA; } @@ -678,6 +679,9 @@ static ssize_t iommu_group_show_type(struct iommu_group *group, case IOMMU_DOMAIN_DMA_FQ: type = "DMA-FQ\n"; break; + case IOMMU_DOMAIN_DMA_SQ: + type = "DMA-SQ\n"; + break; } } mutex_unlock(&group->mutex); @@ -2896,10 +2900,8 @@ static int iommu_change_dev_def_domain(struct iommu_group *group, } /* We can bring up a flush queue without tearing down the domain */ - if (type == IOMMU_DOMAIN_DMA_FQ && prev_dom->type == IOMMU_DOMAIN_DMA) { - ret = iommu_dma_init_fq(prev_dom); - if (!ret) - prev_dom->type = IOMMU_DOMAIN_DMA_FQ; + if (!!(type & __IOMMU_DOMAIN_DMA_LAZY) && prev_dom->type == IOMMU_DOMAIN_DMA) { + ret = iommu_dma_init_fq(prev_dom, type); goto out; } @@ -2970,6 +2972,8 @@ static ssize_t iommu_group_store_type(struct iommu_group *group, req_type = IOMMU_DOMAIN_DMA; else if (sysfs_streq(buf, "DMA-FQ")) req_type = IOMMU_DOMAIN_DMA_FQ; + else if (sysfs_streq(buf, "DMA-SQ")) + req_type = IOMMU_DOMAIN_DMA_SQ; else if (sysfs_streq(buf, "auto")) req_type = 0; else @@ -3021,7 +3025,7 @@ static ssize_t iommu_group_store_type(struct iommu_group *group, /* Check if the device in the group still has a driver bound to it */ device_lock(dev); - if (device_is_bound(dev) && !(req_type == IOMMU_DOMAIN_DMA_FQ && + if (device_is_bound(dev) && !((req_type & __IOMMU_DOMAIN_DMA_LAZY) && group->default_domain->type == IOMMU_DOMAIN_DMA)) { pr_err_ratelimited("Device is still bound to driver\n"); ret = -EBUSY; diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c index 73144ea0adfc..ff73b75be886 100644 --- a/drivers/iommu/s390-iommu.c +++ b/drivers/iommu/s390-iommu.c @@ -332,6 +332,7 @@ static struct iommu_domain *s390_domain_alloc(unsigned domain_type) switch (domain_type) { case IOMMU_DOMAIN_DMA: case IOMMU_DOMAIN_DMA_FQ: + case IOMMU_DOMAIN_DMA_SQ: case IOMMU_DOMAIN_UNMANAGED: break; default: diff --git a/include/linux/iommu.h b/include/linux/iommu.h index e7f76599f09e..74cee59516aa 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -62,10 +62,13 @@ struct iommu_domain_geometry { #define __IOMMU_DOMAIN_DMA_API (1U << 1) /* Domain for use in DMA-API implementation */ #define __IOMMU_DOMAIN_PT (1U << 2) /* Domain is identity mapped */ -#define __IOMMU_DOMAIN_DMA_FQ (1U << 3) /* DMA-API uses flush queue */ +#define __IOMMU_DOMAIN_DMA_LAZY (1U << 3) /* DMA-API uses flush queue */ #define __IOMMU_DOMAIN_SVA (1U << 4) /* Shared process address space */ +#define __IOMMU_DOMAIN_DMA_PERCPU_Q (1U << 5) /* Per-CPU flush queue */ +#define __IOMMU_DOMAIN_DMA_SINGLE_Q (1U << 6) /* Single flush queue */ + /* * This are the possible domain-types * @@ -79,6 +82,8 @@ struct iommu_domain_geometry { * certain optimizations for these domains * IOMMU_DOMAIN_DMA_FQ - As above, but definitely using batched TLB * invalidation. 
+ * IOMMU_DOMAIN_DMA_SQ - As IOMMU_DOMAIN_DMA_FQ, but batched TLB + * invalidations use a single global queue * IOMMU_DOMAIN_SVA - DMA addresses are shared process addresses * represented by mm_struct's. */ @@ -89,7 +94,12 @@ struct iommu_domain_geometry { __IOMMU_DOMAIN_DMA_API) #define IOMMU_DOMAIN_DMA_FQ (__IOMMU_DOMAIN_PAGING | \ __IOMMU_DOMAIN_DMA_API | \ - __IOMMU_DOMAIN_DMA_FQ) + __IOMMU_DOMAIN_DMA_LAZY | \ + __IOMMU_DOMAIN_DMA_PERCPU_Q) +#define IOMMU_DOMAIN_DMA_SQ (__IOMMU_DOMAIN_PAGING | \ + __IOMMU_DOMAIN_DMA_API | \ + __IOMMU_DOMAIN_DMA_LAZY | \ + __IOMMU_DOMAIN_DMA_SINGLE_Q) #define IOMMU_DOMAIN_SVA (__IOMMU_DOMAIN_SVA) struct iommu_domain {
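One way to read the new domain-type encoding above: both flush-queue variants set __IOMMU_DOMAIN_DMA_LAZY, so "is any flush queue in use?" is a single bit test, while the PERCPU_Q and SINGLE_Q bits select the variant. A short sketch with hypothetical helpers (illustrative only, not part of the patch):

/* True for both IOMMU_DOMAIN_DMA_FQ and IOMMU_DOMAIN_DMA_SQ */
static inline bool domain_uses_flush_queue(struct iommu_domain *domain)
{
	return domain->type & __IOMMU_DOMAIN_DMA_LAZY;
}

/* True only for IOMMU_DOMAIN_DMA_SQ */
static inline bool domain_uses_single_queue(struct iommu_domain *domain)
{
	return domain->type & __IOMMU_DOMAIN_DMA_SINGLE_Q;
}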
From patchwork Tue Jan 24 12:50:36 2023
X-Patchwork-Submitter: Niklas Schnelle
X-Patchwork-Id: 47693
From: Niklas Schnelle
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Wenjia Zhang
Cc: Matthew Rosato, Gerd Bayer, Pierre Morel, iommu@lists.linux.dev,
linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com, svens@linux.ibm.com, linux-kernel@vger.kernel.org, Julian Ruess
Subject: [PATCH v5 6/7] iommu/dma: Enable variable queue size and use larger single queue
Date: Tue, 24 Jan 2023 13:50:36 +0100
Message-Id: <20230124125037.3201345-7-schnelle@linux.ibm.com>
In-Reply-To: <20230124125037.3201345-1-schnelle@linux.ibm.com>
References: <20230124125037.3201345-1-schnelle@linux.ibm.com>

Flush queues currently use a fixed compile-time size of 256 entries. This being a power of 2 allows the compiler to use shift and mask instead of more expensive modulo operations. With per-CPU flush queues, larger queue sizes would hit per-CPU allocation limits; with a single flush queue, however, these limits do not apply. As single flush queue mode is intended for environments with expensive IOTLB flushes, it then makes sense to use a larger queue size and timeout.

To this end, re-order struct iova_fq so we can use a dynamic array, and make the flush queue size and timeout variable. So as not to lose the shift-and-mask optimization, check that the variable length is a power of 2 and use explicit shift and mask instead of letting the compiler optimize this.

For now, use a large fixed queue size and timeout for single flush queues that brings their performance on s390 paged memory guests on par with the previous s390-specific DMA API implementation. In the future, the flush queue size can then be turned into a config option or kernel parameter.
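To illustrate the shift-and-mask indexing described above (a simplified sketch, not the patch code; the struct and function names are made up): with a power-of-2 queue size, x % size equals x & (size - 1), so the ring wraps with a cheap AND:

struct ring {
	unsigned int head, tail;
	unsigned int mod_mask;	/* queue size - 1; size must be a power of 2 */
};

static unsigned int ring_add(struct ring *r)
{
	unsigned int idx = r->tail;

	/* (idx + 1) & mod_mask == (idx + 1) % size for power-of-2 sizes */
	r->tail = (idx + 1) & r->mod_mask;
	return idx;
}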
Signed-off-by: Niklas Schnelle
---
 drivers/iommu/dma-iommu.c | 60 ++++++++++++++++++++++++++-------------
 1 file changed, 41 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d13ca6db0012..58394cc81d68 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -61,6 +61,8 @@ struct iommu_dma_cookie {
 		struct timer_list	fq_timer;
 		/* 1 when timer is active, 0 when not */
 		atomic_t		fq_timer_on;
+		/* timeout in ms */
+		unsigned long		fq_timer_timeout;
 	};
 	/* Trivial linear page allocator for IOMMU_DMA_MSI_COOKIE */
 	dma_addr_t		msi_iova;
@@ -86,10 +88,16 @@ static int __init iommu_dma_forcedac_setup(char *str)
 early_param("iommu.forcedac", iommu_dma_forcedac_setup);
 
 /* Number of entries per flush queue */
-#define IOVA_FQ_SIZE	256
+#define IOVA_DEFAULT_FQ_SIZE	256
+
+/* Number of entries for a single queue */
+#define IOVA_SINGLE_FQ_SIZE	32768
 
 /* Timeout (in ms) after which entries are flushed from the queue */
-#define IOVA_FQ_TIMEOUT	10
+#define IOVA_DEFAULT_FQ_TIMEOUT	10
+
+/* Timeout (in ms) for a single queue */
+#define IOVA_SINGLE_FQ_TIMEOUT	1000
 
 /* Flush queue entry for deferred flushing */
 struct iova_fq_entry {
@@ -101,18 +109,19 @@ struct iova_fq_entry {
 
 /* Per-CPU flush queue structure */
 struct iova_fq {
-	struct iova_fq_entry entries[IOVA_FQ_SIZE];
-	unsigned int head, tail;
 	spinlock_t lock;
+	unsigned int head, tail;
+	unsigned int mod_mask;
+	struct iova_fq_entry entries[];
 };
 
 #define fq_ring_for_each(i, fq) \
-	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_FQ_SIZE)
+	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) & (fq)->mod_mask)
 
 static inline bool fq_full(struct iova_fq *fq)
 {
 	assert_spin_locked(&fq->lock);
-	return (((fq->tail + 1) % IOVA_FQ_SIZE) == fq->head);
+	return (((fq->tail + 1) & fq->mod_mask) == fq->head);
 }
 
 static inline unsigned int fq_ring_add(struct iova_fq *fq)
@@ -121,7 +130,7 @@ static inline unsigned int fq_ring_add(struct iova_fq *fq)
 
 	assert_spin_locked(&fq->lock);
 
-	fq->tail = (idx + 1) % IOVA_FQ_SIZE;
+	fq->tail = (idx + 1) & fq->mod_mask;
 
 	return idx;
 }
@@ -143,7 +152,7 @@ static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq)
 			       fq->entries[idx].iova_pfn,
 			       fq->entries[idx].pages);
 
-		fq->head = (fq->head + 1) % IOVA_FQ_SIZE;
+		fq->head = (fq->head + 1) & fq->mod_mask;
 	}
 }
 
@@ -241,7 +250,7 @@ static void queue_iova(struct iommu_dma_cookie *cookie,
 	if (!atomic_read(&cookie->fq_timer_on) &&
 	    !atomic_xchg(&cookie->fq_timer_on, 1))
 		mod_timer(&cookie->fq_timer,
-			  jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
+			  jiffies + msecs_to_jiffies(cookie->fq_timer_timeout));
 }
 
 static void iommu_dma_free_fq_single(struct iova_fq *fq)
@@ -283,43 +292,45 @@ static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie)
 }
 
-static void iommu_dma_init_one_fq(struct iova_fq *fq)
+static void iommu_dma_init_one_fq(struct iova_fq *fq, unsigned int fq_size)
 {
 	int i;
 
 	fq->head = 0;
 	fq->tail = 0;
+	fq->mod_mask = fq_size - 1;
 
 	spin_lock_init(&fq->lock);
 
-	for (i = 0; i < IOVA_FQ_SIZE; i++)
+	for (i = 0; i < fq_size; i++)
 		INIT_LIST_HEAD(&fq->entries[i].freelist);
 }
 
-static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)
+static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie, unsigned int fq_size)
 {
 	struct iova_fq *queue;
 
-	queue = vzalloc(sizeof(*queue));
+	queue = vzalloc(struct_size(queue, entries, fq_size));
 	if (!queue)
 		return -ENOMEM;
-	iommu_dma_init_one_fq(queue);
+	iommu_dma_init_one_fq(queue, fq_size);
 	cookie->single_fq = queue;
 
 	return 0;
 }
-static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie)
+static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie, unsigned int fq_size)
 {
 	struct iova_fq __percpu *queue;
 	int cpu;
 
-	queue = alloc_percpu(struct iova_fq);
+	queue = __alloc_percpu(struct_size(queue, entries, fq_size),
+			       __alignof__(*queue));
 	if (!queue)
 		return -ENOMEM;
 
 	for_each_possible_cpu(cpu)
-		iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu));
+		iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu), fq_size);
 	cookie->percpu_fq = queue;
 
 	return 0;
 }
@@ -328,18 +339,27 @@ static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie)
 int iommu_dma_init_fq(struct iommu_domain *domain, int type)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	unsigned int fq_size = IOVA_DEFAULT_FQ_SIZE;
 	int rc;
 
 	if (cookie->fq_domain)
 		return 0;
 
+	if (domain->type == IOMMU_DOMAIN_DMA_SQ)
+		fq_size = IOVA_SINGLE_FQ_SIZE;
+
+	if (!is_power_of_2(fq_size)) {
+		pr_err("FQ size must be a power of 2\n");
+		return -EINVAL;
+	}
+
 	atomic64_set(&cookie->fq_flush_start_cnt,  0);
 	atomic64_set(&cookie->fq_flush_finish_cnt, 0);
 
 	if (type == IOMMU_DOMAIN_DMA_FQ)
-		rc = iommu_dma_init_fq_percpu(cookie);
+		rc = iommu_dma_init_fq_percpu(cookie, fq_size);
 	else
-		rc = iommu_dma_init_fq_single(cookie);
+		rc = iommu_dma_init_fq_single(cookie, fq_size);
 
 	if (rc) {
 		pr_warn("iova flush queue initialization failed\n");
@@ -349,6 +369,8 @@ int iommu_dma_init_fq(struct iommu_domain *domain, int type)
 	}
 
 	domain->type = type;
+	cookie->fq_timer_timeout = (type == IOMMU_DOMAIN_DMA_SQ) ?
+				   IOVA_SINGLE_FQ_TIMEOUT : IOVA_DEFAULT_FQ_TIMEOUT;
 	timer_setup(&cookie->fq_timer, fq_flush_timeout, 0);
 	atomic_set(&cookie->fq_timer_on, 0);
 	/*

From patchwork Tue Jan 24 12:50:37 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Niklas Schnelle
X-Patchwork-Id: 47695
From: Niklas Schnelle
To: Joerg Roedel, Will Deacon, Robin Murphy, Jason Gunthorpe, Wenjia Zhang
Cc: Matthew Rosato, Gerd Bayer, Pierre Morel, iommu@lists.linux.dev,
 linux-s390@vger.kernel.org, borntraeger@linux.ibm.com, hca@linux.ibm.com,
 gor@linux.ibm.com, gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
 svens@linux.ibm.com, linux-kernel@vger.kernel.org, Julian Ruess
Subject: [PATCH v5 7/7] iommu/dma: Add IOMMU op to choose lazy domain type
Date: Tue, 24 Jan 2023 13:50:37 +0100
Message-Id: <20230124125037.3201345-8-schnelle@linux.ibm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230124125037.3201345-1-schnelle@linux.ibm.com>
References: <20230124125037.3201345-1-schnelle@linux.ibm.com>
MIME-Version: 1.0

With two flush queue variants, add an IOMMU operation that allows the
IOMMU driver to choose its preferred flush queue variant on a
per-device basis. For s390, use this callback to choose the single
queue variant whenever the device requires explicit IOTLB flushes on
map, indicating that we're running in a paged memory guest with
expensive IOTLB flushes (see the illustrative sketch after the diff
below).

Signed-off-by: Niklas Schnelle
---
 drivers/iommu/iommu.c      | 13 +++++++++++++
 drivers/iommu/s390-iommu.c | 11 +++++++++++
 include/linux/iommu.h      |  5 +++++
 3 files changed, 29 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b58c4313851b..d3508c357d97 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1616,6 +1616,16 @@ static int iommu_get_def_domain_type(struct device *dev)
 	return 0;
 }
 
+static int iommu_get_lazy_domain_type(struct device *dev)
+{
+	const struct iommu_ops *ops = dev_iommu_ops(dev);
+
+	if (ops->lazy_domain_type)
+		return ops->lazy_domain_type(dev);
+
+	return 0;
+}
+
 static int iommu_group_alloc_default_domain(struct bus_type *bus,
 					    struct iommu_group *group,
 					    unsigned int type)
@@ -1649,6 +1659,9 @@ static int iommu_alloc_default_domain(struct iommu_group *group,
 
 	type = iommu_get_def_domain_type(dev) ? : iommu_def_domain_type;
 
+	if (!!(type & __IOMMU_DOMAIN_DMA_LAZY))
+		type = iommu_get_lazy_domain_type(dev) ? : type;
+
 	return iommu_group_alloc_default_domain(dev->bus, group, type);
 }
 
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index ff73b75be886..b8aab37e8b15 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -459,6 +459,16 @@ static void s390_iommu_get_resv_regions(struct device *dev,
 	}
 }
 
+static int s390_iommu_lazy_domain_type(struct device *dev)
+{
+	struct zpci_dev *zdev = to_zpci_dev(dev);
+
+	if (zdev->tlb_refresh)
+		return IOMMU_DOMAIN_DMA_SQ;
+
+	return IOMMU_DOMAIN_DMA_FQ;
+}
+
 static struct iommu_device *s390_iommu_probe_device(struct device *dev)
 {
 	struct zpci_dev *zdev;
@@ -798,6 +808,7 @@ static const struct iommu_ops s390_iommu_ops = {
 	.device_group = generic_device_group,
 	.pgsize_bitmap = SZ_4K,
 	.get_resv_regions = s390_iommu_get_resv_regions,
+	.lazy_domain_type = s390_iommu_lazy_domain_type,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev = s390_iommu_attach_device,
 		.detach_dev = s390_iommu_detach_device,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 74cee59516aa..aec895087f63 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -250,6 +250,10 @@ struct iommu_iotlb_gather {
  *            - IOMMU_DOMAIN_IDENTITY: must use an identity domain
  *            - IOMMU_DOMAIN_DMA: must use a dma domain
  *            - 0: use the default setting
+ * @lazy_domain_type: Domain type for lazy TLB invalidation, return value:
+ *            - IOMMU_DOMAIN_DMA_FQ: Use per-CPU flush queue
+ *            - IOMMU_DOMAIN_DMA_SQ: Use single flush queue
+ *            - 0: use the default setting
  * @default_domain_ops: the default ops for domains
  * @remove_dev_pasid: Remove any translation configurations of a specific
  *                    pasid, so that any DMA transactions with this pasid
@@ -283,6 +287,7 @@ struct iommu_ops {
 			       struct iommu_page_response *msg);
 
 	int (*def_domain_type)(struct device *dev);
+	int (*lazy_domain_type)(struct device *dev);
 	void (*remove_dev_pasid)(struct device *dev, ioasid_t pasid);
 
 	const struct iommu_domain_ops *default_domain_ops;
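For a driver other than s390, wiring up the new callback would follow
the same pattern as the s390 hook above. The sketch below is
hypothetical and not part of this series; myiommu_dev, to_myiommu_dev()
and the expensive_iotlb_flush flag are made-up stand-ins for whatever
device state a driver consults to detect expensive IOTLB flushes:

/* Hypothetical driver hook, mirroring s390_iommu_lazy_domain_type(). */
static int myiommu_lazy_domain_type(struct device *dev)
{
	struct myiommu_dev *mdev = to_myiommu_dev(dev);	/* illustrative helper */

	/*
	 * Prefer the single, larger flush queue when IOTLB flushes are
	 * expensive (e.g. trapped by a hypervisor); otherwise keep the
	 * per-CPU flush queues.
	 */
	if (mdev->expensive_iotlb_flush)
		return IOMMU_DOMAIN_DMA_SQ;

	return IOMMU_DOMAIN_DMA_FQ;
}

A driver that has no preference can simply omit the op or return 0;
iommu_get_lazy_domain_type() in the core then falls back to the default
lazy domain type.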