From patchwork Wed Jun 14 15:41:56 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Shavit
X-Patchwork-Id: 108038
Date: Wed, 14 Jun 2023 23:41:56 +0800
In-Reply-To: <20230614154304.2860121-1-mshavit@google.com>
Mime-Version: 1.0
References: <20230614154304.2860121-1-mshavit@google.com>
X-Mailer: git-send-email 2.41.0.162.gfafddb0af9-goog
Message-ID: <20230614154304.2860121-5-mshavit@google.com>
Subject: [PATCH v3 04/13] iommu/arm-smmu-v3: Refactor write_ctx_desc
From: Michael Shavit
To: Will Deacon, Robin Murphy, Joerg Roedel
Cc: Michael Shavit, jean-philippe@linaro.org, nicolinc@nvidia.com,
    jgg@nvidia.com, baolu.lu@linux.intel.com,
    linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
    linux-kernel@vger.kernel.org

Update arm_smmu_write_ctx_desc() and its downstream functions so that
they are agnostic of who owns the CD table; in practice it is still
only called on tables owned by an arm_smmu_master. Whether
arm_smmu_write_ctx_desc() triggers a CD sync is also made part of the
API: a sync is only issued when the caller passes a master.

Note that this isn't a pure no-op refactor: SVA now calls
arm_smmu_write_ctx_desc() in a loop for every master the domain is
attached to, even though those masters currently all share the same CD
table. The next commit stops sharing the s1_cfg across masters, which
is what makes that loop necessary.

Signed-off-by: Michael Shavit
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c |  36 ++++++++-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  80 +++++++++++--------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |   6 +-
 3 files changed, 81 insertions(+), 41 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 968559d625c40..48fa8eb271a45 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -45,10 +45,12 @@ static struct arm_smmu_ctx_desc *
 arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 {
         int ret;
+        unsigned long flags;
         u32 new_asid;
         struct arm_smmu_ctx_desc *cd;
         struct arm_smmu_device *smmu;
         struct arm_smmu_domain *smmu_domain;
+        struct arm_smmu_master *master;
 
         cd = xa_load(&arm_smmu_asid_xa, asid);
         if (!cd)
@@ -80,7 +82,11 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
          * be some overlap between use of both ASIDs, until we invalidate the
          * TLB.
          */
-        arm_smmu_write_ctx_desc(smmu_domain, 0, cd);
+        spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+        list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+                arm_smmu_write_ctx_desc(smmu, master->s1_cfg, master, 0, cd);
+        }
+        spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
         /* Invalidate TLB entries previously associated with that context */
         arm_smmu_tlb_inv_asid(smmu, asid);
@@ -211,6 +217,8 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
         struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
         struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+        struct arm_smmu_master *master;
+        unsigned long flags;
 
         mutex_lock(&sva_lock);
         if (smmu_mn->cleared) {
@@ -222,7 +230,12 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
          * DMA may still be running. Keep the cd valid to avoid C_BAD_CD events,
          * but disable translation.
          */
-        arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, &quiet_cd);
+        spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+        list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+                arm_smmu_write_ctx_desc(master->smmu, master->s1_cfg, master,
+                                        mm->pasid, &quiet_cd);
+        }
+        spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
         arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid);
         arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
@@ -248,8 +261,10 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
                           struct mm_struct *mm)
 {
         int ret;
+        unsigned long flags;
         struct arm_smmu_ctx_desc *cd;
         struct arm_smmu_mmu_notifier *smmu_mn;
+        struct arm_smmu_master *master;
 
         list_for_each_entry(smmu_mn, &smmu_domain->mmu_notifiers, list) {
                 if (smmu_mn->mn.mm == mm) {
@@ -279,7 +294,12 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
                 goto err_free_cd;
         }
 
-        ret = arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, cd);
+        spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+        list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+                ret = arm_smmu_write_ctx_desc(master->smmu, master->s1_cfg,
+                                              master, mm->pasid, cd);
+        }
+        spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
         if (ret)
                 goto err_put_notifier;
 
@@ -296,15 +316,23 @@ arm_smmu_mmu_notifier_get(struct arm_smmu_domain *smmu_domain,
 
 static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
 {
+        unsigned long flags;
         struct mm_struct *mm = smmu_mn->mn.mm;
         struct arm_smmu_ctx_desc *cd = smmu_mn->cd;
+        struct arm_smmu_master *master;
         struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
 
         if (!refcount_dec_and_test(&smmu_mn->refs))
                 return;
 
         list_del(&smmu_mn->list);
-        arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, NULL);
+
+        spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+        list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+                arm_smmu_write_ctx_desc(master->smmu, master->s1_cfg, master,
+                                        mm->pasid, NULL);
+        }
+        spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
         /*
          * If we went through clear(), we've already invalidated, and no
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index d79c6ef5d6ed4..b6f7cf60f8f3d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -965,14 +965,13 @@ void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
         arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
 }
 
-static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
+/* master may be null */
+static void arm_smmu_sync_cd(struct arm_smmu_master *master,
                              int ssid, bool leaf)
 {
         size_t i;
-        unsigned long flags;
-        struct arm_smmu_master *master;
         struct arm_smmu_cmdq_batch cmds;
-        struct arm_smmu_device *smmu = smmu_domain->smmu;
+        struct arm_smmu_device *smmu;
         struct arm_smmu_cmdq_ent cmd = {
                 .opcode = CMDQ_OP_CFGI_CD,
                 .cfgi   = {
@@ -981,16 +980,15 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
                 },
         };
 
-        cmds.num = 0;
+        if (!master)
+                return;
 
-        spin_lock_irqsave(&smmu_domain->devices_lock, flags);
-        list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-                for (i = 0; i < master->num_streams; i++) {
-                        cmd.cfgi.sid = master->streams[i].id;
-                        arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
-                }
+        smmu = master->smmu;
+        cmds.num = 0;
+        for (i = 0; i < master->num_streams; i++) {
+                cmd.cfgi.sid = master->streams[i].id;
+                arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
         }
-        spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
         arm_smmu_cmdq_batch_submit(smmu, &cmds);
 }
@@ -1020,16 +1018,18 @@ static void arm_smmu_write_cd_l1_desc(__le64 *dst,
         WRITE_ONCE(*dst, cpu_to_le64(val));
 }
 
-static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
+/* master may be null */
+static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_device *smmu,
+                                   struct arm_smmu_s1_cfg *s1_cfg,
+                                   struct arm_smmu_master *master,
                                    u32 ssid)
 {
         __le64 *l1ptr;
         unsigned int idx;
         struct arm_smmu_l1_ctx_desc *l1_desc;
-        struct arm_smmu_device *smmu = smmu_domain->smmu;
-        struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
+        struct arm_smmu_ctx_desc_cfg *cdcfg = &s1_cfg->cdcfg;
 
-        if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
+        if (s1_cfg->s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
                 return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
 
         idx = ssid >> CTXDESC_SPLIT;
@@ -1041,13 +1041,21 @@ static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
                 l1ptr = cdcfg->cdtab + idx * CTXDESC_L1_DESC_DWORDS;
                 arm_smmu_write_cd_l1_desc(l1ptr, l1_desc);
                 /* An invalid L1CD can be cached */
-                arm_smmu_sync_cd(smmu_domain, ssid, false);
+                arm_smmu_sync_cd(master, ssid, false);
         }
         idx = ssid & (CTXDESC_L2_ENTRIES - 1);
         return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
 }
 
-int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
+/*
+ * master must be provided if a CD sync is required but may be null otherwise
+ * (such as when the CD table isn't inserted into the STE yet, or is about to
+ * be detached.
+ */
+int arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
+                            struct arm_smmu_s1_cfg *s1_cfg,
+                            struct arm_smmu_master *master,
+                            int ssid,
                             struct arm_smmu_ctx_desc *cd)
 {
         /*
@@ -1065,10 +1073,10 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
         bool cd_live;
         __le64 *cdptr;
 
-        if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
+        if (WARN_ON(ssid >= (1 << s1_cfg->s1cdmax)))
                 return -E2BIG;
 
-        cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
+        cdptr = arm_smmu_get_cd_ptr(smmu, s1_cfg, master, ssid);
         if (!cdptr)
                 return -ENOMEM;
 
@@ -1092,11 +1100,11 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
                 cdptr[3] = cpu_to_le64(cd->mair);
 
                 /*
-                 * STE is live, and the SMMU might read dwords of this CD in any
-                 * order. Ensure that it observes valid values before reading
-                 * V=1.
+                 * STE may be live, and the SMMU might read dwords of this CD
+                 * in any order. Ensure that it observes valid values before
+                 * reading V=1.
                  */
-                arm_smmu_sync_cd(smmu_domain, ssid, true);
+                arm_smmu_sync_cd(master, ssid, true);
 
                 val = cd->tcr |
 #ifdef __BIG_ENDIAN
@@ -1108,7 +1116,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
                       FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
                       CTXDESC_CD_0_V;
 
-                if (smmu_domain->stall_enabled)
+                if (s1_cfg->stall_enabled)
                         val |= CTXDESC_CD_0_S;
         }
 
@@ -1122,7 +1130,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
          *   without first making the structure invalid.
          */
         WRITE_ONCE(cdptr[0], cpu_to_le64(val));
-        arm_smmu_sync_cd(smmu_domain, ssid, true);
+        arm_smmu_sync_cd(master, ssid, true);
         return 0;
 }
 
@@ -1136,6 +1144,7 @@ static int arm_smmu_init_s1_cfg(struct arm_smmu_master *master,
         struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
 
         cfg->s1cdmax = master->ssid_bits;
+        cfg->stall_enabled = master->stall_enabled;
         max_contexts = 1 << cfg->s1cdmax;
 
         if (!(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB) ||
@@ -2093,8 +2102,6 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
         if (ret)
                 goto out_unlock;
 
-        smmu_domain->stall_enabled = master->stall_enabled;
-
         ret = arm_smmu_init_s1_cfg(master, cfg);
         if (ret)
                 goto out_free_asid;
@@ -2110,12 +2117,9 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
                           CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
         cd->mair = pgtbl_cfg->arm_lpae_s1_cfg.mair;
 
-        /*
-         * Note that this will end up calling arm_smmu_sync_cd() before
-         * the master has been added to the devices list for this domain.
-         * This isn't an issue because the STE hasn't been installed yet.
-         */
-        ret = arm_smmu_write_ctx_desc(smmu_domain, 0, cd);
+        ret = arm_smmu_write_ctx_desc(smmu, cfg,
+                                      NULL /*Not attached to a master yet */,
+                                      0, cd);
         if (ret)
                 goto out_free_cd_tables;
 
@@ -2386,6 +2390,11 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 
         master->domain = NULL;
         master->ats_enabled = false;
+        if (master->s1_cfg)
+                arm_smmu_write_ctx_desc(
+                        master->smmu, master->s1_cfg,
+                        NULL /* Skip sync since we detach the CD table next*/,
+                        0, NULL);
         master->s1_cfg = NULL;
         master->s2_cfg = NULL;
         arm_smmu_install_ste_for_dev(master);
@@ -2435,7 +2444,8 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
                 ret = -EINVAL;
                 goto out_unlock;
         } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1 &&
-                   smmu_domain->stall_enabled != master->stall_enabled) {
+                   smmu_domain->s1_cfg.stall_enabled !=
+                           master->stall_enabled) {
                 ret = -EINVAL;
                 goto out_unlock;
         }
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 3c614fbe2b8b9..00a493442d6f9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -595,6 +595,7 @@ struct arm_smmu_s1_cfg {
         struct arm_smmu_ctx_desc_cfg    cdcfg;
         u8                              s1fmt;
         u8                              s1cdmax;
+        bool                            stall_enabled;
 };
 
 struct arm_smmu_s2_cfg {
@@ -713,7 +714,6 @@ struct arm_smmu_domain {
         struct mutex                    init_mutex; /* Protects smmu pointer */
 
         struct io_pgtable_ops           *pgtbl_ops;
-        bool                            stall_enabled;
         atomic_t                        nr_ats_masters;
 
         enum arm_smmu_domain_stage      stage;
@@ -742,7 +742,9 @@ extern struct xarray arm_smmu_asid_xa;
 extern struct mutex arm_smmu_asid_lock;
 extern struct arm_smmu_ctx_desc quiet_cd;
 
-int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
+int arm_smmu_write_ctx_desc(struct arm_smmu_device *smmu,
+                            struct arm_smmu_s1_cfg *s1_cfg,
+                            struct arm_smmu_master *smmu_master, int ssid,
                             struct arm_smmu_ctx_desc *cd);
 void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
 void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
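
To summarize the calling convention this patch moves to, here is a small
illustrative sketch. It is not part of the patch and uses hypothetical
stand-in types rather than the real driver structures: the CD-table
configuration is passed explicitly, SVA-style updates loop over every
attached master, and a CD sync is only issued when a master is supplied
(a NULL master skips the sync, as on the detach and domain-finalise paths).

/*
 * Illustrative sketch only -- simplified stand-in types, not the driver's
 * real structures. Models the new arm_smmu_write_ctx_desc() convention.
 */
#include <stddef.h>
#include <stdio.h>

struct s1_cfg   { int s1cdmax; };            /* stand-in for arm_smmu_s1_cfg */
struct master   { struct s1_cfg *s1_cfg; };  /* stand-in for arm_smmu_master */
struct ctx_desc { unsigned long ttbr; };     /* stand-in for arm_smmu_ctx_desc */

/* master may be NULL: in that case no CD sync is issued. */
static int write_ctx_desc(struct s1_cfg *cfg, struct master *master,
                          int ssid, struct ctx_desc *cd)
{
        if (ssid >= (1 << cfg->s1cdmax))
                return -1;

        if (cd)
                printf("write CD %d (ttbr=%#lx)\n", ssid, cd->ttbr);
        else
                printf("clear CD %d\n", ssid);

        if (master)
                printf("sync CD %d\n", ssid);  /* models arm_smmu_sync_cd() */
        return 0;
}

int main(void)
{
        struct s1_cfg cfg = { .s1cdmax = 8 };
        struct master masters[2] = { { &cfg }, { &cfg } };
        struct ctx_desc cd = { .ttbr = 0x1000 };
        size_t i;

        /* SVA-style update: write the CD once per attached master. */
        for (i = 0; i < 2; i++)
                write_ctx_desc(masters[i].s1_cfg, &masters[i], 0, &cd);

        /* Detach-style update: no master, so no sync is issued. */
        write_ctx_desc(&cfg, NULL, 0, NULL);
        return 0;
}

With the real structures, the first loop corresponds to the
list_for_each_entry() loops taken under devices_lock in arm-smmu-v3-sva.c
above, and the NULL-master calls correspond to arm_smmu_detach_dev() and
arm_smmu_domain_finalise_s1().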