From patchwork Mon Mar 13 12:00:06 2023
X-Patchwork-Submitter: Johan Hovold
X-Patchwork-Id: 68803
From: Johan Hovold
To: Greg Kroah-Hartman
Cc: Marc Zyngier, stable@vger.kernel.org, linux-kernel@vger.kernel.org,
    Johan Hovold, Hsin-Yi Wang, Mark-PK Tsai
Subject: [PATCH stable-5.15 1/2] irqdomain: Refactor __irq_domain_alloc_irqs()
Date: Mon, 13 Mar 2023 13:00:06 +0100
Message-Id: <20230313120007.25938-2-johan@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230313120007.25938-1-johan@kernel.org>
References: <20230313120007.25938-1-johan@kernel.org>

From: Johan Hovold

Refactor __irq_domain_alloc_irqs() so that it can be called internally
while holding the irq_domain_mutex.

This will be used to fix a shared-interrupt mapping race, hence the
Fixes tag.

Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
Cc: stable@vger.kernel.org	# 4.8
Tested-by: Hsin-Yi Wang
Tested-by: Mark-PK Tsai
Signed-off-by: Johan Hovold
Signed-off-by: Marc Zyngier
Link: https://lore.kernel.org/r/20230213104302.17307-6-johan+linaro@kernel.org
[ johan: backport to 5.15; adjust context ]
Signed-off-by: Johan Hovold
---
 kernel/irq/irqdomain.c | 88 +++++++++++++++++++++++-------------------
 1 file changed, 48 insertions(+), 40 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 298f9c12023c..f95196063884 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -1464,40 +1464,12 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
 	return domain->ops->alloc(domain, irq_base, nr_irqs, arg);
 }
 
-/**
- * __irq_domain_alloc_irqs - Allocate IRQs from domain
- * @domain:	domain to allocate from
- * @irq_base:	allocate specified IRQ number if irq_base >= 0
- * @nr_irqs:	number of IRQs to allocate
- * @node:	NUMA node id for memory allocation
- * @arg:	domain specific argument
- * @realloc:	IRQ descriptors have already been allocated if true
- * @affinity:	Optional irq affinity mask for multiqueue devices
- *
- * Allocate IRQ numbers and initialized all data structures to support
- * hierarchy IRQ domains.
- * Parameter @realloc is mainly to support legacy IRQs.
- * Returns error code or allocated IRQ number
- *
- * The whole process to setup an IRQ has been split into two steps.
- * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
- * descriptor and required hardware resources. The second step,
- * irq_domain_activate_irq(), is to program the hardware with preallocated
- * resources. In this way, it's easier to rollback when failing to
- * allocate resources.
- */
-int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
-			    unsigned int nr_irqs, int node, void *arg,
-			    bool realloc, const struct irq_affinity_desc *affinity)
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity)
 {
 	int i, ret, virq;
 
-	if (domain == NULL) {
-		domain = irq_default_domain;
-		if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
-			return -EINVAL;
-	}
-
 	if (realloc && irq_base >= 0) {
 		virq = irq_base;
 	} else {
@@ -1516,24 +1488,18 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 		goto out_free_desc;
 	}
 
-	mutex_lock(&irq_domain_mutex);
 	ret = irq_domain_alloc_irqs_hierarchy(domain, virq, nr_irqs, arg);
-	if (ret < 0) {
-		mutex_unlock(&irq_domain_mutex);
+	if (ret < 0)
 		goto out_free_irq_data;
-	}
 
 	for (i = 0; i < nr_irqs; i++) {
 		ret = irq_domain_trim_hierarchy(virq + i);
-		if (ret) {
-			mutex_unlock(&irq_domain_mutex);
+		if (ret)
 			goto out_free_irq_data;
-		}
 	}
-
+
 	for (i = 0; i < nr_irqs; i++)
 		irq_domain_insert_irq(virq + i);
-	mutex_unlock(&irq_domain_mutex);
 
 	return virq;
 
@@ -1544,6 +1510,48 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 	return ret;
 }
 
+/**
+ * __irq_domain_alloc_irqs - Allocate IRQs from domain
+ * @domain:	domain to allocate from
+ * @irq_base:	allocate specified IRQ number if irq_base >= 0
+ * @nr_irqs:	number of IRQs to allocate
+ * @node:	NUMA node id for memory allocation
+ * @arg:	domain specific argument
+ * @realloc:	IRQ descriptors have already been allocated if true
+ * @affinity:	Optional irq affinity mask for multiqueue devices
+ *
+ * Allocate IRQ numbers and initialized all data structures to support
+ * hierarchy IRQ domains.
+ * Parameter @realloc is mainly to support legacy IRQs.
+ * Returns error code or allocated IRQ number
+ *
+ * The whole process to setup an IRQ has been split into two steps.
+ * The first step, __irq_domain_alloc_irqs(), is to allocate IRQ
+ * descriptor and required hardware resources. The second step,
+ * irq_domain_activate_irq(), is to program the hardware with preallocated
+ * resources. In this way, it's easier to rollback when failing to
+ * allocate resources.
+ */
+int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
+			    unsigned int nr_irqs, int node, void *arg,
+			    bool realloc, const struct irq_affinity_desc *affinity)
+{
+	int ret;
+
+	if (domain == NULL) {
+		domain = irq_default_domain;
+		if (WARN(!domain, "domain is NULL; cannot allocate IRQ\n"))
+			return -EINVAL;
+	}
+
+	mutex_lock(&irq_domain_mutex);
+	ret = irq_domain_alloc_irqs_locked(domain, irq_base, nr_irqs, node, arg,
+					   realloc, affinity);
+	mutex_unlock(&irq_domain_mutex);
+
+	return ret;
+}
+
 /* The irq_data was moved, fix the revmap to refer to the new location */
 static void irq_domain_fix_revmap(struct irq_data *d)
 {
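For context, the refactoring in patch 1/2 follows the common "_locked helper plus locking wrapper" idiom: the core allocation path is split out so that callers already holding irq_domain_mutex can reuse it, while the public entry point keeps taking the lock itself. A minimal standalone sketch of that structure follows (hypothetical names, plain pthreads instead of the kernel's mutex API; not the kernel code):

/*
 * Sketch only: "_locked" helper does the work, the wrapper owns the lock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;

/* Does the real work; the caller must already hold domain_lock. */
static int alloc_irqs_locked(int nr_irqs)
{
	/* ... allocate descriptors, build the hierarchy, insert mappings ... */
	return nr_irqs > 0 ? 100 : -1;	/* pretend virq 100 was allocated */
}

/* Public entry point: takes the lock around the locked helper. */
static int alloc_irqs(int nr_irqs)
{
	int ret;

	pthread_mutex_lock(&domain_lock);
	ret = alloc_irqs_locked(nr_irqs);
	pthread_mutex_unlock(&domain_lock);

	return ret;
}

int main(void)
{
	printf("virq = %d\n", alloc_irqs(1));	/* prints "virq = 100" */
	return 0;
}

The value of the split is that the locking rule lives in one place: the wrapper owns the lock, and the _locked variant documents that its caller must already hold it.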
From patchwork Mon Mar 13 12:00:07 2023
X-Patchwork-Submitter: Johan Hovold
X-Patchwork-Id: 68800

From: Johan Hovold
To: Greg Kroah-Hartman
Cc: Marc Zyngier, stable@vger.kernel.org, linux-kernel@vger.kernel.org,
    Johan Hovold, Dmitry Torokhov, Jon Hunter, Hsin-Yi Wang, Mark-PK Tsai
Subject: [PATCH stable-5.15 2/2] irqdomain: Fix mapping-creation race
Date: Mon, 13 Mar 2023 13:00:07 +0100
Message-Id: <20230313120007.25938-3-johan@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230313120007.25938-1-johan@kernel.org>
References: <20230313120007.25938-1-johan@kernel.org>

From: Johan Hovold

Parallel probing of devices that share interrupts (e.g. when a driver
uses asynchronous probing) can currently result in two mappings for the
same hardware interrupt to be created due to missing serialisation.

Make sure to hold the irq_domain_mutex when creating mappings so that
looking for an existing mapping before creating a new one is done
atomically.

Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers")
Fixes: b62b2cf5759b ("irqdomain: Fix handling of type settings for existing mappings")
Link: https://lore.kernel.org/r/YuJXMHoT4ijUxnRb@hovoldconsulting.com
Cc: stable@vger.kernel.org	# 4.8
Cc: Dmitry Torokhov
Cc: Jon Hunter
Tested-by: Hsin-Yi Wang
Tested-by: Mark-PK Tsai
Signed-off-by: Johan Hovold
Signed-off-by: Marc Zyngier
Link: https://lore.kernel.org/r/20230213104302.17307-7-johan+linaro@kernel.org
Signed-off-by: Johan Hovold
---
 kernel/irq/irqdomain.c | 64 ++++++++++++++++++++++++++++++------------
 1 file changed, 46 insertions(+), 18 deletions(-)

diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index f95196063884..e0b67784ac1e 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
 
 static struct irq_domain *irq_default_domain;
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity);
 static void irq_domain_check_hierarchy(struct irq_domain *domain);
 
 struct irqchip_fwid {
@@ -703,9 +706,9 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
 EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
 #endif
 
-static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
-						  irq_hw_number_t hwirq,
-						  const struct irq_affinity_desc *affinity)
+static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
+							irq_hw_number_t hwirq,
+							const struct irq_affinity_desc *affinity)
 {
 	struct device_node *of_node = irq_domain_get_of_node(domain);
 	int virq;
@@ -720,7 +723,7 @@ static unsigned int __irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
-	if (irq_domain_associate(domain, virq, hwirq)) {
+	if (irq_domain_associate_locked(domain, virq, hwirq)) {
 		irq_free_desc(virq);
 		return 0;
 	}
@@ -756,14 +759,20 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
 		return 0;
 	}
 
+	mutex_lock(&irq_domain_mutex);
+
 	/* Check if mapping already exists */
 	virq = irq_find_mapping(domain, hwirq);
 	if (virq) {
 		pr_debug("existing mapping on virq %d\n", virq);
-		return virq;
+		goto out;
 	}
 
-	return __irq_create_mapping_affinity(domain, hwirq, affinity);
+	virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
+out:
+	mutex_unlock(&irq_domain_mutex);
+
+	return virq;
 }
 EXPORT_SYMBOL_GPL(irq_create_mapping_affinity);
 
@@ -830,6 +839,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 	if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
 		type &= IRQ_TYPE_SENSE_MASK;
 
+	mutex_lock(&irq_domain_mutex);
+
 	/*
 	 * If we've already configured this interrupt,
 	 * don't do it again, or hell will break loose.
@@ -842,7 +853,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
 		 * interrupt number.
 		 */
 		if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
-			return virq;
+			goto out;
 
 		/*
 		 * If the trigger type has not been set yet, then set
@@ -850,35 +861,45 @@
 		 */
 		if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
 			irq_data = irq_get_irq_data(virq);
-			if (!irq_data)
-				return 0;
+			if (!irq_data) {
+				virq = 0;
+				goto out;
+			}
 
 			irqd_set_trigger_type(irq_data, type);
-			return virq;
+			goto out;
 		}
 
 		pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
 			hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
-		return 0;
+		virq = 0;
+		goto out;
 	}
 
 	if (irq_domain_is_hierarchy(domain)) {
-		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
-		if (virq <= 0)
-			return 0;
+		virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
+						    fwspec, false, NULL);
+		if (virq <= 0) {
+			virq = 0;
+			goto out;
+		}
 	} else {
 		/* Create mapping */
-		virq = __irq_create_mapping_affinity(domain, hwirq, NULL);
+		virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
 		if (!virq)
-			return virq;
+			goto out;
 	}
 
 	irq_data = irq_get_irq_data(virq);
-	if (WARN_ON(!irq_data))
-		return 0;
+	if (WARN_ON(!irq_data)) {
+		virq = 0;
+		goto out;
+	}
 
 	/* Store trigger type */
 	irqd_set_trigger_type(irq_data, type);
+out:
+	mutex_unlock(&irq_domain_mutex);
 
 	return virq;
 }
@@ -1910,6 +1931,13 @@ void irq_domain_set_info(struct irq_domain *domain, unsigned int virq,
 	irq_set_handler_data(virq, handler_data);
 }
 
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+					unsigned int nr_irqs, int node, void *arg,
+					bool realloc, const struct irq_affinity_desc *affinity)
+{
+	return -EINVAL;
+}
+
 static void irq_domain_check_hierarchy(struct irq_domain *domain)
 {
 }
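The race addressed by patch 2/2 is a classic check-then-act problem: two parallel probes can both call irq_find_mapping(), both see no mapping, and both create one. The fix keeps the lookup and the creation inside a single irq_domain_mutex critical section. A minimal sketch of the same find-or-create pattern (hypothetical names, pthreads instead of the kernel mutex; not the kernel implementation):

/*
 * Sketch only: lookup and creation share one critical section, so two
 * racing callers can never both create a mapping for the same hwirq.
 */
#include <pthread.h>
#include <stdio.h>

#define MAX_HWIRQ 64

static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;
static int revmap[MAX_HWIRQ];	/* hwirq -> virq, 0 means unmapped */
static int next_virq = 1;

static int create_mapping(int hwirq)
{
	int virq;

	pthread_mutex_lock(&domain_lock);

	virq = revmap[hwirq];		/* check for an existing mapping */
	if (!virq) {
		virq = next_virq++;	/* none found: create one */
		revmap[hwirq] = virq;
	}

	pthread_mutex_unlock(&domain_lock);

	return virq;
}

int main(void)
{
	/* Two requests for the same hwirq get the same virq back. */
	printf("first:  %d\n", create_mapping(5));
	printf("second: %d\n", create_mapping(5));
	return 0;
}

With the lock held across both steps, a second caller is guaranteed to observe the first caller's mapping and simply reuses it, which is exactly what the locked irq_create_mapping_affinity() and irq_create_fwspec_mapping() paths above achieve.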