Message ID: 20221205084127.535-2-xieyongji@bytedance.com
State: New
Headers:
  From: Xie Yongji <xieyongji@bytedance.com>
  To: mst@redhat.com, jasowang@redhat.com, tglx@linutronix.de, hch@lst.de
  Cc: virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
  Subject: [PATCH v2 01/11] genirq/affinity: Export irq_create_affinity_masks()
  Date: Mon, 5 Dec 2022 16:41:17 +0800
  Message-Id: <20221205084127.535-2-xieyongji@bytedance.com>
  In-Reply-To: <20221205084127.535-1-xieyongji@bytedance.com>
  References: <20221205084127.535-1-xieyongji@bytedance.com>
Series: VDUSE: Improve performance
Commit Message
Yongji Xie
Dec. 5, 2022, 8:41 a.m. UTC
Export irq_create_affinity_masks() so that some modules
can make use of it to implement an interrupt affinity
spreading mechanism.
Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
---
kernel/irq/affinity.c | 1 +
1 file changed, 1 insertion(+)
Comments
On Mon, Dec 05, 2022 at 04:41:17PM +0800, Xie Yongji wrote:
> Export irq_create_affinity_masks() so that some modules
> can make use of it to implement an interrupt affinity
> spreading mechanism.

I don't think drivers should be building low-level affinity masks.
On Tue, Dec 6, 2022 at 4:18 PM Christoph Hellwig <hch@lst.de> wrote:
> I don't think drivers should be building low-level affinity masks.

With the vDPA framework, some drivers (vduse, vdpa-sim) can create
software-defined virtio devices and attach them to the virtio bus.
This kind of virtio device is neither a PCI device nor a platform
device, so this function needs to be exported if we want to implement
automatic affinity management for the virtio device driver bound to
such a device.

Thanks,
Yongji
On Tue, Dec 06, 2022 at 04:40:37PM +0800, Yongji Xie wrote:
> With the vDPA framework, some drivers (vduse, vdpa-sim) can create
> software-defined virtio devices and attach them to the virtio bus.
> This kind of virtio device is neither a PCI device nor a platform
> device, so this function needs to be exported if we want to implement
> automatic affinity management for the virtio device driver bound to
> such a device.

Why are these devices even using interrupts? The whole vdpa thing is a
mess; I also still need to fix up the horrible abuse of the DMA API
for something that isn't even DMA, and this just seems to spread that
same mistake even further.
On Tue, Dec 6, 2022 at 4:47 PM Christoph Hellwig <hch@lst.de> wrote:
> Why are these devices even using interrupts?

They don't use interrupts. But they use a bound workqueue to run the
interrupt callback, so the driver needs an algorithm to choose which
CPU to run the interrupt callback on. We found that the existing
interrupt affinity spreading mechanism is very suitable for this
scenario, so we are trying to export this function to reuse it.

> The whole vdpa thing is a mess; I also still need to fix up the
> horrible abuse of the DMA API for something that isn't even DMA, and
> this just seems to spread that same mistake even further.

We just want to reuse this algorithm, and it is completely independent
of the IRQ subsystem. I don't think it would mess things up.

Thanks,
Yongji
On Tue, Dec 6, 2022 at 5:28 PM Yongji Xie <xieyongji@bytedance.com> wrote:
> > The whole vdpa thing is a mess; I also still need to fix up the
> > horrible abuse of the DMA API for something that isn't even DMA, and
> > this just seems to spread that same mistake even further.

I think it's mostly an issue of some vDPA parents, not vDPA itself. I
had patches to get rid of the DMA API for the vDPA simulators; will
post.

> We just want to reuse this algorithm, and it is completely independent
> of the IRQ subsystem. I don't think it would mess things up.

I think so; it's about which CPU we want to run the callback on, and
the callback is not necessarily triggered by an IRQ.

Thanks
On Mon, Dec 05, 2022 at 04:41:17PM +0800, Xie Yongji wrote:
> Export irq_create_affinity_masks() so that some modules
> can make use of it to implement an interrupt affinity
> spreading mechanism.
>
> Signed-off-by: Xie Yongji <xieyongji@bytedance.com>

So this got nacked, what's the plan now?

> ---
>  kernel/irq/affinity.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index d9a5c1d65a79..f074a7707c6d 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -487,6 +487,7 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
>
>  	return masks;
>  }
> +EXPORT_SYMBOL_GPL(irq_create_affinity_masks);
>
>  /**
>   * irq_calc_affinity_vectors - Calculate the optimal number of vectors
> --
> 2.20.1
On Mon, Dec 19, 2022 at 3:33 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> So this got nacked, what's the plan now?

I'd like to check with Christoph again first.

Hi Christoph,

Jason will post some patches to get rid of the DMA API for the vDPA
simulators. And the irq affinity algorithm is independent of the IRQ
subsystem, IIUC. So could you allow this patch so that we can reuse
the algorithm to select the best CPU (per-CPU affinity if possible, or
at least per-node) to run the virtqueue's irq callback?

Thanks,
Yongji
On Mon, Dec 19, 2022 at 05:36:02PM +0800, Yongji Xie wrote:
> Jason will post some patches to get rid of the DMA API for the vDPA
> simulators. And the irq affinity algorithm is independent of the IRQ
> subsystem, IIUC. So could you allow this patch so that we can reuse
> the algorithm to select the best CPU (per-CPU affinity if possible,
> or at least per-node) to run the virtqueue's irq callback?

I think you need to explain why you are building low-level affinity
masks.

What's the plan now?
On Fri, Jan 27, 2023 at 4:22 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> I think you need to explain why you are building low-level affinity
> masks.

In the VDUSE case, we use a workqueue to run the virtqueue's irq
callback. Now I want to queue the irq callback work to one specific
CPU, to get per-CPU affinity if possible, or at least per-node. So I
need this function to build the low-level affinity masks for each
virtqueue.

> What's the plan now?

If there is no objection, I'll post a new version.

Thanks,
Yongji
On Mon, Jan 30, 2023 at 07:53:55PM +0800, Yongji Xie wrote:
> In the VDUSE case, we use a workqueue to run the virtqueue's irq
> callback. Now I want to queue the irq callback work to one specific
> CPU, to get per-CPU affinity if possible, or at least per-node. So I
> need this function to build the low-level affinity masks for each
> virtqueue.

I doubt you made a convincing case here; I think Christoph was saying
that if it is not an irq, it should not use an irq affinity API. So a
new API, possibly sharing the implementation with irq affinity, is
called for then? Maybe.
On Mon, Feb 13, 2023 at 8:00 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> I doubt you made a convincing case here; I think Christoph was saying
> that if it is not an irq, it should not use an irq affinity API.
>
> So a new API, possibly sharing the implementation with irq affinity,
> is called for then? Maybe.

I'm not sure I get your point on sharing the implementation. I can try
to split irq_create_affinity_masks() into a common part and an
irq-specific part, move the common part to a common directory such as
lib/, and export it. Then we can use the common part to build a new
API. But I'm afraid there would be no difference between the new API
and the irq affinity API except for the name, since the new API would
still be used for irq affinity management. That means we may still
need the irq-specific part in the new API. For example, the virtio-blk
driver doesn't know whether the virtio device is a software-defined
vDPA device or a PCI device, so it will pass a struct irq_affinity to
those APIs, and both the new API and the irq affinity API still need
to handle it.

Thanks,
Yongji
On Mon, Feb 13 2023 at 22:50, Yongji Xie wrote:
> I can try to split irq_create_affinity_masks() into a common part and
> an irq-specific part, move the common part to a common directory such
> as lib/, and export it. Then we can use the common part to build a
> new API.

https://lore.kernel.org/all/20221227022905.352674-1-ming.lei@redhat.com/

Thanks,
tglx
On Tue, Feb 14, 2023 at 2:54 AM Thomas Gleixner <tglx@linutronix.de> wrote:
> https://lore.kernel.org/all/20221227022905.352674-1-ming.lei@redhat.com/

Thanks for the kind reminder! I will rebase my patchset on it.

Thanks,
Yongji
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index d9a5c1d65a79..f074a7707c6d 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -487,6 +487,7 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
 
 	return masks;
 }
+EXPORT_SYMBOL_GPL(irq_create_affinity_masks);
 
 /**
  * irq_calc_affinity_vectors - Calculate the optimal number of vectors