From patchwork Tue Jul 25 06:47:25 2023
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 125365
From: Viresh Kumar
To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko
Cc: Viresh Kumar, Vincent Guittot, Alex Bennée, stratos-dev@op-lists.linaro.org, Erik Schilling, Manos Pitsidianakis, Mathieu Poirier, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: [PATCH V3 1/2] xen: Update dm_op.h from Xen public header
Date: Tue, 25 Jul 2023 12:17:25 +0530

Update the definitions in dm_op.h from the Xen public header (as of Xen
commit 0c53c638e162 ("tools/xenstore: fix XSA-417 patch")).

Changes made to the file after copying it from Xen:
- Updated the paths of the included files.
- Removed typedef definitions, as is done for the other Xen headers (per
  the kernel coding guidelines).
- Removed some not-so-useful comments.

Signed-off-by: Viresh Kumar
---
V2->V3:
- Updated the commit log with more details.

V1->V2:
- New commit.

 include/xen/interface/hvm/dm_op.h | 445 ++++++++++++++++++++++++++++++
 1 file changed, 445 insertions(+)

diff --git a/include/xen/interface/hvm/dm_op.h b/include/xen/interface/hvm/dm_op.h
index 08d972f87c7b..bc6948fd1815 100644
--- a/include/xen/interface/hvm/dm_op.h
+++ b/include/xen/interface/hvm/dm_op.h
@@ -6,6 +6,451 @@
 #ifndef __XEN_PUBLIC_HVM_DM_OP_H__
 #define __XEN_PUBLIC_HVM_DM_OP_H__
 
+#include <xen/interface/xen.h>
+#include <xen/interface/event_channel.h>
+
+#ifndef uint64_aligned_t
+#define uint64_aligned_t uint64_t
+#endif
+
+/*
+ * IOREQ Servers
+ *
+ * The interface between an I/O emulator and Xen is called an IOREQ Server.
+ * A domain supports a single 'legacy' IOREQ Server which is instantiated if
+ * parameter...
+ *
+ * HVM_PARAM_IOREQ_PFN is read (to get the gfn containing the synchronous
+ * ioreq structures), or...
+ * HVM_PARAM_BUFIOREQ_PFN is read (to get the gfn containing the buffered
+ * ioreq ring), or...
+ * HVM_PARAM_BUFIOREQ_EVTCHN is read (to get the event channel that Xen uses
+ * to request buffered I/O emulation).
+ *
+ * The following hypercalls facilitate the creation of IOREQ Servers for
+ * 'secondary' emulators which are invoked to implement port I/O, memory, or
+ * PCI config space ranges which they explicitly register.
+ */
+
+typedef uint16_t ioservid_t;
+
+/*
+ * XEN_DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a
+ *                               secondary emulator.
+ *
+ * The <id> handed back is unique for the target domain. The value of
+ * <handle_bufioreq> should be one of HVM_IOREQSRV_BUFIOREQ_* defined in
+ * hvm_op.h. If the value is HVM_IOREQSRV_BUFIOREQ_OFF then the buffered
+ * ioreq ring will not be allocated and hence all emulation requests to
+ * this server will be synchronous.
+ */
+#define XEN_DMOP_create_ioreq_server 1
+
+struct xen_dm_op_create_ioreq_server {
+    /* IN - should server handle buffered ioreqs */
+    uint8_t handle_bufioreq;
+    uint8_t pad[3];
+    /* OUT - server id */
+    ioservid_t id;
+};
+
+/*
+ * XEN_DMOP_get_ioreq_server_info: Get all the information necessary to
+ *                                 access IOREQ Server <id>.
+ *
+ * If the IOREQ Server is handling buffered emulation requests, the
+ * emulator needs to bind to event channel <bufioreq_port> to listen for
+ * them. (The event channels used for synchronous emulation requests are
+ * specified in the per-CPU ioreq structures).
+ * In addition, if the XENMEM_acquire_resource memory op cannot be used,
+ * the emulator will need to map the synchronous ioreq structures and
+ * buffered ioreq ring (if it exists) from guest memory.
+ * If <flags> does not contain XEN_DMOP_no_gfns then these pages will be
+ * made available and the frame numbers passed back in gfns <ioreq_gfn>
+ * and <bufioreq_gfn> respectively. (If the IOREQ Server is not handling
+ * buffered emulation only <ioreq_gfn> will be valid).
+ *
+ * NOTE: To access the synchronous ioreq structures and buffered ioreq
+ *       ring, it is preferable to use the XENMEM_acquire_resource memory
+ *       op specifying resource type XENMEM_resource_ioreq_server.
+ */
+#define XEN_DMOP_get_ioreq_server_info 2
+
+struct xen_dm_op_get_ioreq_server_info {
+    /* IN - server id */
+    ioservid_t id;
+    /* IN - flags */
+    uint16_t flags;
+
+#define _XEN_DMOP_no_gfns 0
+#define XEN_DMOP_no_gfns (1u << _XEN_DMOP_no_gfns)
+
+    /* OUT - buffered ioreq port */
+    evtchn_port_t bufioreq_port;
+    /* OUT - sync ioreq gfn (see block comment above) */
+    uint64_aligned_t ioreq_gfn;
+    /* OUT - buffered ioreq gfn (see block comment above) */
+    uint64_aligned_t bufioreq_gfn;
+};
+
+/*
+ * XEN_DMOP_map_io_range_to_ioreq_server: Register an I/O range for
+ *                                        emulation by the client of
+ *                                        IOREQ Server <id>.
+ * XEN_DMOP_unmap_io_range_from_ioreq_server: Deregister an I/O range
+ *                                            previously registered for
+ *                                            emulation by the client of
+ *                                            IOREQ Server <id>.
+ *
+ * There are three types of I/O that can be emulated: port I/O, memory
+ * accesses and PCI config space accesses. The <type> field denotes which
+ * type of range the <start> and <end> (inclusive) fields are specifying.
+ * PCI config space ranges are specified by segment/bus/device/function
+ * values which should be encoded using the DMOP_PCI_SBDF helper macro
+ * below.
+ *
+ * NOTE: unless an emulation request falls entirely within a range mapped
+ *       by a secondary emulator, it will not be passed to that emulator.
+ */
+#define XEN_DMOP_map_io_range_to_ioreq_server 3
+#define XEN_DMOP_unmap_io_range_from_ioreq_server 4
+
+struct xen_dm_op_ioreq_server_range {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t pad;
+    /* IN - type of range */
+    uint32_t type;
+# define XEN_DMOP_IO_RANGE_PORT   0 /* I/O port range */
+# define XEN_DMOP_IO_RANGE_MEMORY 1 /* MMIO range */
+# define XEN_DMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
+    /* IN - inclusive start and end of range */
+    uint64_aligned_t start, end;
+};
+
+#define XEN_DMOP_PCI_SBDF(s,b,d,f) \
+    ((((s) & 0xffff) << 16) |      \
+     (((b) & 0xff) << 8) |         \
+     (((d) & 0x1f) << 3) |         \
+     ((f) & 0x07))
+
+/*
+ * XEN_DMOP_set_ioreq_server_state: Enable or disable the IOREQ Server <id>.
+ *
+ * The IOREQ Server will not be passed any emulation requests until it is
+ * in the enabled state.
+ * Note that the contents of the ioreq_gfn and bufioreq_gfn (see
+ * XEN_DMOP_get_ioreq_server_info) are not meaningful until the IOREQ Server
+ * is in the enabled state.
+ */
+#define XEN_DMOP_set_ioreq_server_state 5
+
+struct xen_dm_op_set_ioreq_server_state {
+    /* IN - server id */
+    ioservid_t id;
+    /* IN - enabled? */
+    uint8_t enabled;
+    uint8_t pad;
+};
+
+/*
+ * XEN_DMOP_destroy_ioreq_server: Destroy the IOREQ Server <id>.
+ *
+ * Any registered I/O ranges will be automatically deregistered.
+ */
+#define XEN_DMOP_destroy_ioreq_server 6
+
+struct xen_dm_op_destroy_ioreq_server {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t pad;
+};
+
+/*
+ * XEN_DMOP_track_dirty_vram: Track modifications to the specified pfn
+ *                            range.
+ *
+ * NOTE: The bitmap passed back to the caller is passed in a
+ *       secondary buffer.
+ */
+#define XEN_DMOP_track_dirty_vram 7
+
+struct xen_dm_op_track_dirty_vram {
+    /* IN - number of pages to be tracked */
+    uint32_t nr;
+    uint32_t pad;
+    /* IN - first pfn to track */
+    uint64_aligned_t first_pfn;
+};
+
+/*
+ * XEN_DMOP_set_pci_intx_level: Set the logical level of one of a domain's
+ *                              PCI INTx pins.
+ */
+#define XEN_DMOP_set_pci_intx_level 8
+
+struct xen_dm_op_set_pci_intx_level {
+    /* IN - PCI INTx identification (domain:bus:device:intx) */
+    uint16_t domain;
+    uint8_t bus, device, intx;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+};
+
+/*
+ * XEN_DMOP_set_isa_irq_level: Set the logical level of one of a domain's
+ *                             ISA IRQ lines.
+ */
+#define XEN_DMOP_set_isa_irq_level 9
+
+struct xen_dm_op_set_isa_irq_level {
+    /* IN - ISA IRQ (0-15) */
+    uint8_t isa_irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+};
+
+/*
+ * XEN_DMOP_set_pci_link_route: Map a PCI INTx line to an IRQ line.
+ */
+#define XEN_DMOP_set_pci_link_route 10
+
+struct xen_dm_op_set_pci_link_route {
+    /* PCI INTx line (0-3) */
+    uint8_t link;
+    /* ISA IRQ (1-15) or 0 -> disable link */
+    uint8_t isa_irq;
+};
+
+/*
+ * XEN_DMOP_modified_memory: Notify that a set of pages were modified by
+ *                           an emulator.
+ *
+ * DMOP buf 1 contains an array of xen_dm_op_modified_memory_extent with
+ * @nr_extents entries.
+ *
+ * On error, @nr_extents will contain the index+1 of the extent that
+ * had the error. It is not defined if or which pages may have been
+ * marked as dirty, in this event.
+ */
+#define XEN_DMOP_modified_memory 11
+
+struct xen_dm_op_modified_memory {
+    /*
+     * IN - Number of extents to be processed
+     * OUT - returns n+1 for failing extent
+     */
+    uint32_t nr_extents;
+    /* IN/OUT - Must be set to 0 */
+    uint32_t opaque;
+};
+
+struct xen_dm_op_modified_memory_extent {
+    /* IN - number of contiguous pages modified */
+    uint32_t nr;
+    uint32_t pad;
+    /* IN - first pfn modified */
+    uint64_aligned_t first_pfn;
+};
+
+/*
+ * XEN_DMOP_set_mem_type: Notify that a region of memory is to be treated
+ *                        in a specific way. (See definition of
+ *                        hvmmem_type_t).
+ *
+ * NOTE: In the event of a continuation (return code -ERESTART), the
+ *       @first_pfn is set to the value of the pfn of the remaining
+ *       region and @nr reduced to the size of the remaining region.
+ */
+#define XEN_DMOP_set_mem_type 12
+
+struct xen_dm_op_set_mem_type {
+    /* IN - number of contiguous pages */
+    uint32_t nr;
+    /* IN - new hvmmem_type_t of region */
+    uint16_t mem_type;
+    uint16_t pad;
+    /* IN - first pfn in region */
+    uint64_aligned_t first_pfn;
+};
+
+/*
+ * XEN_DMOP_inject_event: Inject an event into a VCPU, which will
+ *                        get taken up when it is next scheduled.
+ *
+ * Note that the caller should know enough of the state of the CPU before
+ * injecting, to know what the effect of injecting the event will be.
+ */
+#define XEN_DMOP_inject_event 13
+
+struct xen_dm_op_inject_event {
+    /* IN - index of vCPU */
+    uint32_t vcpuid;
+    /* IN - interrupt vector */
+    uint8_t vector;
+    /* IN - event type (DMOP_EVENT_*) */
+    uint8_t type;
+/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
+# define XEN_DMOP_EVENT_ext_int    0 /* external interrupt */
+# define XEN_DMOP_EVENT_nmi        2 /* nmi */
+# define XEN_DMOP_EVENT_hw_exc     3 /* hardware exception */
+# define XEN_DMOP_EVENT_sw_int     4 /* software interrupt (CD nn) */
+# define XEN_DMOP_EVENT_pri_sw_exc 5 /* ICEBP (F1) */
+# define XEN_DMOP_EVENT_sw_exc     6 /* INT3 (CC), INTO (CE) */
+    /* IN - instruction length */
+    uint8_t insn_len;
+    uint8_t pad0;
+    /* IN - error code (or ~0 to skip) */
+    uint32_t error_code;
+    uint32_t pad1;
+    /* IN - type-specific extra data (%cr2 for #PF, pending_dbg for #DB) */
+    uint64_aligned_t cr2;
+};
+
+/*
+ * XEN_DMOP_inject_msi: Inject an MSI for an emulated device.
+ */
+#define XEN_DMOP_inject_msi 14
+
+struct xen_dm_op_inject_msi {
+    /* IN - MSI data (lower 32 bits) */
+    uint32_t data;
+    uint32_t pad;
+    /* IN - MSI address (0xfeexxxxx) */
+    uint64_aligned_t addr;
+};
+
+/*
+ * XEN_DMOP_map_mem_type_to_ioreq_server: Map or unmap the IOREQ Server <id>
+ *                                        to a specific memory type <type>
+ *                                        for specific accesses <flags>.
+ *
+ * For now, flags only accept the value of XEN_DMOP_IOREQ_MEM_ACCESS_WRITE,
+ * which means only write operations are to be forwarded to an ioreq server.
+ * Support for the emulation of read operations can be added when an ioreq
+ * server has such a requirement in the future.
+ */
+#define XEN_DMOP_map_mem_type_to_ioreq_server 15
+
+struct xen_dm_op_map_mem_type_to_ioreq_server {
+    ioservid_t id;    /* IN - ioreq server id */
+    uint16_t type;    /* IN - memory type */
+    uint32_t flags;   /* IN - types of accesses to be forwarded to the
+                         ioreq server. flags with 0 means to unmap the
+                         ioreq server */
+
+#define XEN_DMOP_IOREQ_MEM_ACCESS_READ  (1u << 0)
+#define XEN_DMOP_IOREQ_MEM_ACCESS_WRITE (1u << 1)
+
+    uint64_t opaque;  /* IN/OUT - only used for hypercall continuation,
+                         has to be set to zero by the caller */
+};
+
+/*
+ * XEN_DMOP_remote_shutdown: Declare a shutdown for another domain.
+ *                           Identical to SCHEDOP_remote_shutdown.
+ */
+#define XEN_DMOP_remote_shutdown 16
+
+struct xen_dm_op_remote_shutdown {
+    uint32_t reason;  /* SHUTDOWN_* => enum sched_shutdown_reason */
+                      /* (Other reason values are not blocked) */
+};
+
+/*
+ * XEN_DMOP_relocate_memory: Relocate GFNs for the specified guest.
+ *                           Identical to XENMEM_add_to_physmap with
+ *                           space == XENMAPSPACE_gmfn_range.
+ */
+#define XEN_DMOP_relocate_memory 17
+
+struct xen_dm_op_relocate_memory {
+    /* All fields are IN/OUT, with their OUT state undefined. */
+    /* Number of GFNs to process. */
+    uint32_t size;
+    uint32_t pad;
+    /* Starting GFN to relocate. */
+    uint64_aligned_t src_gfn;
+    /* Starting GFN where GFNs should be relocated. */
+    uint64_aligned_t dst_gfn;
+};
+
+/*
+ * XEN_DMOP_pin_memory_cacheattr: Pin caching type of RAM space.
+ *                                Identical to XEN_DOMCTL_pin_mem_cacheattr.
+ */
+#define XEN_DMOP_pin_memory_cacheattr 18
+
+struct xen_dm_op_pin_memory_cacheattr {
+    uint64_aligned_t start;  /* Start gfn. */
+    uint64_aligned_t end;    /* End gfn. */
+/* Caching types: these happen to be the same as x86 MTRR/PAT type codes. */
+#define XEN_DMOP_MEM_CACHEATTR_UC  0
+#define XEN_DMOP_MEM_CACHEATTR_WC  1
+#define XEN_DMOP_MEM_CACHEATTR_WT  4
+#define XEN_DMOP_MEM_CACHEATTR_WP  5
+#define XEN_DMOP_MEM_CACHEATTR_WB  6
+#define XEN_DMOP_MEM_CACHEATTR_UCM 7
+#define XEN_DMOP_DELETE_MEM_CACHEATTR (~(uint32_t)0)
+    uint32_t type;  /* XEN_DMOP_MEM_CACHEATTR_* */
+    uint32_t pad;
+};
+
+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ *                         IRQ lines (currently Arm only).
+ *                         Only SPIs are supported.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+
+/*
+ * XEN_DMOP_nr_vcpus: Query the number of vCPUs a domain has.
+ *
+ * This is the number of vcpu objects allocated in Xen for the domain, and is
+ * fixed from creation time. This bound is applicable to e.g. the vcpuid
+ * parameter of XEN_DMOP_inject_event, or the number of struct ioreq objects
+ * mapped via XENMEM_acquire_resource.
+ */
+#define XEN_DMOP_nr_vcpus 20
+
+struct xen_dm_op_nr_vcpus {
+    uint32_t vcpus;  /* OUT */
+};
+
+struct xen_dm_op {
+    uint32_t op;
+    uint32_t pad;
+    union {
+        struct xen_dm_op_create_ioreq_server create_ioreq_server;
+        struct xen_dm_op_get_ioreq_server_info get_ioreq_server_info;
+        struct xen_dm_op_ioreq_server_range map_io_range_to_ioreq_server;
+        struct xen_dm_op_ioreq_server_range unmap_io_range_from_ioreq_server;
+        struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
+        struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
+        struct xen_dm_op_track_dirty_vram track_dirty_vram;
+        struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
+        struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
+        struct xen_dm_op_set_irq_level set_irq_level;
+        struct xen_dm_op_set_pci_link_route set_pci_link_route;
+        struct xen_dm_op_modified_memory modified_memory;
+        struct xen_dm_op_set_mem_type set_mem_type;
+        struct xen_dm_op_inject_event inject_event;
+        struct xen_dm_op_inject_msi inject_msi;
+        struct xen_dm_op_map_mem_type_to_ioreq_server map_mem_type_to_ioreq_server;
+        struct xen_dm_op_remote_shutdown remote_shutdown;
+        struct xen_dm_op_relocate_memory relocate_memory;
+        struct xen_dm_op_pin_memory_cacheattr pin_memory_cacheattr;
+        struct xen_dm_op_nr_vcpus nr_vcpus;
+    } u;
+};
+
 struct xen_dm_op_buf {
     GUEST_HANDLE(void) h;
     xen_ulong_t size;
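
The structures above are driven through a common calling convention: fill in
a struct xen_dm_op, wrap it in a struct xen_dm_op_buf with
set_xen_guest_handle(), and issue HYPERVISOR_dm_op() against the target
domain; patch 2/2 below does exactly this for XEN_DMOP_set_irq_level. As a
minimal sketch of the same pattern (xen_dm_nr_vcpus() is a hypothetical
helper, not part of this series), querying a domain's vCPU count with
XEN_DMOP_nr_vcpus would look like:

/*
 * Illustrative only -- not part of this series. Follows the same
 * pattern as irqfd_inject() in patch 2/2.
 */
static int xen_dm_nr_vcpus(domid_t domid, uint32_t *vcpus)
{
        struct xen_dm_op dm_op = {
                .op = XEN_DMOP_nr_vcpus,
        };
        struct xen_dm_op_buf xbufs = {
                .size = sizeof(dm_op),
        };
        long rc;

        /* Point the single dm_op buffer at the request structure */
        set_xen_guest_handle(xbufs.h, &dm_op);

        xen_preemptible_hcall_begin();
        rc = HYPERVISOR_dm_op(domid, 1, &xbufs);
        xen_preemptible_hcall_end();

        if (rc)
                return rc;

        /* Xen fills in the OUT field of the union member */
        *vcpus = dm_op.u.nr_vcpus.vcpus;
        return 0;
}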

From patchwork Tue Jul 25 06:47:26 2023
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 125371
From: Viresh Kumar
To: Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko
Cc: Viresh Kumar, Vincent Guittot, Alex Bennée, stratos-dev@op-lists.linaro.org, Erik Schilling, Manos Pitsidianakis, Mathieu Poirier, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: [PATCH V3 2/2] xen: privcmd: Add support for irqfd
Date: Tue, 25 Jul 2023 12:17:26 +0530
Message-Id: <8c1ea7faa0fe5deed1b2c86ea0f73e824f8a001d.1690190453.git.viresh.kumar@linaro.org>

Xen provides support for injecting interrupts into a guest via the
HYPERVISOR_dm_op() hypercall. Virtio-based device backend implementations
currently use it in an inefficient manner.

Generally, Virtio backends are implemented to work with the eventfd-based
mechanism. To make such backends work with Xen, another software layer
needs to poll the eventfds and raise interrupts to the guest using the
Xen-based mechanism, which costs an extra context switch.

This is not a new problem in Linux though; it is present with other
hypervisors, like KVM, as well. The generic solution implemented in the
kernel for them is an ioctl that passes the interrupt details along with
an eventfd, letting the kernel take care of polling the eventfd and
raising the interrupt, instead of handling this in user space (with its
extra context switch).

This patch adds support for injecting a specific interrupt into the guest
via the eventfd mechanism, avoiding the extra context switch. It is
inspired by the existing implementations for KVM, etc.

Signed-off-by: Viresh Kumar
---
V2.1->V3:
- No changes.

V2->V2.1:
- Select EVENTFD from Kconfig.

V1->V2:
- Improve error handling.
- Remove the unnecessary usage of list_for_each_entry_safe().
- Restrict the use of XEN_DMOP_set_irq_level to ARM64 only.
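
For context, a VMM-side user of the new interface would drive it roughly as
follows. This is a hedged sketch rather than code from the series: the
setup_irqfd() helper and the SPI number 32 are illustrative assumptions, and
the privcmd device node is assumed to be /dev/xen/privcmd; only struct
privcmd_irqfd and IOCTL_PRIVCMD_IRQFD come from the patch below.

/* Illustrative VMM-side usage; error handling elided. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <xen/privcmd.h>

static int setup_irqfd(int privcmd_fd, uint16_t domid)
{
        int evtfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
        struct privcmd_irqfd irqfd = {
                .fd = evtfd,
                .flags = 0,
                .irq = 32,      /* example guest SPI number */
                .dom = domid,
                .level = 1,
        };
        uint64_t val = 1;

        /*
         * Hand the eventfd to the kernel, which polls it and injects the
         * interrupt itself -- no extra context switch through userspace.
         */
        ioctl(privcmd_fd, IOCTL_PRIVCMD_IRQFD, &irqfd);

        /* Signalling the eventfd now triggers an in-kernel injection */
        write(evtfd, &val, sizeof(val));

        return evtfd;
}

Here privcmd_fd is an open handle to the privcmd device (assumed to be
/dev/xen/privcmd).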

 drivers/xen/Kconfig        |   1 +
 drivers/xen/privcmd.c      | 276 ++++++++++++++++++++++++++++++++++++-
 include/uapi/xen/privcmd.h |  14 ++
 3 files changed, 289 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index d5d7c402b651..7967393c55a4 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -261,6 +261,7 @@ config XEN_SCSI_BACKEND
 config XEN_PRIVCMD
         tristate "Xen hypercall passthrough driver"
         depends on XEN
+        select EVENTFD
         default m
         help
           The hypercall passthrough driver allows privileged user programs to

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index e2f580e30a86..0debc5482253 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -9,11 +9,16 @@
 
 #define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt
 
+#include <linux/eventfd.h>
+#include <linux/file.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/poll.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/workqueue.h>
 #include <linux/errno.h>
 #include <linux/mm.h>
 #include <linux/mman.h>
@@ -833,6 +838,257 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
         return rc;
 }
 
+/* Irqfd support */
+static struct workqueue_struct *irqfd_cleanup_wq;
+static DEFINE_MUTEX(irqfds_lock);
+static LIST_HEAD(irqfds_list);
+
+struct privcmd_kernel_irqfd {
+        domid_t dom;
+        u8 level;
+        bool error;
+        u32 irq;
+        struct eventfd_ctx *eventfd;
+        struct work_struct shutdown;
+        wait_queue_entry_t wait;
+        struct list_head list;
+        poll_table pt;
+};
+
+static void irqfd_deactivate(struct privcmd_kernel_irqfd *kirqfd)
+{
+        lockdep_assert_held(&irqfds_lock);
+
+        list_del_init(&kirqfd->list);
+        queue_work(irqfd_cleanup_wq, &kirqfd->shutdown);
+}
+
+static void irqfd_shutdown(struct work_struct *work)
+{
+        struct privcmd_kernel_irqfd *kirqfd =
+                container_of(work, struct privcmd_kernel_irqfd, shutdown);
+        u64 cnt;
+
+        eventfd_ctx_remove_wait_queue(kirqfd->eventfd, &kirqfd->wait, &cnt);
+        eventfd_ctx_put(kirqfd->eventfd);
+        kfree(kirqfd);
+}
+
+static void irqfd_inject(struct privcmd_kernel_irqfd *kirqfd)
+{
+        /* Different architectures support this differently */
+        struct xen_dm_op dm_op = {
+#ifdef CONFIG_ARM64
+                .op = XEN_DMOP_set_irq_level,
+                .u.set_irq_level.irq = kirqfd->irq,
+                .u.set_irq_level.level = kirqfd->level,
+#endif
+        };
+        struct xen_dm_op_buf xbufs = {
+                .size = sizeof(dm_op),
+        };
+        u64 cnt;
+        long rc;
+
+        eventfd_ctx_do_read(kirqfd->eventfd, &cnt);
+        set_xen_guest_handle(xbufs.h, &dm_op);
+
+        xen_preemptible_hcall_begin();
+        rc = HYPERVISOR_dm_op(kirqfd->dom, 1, &xbufs);
+        xen_preemptible_hcall_end();
+
+        /* Don't repeat the error message for consecutive failures */
+        if (rc && !kirqfd->error) {
+                pr_err("Failed to configure irq: %d to level: %d for guest domain: %d\n",
+                       kirqfd->irq, kirqfd->level, kirqfd->dom);
+        }
+
+        kirqfd->error = !!rc;
+}
+
+static int
+irqfd_wakeup(wait_queue_entry_t *wait, unsigned int mode, int sync, void *key)
+{
+        struct privcmd_kernel_irqfd *kirqfd =
+                container_of(wait, struct privcmd_kernel_irqfd, wait);
+        __poll_t flags = key_to_poll(key);
+
+        if (flags & EPOLLIN)
+                irqfd_inject(kirqfd);
+
+        if (flags & EPOLLHUP) {
+                mutex_lock(&irqfds_lock);
+                irqfd_deactivate(kirqfd);
+                mutex_unlock(&irqfds_lock);
+        }
+
+        return 0;
+}
+
+static void
+irqfd_poll_func(struct file *file, wait_queue_head_t *wqh, poll_table *pt)
+{
+        struct privcmd_kernel_irqfd *kirqfd =
+                container_of(pt, struct privcmd_kernel_irqfd, pt);
+
+        add_wait_queue_priority(wqh, &kirqfd->wait);
+}
+
+static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
+{
+        struct privcmd_kernel_irqfd *kirqfd, *tmp;
+        struct eventfd_ctx *eventfd;
+        __poll_t events;
+        struct fd f;
+        int ret;
+
+        kirqfd = kzalloc(sizeof(*kirqfd), GFP_KERNEL);
+        if (!kirqfd)
+                return -ENOMEM;
+
+        kirqfd->irq = irqfd->irq;
+        kirqfd->dom = irqfd->dom;
+        kirqfd->level = irqfd->level;
+        INIT_LIST_HEAD(&kirqfd->list);
+        INIT_WORK(&kirqfd->shutdown, irqfd_shutdown);
+
+        f = fdget(irqfd->fd);
+        if (!f.file) {
+                ret = -EBADF;
+                goto error_kfree;
+        }
+
+        eventfd = eventfd_ctx_fileget(f.file);
+        if (IS_ERR(eventfd)) {
+                ret = PTR_ERR(eventfd);
+                goto error_fd_put;
+        }
+
+        kirqfd->eventfd = eventfd;
+
+        /*
+         * Install our own custom wake-up handling so we are notified via a
+         * callback whenever someone signals the underlying eventfd.
+         */
+        init_waitqueue_func_entry(&kirqfd->wait, irqfd_wakeup);
+        init_poll_funcptr(&kirqfd->pt, irqfd_poll_func);
+
+        mutex_lock(&irqfds_lock);
+
+        list_for_each_entry(tmp, &irqfds_list, list) {
+                if (kirqfd->eventfd == tmp->eventfd) {
+                        ret = -EBUSY;
+                        mutex_unlock(&irqfds_lock);
+                        goto error_eventfd;
+                }
+        }
+
+        list_add_tail(&kirqfd->list, &irqfds_list);
+        mutex_unlock(&irqfds_lock);
+
+        /*
+         * Check if there was an event already pending on the eventfd before we
+         * registered, and trigger it as if we didn't miss it.
+         */
+        events = vfs_poll(f.file, &kirqfd->pt);
+        if (events & EPOLLIN)
+                irqfd_inject(kirqfd);
+
+        /*
+         * Do not drop the file until the kirqfd is fully initialized, otherwise
+         * we might race against the EPOLLHUP.
+         */
+        fdput(f);
+        return 0;
+
+error_eventfd:
+        eventfd_ctx_put(eventfd);
+
+error_fd_put:
+        fdput(f);
+
+error_kfree:
+        kfree(kirqfd);
+        return ret;
+}
+
+static int privcmd_irqfd_deassign(struct privcmd_irqfd *irqfd)
+{
+        struct privcmd_kernel_irqfd *kirqfd;
+        struct eventfd_ctx *eventfd;
+
+        eventfd = eventfd_ctx_fdget(irqfd->fd);
+        if (IS_ERR(eventfd))
+                return PTR_ERR(eventfd);
+
+        mutex_lock(&irqfds_lock);
+
+        list_for_each_entry(kirqfd, &irqfds_list, list) {
+                if (kirqfd->eventfd == eventfd) {
+                        irqfd_deactivate(kirqfd);
+                        break;
+                }
+        }
+
+        mutex_unlock(&irqfds_lock);
+
+        eventfd_ctx_put(eventfd);
+
+        /*
+         * Block until we know all outstanding shutdown jobs have completed so
+         * that we guarantee there will not be any more interrupts once this
+         * deassign function returns.
+         */
+        flush_workqueue(irqfd_cleanup_wq);
+
+        return 0;
+}
+
+static long privcmd_ioctl_irqfd(struct file *file, void __user *udata)
+{
+        struct privcmd_data *data = file->private_data;
+        struct privcmd_irqfd irqfd;
+
+        if (copy_from_user(&irqfd, udata, sizeof(irqfd)))
+                return -EFAULT;
+
+        /* No other flags should be set */
+        if (irqfd.flags & ~PRIVCMD_IRQFD_FLAG_DEASSIGN)
+                return -EINVAL;
+
+        /* If restriction is in place, check the domid matches */
+        if (data->domid != DOMID_INVALID && data->domid != irqfd.dom)
+                return -EPERM;
+
+        if (irqfd.flags & PRIVCMD_IRQFD_FLAG_DEASSIGN)
+                return privcmd_irqfd_deassign(&irqfd);
+
+        return privcmd_irqfd_assign(&irqfd);
+}
+
+static int privcmd_irqfd_init(void)
+{
+        irqfd_cleanup_wq = alloc_workqueue("privcmd-irqfd-cleanup", 0, 0);
+        if (!irqfd_cleanup_wq)
+                return -ENOMEM;
+
+        return 0;
+}
+
+static void privcmd_irqfd_exit(void)
+{
+        struct privcmd_kernel_irqfd *kirqfd, *tmp;
+
+        mutex_lock(&irqfds_lock);
+
+        list_for_each_entry_safe(kirqfd, tmp, &irqfds_list, list)
+                irqfd_deactivate(kirqfd);
+
+        mutex_unlock(&irqfds_lock);
+
+        destroy_workqueue(irqfd_cleanup_wq);
+}
+
 static long privcmd_ioctl(struct file *file,
                           unsigned int cmd, unsigned long data)
 {
@@ -868,6 +1124,10 @@ static long privcmd_ioctl(struct file *file,
                 ret = privcmd_ioctl_mmap_resource(file, udata);
                 break;
 
+        case IOCTL_PRIVCMD_IRQFD:
+                ret = privcmd_ioctl_irqfd(file, udata);
+                break;
+
         default:
                 break;
         }
@@ -992,15 +1252,27 @@ static int __init privcmd_init(void)
         err = misc_register(&xen_privcmdbuf_dev);
         if (err != 0) {
                 pr_err("Could not register Xen hypercall-buf device\n");
-                misc_deregister(&privcmd_dev);
-                return err;
+                goto err_privcmdbuf;
+        }
+
+        err = privcmd_irqfd_init();
+        if (err != 0) {
+                pr_err("irqfd init failed\n");
+                goto err_irqfd;
         }
 
         return 0;
+
+err_irqfd:
+        misc_deregister(&xen_privcmdbuf_dev);
+err_privcmdbuf:
+        misc_deregister(&privcmd_dev);
+        return err;
 }
 
 static void __exit privcmd_exit(void)
 {
+        privcmd_irqfd_exit();
         misc_deregister(&privcmd_dev);
         misc_deregister(&xen_privcmdbuf_dev);
 }

diff --git a/include/uapi/xen/privcmd.h b/include/uapi/xen/privcmd.h
index d2029556083e..47334bb91a09 100644
--- a/include/uapi/xen/privcmd.h
+++ b/include/uapi/xen/privcmd.h
@@ -98,6 +98,18 @@ struct privcmd_mmap_resource {
         __u64 addr;
 };
 
+/* For privcmd_irqfd::flags */
+#define PRIVCMD_IRQFD_FLAG_DEASSIGN (1 << 0)
+
+struct privcmd_irqfd {
+        __u32 fd;
+        __u32 flags;
+        __u32 irq;
+        domid_t dom;
+        __u8 level;
+        __u8 pad;
+};
+
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
@@ -125,5 +137,7 @@ struct privcmd_mmap_resource {
         _IOC(_IOC_NONE, 'P', 6, sizeof(domid_t))
 #define IOCTL_PRIVCMD_MMAP_RESOURCE \
         _IOC(_IOC_NONE, 'P', 7, sizeof(struct privcmd_mmap_resource))
+#define IOCTL_PRIVCMD_IRQFD \
+        _IOC(_IOC_NONE, 'P', 8, sizeof(struct privcmd_irqfd))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
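
Teardown mirrors assignment: issuing the same ioctl with
PRIVCMD_IRQFD_FLAG_DEASSIGN set removes the registration, and
privcmd_irqfd_deassign() above flushes the cleanup workqueue before
returning, so no further injections can occur afterwards. Continuing the
hypothetical setup_irqfd() sketch from earlier (the kernel matches the
registration by eventfd alone):

/* Illustrative counterpart to the setup_irqfd() sketch above. */
static void teardown_irqfd(int privcmd_fd, int evtfd, uint16_t domid)
{
        struct privcmd_irqfd irqfd = {
                .fd = evtfd,    /* the kernel looks up the registration by eventfd */
                .flags = PRIVCMD_IRQFD_FLAG_DEASSIGN,
                .dom = domid,   /* still checked against a restricted fd's domid */
        };

        ioctl(privcmd_fd, IOCTL_PRIVCMD_IRQFD, &irqfd);
        close(evtfd);
}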