[v6,0/3] Add sync object UAPI support to VirtIO-GPU driver

Message ID 20230416115237.798604-1-dmitry.osipenko@collabora.com

Message

Dmitry Osipenko April 16, 2023, 11:52 a.m. UTC
We have multiple Vulkan context types awaiting the addition of sync
object DRM UAPI support to the VirtIO-GPU kernel driver:

 1. Venus context
 2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)

Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
generic fencing implementation that we want to utilize.

This patch adds initial sync object support. It lays the foundation for
further fencing improvements. Later on we will want to extend the VirtIO-GPU
fencing API with passing fence IDs to the host for waiting; that will require
a new VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
drivers in the works that require VirtIO-GPU to support the sync object UAPI.

The patch is heavily inspired by the sync object UAPI implementation of the
MSM driver.

Changelog:

v6: - Added zeroing out of syncobj_desc, as suggested by Emil Velikov.

    - Fixed a memleak in an error code path, spotted by Emil Velikov.

    - Switched to u32/u64 instead of uint_t. Previously I kept the
      uint_t style of virtgpu_ioctl.c; in the end I decided to change it
      because it's not proper kernel coding style after all.

    - Kept a single drm_virtgpu_execbuffer_syncobj struct for both in/out
      sync objects. There was a little concern about whether it would be
      worthwhile to have separate in/out descriptors; in practice it's
      unlikely that we will extend the descriptors in the foreseeable
      future. There is no overhead in using the same struct since we want
      to pad it to 64 bits anyway, and it shouldn't be a problem to
      separate the descriptors later on if we want to do that.

    - Added r-b from Emil Velikov.

v5: - Factored out dma-fence unwrap API usage into a separate patch, as
      suggested by Emil Velikov.

    - Improved and documented the job submission reorderings, as
      requested by Emil Velikov. The sync file FD is now installed after
      the job is submitted to virtio to further optimize reorderings.

    - Added a comment for the kvmalloc, as requested by Emil Velikov.

    - The num_in/out_syncobjs are now set only after parsing of the
      pre/post deps has completed, as requested by Emil Velikov.

v4: - Added r-b from Rob Clark to the "refactoring" patch.

    - Replaced for/while(ptr && itr) with if (ptr), as suggested by
      Rob Clark.

    - Dropped the NOWARN and NORETRY GFP flags and switched the syncobj
      patch to use kvmalloc.

    - Removed unused variables from the syncobj patch that were borrowed
      by accident from another (upcoming) patch during one of the git
      rebases.

v3: - Switched to using dma_fence_unwrap_for_each(), as suggested by
      Rob Clark.

    - Fixed a missing dma_fence_put() in an error code path, spotted by
      Rob Clark.

    - Removed an obsolete comment from virtio_gpu_execbuffer_ioctl(), as
      suggested by Rob Clark.

v2: - Fixed chain-fence context matching by making use of
      dma_fence_chain_contained().

    - Fixed potential uninitialized variable usage in the error code path
      of parse_post_deps(). The MSM driver had a similar issue that is
      already fixed upstream.

    - Added a new patch that refactors the job submission code path. I
      found it very difficult to add the new/upcoming host-waits feature
      because of how variables are passed around the code, and
      virtgpu_ioctl.c was also growing to an unmanageable size.

Dmitry Osipenko (3):
  drm/virtio: Refactor and optimize job submission code path
  drm/virtio: Wait for each dma-fence of in-fence array individually
  drm/virtio: Support sync objects

 drivers/gpu/drm/virtio/Makefile         |   2 +-
 drivers/gpu/drm/virtio/virtgpu_drv.c    |   3 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h    |   4 +
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 182 --------
 drivers/gpu/drm/virtio/virtgpu_submit.c | 530 ++++++++++++++++++++++++
 include/uapi/drm/virtgpu_drm.h          |  16 +-
 6 files changed, 552 insertions(+), 185 deletions(-)
 create mode 100644 drivers/gpu/drm/virtio/virtgpu_submit.c
  

Comments

Dmitry Osipenko April 19, 2023, 9:22 p.m. UTC | #1
Hello Gurchetan,

On 4/18/23 02:17, Gurchetan Singh wrote:
> On Sun, Apr 16, 2023 at 4:53 AM Dmitry Osipenko <
> dmitry.osipenko@collabora.com> wrote:
> 
>> We have multiple Vulkan context types that are awaiting for the addition
>> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
>>
>>  1. Venus context
>>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
>>
>> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
>> generic fencing implementation that we want to utilize.
>>
>> This patch adds initial sync objects support. It creates fundament for a
>> further fencing improvements. Later on we will want to extend the
>> VirtIO-GPU
>> fencing API with passing fence IDs to host for waiting, it will be a new
>> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU
>> context
>> drivers in works that require VirtIO-GPU to support sync objects UAPI.
>>
>> The patch is heavily inspired by the sync object UAPI implementation of the
>> MSM driver.
>>
> 
> The changes seem good, but I would recommend getting a full end-to-end
> solution (i.e, you've proxied the host fence with these changes and shared
> with the host compositor) working first.  You'll never know what you'll
> find after completing this exercise.  Or is that the plan already?
> 
> Typically, you want to land the uAPI and virtio spec changes last.
> Mesa/gfxstream/virglrenderer/crosvm all have the ability to test out
> unstable uAPIs ...

The proxied host fence isn't directly related to sync objects, though I
prepared the code such that it could be extended with a proxied fence
later on, based on a prototype made some time ago.

The proxied host fence shouldn't require UAPI changes, only a virtio-gpu
protocol extension. Normally, all in-fences belong to the job's context,
and thus the waits are skipped by the guest kernel. Hence, fence proxying
is a separate feature from sync objects and can be added without them.

Sync objects are primarily wanted by the native context drivers because
Mesa relies on the presence of the sync object UAPI. It's one of the
direct blockers for the Intel and AMDGPU drivers, both of which have
been using this sync object UAPI for a few months and now want it to
land upstream.
  
Gurchetan Singh April 24, 2023, 6:40 p.m. UTC | #2
On Wed, Apr 19, 2023 at 2:22 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> Hello Gurchetan,
>
> On 4/18/23 02:17, Gurchetan Singh wrote:
> > On Sun, Apr 16, 2023 at 4:53 AM Dmitry Osipenko <
> > dmitry.osipenko@collabora.com> wrote:
> >
> >> We have multiple Vulkan context types that are awaiting for the addition
> >> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> >>
> >>  1. Venus context
> >>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> >>
> >> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> >> generic fencing implementation that we want to utilize.
> >>
> >> This patch adds initial sync objects support. It creates fundament for a
> >> further fencing improvements. Later on we will want to extend the
> >> VirtIO-GPU
> >> fencing API with passing fence IDs to host for waiting, it will be a new
> >> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU
> >> context
> >> drivers in works that require VirtIO-GPU to support sync objects UAPI.
> >>
> >> The patch is heavily inspired by the sync object UAPI implementation of the
> >> MSM driver.
> >>
> >
> > The changes seem good, but I would recommend getting a full end-to-end
> > solution (i.e, you've proxied the host fence with these changes and shared
> > with the host compositor) working first.  You'll never know what you'll
> > find after completing this exercise.  Or is that the plan already?
> >
> > Typically, you want to land the uAPI and virtio spec changes last.
> > Mesa/gfxstream/virglrenderer/crosvm all have the ability to test out
> > unstable uAPIs ...
>
> The proxied host fence isn't directly related to sync objects, though I
> prepared code such that it could be extended with a proxied fence later
> on, based on a prototype that was made some time ago.

Proxying the host fence is the novel bit.  If you have code that does
this, you should rebase/send that out (even as an RFC) so it's easier
to see how the pieces fit.

Right now, if you've only tested synchronization objects within the same
virtio-gpu context (which skips the guest-side wait), I think you can
already do that with the current uAPI (since ideally you'd wait on the
host side and can encode the sync resource in the command stream).

Also, try to come up with a simple test (so we can meet requirements here
[a]) that showcases the new feature/capability.  An example would be
the virtio-intel native context sharing a fence with KMS or even
Wayland.

[a] https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements

>
> The proxied host fence shouldn't require UAPI changes, but only
> virtio-gpu proto extension. Normally, all in-fences belong to a job's
> context, and thus, waits are skipped by the guest kernel. Hence, fence
> proxying is a separate feature from sync objects, it can be added
> without sync objects.
>
> Sync objects primarily wanted by native context drivers because Mesa
> relies on the sync object UAPI presence. It's one of direct blockers for
> Intel and AMDGPU drivers, both of which has been using this sync object
> UAPI for a few months and now wanting it to land upstream.
>
> --
> Best regards,
> Dmitry
>
  
Dmitry Osipenko May 1, 2023, 3:38 p.m. UTC | #3
On 4/16/23 14:52, Dmitry Osipenko wrote:
> We have multiple Vulkan context types that are awaiting for the addition
> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> 
>  1. Venus context
>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> 
> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> generic fencing implementation that we want to utilize.
> 
> This patch adds initial sync objects support. It creates fundament for a
> further fencing improvements. Later on we will want to extend the VirtIO-GPU
> fencing API with passing fence IDs to host for waiting, it will be a new
> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
> drivers in works that require VirtIO-GPU to support sync objects UAPI.
> 
> The patch is heavily inspired by the sync object UAPI implementation of the
> MSM driver.

Gerd, do you have any objections to merging this series?

We have AMDGPU [1] and Intel [2] native context WIP drivers depending on
the sync object support. It is the only part missing from the kernel
today that is wanted by the native context drivers. Otherwise, there are
a few other things in Qemu and virglrenderer left to sort out.

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
[2] https://gitlab.freedesktop.org/digetx/mesa/-/commits/native-context-iris
  
Gerd Hoffmann May 3, 2023, 6:51 a.m. UTC | #4
On Mon, May 01, 2023 at 06:38:45PM +0300, Dmitry Osipenko wrote:
> On 4/16/23 14:52, Dmitry Osipenko wrote:
> > We have multiple Vulkan context types that are awaiting for the addition
> > of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> > 
> >  1. Venus context
> >  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> > 
> > Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> > generic fencing implementation that we want to utilize.
> > 
> > This patch adds initial sync objects support. It creates fundament for a
> > further fencing improvements. Later on we will want to extend the VirtIO-GPU
> > fencing API with passing fence IDs to host for waiting, it will be a new
> > additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
> > drivers in works that require VirtIO-GPU to support sync objects UAPI.
> > 
> > The patch is heavily inspired by the sync object UAPI implementation of the
> > MSM driver.
> 
> Gerd, do you have any objections to merging this series?

No objections.  Can't spot any issues, but I also don't follow drm
closely enough to be able to review the sync object logic in detail.

Acked-by: Gerd Hoffmann <kraxel@redhat.com>

take care,
  Gerd
  
Dmitry Osipenko May 8, 2023, 12:16 p.m. UTC | #5
On 5/3/23 09:51, Gerd Hoffmann wrote:
> On Mon, May 01, 2023 at 06:38:45PM +0300, Dmitry Osipenko wrote:
>> On 4/16/23 14:52, Dmitry Osipenko wrote:
>>> We have multiple Vulkan context types that are awaiting for the addition
>>> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
>>>
>>>  1. Venus context
>>>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
>>>
>>> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
>>> generic fencing implementation that we want to utilize.
>>>
>>> This patch adds initial sync objects support. It creates fundament for a
>>> further fencing improvements. Later on we will want to extend the VirtIO-GPU
>>> fencing API with passing fence IDs to host for waiting, it will be a new
>>> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
>>> drivers in works that require VirtIO-GPU to support sync objects UAPI.
>>>
>>> The patch is heavily inspired by the sync object UAPI implementation of the
>>> MSM driver.
>>
>> Gerd, do you have any objections to merging this series?
> 
> No objections.  Can't spot any issues, but I also don't follow drm close
> enough to be able to review the sync object logic in detail.
> 
> Acked-by: Gerd Hoffmann <kraxel@redhat.com>

Thanks, I'll work with Gurchetan on resolving his questions and will
apply the patches as soon as he gives his ack.
  
Rob Clark May 8, 2023, 1:59 p.m. UTC | #6
On Wed, May 3, 2023 at 10:07 AM Gurchetan Singh
<gurchetansingh@chromium.org> wrote:
>
>
>
> On Mon, May 1, 2023 at 8:38 AM Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>>
>> On 4/16/23 14:52, Dmitry Osipenko wrote:
>> > We have multiple Vulkan context types that are awaiting for the addition
>> > of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
>> >
>> >  1. Venus context
>> >  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
>> >
>> > Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
>> > generic fencing implementation that we want to utilize.
>> >
>> > This patch adds initial sync objects support. It creates fundament for a
>> > further fencing improvements. Later on we will want to extend the VirtIO-GPU
>> > fencing API with passing fence IDs to host for waiting, it will be a new
>> > additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
>> > drivers in works that require VirtIO-GPU to support sync objects UAPI.
>> >
>> > The patch is heavily inspired by the sync object UAPI implementation of the
>> > MSM driver.
>>
>> Gerd, do you have any objections to merging this series?
>>
>> We have AMDGPU [1] and Intel [2] native context WIP drivers depending on
>> the sync object support. It is the only part missing from kernel today
>> that is wanted by the native context drivers. Otherwise, there are few
>> other things in Qemu and virglrenderer left to sort out.
>>
>> [1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
>> [2] https://gitlab.freedesktop.org/digetx/mesa/-/commits/native-context-iris
>
>
> I'm not saying this change isn't good, just it's probably possible to implement the native contexts (even up to even VK1.2) without it.  But this patch series may be the most ergonomic way to do it, given how Mesa is designed.  But you probably want one of Mesa MRs reviewed first before merging (I added a comment on the amdgpu change) and that is a requirement [a].
>
> [a] "The userspace side must be fully reviewed and tested to the standards of that user space project. For e.g. mesa this means piglit testcases and review on the mailing list. This is again to ensure that the new interface actually gets the job done." -- from the requirements
>

tbh, the syncobj support is all drm core; the only driver-specific part
is the ioctl parsing.  IMHO the existing tests and the two existing
consumers are sufficient.  (Also considering the additional non-drm
dependencies involved.)

If this was for the core drm syncobj implementation, and not just
driver ioctl parsing and wiring up the core helpers, I would agree
with you.

BR,
-R
  
Dmitry Osipenko May 12, 2023, 2:33 a.m. UTC | #7
On 5/12/23 03:17, Gurchetan Singh wrote:
...
> Can we get one of the Mesa MRs reviewed first?  There's currently no
> virtio-intel MR AFAICT, and the amdgpu one is marked as "Draft:".
> 
> Even for the amdgpu, Pierre suggests the feature "will be marked as
> experimental both in Mesa and virglrenderer" and we can revise as needed.
> The DRM requirements seem to warn against adding an UAPI too hastily...
> 
> You can get the deqp-vk 1.2 tests to pass with the current UAPI, if you
> just change your mesa <--> virglrenderer protocol a little.  Perhaps that
> way is even better, since you plumb the in sync-obj into host-side command
> submission.
> 
> Without inter-context sharing of the fence, this MR really only adds guest
> kernel syntactic sugar.
> 
> Note I'm not against syntactic sugar, but I just want to point out that you
> can likely merge the native context work without any UAPI changes, in case
> it's not clear.
> 
> If this was for the core drm syncobj implementation, and not just
>> driver ioctl parsing and wiring up the core helpers, I would agree
>> with you.
>>
> 
> There are several possible and viable paths to get the features in question
> (VK1.2 syncobjs, and inter-context fence sharing).  There are paths
> entirely without the syncobj, paths that only use the syncobj for the
> inter-context fence sharing case and create host syncobjs for VK1.2, paths
> that also use guest syncobjs in every proxied command submission.
> 
> It's really hard to tell which one is better.  Here's my suggestion:
> 
> 1) Get the native contexts reviewed/merged in Mesa/virglrenderer using the
> current UAPI.  Options for VK1.2 include: pushing down the syncobjs to the
> host, and simulating the syncobj (as already done).  It's fine to mark
> these contexts as "experimental" like msm-experimental.  That will allow
> you to experiment with the protocols, come up with tests, and hopefully
> determine an answer to the host versus guest syncobj question.
> 
> 2) Once you've completed (1), try to add UAPI changes for features that are
> missing or things that are suboptimal with the knowledge gained from doing
> (2).
> 
> WDYT?

Having syncobj support available in the DRM driver is a mandatory
requirement for native contexts because userspace (Mesa) relies on the
presence of sync object support. In particular, the Intel Mesa driver
checks whether the DRM driver supports sync objects to decide which
features are available; ANV depends on syncobj support.

I'm not familiar with the history of Venus and its limitations. Perhaps
the reason it's using host-side syncobjs is to have a 1:1 Vulkan API
mapping between guest and host. I'm not sure whether Venus could use
guest syncobjs instead or whether there are problems with that.

When syncobj was initially added to the kernel, it was driven by the
need to support the Vulkan wait API. For Venus the actual Vulkan driver
is on the host side, while for native contexts it's on the guest side.
Native contexts don't need a syncobj on the host side; it would be
unnecessary overhead for every nctx to have one on the host. Hence, if
there is no good reason for host-side syncobjs, why do that?

Native contexts pass the deqp synchronization tests; they use sync
objects universally for both GL and VK. Games work, piglit/deqp pass.
What else do you want to test? Turnip?

The AMDGPU code has been looked at and it looks good. It's a draft for
now because of the missing sync objects UAPI and the other
virglrenderer/Qemu changes required to get KMS working. Maybe it will be
acceptable to merge the Mesa part once the kernel gets sync object
support; we will need to revisit that.

I'm not opening an MR for virtio-intel because it has open questions
that need to be resolved first.
  
Dmitry Osipenko June 3, 2023, 2:11 a.m. UTC | #8
> Dmitry Osipenko (3):
>   drm/virtio: Refactor and optimize job submission code path
>   drm/virtio: Wait for each dma-fence of in-fence array individually

Applied these two patches to misc-next. The syncobj patch will wait for
the turnip Mesa MR.
  
Rob Clark June 27, 2023, 5:16 p.m. UTC | #9
On Fri, May 12, 2023 at 2:23 PM Gurchetan Singh
<gurchetansingh@chromium.org> wrote:
>
>
>
> On Thu, May 11, 2023 at 7:33 PM Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>>
>> On 5/12/23 03:17, Gurchetan Singh wrote:
>> ...
>> > Can we get one of the Mesa MRs reviewed first?  There's currently no
>> > virtio-intel MR AFAICT, and the amdgpu one is marked as "Draft:".
>> >
>> > Even for the amdgpu, Pierre suggests the feature "will be marked as
>> > experimental both in Mesa and virglrenderer" and we can revise as needed.
>> > The DRM requirements seem to warn against adding an UAPI too hastily...
>> >
>> > You can get the deqp-vk 1.2 tests to pass with the current UAPI, if you
>> > just change your mesa <--> virglrenderer protocol a little.  Perhaps that
>> > way is even better, since you plumb the in sync-obj into host-side command
>> > submission.
>> >
>> > Without inter-context sharing of the fence, this MR really only adds guest
>> > kernel syntactic sugar.
>> >
>> > Note I'm not against syntactic sugar, but I just want to point out that you
>> > can likely merge the native context work without any UAPI changes, in case
>> > it's not clear.
>> >
>> > If this was for the core drm syncobj implementation, and not just
>> >> driver ioctl parsing and wiring up the core helpers, I would agree
>> >> with you.
>> >>
>> >
>> > There are several possible and viable paths to get the features in question
>> > (VK1.2 syncobjs, and inter-context fence sharing).  There are paths
>> > entirely without the syncobj, paths that only use the syncobj for the
>> > inter-context fence sharing case and create host syncobjs for VK1.2, paths
>> > that also use guest syncobjs in every proxied command submission.
>> >
>> > It's really hard to tell which one is better.  Here's my suggestion:
>> >
>> > 1) Get the native contexts reviewed/merged in Mesa/virglrenderer using the
>> > current UAPI.  Options for VK1.2 include: pushing down the syncobjs to the
>> > host, and simulating the syncobj (as already done).  It's fine to mark
>> > these contexts as "experimental" like msm-experimental.  That will allow
>> > you to experiment with the protocols, come up with tests, and hopefully
>> > determine an answer to the host versus guest syncobj question.
>> >
>> > 2) Once you've completed (1), try to add UAPI changes for features that are
>> > missing or things that are suboptimal with the knowledge gained from doing
>> > (2).
>> >
>> > WDYT?
>>
>> Having syncobj support available by DRM driver is a mandatory
>> requirement for native contexts because userspace (Mesa) relies on sync
>> objects support presence. In particular, Intel Mesa driver checks
>> whether DRM driver supports sync objects to decide which features are
>> available, ANV depends on the syncobj support.
>>
>>
>> I'm not familiar with a history of Venus and its limitations. Perhaps
>> the reason it's using host-side syncobjs is to have 1:1 Vulkan API
>> mapping between guest and host. Not sure if Venus could use guest
>> syncobjs instead or there are problems with that.
>
>
> Why not submit a Venus MR?  It's already in-tree, and you can see how your API works in scenarios with a host side timeline semaphore (aka syncobj).  I think they are also interested in fencing/sync improvements.
>
>>
>>
>> When syncobj was initially added to kernel, it was done from the needs
>> of supporting Vulkan wait API. For Venus the actual Vulkan driver is on
>> host side, while for native contexts it's on guest side. Native contexts
>> don't need syncobj on host side, it will be unnecessary overhead for
>> every nctx to have it on host. Hence, if there is no good reason for
>> host-side syncobjs, then why do that?
>
>
> Depends on your threading model.  You can have the following scenarios:
>
> 1) N guest contexts : 1 host thread
> 2) N guest contexts : N host threads for each context
> 3) 1:1 thread
>
> I think the native context is single-threaded (1), IIRC?  If the goal is to push command submission to the host (for inter-context sharing), I think you'll at-least want (2).  For a 1:1 model (a la gfxstream), one host thread can put another thread's out_sync_objs as it's in_sync_objs (in the same virtgpu context).  I think that's kind of the goal of timeline semaphores, with the example given by Khronos as with a compute thread + a graphics thread.
>
> I'm not saying one threading model is better than any other, perhaps the native context using the host driver in the guest is so good, it doesn't matter.  I'm just saying these are the types of discussions we can have if we tried to get one the Mesa MRs merged first ;-)
>
>>
>> Native contexts pass deqp synchronization tests, they use sync objects
>> universally for both GL and VK. Games work, piglit/deqp passing. What
>> else you're wanting to test? Turnip?
>
>
> Turnip would also fulfill the requirements, since most of the native context stuff is already wired for freedreno.
>
>>
>>
>> The AMDGPU code has been looked and it looks good. It's a draft for now
>> because of the missing sync objects UAPI and other virglrender/Qemu
>> changes required to get KMS working.
>
>
> Get it out of draft mode then :-).  How long would that take?
>
> Also, there's crosvm which builds on standard Linux, so I wouldn't consider QEMU patches as a requirement.  Just Mesa/virglrenderer part.
>
>>
>> Maybe it will be acceptable to
>> merge the Mesa part once kernel will get sync objects supported, will
>> need to revisit it.
>
>
> You can think of my commentary as the following suggestions:
>
> - You can probably get native contexts and deqp-vk 1.2 working with the current UAPI
> - It would be terrific to see inter-context fence sharing working (with the wait pushed down to the host), that's something the current UAPI can't do
> - Work iteratively (i.e, it's fine to merge Mesa/virglrenderer MRs as "experimental") and in steps, no need to figure everything out at once
>
> Now these are just suggestions, and while I think they are good, you can safely ignore them.
>
> But there's also the DRM requirements, which state "userspace side must be fully reviewed and tested to the standards of that user-space project.".  So I think to meet the minimum requirements, I think we should at-least have one of the following (not all, just one) reviewed:
>
> 1) venus using the new uapi
> 2) gfxstream vk using the new uapi
> 3) amdgpu nctx out of "draft" mode and using the new uapi.
> 4) virtio-intel using new uapi
> 5) turnip using your new uapi

forgot to mention this earlier, but
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23533

Dmitry, you can also add, if you haven't already:

Tested-by: Rob Clark <robdclark@gmail.com>

> Depending on which one you chose, maybe we can get it done within 1-2 weeks?
>
>> I'm not opening MR for virtio-intel because it has open questions that
>> need to be resolved first.
>>
>> --
>> Best regards,
>> Dmitry
>>
  
Dmitry Osipenko July 19, 2023, 6:58 p.m. UTC | #10
On 27.06.2023 20:16, Rob Clark wrote:
...
>> Now these are just suggestions, and while I think they are good, you can safely ignore them.
>>
>> But there's also the DRM requirements, which state "userspace side must be fully reviewed and tested to the standards of that user-space project.".  So I think to meet the minimum requirements, I think we should at-least have one of the following (not all, just one) reviewed:
>>
>> 1) venus using the new uapi
>> 2) gfxstream vk using the new uapi
>> 3) amdgpu nctx out of "draft" mode and using the new uapi.
>> 4) virtio-intel using new uapi
>> 5) turnip using your new uapi
> 
> forgot to mention this earlier, but
> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23533
> 
> Dmitry, you can also add, if you haven't already:
> 
> Tested-by: Rob Clark <robdclark@gmail.com>

Gurchetan, the Turnip Mesa virtio support is ready to be merged
upstream, and it's using this new syncobj UAPI. Could you please give
your r-b if you don't have objections?
  
Dmitry Osipenko July 31, 2023, 4:26 p.m. UTC | #11
On 7/29/23 01:03, Gurchetan Singh wrote:
> On Wed, Jul 19, 2023 at 11:58 AM Dmitry Osipenko <
> dmitry.osipenko@collabora.com> wrote:
> 
>> 27.06.2023 20:16, Rob Clark пишет:
>> ...
>>>> Now these are just suggestions, and while I think they are good, you
>> can safely ignore them.
>>>>
>>>> But there's also the DRM requirements, which state "userspace side must
>> be fully reviewed and tested to the standards of that user-space
>> project.".  So I think to meet the minimum requirements, I think we should
>> at-least have one of the following (not all, just one) reviewed:
>>>>
>>>> 1) venus using the new uapi
>>>> 2) gfxstream vk using the new uapi
>>>> 3) amdgpu nctx out of "draft" mode and using the new uapi.
>>>> 4) virtio-intel using new uapi
>>>> 5) turnip using your new uapi
>>>
>>> forgot to mention this earlier, but
>>> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23533
>>>
>>> Dmitry, you can also add, if you haven't already:
>>>
>>> Tested-by: Rob Clark <robdclark@gmail.com>
>>
>> Gurchetan, Turnip Mesa virtio support is ready to be merged upstream,
>> it's using this new syncobj UAPI. Could you please give yours r-b if you
>> don't have objections?
>>
> 
> Given that Turnip native contexts are reviewed using this UAPI, your change
> does now meet the requirements and is ready to merge.
> 
> One thing I noticed is you might need explicit padding between
> `num_out_syncobjs` and `in_syncobjs`.  Otherwise, feel free to add my
> acked-by.

The padding looks okay as-is; the struct size and the u64s are all
properly aligned. I'll merge the patch soon, thanks.