Message ID | 20231109172449.1599262-1-javierm@redhat.com |
---|---|
Headers |
From: Javier Martinez Canillas <javierm@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Simon Ser <contact@emersion.fr>, Sima Vetter <daniel.vetter@ffwll.ch>, Pekka Paalanen <pekka.paalanen@collabora.com>, Maxime Ripard <mripard@kernel.org>, Bilal Elmoussaoui <belmouss@redhat.com>, Erico Nunes <nunes.erico@gmail.com>, Chia-I Wu <olvaffe@gmail.com>, David Airlie <airlied@gmail.com>, Gerd Hoffmann <kraxel@redhat.com>, Gurchetan Singh <gurchetansingh@chromium.org>, Jonathan Corbet <corbet@lwn.net>, Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Thomas Zimmermann <tzimmermann@suse.de>, VMware Graphics Reviewers <linux-graphics-maintainer@vmware.com>, Zack Rusin <zackr@vmware.com>, dri-devel@lists.freedesktop.org, linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH 0/6] drm: Allow the damage helpers to handle buffer damage
Date: Thu, 9 Nov 2023 18:24:34 +0100
Message-ID: <20231109172449.1599262-1-javierm@redhat.com> |
Series |
drm: Allow the damage helpers to handle buffer damage
Message
Javier Martinez Canillas
Nov. 9, 2023, 5:24 p.m. UTC
Hello,

This series fixes an issue that surfaced after damage clipping was enabled
for virtio-gpu by commit 01f05940a9a7 ("drm/virtio: Enable fb damage clips
property for the primary plane").

After that change, flickering artifacts were reported with both the weston
and wlroots wayland compositors when running in a virtual machine. The cause
was identified by Sima Vetter, who pointed out that virtio-gpu does
per-buffer uploads and for this reason needs to do buffer damage handling,
instead of frame damage handling.

Their suggestion was to extend the damage helpers to cover that case and,
given that there isn't a buffer damage accumulation algorithm (e.g. buffer
age), just do a full plane update if the framebuffer that is attached to a
plane changed since the last plane update (page-flip).

Patch #1 is a refactoring that allows the logic of the frame damage helpers
to be shared by the buffer damage helpers.

Patch #2 adds the helpers that are needed for buffer damage handling.

Patch #3 fixes the virtio-gpu damage handling logic by using the helper that
is required by drivers that need to handle buffer damage.

Patch #4 fixes vmwgfx similarly, since that driver also needs to handle
buffer damage and should have the same issue (although I have not tested it,
due to not having a VMware setup).

Patch #5 adds some paragraphs about damage tracking types to the KMS damage
tracking kernel-doc, with references to links that explain frame damage vs
buffer damage.

Finally, patch #6 adds an item to the DRM/KMS todo about the need to
implement a buffer damage accumulation algorithm instead of just doing a
full plane update in this case.

Because commit 01f05940a9a7 landed in v6.4, the first three patches are
marked as Fixes and Cc stable.

I've tested this on a VM with weston, was able to reproduce the reported
issue, and the patches did fix the problem.

Please let me know what you think, especially on the wording, since I could
have made mistakes: I only learned about these concepts yesterday, thanks to
Sima, Simon and Pekka.

Best regards,
Javier

Javier Martinez Canillas (6):
  drm: Move drm_atomic_helper_damage_{iter_init,merged}() to helpers
  drm: Add drm_atomic_helper_buffer_damage_{iter_init,merged}() helpers
  drm/virtio: Use drm_atomic_helper_buffer_damage_merged() for buffer damage
  drm/vmwgfx: Use drm_atomic_helper_buffer_damage_iter_init() for buffer damage
  drm/plane: Extend damage tracking kernel-doc
  drm/todo: Add entry about implementing buffer age for damage tracking

 Documentation/gpu/todo.rst             |  20 +++
 drivers/gpu/drm/drm_damage_helper.c    | 166 +++++++++++++++++++------
 drivers/gpu/drm/drm_plane.c            |  22 +++-
 drivers/gpu/drm/virtio/virtgpu_plane.c |   2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.c    |   2 +-
 include/drm/drm_damage_helper.h        |   7 ++
 6 files changed, 173 insertions(+), 46 deletions(-)
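The fallback described above (treat the whole plane as damaged whenever the framebuffer attached to the plane changed since the last page-flip, because no buffer-age style accumulation exists yet) can be sketched in plain C. The structs and function names below are illustrative stand-ins, not the actual helpers added by this series:

```c
/* Minimal stand-ins for the KMS objects involved; illustrative only. */
struct rect { int x1, y1, x2, y2; };

struct plane_state {
    int fb_id;          /* framebuffer currently attached to the plane */
    struct rect plane;  /* full plane extent, in fb coordinates        */
    struct rect *clips; /* damage clips supplied by userspace          */
    int num_clips;
};

/* Union of all damage clips (frame damage): sufficient when the same
 * buffer is scanned out again and only the listed regions changed. */
static struct rect merge_clips(const struct plane_state *s)
{
    struct rect r = s->clips[0];
    for (int i = 1; i < s->num_clips; i++) {
        if (s->clips[i].x1 < r.x1) r.x1 = s->clips[i].x1;
        if (s->clips[i].y1 < r.y1) r.y1 = s->clips[i].y1;
        if (s->clips[i].x2 > r.x2) r.x2 = s->clips[i].x2;
        if (s->clips[i].y2 > r.y2) r.y2 = s->clips[i].y2;
    }
    return r;
}

/* Buffer damage: drivers that upload per buffer (virtio-gpu, vmwgfx)
 * need the regions that changed since *this buffer* was last used.
 * Without an accumulation algorithm such as buffer age, the safe
 * fallback on a buffer change is a full plane update. */
struct rect buffer_damage_merged(const struct plane_state *old_state,
                                 const struct plane_state *new_state)
{
    if (new_state->fb_id != old_state->fb_id || new_state->num_clips == 0)
        return new_state->plane;   /* page flip: flush everything      */
    return merge_clips(new_state); /* same buffer: clips are enough    */
}
```

On a page flip between two buffers this returns the full plane extent even if userspace reported only a small clip, which is exactly the conservative behavior the series adopts.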
Comments
Hi Javier

Am 09.11.23 um 18:24 schrieb Javier Martinez Canillas:
> Hello,
>
> This series fixes an issue that surfaced after damage clipping was
> enabled for virtio-gpu by commit 01f05940a9a7 ("drm/virtio: Enable
> fb damage clips property for the primary plane").
>
> After that change, flickering artifacts were reported with both the
> weston and wlroots wayland compositors when running in a virtual
> machine. The cause was identified by Sima Vetter, who pointed out that
> virtio-gpu does per-buffer uploads and for this reason needs to do
> buffer damage handling, instead of frame damage handling.

I'm having problems understanding the types of damage. You never say what
buffer damage is. I also don't know what a frame is in this context.

Regular damage handling marks parts of a plane as dirty/damaged. That is
per-plane damage handling. The individual planes are more or less
independent of each other.

Buffer damage, I guess, marks the underlying buffer as dirty and
requires synchronization of the buffer with some backing storage. The
planes using that buffer are then updated more or less automatically.

Is that right?

And why does it flicker? Is there old data stored somewhere?

Best regards
Thomas

[...]

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
Thomas Zimmermann <tzimmermann@suse.de> writes:

> Am 09.11.23 um 18:24 schrieb Javier Martinez Canillas:
[...]
>> virtio-gpu does per-buffer uploads and for this reason needs to do
>> buffer damage handling, instead of frame damage handling.
>
> I'm having problems understanding the types of damage. You never say what
> buffer damage is. I also don't know what a frame is in this context.
>
> Regular damage handling marks parts of a plane as dirty/damaged. That is
> per-plane damage handling. The individual planes are more or less
> independent of each other.
>
> Buffer damage, I guess, marks the underlying buffer as dirty and
> requires synchronization of the buffer with some backing storage. The
> planes using that buffer are then updated more or less automatically.
>
> Is that right?
>

In both cases the damage tracking information is the same: it marks the
damaged regions on the plane, in framebuffer coordinates of the
framebuffer attached to the plane.

The problem, as far as I understand it, is whether the driver expects
that information to describe the area that changed in the plane (so a
plane flush is enough) or the area that changed since that same buffer
was last used.

> And why does it flicker? Is there old data stored somewhere?
>

It flickers because the framebuffer changed, and so the damage tracking
is not used correctly to flush the damaged areas to the backing storage.

This is my understanding at least; please, Sima or Simon, correct me if
I got this wrong.

> Best regards
> Thomas
>
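The distinction can be made concrete with a double-buffering example. With two buffers alternating, the damage a driver must flush for the incoming buffer is the union of all frame damage since that same buffer was last scanned out; this is what a buffer-age algorithm would compute, whereas the series falls back to a full update instead. A hypothetical C sketch (not kernel code):

```c
struct rect { int x1, y1, x2, y2; };

/* Bounding-box union of two damage rectangles. */
static struct rect rect_union(struct rect a, struct rect b)
{
    struct rect r = a;
    if (b.x1 < r.x1) r.x1 = b.x1;
    if (b.y1 < r.y1) r.y1 = b.y1;
    if (b.x2 > r.x2) r.x2 = b.x2;
    if (b.y2 > r.y2) r.y2 = b.y2;
    return r;
}

/* frame_damage[i] is what changed on screen at flip i.
 *
 * The buffer damage for a buffer presented at flip n with age k
 * (i.e. it was last presented at flip n - k) is the union of the
 * frame damage of flips n-k+1 .. n. With age 1 (same buffer again)
 * this degenerates to plain frame damage. */
struct rect buffer_damage_from_age(const struct rect *frame_damage,
                                   int n, int age)
{
    struct rect r = frame_damage[n];
    for (int i = n - age + 1; i < n; i++)
        r = rect_union(r, frame_damage[i]);
    return r;
}
```

This is the accumulation that the todo entry in patch #6 asks for; until it exists, a buffer change simply produces full-plane damage.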
Hi

Am 14.11.23 um 17:28 schrieb Javier Martinez Canillas:
> Thomas Zimmermann <tzimmermann@suse.de> writes:
[...]
>> And why does it flicker? Is there old data stored somewhere?
>>
>
> It flickers because the framebuffer changed, and so the damage tracking
> is not used correctly to flush the damaged areas to the backing storage.

I think I got it from the links in patch 5.

In our other drivers, there's a single backing storage for each plane
(for example in the video memory). Here, there's a backing storage for
each buffer. On page flips, the plane changes its backing storage. Our
GEM buffer is up to date, but the respective backing storage is missing
all the intermediate changes.

If I'm not mistaken, an entirely different solution would be to
implement a per-plane backing storage in these drivers.

Best regards
Thomas

> This is my understanding at least; please, Sima or Simon, correct me if
> I got this wrong.

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
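The per-plane backing storage alternative mentioned above amounts to the driver keeping one shadow surface per plane and copying the damaged region out of whichever buffer is attached at flush time; because the shadow persists across page flips, plain frame damage is then sufficient. A toy sketch of that copy step (illustrative dimensions and names, not a driver API):

```c
#define WIDTH  8
#define HEIGHT 8

struct rect { int x1, y1, x2, y2; };

/* Copy only the damaged region from the currently attached buffer
 * into the single per-plane shadow. The shadow survives page flips,
 * so no per-buffer history is needed to keep it current. */
void flush_to_shadow(unsigned char shadow[HEIGHT][WIDTH],
                     const unsigned char src[HEIGHT][WIDTH],
                     struct rect damage)
{
    for (int y = damage.y1; y < damage.y2; y++)
        for (int x = damage.x1; x < damage.x2; x++)
            shadow[y][x] = src[y][x];
}
```

As the follow-up below notes, whether a driver can do this depends on where the backing storage lives; for virtio-gpu the flush granularity is dictated by the device protocol.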
Thomas Zimmermann <tzimmermann@suse.de> writes:

Hello Thomas,

[...]

> I think I got it from the links in patch 5. In our other drivers,
> there's a single backing storage for each plane (for example in the
> video memory). Here, there's a backing storage for each buffer. On page

Correct, that's what I understood too.

> flips, the plane changes its backing storage. Our GEM buffer is up to
> date, but the respective backing storage is missing all the intermediate
> changes.
>
> If I'm not mistaken, an entirely different solution would be to
> implement a per-plane backing storage in these drivers.
>

I believe so, but I'm not sure that's possible, since the virtio-gpu spec
defines that the VM should send a VIRTIO_GPU_CMD_RESOURCE_FLUSH to the
VMM in the host to do an update, and the granularity for that is a
framebuffer. For that reason, the only solution (other than forcing a
full plane update like this patch-set does) is to implement tracking
support for buffer damage.

> Best regards
> Thomas
>