Message ID | 20230721070955.1170974-1-javierm@redhat.com |
---|---|
State | New |
Series | [v4] drm/ssd130x: Allocate buffers in the plane's .atomic_check callback |
Commit Message
Javier Martinez Canillas
July 21, 2023, 7:09 a.m. UTC
Geert reports that the following NULL pointer dereference happens for him
after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
plane update"):
[drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
ssd130x-i2c 0-003c: [drm] surface width(128), height(32), bpp(1)
and format(R1 little-endian (0x20203152))
Unable to handle kernel NULL pointer dereference at virtual address 00000000
Oops [#1]
CPU: 0 PID: 1 Comm: swapper Not tainted
6.5.0-rc1-orangecrab-02219-g0a529a1e4bf4 #565
epc : ssd130x_update_rect.isra.0+0x13c/0x340
ra : ssd130x_update_rect.isra.0+0x2bc/0x340
...
status: 00000120 badaddr: 00000000 cause: 0000000f
[<c0303d90>] ssd130x_update_rect.isra.0+0x13c/0x340
[<c0304200>] ssd130x_primary_plane_helper_atomic_update+0x26c/0x284
[<c02f8d54>] drm_atomic_helper_commit_planes+0xfc/0x27c
[<c02f9314>] drm_atomic_helper_commit_tail+0x5c/0xb4
[<c02f94fc>] commit_tail+0x190/0x1b8
[<c02f99fc>] drm_atomic_helper_commit+0x194/0x1c0
[<c02c5d00>] drm_atomic_commit+0xa4/0xe4
[<c02cce40>] drm_client_modeset_commit_atomic+0x244/0x278
[<c02ccef0>] drm_client_modeset_commit_locked+0x7c/0x1bc
[<c02cd064>] drm_client_modeset_commit+0x34/0x64
[<c0301a78>] __drm_fb_helper_restore_fbdev_mode_unlocked+0xc4/0xe8
[<c0303424>] drm_fb_helper_set_par+0x38/0x58
[<c027c410>] fbcon_init+0x294/0x534
...
The problem is that fbcon calls fbcon_init() which triggers a DRM modeset
and this leads to drm_atomic_helper_commit_planes() attempting to commit
the atomic state for all planes, even the ones whose CRTC is not enabled.
Since the primary plane buffer is allocated in the encoder .atomic_enable
callback, this happens after that initial modeset commit and leads to the
mentioned NULL pointer dereference.
Fix this by adding custom helpers to allocate, duplicate and destroy the
plane state, which take care of allocating and freeing these buffers,
instead of doing it in the encoder .atomic_enable and .atomic_disable
callbacks. Allocations must not be done in an .atomic_enable handler
anyway, because drivers are not allowed to fail after
drm_atomic_helper_swap_state() has been called and the new atomic state
has been stored into the current software state.
Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Suggested-by: Maxime Ripard <mripard@kernel.org>
Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
---
Changes in v4:
- Move buffers allocation/free to plane .reset/.destroy helpers (Maxime Ripard).
Changes in v3:
- Move the buffers allocation to the plane helper funcs .begin_fb_access
and the free to .end_fb_access callbacks.
- Always allocate the intermediate buffer, because it is used in ssd130x_clear_screen().
- Don't allocate the buffers as device managed resources.
Changes in v2:
- Move the buffers allocation to .fb_create instead of preventing atomic
updates for disabled planes.
- Don't allocate the intermediate buffer when using the native R1 format.
- Allocate buffers as device managed resources.
drivers/gpu/drm/solomon/ssd130x.c | 134 ++++++++++++++++++++++++------
drivers/gpu/drm/solomon/ssd130x.h | 3 -
2 files changed, 108 insertions(+), 29 deletions(-)
Comments
On Fri, 21 Jul 2023 09:09:50 +0200, Javier Martinez Canillas wrote:
> Geert reports that the following NULL pointer dereference happens for him
> after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
> plane update"):
>
> [drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
>
> [ ... ]

Acked-by: Maxime Ripard <mripard@kernel.org>

Thanks!
Maxime
Hi Javier,

Thanks for your patch!

On Fri, Jul 21, 2023 at 9:10 AM Javier Martinez Canillas
<javierm@redhat.com> wrote:
> Geert reports that the following NULL pointer dereference happens for him
> after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
> plane update"):
>
> [drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
> ssd130x-i2c 0-003c: [drm] surface width(128), height(32), bpp(1)
> and format(R1 little-endian (0x20203152))
> Unable to handle kernel NULL pointer dereference at virtual address 00000000
> Oops [#1]
> CPU: 0 PID: 1 Comm: swapper Not tainted
> 6.5.0-rc1-orangecrab-02219-g0a529a1e4bf4 #565
> epc : ssd130x_update_rect.isra.0+0x13c/0x340
> ra : ssd130x_update_rect.isra.0+0x2bc/0x340
> ...
> status: 00000120 badaddr: 00000000 cause: 0000000f
> [<c0303d90>] ssd130x_update_rect.isra.0+0x13c/0x340
> [<c0304200>] ssd130x_primary_plane_helper_atomic_update+0x26c/0x284
> [<c02f8d54>] drm_atomic_helper_commit_planes+0xfc/0x27c
> [<c02f9314>] drm_atomic_helper_commit_tail+0x5c/0xb4
> [<c02f94fc>] commit_tail+0x190/0x1b8
> [<c02f99fc>] drm_atomic_helper_commit+0x194/0x1c0
> [<c02c5d00>] drm_atomic_commit+0xa4/0xe4
> [<c02cce40>] drm_client_modeset_commit_atomic+0x244/0x278
> [<c02ccef0>] drm_client_modeset_commit_locked+0x7c/0x1bc
> [<c02cd064>] drm_client_modeset_commit+0x34/0x64
> [<c0301a78>] __drm_fb_helper_restore_fbdev_mode_unlocked+0xc4/0xe8
> [<c0303424>] drm_fb_helper_set_par+0x38/0x58
> [<c027c410>] fbcon_init+0x294/0x534
> ...
>
> The problem is that fbcon calls fbcon_init() which triggers a DRM modeset
> and this leads to drm_atomic_helper_commit_planes() attempting to commit
> the atomic state for all planes, even the ones whose CRTC is not enabled.
>
> Since the primary plane buffer is allocated in the encoder .atomic_enable
> callback, this happens after that initial modeset commit and leads to the
> mentioned NULL pointer dereference.

Upon further investigation, the NULL pointer dereference does not
happen with the current and accepted code (only with my patches to
add support for R1), because of the "superfluous" NULL buffer check
in ssd130x_fb_blit_rect():
https://cgit.freedesktop.org/drm-misc/tree/drivers/gpu/drm/solomon/ssd130x.c#n592

So you probably want to drop the crash report...

> Fix this by having custom helpers to allocate, duplicate and destroy the
> plane state, that will take care of allocating and freeing these buffers.
>
> Instead of doing it in the encoder atomic enabled and disabled callbacks,
> since allocations must not be done in an .atomic_enable handler. Because
> drivers are not allowed to fail after drm_atomic_helper_swap_state() has
> been called and the new atomic state is stored into the current sw state.
>
> Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")

... and the Fixes tag.

> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Suggested-by: Maxime Ripard <mripard@kernel.org>
> Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>

Regardless, avoiding calls to ssd130x_fb_blit_rect() when the buffer
is not yet allocated is worthwhile. And this patch achieves that.

Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>

Still, some comments below.

> --- a/drivers/gpu/drm/solomon/ssd130x.c
> +++ b/drivers/gpu/drm/solomon/ssd130x.c
> @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
> };
> EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>
> +struct ssd130x_plane_state {
> + struct drm_plane_state base;
> + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> + u8 *buffer;
> + /* Buffer that contains R1 pixels to be written to the panel */
> + u8 *data_array;

The second buffer actually contains pixels in hardware format.
For now that is a transposed buffer of R1 pixels, but that will change
when you will add support for greyscale displays.

So I'd write "hardware format" instead of R1 for both.

BTW, I still think data_array should be allocated during probing,
as it is related to the hardware, not to a plane.

> +};

> @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>
> pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>
> - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> - if (!ssd130x->buffer)
> + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> + if (!ssd130x_state->buffer)
> return -ENOMEM;
>
> - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> - if (!ssd130x->data_array) {
> - kfree(ssd130x->buffer);
> + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> + if (!ssd130x_state->data_array) {
> + kfree(ssd130x_state->buffer);

Should ssd130x_state->buffer be reset to NULL here?
I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
leading to a double free?

> return -ENOMEM;
> }
>
> return 0;
> }

> @@ -576,18 +593,21 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
> .y2 = ssd130x->height,
> };
>
> - ssd130x_update_rect(ssd130x, &fullscreen);
> + ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
> }
>
> -static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_map *vmap,
> +static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
> + const struct iosys_map *vmap,
> struct drm_rect *rect)
> {
> + struct drm_framebuffer *fb = state->fb;
> struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
> + struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
> unsigned int page_height = ssd130x->device_info->page_height;
> struct iosys_map dst;
> unsigned int dst_pitch;
> int ret = 0;
> - u8 *buf = ssd130x->buffer;
> + u8 *buf = ssd130x_state->buffer;
>
> if (!buf)

This check should no longer be needed (and prevented you from seeing
the issue before).

> return 0;

> @@ -607,11 +627,27 @@ static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_m
>
> drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
>
> - ssd130x_update_rect(ssd130x, rect);
> + ssd130x_update_rect(ssd130x, ssd130x_state, rect);
>
> return ret;
> }
>
> +static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
> + struct drm_atomic_state *state)
> +{
> + struct drm_device *drm = plane->dev;
> + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
> + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
> + struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
> + int ret;
> +
> + ret = drm_plane_helper_atomic_check(plane, state);
> + if (ret)
> + return ret;
> +
> + return ssd130x_buf_alloc(ssd130x, ssd130x_state);

I think the code would become easier to read by inlining
ssd130x_buf_alloc() here.

> +}
> +
> +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
> +{
> + struct ssd130x_plane_state *old_ssd130x_state;
> + struct ssd130x_plane_state *ssd130x_state;
> +
> + if (WARN_ON(!plane->state))
> + return NULL;
> +
> + old_ssd130x_state = to_ssd130x_plane_state(plane->state);
> + ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);

I know this is the standard pattern, but the "dup" part is a bit
silly here:
 - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
 - The base part is copied again in
   __drm_atomic_helper_plane_duplicate_state().
So (for now) you might as well just call kzalloc() ;-)

> + if (!ssd130x_state)
> + return NULL;
> +
> + /* The buffers are not duplicated and are allocated in .atomic_check */
> + ssd130x_state->buffer = NULL;
> + ssd130x_state->data_array = NULL;
> +
> + __drm_atomic_helper_plane_duplicate_state(plane, &ssd130x_state->base);
> +
> + return &ssd130x_state->base;
> +}
> +
> +static void ssd130x_primary_plane_destroy_state(struct drm_plane *plane,
> + struct drm_plane_state *state)
> +{
> + struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
> +
> + ssd130x_buf_free(ssd130x_state);

I think the code would become easier to read by inlining
ssd130x_buf_free() here.

> +
> + __drm_atomic_helper_plane_destroy_state(&ssd130x_state->base);
> +
> + kfree(ssd130x_state);
> +}

Gr{oetje,eeting}s,
Geert
Geert Uytterhoeven <geert@linux-m68k.org> writes:

Hello Geert,

> Hi Javier,
>
> Thanks for your patch!
>

Thanks a lot for your feedback.

> On Fri, Jul 21, 2023 at 9:10 AM Javier Martinez Canillas
> <javierm@redhat.com> wrote:
>> Geert reports that the following NULL pointer dereference happens for him

[...]

>
>> Since the primary plane buffer is allocated in the encoder .atomic_enable
>> callback, this happens after that initial modeset commit and leads to the
>> mentioned NULL pointer dereference.
>
> Upon further investigation, the NULL pointer dereference does not
> happen with the current and accepted code (only with my patches to
> add support for R1), because of the "superfluous" NULL buffer check
> in ssd130x_fb_blit_rect():
> https://cgit.freedesktop.org/drm-misc/tree/drivers/gpu/drm/solomon/ssd130x.c#n592
>
> So you probably want to drop the crash report...
>
>> Fix this by having custom helpers to allocate, duplicate and destroy the
>> plane state, that will take care of allocating and freeing these buffers.
>>
>> Instead of doing it in the encoder atomic enabled and disabled callbacks,
>> since allocations must not be done in an .atomic_enable handler. Because
>> drivers are not allowed to fail after drm_atomic_helper_swap_state() has
>> been called and the new atomic state is stored into the current sw state.
>>
>> Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")
>
> ... and the Fixes tag.
>

Ah, I see. Thanks a lot for that information.

>> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>

Will drop your Reported-by tag too then.

>> Suggested-by: Maxime Ripard <mripard@kernel.org>
>> Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
>
> Regardless, avoiding calls to ssd130x_fb_blit_rect() when the buffer
> is not yet allocated is worthwhile. And this patch achieves that.
>

Agreed, and as Maxime mentioned, it is the correct thing to do.

> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
>

Thanks for testing!

> Still, some comments below.
>
>> --- a/drivers/gpu/drm/solomon/ssd130x.c
>> +++ b/drivers/gpu/drm/solomon/ssd130x.c
>> @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
>> };
>> EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>>
>> +struct ssd130x_plane_state {
>> + struct drm_plane_state base;
>> + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
>> + u8 *buffer;
>> + /* Buffer that contains R1 pixels to be written to the panel */
>> + u8 *data_array;
>
> The second buffer actually contains pixels in hardware format.
> For now that is a transposed buffer of R1 pixels, but that will change
> when you will add support for greyscale displays.
>
> So I'd write "hardware format" instead of R1 for both.
>

Agreed.

> BTW, I still think data_array should be allocated during probing,
> as it is related to the hardware, not to a plane.
>

Ack. I'll do that as a separate patch on v5.

>> +};
>
>> @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>>
>> pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>>
>> - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> - if (!ssd130x->buffer)
>> + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
>> + if (!ssd130x_state->buffer)
>> return -ENOMEM;
>>
>> - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> - if (!ssd130x->data_array) {
>> - kfree(ssd130x->buffer);
>> + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
>> + if (!ssd130x_state->data_array) {
>> + kfree(ssd130x_state->buffer);
>
> Should ssd130x_state->buffer be reset to NULL here?
> I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
> leading to a double free?
>

Yeah. I'll add it.

>> return -ENOMEM;
>> }
>>
>> return 0;
>> }
>
>> @@ -576,18 +593,21 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
>> .y2 = ssd130x->height,
>> };
>>
>> - ssd130x_update_rect(ssd130x, &fullscreen);
>> + ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
>> }
>>
>> -static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_map *vmap,
>> +static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
>> + const struct iosys_map *vmap,
>> struct drm_rect *rect)
>> {
>> + struct drm_framebuffer *fb = state->fb;
>> struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
>> + struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
>> unsigned int page_height = ssd130x->device_info->page_height;
>> struct iosys_map dst;
>> unsigned int dst_pitch;
>> int ret = 0;
>> - u8 *buf = ssd130x->buffer;
>> + u8 *buf = ssd130x_state->buffer;
>>
>> if (!buf)
>
> This check should no longer be needed (and prevented you from seeing
> the issue before).
>

Ack.

>> return 0;

[...]

>> +static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
>> + struct drm_atomic_state *state)
>> +{
>> + struct drm_device *drm = plane->dev;
>> + struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
>> + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
>> + struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
>> + int ret;
>> +
>> + ret = drm_plane_helper_atomic_check(plane, state);
>> + if (ret)
>> + return ret;
>> +
>> + return ssd130x_buf_alloc(ssd130x, ssd130x_state);
>
> I think the code would become easier to read by inlining
> ssd130x_buf_alloc() here.
>

Agreed. I'll do that.

>> +}
>> +
>
>> +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
>> +{
>> + struct ssd130x_plane_state *old_ssd130x_state;
>> + struct ssd130x_plane_state *ssd130x_state;
>> +
>> + if (WARN_ON(!plane->state))
>> + return NULL;
>> +
>> + old_ssd130x_state = to_ssd130x_plane_state(plane->state);
>> + ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
>
> I know this is the standard pattern, but the "dup" part is a bit
> silly here:
>  - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
>  - The base part is copied again in
>    __drm_atomic_helper_plane_duplicate_state().
> So (for now) you might as well just call kzalloc() ;-)
>

Indeed. You are correct.

>> + if (!ssd130x_state)
>> + return NULL;
>> +
>> + /* The buffers are not duplicated and are allocated in .atomic_check */
>> + ssd130x_state->buffer = NULL;
>> + ssd130x_state->data_array = NULL;
>> +
>> + __drm_atomic_helper_plane_duplicate_state(plane, &ssd130x_state->base);
>> +
>> + return &ssd130x_state->base;
>> +}
>> +
>> +static void ssd130x_primary_plane_destroy_state(struct drm_plane *plane,
>> + struct drm_plane_state *state)
>> +{
>> + struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
>> +
>> + ssd130x_buf_free(ssd130x_state);
>
> I think the code would become easier to read by inlining
> ssd130x_buf_free() here.
>

Ack.
On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote:
> > --- a/drivers/gpu/drm/solomon/ssd130x.c
> > +++ b/drivers/gpu/drm/solomon/ssd130x.c
> > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
> > };
> > EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
> >
> > +struct ssd130x_plane_state {
> > + struct drm_plane_state base;
> > + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> > + u8 *buffer;
> > + /* Buffer that contains R1 pixels to be written to the panel */
> > + u8 *data_array;
>
> The second buffer actually contains pixels in hardware format.
> For now that is a transposed buffer of R1 pixels, but that will change
> when you will add support for greyscale displays.
>
> So I'd write "hardware format" instead of R1 for both.
>
> BTW, I still think data_array should be allocated during probing,
> as it is related to the hardware, not to a plane.

I somewhat disagree.

If I understood right during our discussion with Javier, the buffer size
derives from the mode size (height and width).

In KMS, the mode is tied to the KMS state, and thus you can expect the
mode to change every state commit. So the more logical thing to do is to
tie the buffer size (and thus the buffer pointer) to the state, since
it's only valid for that particular state for all we know.

Of course, our case allows us to simplify things since it's a fixed
mode, but one of Javier's goals with this driver was to provide a good
example we can refer people to, so I think it's worth keeping.

> > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
> >
> > pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
> >
> > - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> > - if (!ssd130x->buffer)
> > + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> > + if (!ssd130x_state->buffer)
> > return -ENOMEM;
> >
> > - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> > - if (!ssd130x->data_array) {
> > - kfree(ssd130x->buffer);
> > + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> > + if (!ssd130x_state->data_array) {
> > + kfree(ssd130x_state->buffer);
>
> Should ssd130x_state->buffer be reset to NULL here?
> I.e. if .atomic_check() fails, will .atomic_destroy_state() be called,
> leading to a double free?

That's a good question.

I never really thought of that, but yeah, AFAIK atomic_destroy_state()
will be called when atomic_check() fails.

Which means that it's probably broken in a lot of drivers.

Also, Javier pointed me to a discussion you had on IRC about using devm
allocation here. We can't really do that. KMS devices are only freed
once the last userspace application closes its fd to the device file, so
you have an unbounded window during which the driver is still callable
by userspace (and thus can still trigger an atomic commit) but the
buffer would have been freed for a while.

The driver could use a bit more work on that area (by adding
drm_dev_enter/drm_dev_exit) but it still creates unnecessary risks to
use devm there.

> > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
> > +{
> > + struct ssd130x_plane_state *old_ssd130x_state;
> > + struct ssd130x_plane_state *ssd130x_state;
> > +
> > + if (WARN_ON(!plane->state))
> > + return NULL;
> > +
> > + old_ssd130x_state = to_ssd130x_plane_state(plane->state);
> > + ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
>
> I know this is the standard pattern, but the "dup" part is a bit
> silly here:
>  - The ssd130x-specific parts in ssd130x_plane_state are zeroed below,
>  - The base part is copied again in
>    __drm_atomic_helper_plane_duplicate_state().
> So (for now) you might as well just call kzalloc() ;-)

Still, if we ever add a field in the state structure, it will be
surprising that it's not copied over.

The code is there and looks good to me, so I'd rather keep it.
Maxime Ripard <mripard@kernel.org> writes: Hello Maxime, > On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote: >> > --- a/drivers/gpu/drm/solomon/ssd130x.c >> > +++ b/drivers/gpu/drm/solomon/ssd130x.c >> > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = { >> > }; >> > EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X); >> > >> > +struct ssd130x_plane_state { >> > + struct drm_plane_state base; >> > + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */ >> > + u8 *buffer; >> > + /* Buffer that contains R1 pixels to be written to the panel */ >> > + u8 *data_array; >> >> The second buffer actually contains pixels in hardware format. >> For now that is a transposed buffer of R1 pixels, but that will change >> when you will add support for greyscale displays. >> >> So I'd write "hardware format" instead of R1 for both. >> >> BTW, I still think data_array should be allocated during probing, >> as it is related to the hardware, not to a plane. > > I somewhat disagree. > > If I understood right during our discussion with Javier, the buffer size > derives from the mode size (height and width). > > In KMS, the mode is tied to the KMS state, and thus you can expect the > mode to change every state commit. So the more logical thing to do is to > tie the buffer size (and thus the buffer pointer) to the state since > it's only valid for that particular state for all we know. > > Of course, our case is allows use to simplify things since it's a fixed > mode, but one of Javier's goal with this driver was to provide a good > example we can refer people to, so I think it's worth keeping. > Yes, that's certainly one of my goals. So I'll just keep it as-is then. 
>> > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x) >> > >> > pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width); >> > >> > - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); >> > - if (!ssd130x->buffer) >> > + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); >> > + if (!ssd130x_state->buffer) >> > return -ENOMEM; >> > >> > - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); >> > - if (!ssd130x->data_array) { >> > - kfree(ssd130x->buffer); >> > + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); >> > + if (!ssd130x_state->data_array) { >> > + kfree(ssd130x_state->buffer); >> >> Should ssd130x_state->buffer be reset to NULL here? >> I.e. if .atomic_check() fails, will .atomic_destroy_state() be called, >> leading to a double free? > > That's a good question. > > I never really thought of that, but yeah, AFAIK atomic_destroy_state() > will be called when atomic_check() fails. > Interesting. I didn't know that. I'll set to NULL and add a comment. > Which means that it's probably broken in a lot of drivers. > > Also, Javier pointed me to a discussion you had on IRC about using devm > allocation here. We can't really do that. KMS devices are only freed > once the last userspace application closes its fd to the device file, so > you have an unbounded window during which the driver is still callable > by userspace (and thus can still trigger an atomic commit) but the > buffer would have been freed for a while. > > The driver could use a bit more work on that area (by adding > drm_dev_enter/drm_dev_exit) but it still creates unnecessary risks to > use devm there. > Yes, but that discussion is not relevant anymore anyways if we keep the .data_array allocatioin the plane's .atomic_check handler. 
>> > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane) >> > +{ >> > + struct ssd130x_plane_state *old_ssd130x_state; >> > + struct ssd130x_plane_state *ssd130x_state; >> > + >> > + if (WARN_ON(!plane->state)) >> > + return NULL; >> > + >> > + old_ssd130x_state = to_ssd130x_plane_state(plane->state); >> > + ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL); >> >> I know this is the standard pattern, but the "dup" part is a bit >> silly here: >> - The ssd130x-specific parts in ssd130x_plane_state are zeroed below, >> - The base part is copied again in >> __drm_atomic_helper_plane_duplicate_state). >> So (for now) you might as well just call kzalloc() ;-) > > Still if we ever add a field in the state structure, it will be > surprising that it's not copied over. > > The code is there and looks good to me, so I'd rather keep it. Yes, it's unlikely that this state structure will get other fields, but since it is the standard pattern, we can keep it just for others to use as a reference.
Hi Maxime, On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote: > On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote: > > > --- a/drivers/gpu/drm/solomon/ssd130x.c > > > +++ b/drivers/gpu/drm/solomon/ssd130x.c > > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = { > > > }; > > > EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X); > > > > > > +struct ssd130x_plane_state { > > > + struct drm_plane_state base; > > > + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */ > > > + u8 *buffer; > > > + /* Buffer that contains R1 pixels to be written to the panel */ > > > + u8 *data_array; > > > > The second buffer actually contains pixels in hardware format. > > For now that is a transposed buffer of R1 pixels, but that will change > > when you will add support for greyscale displays. > > > > So I'd write "hardware format" instead of R1 for both. > > > > BTW, I still think data_array should be allocated during probing, > > as it is related to the hardware, not to a plane. > > I somewhat disagree. > > If I understood right during our discussion with Javier, the buffer size > derives from the mode size (height and width). > > In KMS, the mode is tied to the KMS state, and thus you can expect the > mode to change every state commit. So the more logical thing to do is to > tie the buffer size (and thus the buffer pointer) to the state since > it's only valid for that particular state for all we know. > > Of course, our case is allows use to simplify things since it's a fixed > mode, but one of Javier's goal with this driver was to provide a good > example we can refer people to, so I think it's worth keeping. The second buffer (containing the hardware format) has a size that depends on the full screen size, not the current mode (I believe that's also the size of the frame buffer backing the plane?). So its size is fixed. 
Given the allocations are now done based on plane state, I think the first buffer should be sized according to the frame buffer backing the plane? Currently it uses the full screen size, too (cfr. below). > > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x) > > > > > > pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width); > > > > > > - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); > > > - if (!ssd130x->buffer) > > > + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); Based on full screen width and height. > > > + if (!ssd130x_state->buffer) > > > return -ENOMEM; > > > > > > - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); > > > - if (!ssd130x->data_array) { > > > - kfree(ssd130x->buffer); > > > + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); Based on full screen width and height (hardware page size). > > > + if (!ssd130x_state->data_array) { > > > + kfree(ssd130x_state->buffer); > > > > Should ssd130x_state->buffer be reset to NULL here? > > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called, > > leading to a double free? > > That's a good question. > > I never really thought of that, but yeah, AFAIK atomic_destroy_state() > will be called when atomic_check() fails. > > Which means that it's probably broken in a lot of drivers. > > Also, Javier pointed me to a discussion you had on IRC about using devm > allocation here. We can't really do that. KMS devices are only freed > once the last userspace application closes its fd to the device file, so > you have an unbounded window during which the driver is still callable > by userspace (and thus can still trigger an atomic commit) but the > buffer would have been freed for a while. It should still be safe for (at least) the data_array buffer. That buffer is only used to store pixels in hardware format, and immediately send them to the hardware. 
If this can be called that late, it will fail horribly, as you can no longer talk to the hardware at that point (as ssd130x is an i2c driver, it might actually work; but a DRM driver that calls devm_platform_ioremap_resource() will crash when writing to its MMIO registers)?!? > > > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane) > > > +{ > > > + struct ssd130x_plane_state *old_ssd130x_state; > > > + struct ssd130x_plane_state *ssd130x_state; > > > + > > > + if (WARN_ON(!plane->state)) > > > + return NULL; > > > + > > > + old_ssd130x_state = to_ssd130x_plane_state(plane->state); > > > + ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL); > > > > I know this is the standard pattern, but the "dup" part is a bit > > silly here: > > - The ssd130x-specific parts in ssd130x_plane_state are zeroed below, > > - The base part is copied again in > > __drm_atomic_helper_plane_duplicate_state). > > So (for now) you might as well just call kzalloc() ;-) > > Still if we ever add a field in the state structure, it will be > surprising that it's not copied over. > > The code is there and looks good to me, so I'd rather keep it. Fair enough, let's keep it. Gr{oetje,eeting}s, Geert
Geert Uytterhoeven <geert@linux-m68k.org> writes: Hello Geert, > Hi Maxime, > > On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote: >> On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote: >> > > --- a/drivers/gpu/drm/solomon/ssd130x.c >> > > +++ b/drivers/gpu/drm/solomon/ssd130x.c >> > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = { >> > > }; >> > > EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X); >> > > >> > > +struct ssd130x_plane_state { >> > > + struct drm_plane_state base; >> > > + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */ >> > > + u8 *buffer; >> > > + /* Buffer that contains R1 pixels to be written to the panel */ >> > > + u8 *data_array; >> > >> > The second buffer actually contains pixels in hardware format. >> > For now that is a transposed buffer of R1 pixels, but that will change >> > when you will add support for greyscale displays. >> > >> > So I'd write "hardware format" instead of R1 for both. >> > >> > BTW, I still think data_array should be allocated during probing, >> > as it is related to the hardware, not to a plane. >> >> I somewhat disagree. >> >> If I understood right during our discussion with Javier, the buffer size >> derives from the mode size (height and width). >> >> In KMS, the mode is tied to the KMS state, and thus you can expect the >> mode to change every state commit. So the more logical thing to do is to >> tie the buffer size (and thus the buffer pointer) to the state since >> it's only valid for that particular state for all we know. >> >> Of course, our case is allows use to simplify things since it's a fixed >> mode, but one of Javier's goal with this driver was to provide a good >> example we can refer people to, so I think it's worth keeping. 
> > The second buffer (containing the hardware format) has a size that > depends on the full screen size, not the current mode (I believe that's > also the size of the frame buffer backing the plane?). So its size is > fixed. > Yes, it is fixed. But Maxime's point is that this is a characteristic of this particular device, and even when the display resolution can't be changed, the correct thing to do is to keep all state related to the mode (even the buffer used to store the hardware pixels that are written to the display) > Given the allocations are now done based on plane state, I think the > first buffer should be sized according to the frame buffer backing > the plane? Currently it uses the full screen size, too (cfr. below). > But can the mode even be changed if ssd130x_connector_helper_get_modes() just adds a single display mode with mode->hdisplay == ssd130x->width and mode->vdisplay == ssd130x->height? >> > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x) >> > > >> > > pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width); >> > > >> > > - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); >> > > - if (!ssd130x->buffer) >> > > + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); > > Based on full screen width and height. > Do you think that using ssd130x->mode->vdisplay instead would be clearer here? >> > > + if (!ssd130x_state->buffer) >> > > return -ENOMEM; >> > > >> > > - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); >> > > - if (!ssd130x->data_array) { >> > > - kfree(ssd130x->buffer); >> > > + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); > > Based on full screen width and height (hardware page size). > Yes, this depends on the panel attached to the OLED controller, and that resolution is fixed and taken from the Device Tree (or ACPI table). 
>> > > + if (!ssd130x_state->data_array) { >> > > + kfree(ssd130x_state->buffer); >> > >> > Should ssd130x_state->buffer be reset to NULL here? >> > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called, >> > leading to a double free? >> >> That's a good question. >> >> I never really thought of that, but yeah, AFAIK atomic_destroy_state() >> will be called when atomic_check() fails. >> >> Which means that it's probably broken in a lot of drivers. >> >> Also, Javier pointed me to a discussion you had on IRC about using devm >> allocation here. We can't really do that. KMS devices are only freed >> once the last userspace application closes its fd to the device file, so >> you have an unbounded window during which the driver is still callable >> by userspace (and thus can still trigger an atomic commit) but the >> buffer would have been freed for a while. > > It should still be safe for (at least) the data_array buffer. That > buffer is only used to store pixels in hardware format, and immediately > send them to the hardware. If this can be called that late, it will > fail horribly, as you can no longer talk to the hardware at that point > (as ssd130x is an i2c driver, it might actually work; but a DRM driver > that calls devm_platform_ioremap_resource() will crash when writing > to its MMIO registers)?!? > At the very least the SPI driver will fail since the GPIO that is used to toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also the regulator, backlight device, etc. But in any case, as mentioned it is only relevant if the data_array buffer is allocated at probe time, and from Maxime's explanation it is more correct to do it in the .atomic_check handler. 
>> > > +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane) >> > > +{ >> > > + struct ssd130x_plane_state *old_ssd130x_state; >> > > + struct ssd130x_plane_state *ssd130x_state; >> > > + >> > > + if (WARN_ON(!plane->state)) >> > > + return NULL; >> > > + >> > > + old_ssd130x_state = to_ssd130x_plane_state(plane->state); >> > > + ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL); >> > >> > I know this is the standard pattern, but the "dup" part is a bit >> > silly here: >> > - The ssd130x-specific parts in ssd130x_plane_state are zeroed below, >> > - The base part is copied again in >> > __drm_atomic_helper_plane_duplicate_state). >> > So (for now) you might as well just call kzalloc() ;-) >> >> Still if we ever add a field in the state structure, it will be >> surprising that it's not copied over. >> >> The code is there and looks good to me, so I'd rather keep it. > > Fair enough, let's keep it. > Yeah. At the very least it helps since it will be consistent with other drivers. > Gr{oetje,eeting}s, > > Geert
Hi Javier, On Wed, Jul 26, 2023 at 2:22 PM Javier Martinez Canillas <javierm@redhat.com> wrote: > Geert Uytterhoeven <geert@linux-m68k.org> writes: > > On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote: > >> On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote: > >> > > --- a/drivers/gpu/drm/solomon/ssd130x.c > >> > > +++ b/drivers/gpu/drm/solomon/ssd130x.c > >> > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = { > >> > > }; > >> > > EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X); > >> > > > >> > > +struct ssd130x_plane_state { > >> > > + struct drm_plane_state base; > >> > > + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */ > >> > > + u8 *buffer; > >> > > + /* Buffer that contains R1 pixels to be written to the panel */ > >> > > + u8 *data_array; > >> > > >> > The second buffer actually contains pixels in hardware format. > >> > For now that is a transposed buffer of R1 pixels, but that will change > >> > when you will add support for greyscale displays. > >> > > >> > So I'd write "hardware format" instead of R1 for both. > >> > > >> > BTW, I still think data_array should be allocated during probing, > >> > as it is related to the hardware, not to a plane. > >> > >> I somewhat disagree. > >> > >> If I understood right during our discussion with Javier, the buffer size > >> derives from the mode size (height and width). > >> > >> In KMS, the mode is tied to the KMS state, and thus you can expect the > >> mode to change every state commit. So the more logical thing to do is to > >> tie the buffer size (and thus the buffer pointer) to the state since > >> it's only valid for that particular state for all we know. > >> > >> Of course, our case is allows use to simplify things since it's a fixed > >> mode, but one of Javier's goal with this driver was to provide a good > >> example we can refer people to, so I think it's worth keeping. 
> > > > The second buffer (containing the hardware format) has a size that > > depends on the full screen size, not the current mode (I believe that's > > also the size of the frame buffer backing the plane?). So its size is > > fixed. > > > > Yes, is fixed. But Maxime's point is that this is a characteristic of this > particular device and even when the display resolution can't be changed, > the correct thing to do is to keep all state related to the mode (even the > buffer used to store the hardware pixels that are written to the display) > > > Given the allocations are now done based on plane state, I think the > > first buffer should be sized according to the frame buffer backing > > the plane? Currently it uses the full screen size, too (cfr. below). > > > > But can the mode even be changed if ssd130x_connector_helper_get_modes() > just adds a single display mode with mode->hdisplay == ssd130x->width and > mode->vdisplay == ssd130x->height. No, the mode cannot be changed. At first, I thought you could still create a smaller frame buffer, and attach that to the (single, thus primary) plane, but it turns out I was wrong[*], so you can ignore that. [*] ssd130x_primary_plane_helper_atomic_check() calls drm_plane_helper_atomic_check(), which calls drm_atomic_helper_check_plane_state() with can_position = false. As the position of planes is actually a software thing on ssd130x, positioning support could be added later, though... > >> Also, Javier pointed me to a discussion you had on IRC about using devm > >> allocation here. We can't really do that. KMS devices are only freed > >> once the last userspace application closes its fd to the device file, so > >> you have an unbounded window during which the driver is still callable > >> by userspace (and thus can still trigger an atomic commit) but the > >> buffer would have been freed for a while. > > > > It should still be safe for (at least) the data_array buffer. 
That > > buffer is only used to store pixels in hardware format, and immediately > > send them to the hardware. If this can be called that late, it will > > fail horribly, as you can no longer talk to the hardware at that point > > (as ssd130x is an i2c driver, it might actually work; but a DRM driver > > that calls devm_platform_ioremap_resource() will crash when writing > > to its MMIO registers)?!? > > At the very least the SPI driver will fail since the GPIO that is used to > toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also > the regulator, backlight device, etc. > > But in any case, as mentioned it is only relevant if the data_array buffer > is allocated at probe time, and from Maxime's explanation is more correct > to do it in the .atomic_check handler. You need (at least) data_array for clear_screen, too, which is called from .atomic_disable(). Gr{oetje,eeting}s, Geert
Hi, On Wed, Jul 26, 2023 at 01:52:37PM +0200, Geert Uytterhoeven wrote: > On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote: > > On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote: > > > > --- a/drivers/gpu/drm/solomon/ssd130x.c > > > > +++ b/drivers/gpu/drm/solomon/ssd130x.c > > > > @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = { > > > > }; > > > > EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X); > > > > > > > > +struct ssd130x_plane_state { > > > > + struct drm_plane_state base; > > > > + /* Intermediate buffer to convert pixels from XRGB8888 to R1 */ > > > > + u8 *buffer; > > > > + /* Buffer that contains R1 pixels to be written to the panel */ > > > > + u8 *data_array; > > > > > > The second buffer actually contains pixels in hardware format. > > > For now that is a transposed buffer of R1 pixels, but that will change > > > when you will add support for greyscale displays. > > > > > > So I'd write "hardware format" instead of R1 for both. > > > > > > BTW, I still think data_array should be allocated during probing, > > > as it is related to the hardware, not to a plane. > > > > I somewhat disagree. > > > > If I understood right during our discussion with Javier, the buffer size > > derives from the mode size (height and width). > > > > In KMS, the mode is tied to the KMS state, and thus you can expect the > > mode to change every state commit. So the more logical thing to do is to > > tie the buffer size (and thus the buffer pointer) to the state since > > it's only valid for that particular state for all we know. > > > > Of course, our case is allows use to simplify things since it's a fixed > > mode, but one of Javier's goal with this driver was to provide a good > > example we can refer people to, so I think it's worth keeping. 
> > The second buffer (containing the hardware format) has a size that > depends on the full screen size, not the current mode (I believe that's > also the size of the frame buffer backing the plane?). So its size is > fixed. In KMS in general, no. For that particular case, yes. And about the framebuffer size == full screen size, there are multiple sizes involved. I guess what you call full screen size is the CRTC size. You can't make the assumption in KMS that CRTC size == (primary) plane size == framebuffer size. If you're using scaling for example, you will have a framebuffer size smaller or larger than its plane size. If you're using cropping, then the plane size or framebuffer size will be different from the CRTC size. Some ways to work around overscan are also to have a smaller plane size than the CRTC size. You don't have to have the CRTC size == primary plane size, and then you don't have to have plane size == framebuffer size. You can't really make that assumption in the general case either: scaling or cropping will have a different framebuffer size than the CRTC primary plane size (which doesn't have to be "full screen" either). > Given the allocations are now done based on plane state, I think the > first buffer should be sized according to the frame buffer backing > the plane? Currently it uses the full screen size, too (cfr. below). Yeah, probably. > > > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x) > > > > > > > > pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width); > > > > > > > > - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); > > > > - if (!ssd130x->buffer) > > > > + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); > > Based on full screen width and height. 
> > > > > + if (!ssd130x_state->buffer) > > > > return -ENOMEM; > > > > > > > > - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); > > > > - if (!ssd130x->data_array) { > > > > - kfree(ssd130x->buffer); > > > > + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); > > Based on full screen width and height (hardware page size). > > > > > + if (!ssd130x_state->data_array) { > > > > + kfree(ssd130x_state->buffer); > > > > > > Should ssd130x_state->buffer be reset to NULL here? > > > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called, > > > leading to a double free? > > > > That's a good question. > > > > I never really thought of that, but yeah, AFAIK atomic_destroy_state() > > will be called when atomic_check() fails. > > > > Which means that it's probably broken in a lot of drivers. > > > > Also, Javier pointed me to a discussion you had on IRC about using devm > > allocation here. We can't really do that. KMS devices are only freed > > once the last userspace application closes its fd to the device file, so > > you have an unbounded window during which the driver is still callable > > by userspace (and thus can still trigger an atomic commit) but the > > buffer would have been freed for a while. > > It should still be safe for (at least) the data_array buffer. That > buffer is only used to store pixels in hardware format, and immediately > send them to the hardware. If this can be called that late, it will > fail horribly, as you can no longer talk to the hardware at that point > (as ssd130x is an i2c driver, it might actually work; but a DRM driver > that calls devm_platform_ioremap_resource() will crash when writing > to its MMIO registers)?!? Yep, that's exactly why we have drm_dev_enter/drm_dev_exit :) Not a lot of drivers are using it but all should. vc4 does exactly what you are describing for example. Maxime
On Wed, Jul 26, 2023 at 02:33:06PM +0200, Geert Uytterhoeven wrote: > > >> Also, Javier pointed me to a discussion you had on IRC about using devm > > >> allocation here. We can't really do that. KMS devices are only freed > > >> once the last userspace application closes its fd to the device file, so > > >> you have an unbounded window during which the driver is still callable > > >> by userspace (and thus can still trigger an atomic commit) but the > > >> buffer would have been freed for a while. > > > > > > It should still be safe for (at least) the data_array buffer. That > > > buffer is only used to store pixels in hardware format, and immediately > > > send them to the hardware. If this can be called that late, it will > > > fail horribly, as you can no longer talk to the hardware at that point > > > (as ssd130x is an i2c driver, it might actually work; but a DRM driver > > > that calls devm_platform_ioremap_resource() will crash when writing > > > to its MMIO registers)?!? > > > > At the very least the SPI driver will fail since the GPIO that is used to > > toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also > > the regulator, backlight device, etc. > > > > But in any case, as mentioned it is only relevant if the data_array buffer > > is allocated at probe time, and from Maxime's explanation is more correct > > to do it in the .atomic_check handler. > > You need (at least) data_array for clear_screen, too, which is called > from .atomic_disable(). I'm not sure I get what your concern is? Even if we entirely disable the plane, the state will not have been destroyed yet so you still have at least access to the data_array from the old state. Maxime
Hi Maxime, On Wed, Jul 26, 2023 at 3:54 PM Maxime Ripard <mripard@kernel.org> wrote: > On Wed, Jul 26, 2023 at 02:33:06PM +0200, Geert Uytterhoeven wrote: > > > >> Also, Javier pointed me to a discussion you had on IRC about using devm > > > >> allocation here. We can't really do that. KMS devices are only freed > > > >> once the last userspace application closes its fd to the device file, so > > > >> you have an unbounded window during which the driver is still callable > > > >> by userspace (and thus can still trigger an atomic commit) but the > > > >> buffer would have been freed for a while. > > > > > > > > It should still be safe for (at least) the data_array buffer. That > > > > buffer is only used to store pixels in hardware format, and immediately > > > > send them to the hardware. If this can be called that late, it will > > > > fail horribly, as you can no longer talk to the hardware at that point > > > > (as ssd130x is an i2c driver, it might actually work; but a DRM driver > > > > that calls devm_platform_ioremap_resource() will crash when writing > > > > to its MMIO registers)?!? > > > > > > At the very least the SPI driver will fail since the GPIO that is used to > > > toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also > > > the regulator, backlight device, etc. > > > > > > But in any case, as mentioned it is only relevant if the data_array buffer > > > is allocated at probe time, and from Maxime's explanation is more correct > > > to do it in the .atomic_check handler. > > > > You need (at least) data_array for clear_screen, too, which is called > > from .atomic_disable(). > > I'm not sure I get what your concern is? > > Even if we entirely disable the plane, the state will not have been > destroyed yet so you still have at least access to the data_array from > the old state. Currently, clearing the screen is done from the plane's .atomic_disable() callback, so the plane's state should be fine. 
But IIUIC, DRM allows the user to enable/disable the display without creating any plane first, so clearing should be handled in the CRTC's .atomic_disable() callback instead? Then, would we still have access to valid plane state? Thanks! Gr{oetje,eeting}s, Geert
Maxime Ripard <mripard@kernel.org> writes: Hello Maxime, > Hi, > > On Wed, Jul 26, 2023 at 01:52:37PM +0200, Geert Uytterhoeven wrote: >> On Wed, Jul 26, 2023 at 12:00 PM Maxime Ripard <mripard@kernel.org> wrote: >> > On Tue, Jul 25, 2023 at 09:21:52PM +0200, Geert Uytterhoeven wrote: [...] >> The second buffer (containing the hardware format) has a size that >> depends on the full screen size, not the current mode (I believe that's >> also the size of the frame buffer backing the plane?). So its size is >> fixed. > > In KMS in general, no. For that particular case, yes. > > And about the framebuffer size == full screen size, there's multiple > sizes involved. I guess what you call full screen size is the CRTC size. > > You can't make the assumption in KMS that CRTC size == (primary) plane > size == framebuffer size. > > If you're using scaling for example, you will have a framebuffer size > smaller or larger than it plane size. If you're using cropping, then the > plane size or framebuffer size will be different from the CRTC size. > Some ways to work around overscan is also to have a smaller plane size > than the CRTC size. > > You don't have to have the CRTC size == primary plane size, and then you > don't have to have plane size == framebuffer size. > > you can't really make that assumption in the general case either: > scaling or cropping will have a different framebuffer size than the CRTC > primary plane size (which doesn't have to be "full screen" either). > Yes, this particular driver is using drm_plane_helper_atomic_check() as the primary plane .atomic_check handler, and this helper calls: drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state, DRM_PLANE_NO_SCALING, DRM_PLANE_NO_SCALING, false, false); so it does not support scaling nor positioning. >> Given the allocations are now done based on plane state, I think the >> first buffer should be sized according to the frame buffer backing >> the plane? 
Currently it uses the full screen size, too (cfr. below). > > Yeah, probably. > Right, that could be fixed in another patch, if anything to make it more readable, since it won't have any functional change. >> > > > @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x) >> > > > >> > > > pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width); >> > > > >> > > > - ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); >> > > > - if (!ssd130x->buffer) >> > > > + ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL); >> >> Based on full screen width and height. >> >> > > > + if (!ssd130x_state->buffer) >> > > > return -ENOMEM; >> > > > >> > > > - ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); >> > > > - if (!ssd130x->data_array) { >> > > > - kfree(ssd130x->buffer); >> > > > + ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL); >> >> Based on full screen width and height (hardware page size). >> >> > > > + if (!ssd130x_state->data_array) { >> > > > + kfree(ssd130x_state->buffer); >> > > >> > > Should ssd130x_state->buffer be reset to NULL here? >> > > I.e. if .atomic_check() fails, will .atomic_destroy_state() be called, >> > > leading to a double free? >> > >> > That's a good question. >> > >> > I never really thought of that, but yeah, AFAIK atomic_destroy_state() >> > will be called when atomic_check() fails. >> > >> > Which means that it's probably broken in a lot of drivers. >> > >> > Also, Javier pointed me to a discussion you had on IRC about using devm >> > allocation here. We can't really do that. KMS devices are only freed >> > once the last userspace application closes its fd to the device file, so >> > you have an unbounded window during which the driver is still callable >> > by userspace (and thus can still trigger an atomic commit) but the >> > buffer would have been freed for a while. >> >> It should still be safe for (at least) the data_array buffer. 
That >> buffer is only used to store pixels in hardware format, and immediately >> send them to the hardware. If this can be called that late, it will >> fail horribly, as you can no longer talk to the hardware at that point >> (as ssd130x is an i2c driver, it might actually work; but a DRM driver >> that calls devm_platform_ioremap_resource() will crash when writing >> to its MMIO registers)?!? > > Yep, that's exactly why we have drm_dev_enter/drm_dev_exit :) > Thanks. As a follow-up I can also do that in another patch.
Geert Uytterhoeven <geert@linux-m68k.org> writes: Hello Geert, > Hi Maxime, > > On Wed, Jul 26, 2023 at 3:54 PM Maxime Ripard <mripard@kernel.org> wrote: >> On Wed, Jul 26, 2023 at 02:33:06PM +0200, Geert Uytterhoeven wrote: >> > > >> Also, Javier pointed me to a discussion you had on IRC about using devm >> > > >> allocation here. We can't really do that. KMS devices are only freed >> > > >> once the last userspace application closes its fd to the device file, so >> > > >> you have an unbounded window during which the driver is still callable >> > > >> by userspace (and thus can still trigger an atomic commit) but the >> > > >> buffer would have been freed for a while. >> > > > >> > > > It should still be safe for (at least) the data_array buffer. That >> > > > buffer is only used to store pixels in hardware format, and immediately >> > > > send them to the hardware. If this can be called that late, it will >> > > > fail horribly, as you can no longer talk to the hardware at that point >> > > > (as ssd130x is an i2c driver, it might actually work; but a DRM driver >> > > > that calls devm_platform_ioremap_resource() will crash when writing >> > > > to its MMIO registers)?!? >> > > >> > > At the very least the SPI driver will fail since the GPIO that is used to >> > > toggle the D/C pin is allocated with devm_gpiod_get_optional(), but also >> > > the regulator, backlight device, etc. >> > > >> > > But in any case, as mentioned it is only relevant if the data_array buffer >> > > is allocated at probe time, and from Maxime's explanation is more correct >> > > to do it in the .atomic_check handler. >> > >> > You need (at least) data_array for clear_screen, too, which is called >> > from .atomic_disable(). >> >> I'm not sure I get what your concern is? >> >> Even if we entirely disable the plane, the state will not have been >> destroyed yet so you still have at least access to the data_array from >> the old state. 
>
> Currently, clearing the screen is done from the plane's .atomic_disable()
> callback, so the plane's state should be fine.
>
> But IIUIC, DRM allows the user to enable/disable the display without
> creating any plane first, so clearing should be handled in the CRTC's
> .atomic_disable() callback instead? Then, would we still have access
> to valid plane state?
>

But does it need to be cleared in this case? Currently the power on/off is
done in the encoder's .atomic_{en,dis}able() callbacks, but the actual panel
clear is only done on plane disable, as you mentioned.

If the clear has to be moved to the CRTC's atomic enable/disable, then the
driver should be reworked to not use the data_array and instead just
allocate a zeroed buffer and pass that, as you proposed. But again, that's
something that could be done later in another patch.

> Thanks!
>
> Gr{oetje,eeting}s,
>
Hi Javier,

this patch is completely broken. It's easy to fix though.

On 21.07.23 09:09, Javier Martinez Canillas wrote:
> Geert reports that the following NULL pointer dereference happens for him
> after commit 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each
> plane update"):
>
> [drm] Initialized ssd130x 1.0.0 20220131 for 0-003c on minor 0
> ssd130x-i2c 0-003c: [drm] surface width(128), height(32), bpp(1)
> and format(R1 little-endian (0x20203152))
> Unable to handle kernel NULL pointer dereference at virtual address 00000000
> Oops [#1]
> CPU: 0 PID: 1 Comm: swapper Not tainted
> 6.5.0-rc1-orangecrab-02219-g0a529a1e4bf4 #565
> epc : ssd130x_update_rect.isra.0+0x13c/0x340
> ra : ssd130x_update_rect.isra.0+0x2bc/0x340
> ...
> status: 00000120 badaddr: 00000000 cause: 0000000f
> [<c0303d90>] ssd130x_update_rect.isra.0+0x13c/0x340
> [<c0304200>] ssd130x_primary_plane_helper_atomic_update+0x26c/0x284
> [<c02f8d54>] drm_atomic_helper_commit_planes+0xfc/0x27c
> [<c02f9314>] drm_atomic_helper_commit_tail+0x5c/0xb4
> [<c02f94fc>] commit_tail+0x190/0x1b8
> [<c02f99fc>] drm_atomic_helper_commit+0x194/0x1c0
> [<c02c5d00>] drm_atomic_commit+0xa4/0xe4
> [<c02cce40>] drm_client_modeset_commit_atomic+0x244/0x278
> [<c02ccef0>] drm_client_modeset_commit_locked+0x7c/0x1bc
> [<c02cd064>] drm_client_modeset_commit+0x34/0x64
> [<c0301a78>] __drm_fb_helper_restore_fbdev_mode_unlocked+0xc4/0xe8
> [<c0303424>] drm_fb_helper_set_par+0x38/0x58
> [<c027c410>] fbcon_init+0x294/0x534
> ...
>
> The problem is that fbcon calls fbcon_init() which triggers a DRM modeset
> and this leads to drm_atomic_helper_commit_planes() attempting to commit
> the atomic state for all planes, even the ones whose CRTC is not enabled.
>
> Since the primary plane buffer is allocated in the encoder .atomic_enable
> callback, this happens after that initial modeset commit and leads to the
> mentioned NULL pointer dereference.
>
> Fix this by having custom helpers to allocate, duplicate and destroy the
> plane state, which take care of allocating and freeing these buffers.
>
> This is done instead of in the encoder's atomic enable and disable
> callbacks, since allocations must not be done in an .atomic_enable
> handler: drivers are not allowed to fail after drm_atomic_helper_swap_state()
> has been called and the new atomic state is stored into the current sw state.
>
> Fixes: 49d7d581ceaf ("drm/ssd130x: Don't allocate buffers on each plane update")
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Suggested-by: Maxime Ripard <mripard@kernel.org>
> Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
> ---
>
> Changes in v4:
> - Move buffers allocation/free to plane .reset/.destroy helpers (Maxime Ripard).
>
> Changes in v3:
> - Move the buffers allocation to the plane helper funcs .begin_fb_access
>   and the free to .end_fb_access callbacks.
> - Always allocate the intermediate buffer, because it is used in
>   ssd130x_clear_screen().
> - Don't allocate the buffers as device managed resources.
>
> Changes in v2:
> - Move the buffers allocation to .fb_create instead of preventing atomic
>   updates for disabled planes.
> - Don't allocate the intermediate buffer when using the native R1 format.
> - Allocate buffers as device managed resources.
>
>  drivers/gpu/drm/solomon/ssd130x.c | 134 ++++++++++++++++++++++++------
>  drivers/gpu/drm/solomon/ssd130x.h |   3 -
>  2 files changed, 108 insertions(+), 29 deletions(-)
>
> diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c
> index afb08a8aa9fc..21b2afe40b13 100644
> --- a/drivers/gpu/drm/solomon/ssd130x.c
> +++ b/drivers/gpu/drm/solomon/ssd130x.c
> @@ -141,12 +141,26 @@ const struct ssd130x_deviceinfo ssd130x_variants[] = {
>  };
>  EXPORT_SYMBOL_NS_GPL(ssd130x_variants, DRM_SSD130X);
>
> +struct ssd130x_plane_state {
> +	struct drm_plane_state base;

You need to use a struct drm_shadow_plane_state here.

> +	/* Intermediate buffer to convert pixels from XRGB8888 to R1 */
> +	u8 *buffer;
> +	/* Buffer that contains R1 pixels to be written to the panel */
> +	u8 *data_array;
> +};
> +
> +static inline struct ssd130x_plane_state *to_ssd130x_plane_state(struct drm_plane_state *state)
> +{
> +	return container_of(state, struct ssd130x_plane_state, base);
> +}
> +
>  static inline struct ssd130x_device *drm_to_ssd130x(struct drm_device *drm)
>  {
>  	return container_of(drm, struct ssd130x_device, drm);
>  }
>
> -static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
> +static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x,
> +			     struct ssd130x_plane_state *ssd130x_state)
>  {
>  	unsigned int page_height = ssd130x->device_info->page_height;
>  	unsigned int pages = DIV_ROUND_UP(ssd130x->height, page_height);
> @@ -159,23 +173,23 @@ static int ssd130x_buf_alloc(struct ssd130x_device *ssd130x)
>
>  	pitch = drm_format_info_min_pitch(fi, 0, ssd130x->width);
>
> -	ssd130x->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> -	if (!ssd130x->buffer)
> +	ssd130x_state->buffer = kcalloc(pitch, ssd130x->height, GFP_KERNEL);
> +	if (!ssd130x_state->buffer)
>  		return -ENOMEM;
>
> -	ssd130x->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> -	if (!ssd130x->data_array) {
> -		kfree(ssd130x->buffer);
> +	ssd130x_state->data_array = kcalloc(ssd130x->width, pages, GFP_KERNEL);
> +	if (!ssd130x_state->data_array) {
> +		kfree(ssd130x_state->buffer);
>  		return -ENOMEM;
>  	}
>
>  	return 0;
>  }
>
> -static void ssd130x_buf_free(struct ssd130x_device *ssd130x)
> +static void ssd130x_buf_free(struct ssd130x_plane_state *ssd130x_state)
>  {
> -	kfree(ssd130x->data_array);
> -	kfree(ssd130x->buffer);
> +	kfree(ssd130x_state->data_array);
> +	kfree(ssd130x_state->buffer);
>  }
>
>  /*
> @@ -466,12 +480,14 @@ static int ssd130x_init(struct ssd130x_device *ssd130x)
>  					 SSD130X_SET_ADDRESS_MODE_HORIZONTAL);
>  }
>
> -static int ssd130x_update_rect(struct ssd130x_device *ssd130x, struct drm_rect *rect)
> +static int ssd130x_update_rect(struct ssd130x_device *ssd130x,
> +			       struct ssd130x_plane_state *ssd130x_state,
> +			       struct drm_rect *rect)
>  {
>  	unsigned int x = rect->x1;
>  	unsigned int y = rect->y1;
> -	u8 *buf = ssd130x->buffer;
> -	u8 *data_array = ssd130x->data_array;
> +	u8 *buf = ssd130x_state->buffer;
> +	u8 *data_array = ssd130x_state->data_array;
>  	unsigned int width = drm_rect_width(rect);
>  	unsigned int height = drm_rect_height(rect);
>  	unsigned int line_length = DIV_ROUND_UP(width, 8);
> @@ -567,7 +583,8 @@ static int ssd130x_update_rect(struct ssd130x_device *ssd130x, struct drm_rect *
>  	return ret;
>  }
>
> -static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
> +static void ssd130x_clear_screen(struct ssd130x_device *ssd130x,
> +				 struct ssd130x_plane_state *ssd130x_state)
>  {
>  	struct drm_rect fullscreen = {
>  		.x1 = 0,
> @@ -576,18 +593,21 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
>  		.y2 = ssd130x->height,
>  	};
>
> -	ssd130x_update_rect(ssd130x, &fullscreen);
> +	ssd130x_update_rect(ssd130x, ssd130x_state, &fullscreen);
>  }
>
> -static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_map *vmap,
> +static int ssd130x_fb_blit_rect(struct drm_plane_state *state,
> +				const struct iosys_map *vmap,
>  				struct drm_rect *rect)
>  {
> +	struct drm_framebuffer *fb = state->fb;
>  	struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
>  	unsigned int page_height = ssd130x->device_info->page_height;
>  	struct iosys_map dst;
>  	unsigned int dst_pitch;
>  	int ret = 0;
> -	u8 *buf = ssd130x->buffer;
> +	u8 *buf = ssd130x_state->buffer;
>
>  	if (!buf)
>  		return 0;
> @@ -607,11 +627,27 @@ static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_m
>
>  	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
>
> -	ssd130x_update_rect(ssd130x, rect);
> +	ssd130x_update_rect(ssd130x, ssd130x_state, rect);
>
>  	return ret;
>  }
>
> +static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
> +						     struct drm_atomic_state *state)
> +{
> +	struct drm_device *drm = plane->dev;
> +	struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
> +	struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane);
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane_state);
> +	int ret;
> +
> +	ret = drm_plane_helper_atomic_check(plane, state);
> +	if (ret)
> +		return ret;
> +
> +	return ssd130x_buf_alloc(ssd130x, ssd130x_state);
> +}
> +
>  static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane,
>  						       struct drm_atomic_state *state)
>  {
> @@ -634,7 +670,7 @@ static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane,
>  		if (!drm_rect_intersect(&dst_clip, &damage))
>  			continue;
>
> -		ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &dst_clip);
> +		ssd130x_fb_blit_rect(plane_state, &shadow_plane_state->data[0], &dst_clip);
>  	}
>
>  	drm_dev_exit(idx);
> @@ -645,19 +681,68 @@ static void ssd130x_primary_plane_helper_atomic_disable(struct drm_plane *plane,
>  {
>  	struct drm_device *drm = plane->dev;
>  	struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(plane->state);
>  	int idx;
>
>  	if (!drm_dev_enter(drm, &idx))
>  		return;
>
> -	ssd130x_clear_screen(ssd130x);
> +	ssd130x_clear_screen(ssd130x, ssd130x_state);
>
>  	drm_dev_exit(idx);
>  }
>
> +/* Called during init to allocate the plane's atomic state. */
> +static void ssd130x_primary_plane_reset(struct drm_plane *plane)
> +{
> +	struct ssd130x_plane_state *ssd130x_state;
> +
> +	WARN_ON(plane->state);
> +
> +	ssd130x_state = kzalloc(sizeof(*ssd130x_state), GFP_KERNEL);
> +	if (!ssd130x_state)
> +		return;
> +
> +	__drm_atomic_helper_plane_reset(plane, &ssd130x_state->base);

Use __drm_gem_reset_shadow_plane() here.

> +}
> +
> +static struct drm_plane_state *ssd130x_primary_plane_duplicate_state(struct drm_plane *plane)
> +{
> +	struct ssd130x_plane_state *old_ssd130x_state;
> +	struct ssd130x_plane_state *ssd130x_state;
> +
> +	if (WARN_ON(!plane->state))
> +		return NULL;
> +
> +	old_ssd130x_state = to_ssd130x_plane_state(plane->state);
> +	ssd130x_state = kmemdup(old_ssd130x_state, sizeof(*ssd130x_state), GFP_KERNEL);
> +	if (!ssd130x_state)
> +		return NULL;
> +
> +	/* The buffers are not duplicated and are allocated in .atomic_check */
> +	ssd130x_state->buffer = NULL;
> +	ssd130x_state->data_array = NULL;
> +
> +	__drm_atomic_helper_plane_duplicate_state(plane, &ssd130x_state->base);

Use __drm_gem_duplicate_shadow_plane_state() here.

> +
> +	return &ssd130x_state->base;
> +}
> +
> +static void ssd130x_primary_plane_destroy_state(struct drm_plane *plane,
> +						struct drm_plane_state *state)
> +{
> +	struct ssd130x_plane_state *ssd130x_state = to_ssd130x_plane_state(state);
> +
> +	ssd130x_buf_free(ssd130x_state);
> +
> +	__drm_atomic_helper_plane_destroy_state(&ssd130x_state->base);

Use __drm_gem_destroy_shadow_plane_state() here.

> +
> +	kfree(ssd130x_state);
> +}
> +
>  static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs = {
>  	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,

This macro sets the callbacks that vmap/vunmap the GEM buffer during the
display update. They expect to upcast from drm_plane_state to
drm_shadow_plane_state. And hence, your driver's plane state needs to
inherit from drm_shadow_plane_state.

> -	.atomic_check = drm_plane_helper_atomic_check,
> +	.atomic_check = ssd130x_primary_plane_helper_atomic_check,
>  	.atomic_update = ssd130x_primary_plane_helper_atomic_update,
>  	.atomic_disable = ssd130x_primary_plane_helper_atomic_disable,
>  };
> @@ -665,6 +750,9 @@ static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs =
>  static const struct drm_plane_funcs ssd130x_primary_plane_funcs = {
>  	.update_plane = drm_atomic_helper_update_plane,
>  	.disable_plane = drm_atomic_helper_disable_plane,
> +	.reset = ssd130x_primary_plane_reset,
> +	.atomic_duplicate_state = ssd130x_primary_plane_duplicate_state,
> +	.atomic_destroy_state = ssd130x_primary_plane_destroy_state,
>  	.destroy = drm_plane_cleanup,
>  	DRM_GEM_SHADOW_PLANE_FUNCS,

Arnd already mentioned that this macro now needs to go away.

>  };
> @@ -719,10 +807,6 @@ static void ssd130x_encoder_helper_atomic_enable(struct drm_encoder *encoder,
>  	if (ret)
>  		goto power_off;
>
> -	ret = ssd130x_buf_alloc(ssd130x);
> -	if (ret)
> -		goto power_off;
> -
>  	ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON);
>
>  	backlight_enable(ssd130x->bl_dev);
> @@ -744,8 +828,6 @@ static void ssd130x_encoder_helper_atomic_disable(struct drm_encoder *encoder,
>
>  	ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_OFF);
>
> -	ssd130x_buf_free(ssd130x);
> -
>  	ssd130x_power_off(ssd130x);
>  }
>
> diff --git a/drivers/gpu/drm/solomon/ssd130x.h b/drivers/gpu/drm/solomon/ssd130x.h
> index 161588b1cc4d..87968b3e7fb8 100644
> --- a/drivers/gpu/drm/solomon/ssd130x.h
> +++ b/drivers/gpu/drm/solomon/ssd130x.h
> @@ -89,9 +89,6 @@ struct ssd130x_device {
>  	u8 col_end;
>  	u8 page_start;
>  	u8 page_end;
> -
> -	u8 *buffer;
> -	u8 *data_array;
>  };
>
>  extern const struct ssd130x_deviceinfo ssd130x_variants[];

Best regards,
Thomas

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
Thomas Zimmermann <tzimmermann@suse.de> writes:

Hello Thomas,

Thanks a lot for the feedback!

> Hi Javier,
>
> this patch is completely broken. It's easy to fix though.
>
> On 21.07.23 09:09, Javier Martinez Canillas wrote:

[...]

>> +struct ssd130x_plane_state {
>> +	struct drm_plane_state base;
>
> You need to use a struct drm_shadow_plane_state here.
>

Yes, I already noticed this when testing Arnd's patch to drop
DRM_GEM_SHADOW_PLANE_FUNCS. I already have a patch ready.

[...]

>> static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs = {
>> 	DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
>
> This macro sets the callbacks that vmap/vunmap the GEM buffer during the
> display update. They expect to upcast from drm_plane_state to
> drm_shadow_plane_state. And hence, your driver's plane state needs to
> inherit from drm_shadow_plane_state.
>

Yup, I missed that. I'm now testing my patch and will post it.