[v3] block: sed-opal: kmalloc the cmd/resp buffers
Commit Message
In accordance with [1], DMA-able memory buffers must be cacheline-aligned,
otherwise the cache write-back and invalidation performed while mapping
them may cause the adjacent data to be lost. This is specifically
required on DMA-noncoherent platforms [2]. Since the opal_dev.{cmd,resp}
buffers are implicitly used for DMA by the NVMe and SCSI/SD drivers via
the nvme_sec_submit() and sd_sec_submit() methods respectively, they
must be cacheline-aligned to prevent the problem described above. One of
the options to guarantee that is to kmalloc the buffers [2]. Let's
explicitly allocate them instead of embedding them in the opal_dev
structure instance.
Note: this fix was inspired by commit c94b7f9bab22 ("nvme-hwmon:
kmalloc the NVME SMART log buffer").
[1] Documentation/core-api/dma-api.rst
[2] Documentation/core-api/dma-api-howto.rst
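
For illustration only (not part of the patch), here is a minimal sketch of
the hazard [1]/[2] describe: on a DMA-noncoherent platform a streaming DMA
mapping of a buffer embedded in a structure can clobber the neighbouring
fields that share its cachelines, whereas a kmalloc()'ed buffer is DMA-safe.
The demo_* names below are hypothetical.

#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_dev {
	spinlock_t lock;	/* may share buf[]'s first cacheline */
	u8 buf[512];		/* only byte-aligned inside the struct */
	unsigned long seq;	/* may share buf[]'s last cacheline */
};

static int demo_rx(struct device *dev, struct demo_dev *dd)
{
	dma_addr_t dma;

	/*
	 * On a noncoherent platform the mapping/unmapping below may
	 * invalidate every cacheline covering buf[]. If 'lock' or 'seq'
	 * live in one of those lines, CPU writes to them can be lost.
	 */
	dma = dma_map_single(dev, dd->buf, sizeof(dd->buf), DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;
	/* ... hand 'dma' to the device and wait for completion ... */
	dma_unmap_single(dev, dma, sizeof(dd->buf), DMA_FROM_DEVICE);
	return 0;
}

/*
 * Safe variant: a kmalloc()'ed buffer never shares a cacheline with
 * unrelated data, which is what this patch relies on for
 * opal_dev.{cmd,resp}.
 */
static u8 *demo_alloc_buf(void)
{
	return kmalloc(512, GFP_KERNEL);
}

Per [2], memory obtained via kmalloc() can be DMA'd to/from directly, which
is exactly the guarantee the patch switches to.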
Fixes: 455a7b238cd6 ("block: Add Sed-opal library")
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
Folks, the NVMe part of the patchset has already been merged:
Link: https://lore.kernel.org/linux-nvme/20220929224648.8997-1-Sergey.Semin@baikalelectronics.ru/
This modification is the only leftover of the original series, so I've
resent it as a separate patch.
Link: https://lore.kernel.org/linux-nvme/20220929224648.8997-4-Sergey.Semin@baikalelectronics.ru/
Changelog v3:
- Convert to allocating the cmd/resp buffers instead of cache-aligning
  them. (@Jonathan)
- Resubmit the patch separately from the original series.
- Rebase onto kernel 6.1-rc3.
---
block/sed-opal.c | 32 ++++++++++++++++++++++++++++----
1 file changed, 28 insertions(+), 4 deletions(-)
Comments
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Mon, 7 Nov 2022 23:39:44 +0300, Serge Semin wrote:
> In accordance with [1], DMA-able memory buffers must be cacheline-aligned,
> otherwise the cache write-back and invalidation performed while mapping
> them may cause the adjacent data to be lost. This is specifically
> required on DMA-noncoherent platforms [2]. Since the opal_dev.{cmd,resp}
> buffers are implicitly used for DMA by the NVMe and SCSI/SD drivers via
> the nvme_sec_submit() and sd_sec_submit() methods respectively, they
> must be cacheline-aligned to prevent the problem described above. One of
> the options to guarantee that is to kmalloc the buffers [2]. Let's
> explicitly allocate them instead of embedding them in the opal_dev
> structure instance.
>
> [...]
Applied, thanks!
[1/1] block: sed-opal: kmalloc the cmd/resp buffers
(no commit info)
Best regards,
diff --git a/block/sed-opal.c b/block/sed-opal.c
--- a/block/sed-opal.c
+++ b/block/sed-opal.c
@@ -87,8 +87,8 @@ struct opal_dev {
 	u64 lowest_lba;
 
 	size_t pos;
-	u8 cmd[IO_BUFFER_LENGTH];
-	u8 resp[IO_BUFFER_LENGTH];
+	u8 *cmd;
+	u8 *resp;
 
 	struct parsed_resp parsed;
 	size_t prev_d_len;
@@ -2175,6 +2175,8 @@ void free_opal_dev(struct opal_dev *dev)
 		return;
 
 	clean_opal_dev(dev);
+	kfree(dev->resp);
+	kfree(dev->cmd);
 	kfree(dev);
 }
 EXPORT_SYMBOL(free_opal_dev);
@@ -2187,6 +2189,18 @@ struct opal_dev *init_opal_dev(void *data, sec_send_recv *send_recv)
 	if (!dev)
 		return NULL;
 
+	/*
+	 * Presumably DMA-able buffers must be cache-aligned. Kmalloc makes
+	 * sure the allocated buffer is DMA-safe in that regard.
+	 */
+	dev->cmd = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
+	if (!dev->cmd)
+		goto err_free_dev;
+
+	dev->resp = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
+	if (!dev->resp)
+		goto err_free_cmd;
+
 	INIT_LIST_HEAD(&dev->unlk_lst);
 	mutex_init(&dev->dev_lock);
 	dev->flags = 0;
@@ -2194,11 +2208,21 @@ struct opal_dev *init_opal_dev(void *data, sec_send_recv *send_recv)
 	dev->send_recv = send_recv;
 	if (check_opal_support(dev) != 0) {
 		pr_debug("Opal is not supported on this device\n");
-		kfree(dev);
-		return NULL;
+		goto err_free_resp;
 	}
 
 	return dev;
+
+err_free_resp:
+	kfree(dev->resp);
+
+err_free_cmd:
+	kfree(dev->cmd);
+
+err_free_dev:
+	kfree(dev);
+
+	return NULL;
 }
 EXPORT_SYMBOL(init_opal_dev);
 
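
For context, and again only as a hedged sketch (the my_* names are
hypothetical), this is roughly how a storage driver wires its Security
Send/Receive path into the sed-opal core; the callback prototype follows
the sec_send_recv typedef from include/linux/sed-opal.h. It is this path
that ultimately DMA-maps dev->cmd/dev->resp inside the NVMe/SCSI
transports, which is why the buffers now come from kmalloc().

#include <linux/sed-opal.h>
#include <linux/types.h>

struct my_ctrl {
	struct opal_dev *opal_dev;
	/* ... transport-specific state ... */
};

/* Stub standing in for the transport's Security Send/Receive command;
 * the real NVMe/SCSI implementation DMA-maps @buffer down this path. */
static int my_security_cmd(struct my_ctrl *ctrl, u16 spsp, u8 secp,
			   void *buffer, size_t len, bool send)
{
	return 0;
}

/* Matches the sec_send_recv prototype expected by init_opal_dev(). */
static int my_sec_submit(void *data, u16 spsp, u8 secp,
			 void *buffer, size_t len, bool send)
{
	return my_security_cmd(data, spsp, secp, buffer, len, send);
}

static void my_opal_init(struct my_ctrl *ctrl)
{
	/*
	 * With this patch init_opal_dev() also kmalloc()s the cmd/resp
	 * buffers, so a NULL return covers both an allocation failure
	 * and an Opal-unsupported device.
	 */
	ctrl->opal_dev = init_opal_dev(ctrl, my_sec_submit);
}

static void my_opal_exit(struct my_ctrl *ctrl)
{
	free_opal_dev(ctrl->opal_dev);	/* frees cmd/resp, then the dev */
	ctrl->opal_dev = NULL;
}

Note that callers need no changes for this patch; the buffer allocation and
lifetime are fully contained in init_opal_dev()/free_opal_dev().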