[v1,0/2] nvme-pci: Fix dma-iommu mapping failures when PAGE_SIZE=64KB

Message ID cover.1707851466.git.nicolinc@nvidia.com
Series: nvme-pci: Fix dma-iommu mapping failures when PAGE_SIZE=64KB

Message

Nicolin Chen Feb. 13, 2024, 9:53 p.m. UTC
It's observed that an NVMe device causes timeouts when Ubuntu boots with
a kernel configured with PAGE_SIZE=64KB, due to swiotlb allocation failures:
    systemd[1]: Started Journal Service.
 => nvme 0000:00:01.0: swiotlb buffer is full (sz: 327680 bytes), total 32768 (slots), used 32 (slots)
    note: journal-offline[392] exited with irqs disabled
    note: journal-offline[392] exited with preempt_count 1

An NVMe device under a PCIe bus can be behind an IOMMU, so DMA mappings
going through dma-iommu might also be redirected to swiotlb bounce
buffers. Similar to dma_direct_max_mapping_size(), dma-iommu should
implement dma_map_ops->max_mapping_size to return
swiotlb_max_mapping_size() as well.

Though an iommu_dma_max_mapping_size() is a must, it alone can't fix the
issue. swiotlb_max_mapping_size() returns 252KB, i.e. the default 256KB
pool segment minus the rounded-up min_align_mask (NVME_CTRL_PAGE_SIZE=4KB),
while dma-iommu can round a 252KB mapping back up to 256KB in its
"alloc_size" when PAGE_SIZE=64KB, via the IOVA granule that is typically
set to PAGE_SIZE. This mismatch between NVME_CTRL_PAGE_SIZE=4KB and
PAGE_SIZE=64KB results in a similar failure, though its signature has a
fixed size of 256KB:
    systemd[1]: Started Journal Service.
 => nvme 0000:00:01.0: swiotlb buffer is full (sz: 262144 bytes), total 32768 (slots), used 128 (slots)
    note: journal-offline[392] exited with irqs disabled
    note: journal-offline[392] exited with preempt_count 1

Both failures above occur with NVMe behind an IOMMU when PAGE_SIZE=64KB.
They were likely introduced by the bounce-buffering security feature added
in commit 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers").

So this series bundles two fixes against that commit. They should be
taken at the same time to entirely fix the mapping failures.

Thanks
Nicolin

Nicolin Chen (2):
  iommu/dma: Force swiotlb_max_mapping_size on an untrusted device
  nvme-pci: Fix iommu map (via swiotlb) failures when PAGE_SIZE=64KB

 drivers/iommu/dma-iommu.c | 8 ++++++++
 drivers/nvme/host/pci.c   | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)
  

Comments

Will Deacon Feb. 14, 2024, 4:41 p.m. UTC | #1
Hi Nicolin,

On Tue, Feb 13, 2024 at 01:53:55PM -0800, Nicolin Chen wrote:
> It's observed that an NVME device is causing timeouts when Ubuntu boots
> with a kernel configured with PAGE_SIZE=64KB due to failures in swiotlb:
>     systemd[1]: Started Journal Service.
>  => nvme 0000:00:01.0: swiotlb buffer is full (sz: 327680 bytes), total 32768 (slots), used 32 (slots)
>     note: journal-offline[392] exited with irqs disabled
>     note: journal-offline[392] exited with preempt_count 1
> 
> An NVME device under a PCIe bus can be behind an IOMMU, so dma mappings
> going through dma-iommu might be also redirected to swiotlb allocations.
> Similar to dma_direct_max_mapping_size(), dma-iommu should implement its
> dma_map_ops->max_mapping_size to return swiotlb_max_mapping_size() too.
> 
> Though an iommu_dma_max_mapping_size() is a must, it alone can't fix the
> issue. The swiotlb_max_mapping_size() returns 252KB, calculated from the
> default pool 256KB subtracted by min_align_mask NVME_CTRL_PAGE_SIZE=4KB,
> while dma-iommu can roundup a 252KB mapping to 256KB at its "alloc_size"
> when PAGE_SIZE=64KB via iova->granule that is often set to PAGE_SIZE. So
> this mismatch between NVME_CTRL_PAGE_SIZE=4KB and PAGE_SIZE=64KB results
> in a similar failure, though its signature has a fixed size "256KB":
>     systemd[1]: Started Journal Service.
>  => nvme 0000:00:01.0: swiotlb buffer is full (sz: 262144 bytes), total 32768 (slots), used 128 (slots)
>     note: journal-offline[392] exited with irqs disabled
>     note: journal-offline[392] exited with preempt_count 1
> 
> Both failures above occur to NVME behind IOMMU when PAGE_SIZE=64KB. They
> were likely introduced for the security feature by:
> commit 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers"),
> 
> So, this series bundles two fixes together against that. They should be
> taken at the same time to entirely fix the mapping failures.

It's a bit of a shot in the dark, but I've got a pending fix to some of
the alignment handling in swiotlb. It would be interesting to know if
patch 1 has any impact at all on your NVME allocations:

https://lore.kernel.org/r/20240205190127.20685-1-will@kernel.org

Cheers,

Will
  
Will Deacon Feb. 15, 2024, 2:22 p.m. UTC | #2
On Wed, Feb 14, 2024 at 11:57:32AM -0800, Nicolin Chen wrote:
> On Wed, Feb 14, 2024 at 04:41:38PM +0000, Will Deacon wrote:
> > On Tue, Feb 13, 2024 at 01:53:55PM -0800, Nicolin Chen wrote:
> > > It's observed that an NVME device is causing timeouts when Ubuntu boots
> > > with a kernel configured with PAGE_SIZE=64KB due to failures in swiotlb:
> > >     systemd[1]: Started Journal Service.
> > >  => nvme 0000:00:01.0: swiotlb buffer is full (sz: 327680 bytes), total 32768 (slots), used 32 (slots)
> > >     note: journal-offline[392] exited with irqs disabled
> > >     note: journal-offline[392] exited with preempt_count 1
> > >
> > > An NVME device under a PCIe bus can be behind an IOMMU, so dma mappings
> > > going through dma-iommu might be also redirected to swiotlb allocations.
> > > Similar to dma_direct_max_mapping_size(), dma-iommu should implement its
> > > dma_map_ops->max_mapping_size to return swiotlb_max_mapping_size() too.
> > >
> > > Though an iommu_dma_max_mapping_size() is a must, it alone can't fix the
> > > issue. The swiotlb_max_mapping_size() returns 252KB, calculated from the
> > > default pool 256KB subtracted by min_align_mask NVME_CTRL_PAGE_SIZE=4KB,
> > > while dma-iommu can roundup a 252KB mapping to 256KB at its "alloc_size"
> > > when PAGE_SIZE=64KB via iova->granule that is often set to PAGE_SIZE. So
> > > this mismatch between NVME_CTRL_PAGE_SIZE=4KB and PAGE_SIZE=64KB results
> > > in a similar failure, though its signature has a fixed size "256KB":
> > >     systemd[1]: Started Journal Service.
> > >  => nvme 0000:00:01.0: swiotlb buffer is full (sz: 262144 bytes), total 32768 (slots), used 128 (slots)
> > >     note: journal-offline[392] exited with irqs disabled
> > >     note: journal-offline[392] exited with preempt_count 1
> > >
> > > Both failures above occur to NVME behind IOMMU when PAGE_SIZE=64KB. They
> > > were likely introduced for the security feature by:
> > > commit 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers"),
> > >
> > > So, this series bundles two fixes together against that. They should be
> > > taken at the same time to entirely fix the mapping failures.
> > 
> > It's a bit of a shot in the dark, but I've got a pending fix to some of
> > the alignment handling in swiotlb. It would be interesting to know if
> > patch 1 has any impact at all on your NVME allocations:
> > 
> > https://lore.kernel.org/r/20240205190127.20685-1-will@kernel.org
> 
> I applied these three patches locally for a test.

Thank you!

> Though I am building with a v6.6 kernel, I see some warnings:
>                  from kernel/dma/swiotlb.c:26:
> kernel/dma/swiotlb.c: In function ‘swiotlb_area_find_slots’:
> ./include/linux/minmax.h:21:35: warning: comparison of distinct pointer types lacks a cast
>    21 |         (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
>       |                                   ^~
> ./include/linux/minmax.h:27:18: note: in expansion of macro ‘__typecheck’
>    27 |                 (__typecheck(x, y) && __no_side_effects(x, y))
>       |                  ^~~~~~~~~~~
> ./include/linux/minmax.h:37:31: note: in expansion of macro ‘__safe_cmp’
>    37 |         __builtin_choose_expr(__safe_cmp(x, y), \
>       |                               ^~~~~~~~~~
> ./include/linux/minmax.h:75:25: note: in expansion of macro ‘__careful_cmp’
>    75 | #define max(x, y)       __careful_cmp(x, y, >)
>       |                         ^~~~~~~~~~~~~
> kernel/dma/swiotlb.c:1007:26: note: in expansion of macro ‘max’
>  1007 |                 stride = max(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
>       |                          ^~~
> 
> Replacing with a max_t() can fix these.

Weird, I haven't seen that. I can fix it as you suggest, but please can
you also share your .config so I can look into it further?

> And it seems to get worse, as even a 64KB mapping is failing:
> [    0.239821] nvme 0000:00:01.0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
> 
> With a printk, I found the iotlb_align_mask isn't correct:
>    swiotlb_area_find_slots:alloc_align_mask 0xffff, iotlb_align_mask 0x800
> 
> But fixing the iotlb_align_mask to 0x7ff still fails the 64KB
> mapping..

Hmm. A mask of 0x7ff doesn't make a lot of sense given that the slabs
are 2KiB aligned. I'll try plugging in some of the constants you have
here, as something definitely isn't right...

Will
  
Will Deacon Feb. 15, 2024, 4:35 p.m. UTC | #3
On Thu, Feb 15, 2024 at 02:22:09PM +0000, Will Deacon wrote:
> On Wed, Feb 14, 2024 at 11:57:32AM -0800, Nicolin Chen wrote:
> > On Wed, Feb 14, 2024 at 04:41:38PM +0000, Will Deacon wrote:
> > > On Tue, Feb 13, 2024 at 01:53:55PM -0800, Nicolin Chen wrote:
> > And it seems to get worse, as even a 64KB mapping is failing:
> > [    0.239821] nvme 0000:00:01.0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
> > 
> > With a printk, I found the iotlb_align_mask isn't correct:
> >    swiotlb_area_find_slots:alloc_align_mask 0xffff, iotlb_align_mask 0x800
> > 
> > But fixing the iotlb_align_mask to 0x7ff still fails the 64KB
> > mapping..
> 
> Hmm. A mask of 0x7ff doesn't make a lot of sense given that the slabs
> are 2KiB aligned. I'll try plugging in some of the constants you have
> here, as something definitely isn't right...

Sorry, another ask: please can you print 'orig_addr' in the case of the
failing allocation?

Thanks!

Will
  
Nicolin Chen Feb. 16, 2024, 12:29 a.m. UTC | #4
On Thu, Feb 15, 2024 at 02:22:09PM +0000, Will Deacon wrote:

> > Though I am building with a v6.6 kernel, I see some warnings:
> >                  from kernel/dma/swiotlb.c:26:
> > kernel/dma/swiotlb.c: In function ‘swiotlb_area_find_slots’:
> > ./include/linux/minmax.h:21:35: warning: comparison of distinct pointer types lacks a cast
> >    21 |         (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
> >       |                                   ^~
> > ./include/linux/minmax.h:27:18: note: in expansion of macro ‘__typecheck’
> >    27 |                 (__typecheck(x, y) && __no_side_effects(x, y))
> >       |                  ^~~~~~~~~~~
> > ./include/linux/minmax.h:37:31: note: in expansion of macro ‘__safe_cmp’
> >    37 |         __builtin_choose_expr(__safe_cmp(x, y), \
> >       |                               ^~~~~~~~~~
> > ./include/linux/minmax.h:75:25: note: in expansion of macro ‘__careful_cmp’
> >    75 | #define max(x, y)       __careful_cmp(x, y, >)
> >       |                         ^~~~~~~~~~~~~
> > kernel/dma/swiotlb.c:1007:26: note: in expansion of macro ‘max’
> >  1007 |                 stride = max(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
> >       |                          ^~~
> >
> > Replacing with a max_t() can fix these.
> 
> Weird, I haven't seen that. I can fix it as you suggest, but please can
> you also share your .config so I can look into it further?

I attached it in my previous reply, but forgot to mention my gcc info
before hitting the send key:

# gcc -dumpmachine
aarch64-linux-gnu
# gcc --version
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
  
Will Deacon Feb. 16, 2024, 4:13 p.m. UTC | #5
Hi Nicolin,

Thanks for sharing all the logs, .config etc.

On Thu, Feb 15, 2024 at 04:26:23PM -0800, Nicolin Chen wrote:
> On Thu, Feb 15, 2024 at 04:35:45PM +0000, Will Deacon wrote:
> > On Thu, Feb 15, 2024 at 02:22:09PM +0000, Will Deacon wrote:
> > > On Wed, Feb 14, 2024 at 11:57:32AM -0800, Nicolin Chen wrote:
> > > > On Wed, Feb 14, 2024 at 04:41:38PM +0000, Will Deacon wrote:
> > > > > On Tue, Feb 13, 2024 at 01:53:55PM -0800, Nicolin Chen wrote:
> > > > And it seems to get worse, as even a 64KB mapping is failing:
> > > > [    0.239821] nvme 0000:00:01.0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
> > > >
> > > > With a printk, I found the iotlb_align_mask isn't correct:
> > > >    swiotlb_area_find_slots:alloc_align_mask 0xffff, iotlb_align_mask 0x800
> > > >
> > > > But fixing the iotlb_align_mask to 0x7ff still fails the 64KB
> > > > mapping..
> > >
> > > Hmm. A mask of 0x7ff doesn't make a lot of sense given that the slabs
> > > are 2KiB aligned. I'll try plugging in some of the constants you have
> > > here, as something definitely isn't right...
> > 
> > Sorry, another ask: please can you print 'orig_addr' in the case of the
> > failing allocation?
> 
> I added nvme_print_sgl() in the nvme-pci driver before its
> dma_map_sgtable() call, so the orig_addr isn't aligned with
> PAGE_SIZE=64K or NVME_CTRL_PAGE_SIZE=4K:
>  sg[0] phys_addr:0x0000000105774600 offset:17920 length:512 dma_address:0x0000000000000000 dma_length:0
> 
> Also attaching some verbose logs, in case you'd like to check:
>    nvme 0000:00:01.0: swiotlb_area_find_slots: dma_get_min_align_mask 0xfff, IO_TLB_SIZE 0xfffff7ff
>    nvme 0000:00:01.0: swiotlb_area_find_slots: alloc_align_mask 0xffff, iotlb_align_mask 0x7ff
>    nvme 0000:00:01.0: swiotlb_area_find_slots: stride 0x20, max 0xffff
>    nvme 0000:00:01.0: swiotlb_area_find_slots: tlb_addr=0xbd830000, iotlb_align_mask=0x7ff, alloc_align_mask=0xffff
> => nvme 0000:00:01.0: swiotlb_area_find_slots: orig_addr=0x105774600, iotlb_align_mask=0x7ff

With my patches, I think 'iotlb_align_mask' will be 0x800 here, so this
particular allocation might be alright. However, I think I'm starting to
see the wider problem. The IOMMU code is asking for a 64k-aligned
allocation so that it can map it safely, but at the same time
dma_get_min_align_mask() is asking for congruence in the 4k NVME page
offset. Now, because we're going to allocate a 64k-aligned mapping and
offset it, I think the NVME alignment will just fall out in the wash and
checking the 'orig_addr' (which includes the offset) is wrong.

So perhaps this diff (which I'm sadly not able to test) will help? You'll
want to apply it on top of my other patches. The idea is to ignore the
bits of 'orig_addr' which will be aligned automatically by offsetting from
the aligned allocation. I fixed the max() thing too, although that's only
an issue for older kernels.

Cheers,

Will

--->8

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 283eea33dd22..4a000d97f568 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -981,8 +981,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
        dma_addr_t tbl_dma_addr =
                phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
        unsigned long max_slots = get_max_slots(boundary_mask);
-       unsigned int iotlb_align_mask =
-               dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
+       unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
        unsigned int nslots = nr_slots(alloc_size), stride;
        unsigned int offset = swiotlb_align_offset(dev, orig_addr);
        unsigned int index, slots_checked, count = 0, i;
@@ -993,6 +992,9 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
        BUG_ON(!nslots);
        BUG_ON(area_index >= pool->nareas);

+       alloc_align_mask |= (IO_TLB_SIZE - 1);
+       iotlb_align_mask &= ~alloc_align_mask;
+
        /*
         * For mappings with an alignment requirement don't bother looping to
         * unaligned slots once we found an aligned one.
@@ -1004,7 +1006,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
         * allocations.
         */
        if (alloc_size >= PAGE_SIZE)
-               stride = max(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
+               stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);

        spin_lock_irqsave(&area->lock, flags);
        if (unlikely(nslots > pool->area_nslabs - area->used))
  
Nicolin Chen Feb. 17, 2024, 5:19 a.m. UTC | #6
Hi Will,

On Fri, Feb 16, 2024 at 04:13:12PM +0000, Will Deacon wrote:
> On Thu, Feb 15, 2024 at 04:26:23PM -0800, Nicolin Chen wrote:
> > On Thu, Feb 15, 2024 at 04:35:45PM +0000, Will Deacon wrote:
> > > On Thu, Feb 15, 2024 at 02:22:09PM +0000, Will Deacon wrote:
> > > > On Wed, Feb 14, 2024 at 11:57:32AM -0800, Nicolin Chen wrote:
> > > > > On Wed, Feb 14, 2024 at 04:41:38PM +0000, Will Deacon wrote:
> > > > > > On Tue, Feb 13, 2024 at 01:53:55PM -0800, Nicolin Chen wrote:
> > > > > And it seems to get worse, as even a 64KB mapping is failing:
> > > > > [    0.239821] nvme 0000:00:01.0: swiotlb buffer is full (sz: 65536 bytes), total 32768 (slots), used 0 (slots)
> > > > >
> > > > > With a printk, I found the iotlb_align_mask isn't correct:
> > > > >    swiotlb_area_find_slots:alloc_align_mask 0xffff, iotlb_align_mask 0x800
> > > > >
> > > > > But fixing the iotlb_align_mask to 0x7ff still fails the 64KB
> > > > > mapping..
> > > >
> > > > Hmm. A mask of 0x7ff doesn't make a lot of sense given that the slabs
> > > > are 2KiB aligned. I'll try plugging in some of the constants you have
> > > > here, as something definitely isn't right...
> > >
> > > Sorry, another ask: please can you print 'orig_addr' in the case of the
> > > failing allocation?
> >
> > I added nvme_print_sgl() in the nvme-pci driver before its
> > dma_map_sgtable() call, so the orig_addr isn't aligned with
> > PAGE_SIZE=64K or NVME_CTRL_PAGE_SIZE=4K:
> >  sg[0] phys_addr:0x0000000105774600 offset:17920 length:512 dma_address:0x0000000000000000 dma_length:0
> >
> > Also attaching some verbose logs, in case you'd like to check:
> >    nvme 0000:00:01.0: swiotlb_area_find_slots: dma_get_min_align_mask 0xfff, IO_TLB_SIZE 0xfffff7ff
> >    nvme 0000:00:01.0: swiotlb_area_find_slots: alloc_align_mask 0xffff, iotlb_align_mask 0x7ff
> >    nvme 0000:00:01.0: swiotlb_area_find_slots: stride 0x20, max 0xffff
> >    nvme 0000:00:01.0: swiotlb_area_find_slots: tlb_addr=0xbd830000, iotlb_align_mask=0x7ff, alloc_align_mask=0xffff
> > => nvme 0000:00:01.0: swiotlb_area_find_slots: orig_addr=0x105774600, iotlb_align_mask=0x7ff
> 
> With my patches, I think 'iotlb_align_mask' will be 0x800 here, so this

Oops, my bad. I forgot to revert the part that I mentioned in
my previous reply.

> particular allocation might be alright, however I think I'm starting to
> see the wider problem. The IOMMU code is asking for a 64k-aligned
> allocation so that it can map it safely, but at the same time
> dma_get_min_align_mask() is asking for congruence in the 4k NVME page
> offset. Now, because we're going to allocate a 64k-aligned mapping and
> offset it, I think the NVME alignment will just fall out in the wash and
> checking the 'orig_addr' (which includes the offset) is wrong.
> 
> So perhaps this diff (which I'm sadly not able to test) will help? You'll
> want to apply it on top of my other patches. The idea is to ignore the
> bits of 'orig_addr' which will be aligned automatically by offseting from
> the aligned allocation. I fixed the max() thing too, although that's only
> an issue for older kernels.
 
Yea, I tested all four patches. They still failed on some large
mappings, until I applied my PATCH-1 implementing the
max_mapping_size op on top of them. IOW, with your patches it looks
like the 252KB max_mapping_size is working :)

Though we seem to have a solution now, I hope we can make it
applicable to older kernels too. The mapping failure on arm64
with PAGE_SIZE=64KB looks like a regression to me, ever since
dma-iommu started to use the swiotlb bounce buffer.

Thanks
Nicolin

> --->8
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 283eea33dd22..4a000d97f568 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -981,8 +981,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>         dma_addr_t tbl_dma_addr =
>                 phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
>         unsigned long max_slots = get_max_slots(boundary_mask);
> -       unsigned int iotlb_align_mask =
> -               dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
> +       unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
>         unsigned int nslots = nr_slots(alloc_size), stride;
>         unsigned int offset = swiotlb_align_offset(dev, orig_addr);
>         unsigned int index, slots_checked, count = 0, i;
> @@ -993,6 +992,9 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>         BUG_ON(!nslots);
>         BUG_ON(area_index >= pool->nareas);
> 
> +       alloc_align_mask |= (IO_TLB_SIZE - 1);
> +       iotlb_align_mask &= ~alloc_align_mask;
> +
>         /*
>          * For mappings with an alignment requirement don't bother looping to
>          * unaligned slots once we found an aligned one.
> @@ -1004,7 +1006,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
>          * allocations.
>          */
>         if (alloc_size >= PAGE_SIZE)
> -               stride = max(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
> +               stride = umax(stride, PAGE_SHIFT - IO_TLB_SHIFT + 1);
> 
>         spin_lock_irqsave(&area->lock, flags);
>         if (unlikely(nslots > pool->area_nslabs - area->used))
>