[RFC,V1,4/5] x86: CVMs: Allow allocating all DMA memory from SWIOTLB

Message ID 20240112055251.36101-5-vannapurve@google.com
State New
Series x86: CVMs: Align memory conversions to 2M granularity

Commit Message

Vishal Annapurve Jan. 12, 2024, 5:52 a.m. UTC
  Changes include:
1) Allocate all DMA memory from SWIOTLB buffers.
2) Increase the size of SWIOTLB region to accommodate dma_alloc_*
   invocations.
3) Align SWIOTLB regions to 2M size.

Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 arch/x86/kernel/pci-dma.c | 2 +-
 arch/x86/mm/mem_encrypt.c | 8 ++++++--
 2 files changed, 7 insertions(+), 3 deletions(-)
  

Comments

Dave Hansen Jan. 31, 2024, 4:17 p.m. UTC | #1
On 1/11/24 21:52, Vishal Annapurve wrote:
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -112,10 +112,14 @@ void __init mem_encrypt_setup_arch(void)
>  	 * The percentage of guest memory used here for SWIOTLB buffers
>  	 * is more of an approximation of the static adjustment which
>  	 * 64MB for <1G, and ~128M to 256M for 1G-to-4G, i.e., the 6%
> +	 *
> +	 * Extra 2% is added to accommodate the requirement of DMA allocations
> +	 * done using dma_alloc_* APIs.
>  	 */
> -	size = total_mem * 6 / 100;
> -	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
> +	size = total_mem * 8 / 100;
> +	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, (SZ_1G + SZ_256M));
>  	swiotlb_adjust_size(size);
> +	swiotlb_adjust_alignment(SZ_2M);

FWIW, this appears superficially to just be fiddling with random
numbers.  The changelog basically says: "did stuff".

What *are* "the requirement of DMA allocations done using dma_alloc_* APIs"?
  
Vishal Annapurve Feb. 1, 2024, 3:41 a.m. UTC | #2
On Wed, Jan 31, 2024 at 9:47 PM Dave Hansen <dave.hansen@intel.com> wrote:
>
> On 1/11/24 21:52, Vishal Annapurve wrote:
> > --- a/arch/x86/mm/mem_encrypt.c
> > +++ b/arch/x86/mm/mem_encrypt.c
> > @@ -112,10 +112,14 @@ void __init mem_encrypt_setup_arch(void)
> >        * The percentage of guest memory used here for SWIOTLB buffers
> >        * is more of an approximation of the static adjustment which
> >        * 64MB for <1G, and ~128M to 256M for 1G-to-4G, i.e., the 6%
> > +      *
> > +      * Extra 2% is added to accommodate the requirement of DMA allocations
> > +      * done using dma_alloc_* APIs.
> >        */
> > -     size = total_mem * 6 / 100;
> > -     size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
> > +     size = total_mem * 8 / 100;
> > +     size = clamp_val(size, IO_TLB_DEFAULT_SIZE, (SZ_1G + SZ_256M));
> >       swiotlb_adjust_size(size);
> > +     swiotlb_adjust_alignment(SZ_2M);
>
> FWIW, this appears superficially to just be fiddling with random
> numbers.  The changelog basically says: "did stuff".
>
> What *are* "the requirement of DMA allocations done using dma_alloc_* APIs"?

dma_alloc_* invocations depend on the devices used and may change with
time, so it's difficult to calculate the memory required for such
allocations.

Though one could note the following points about memory allocations done
using dma_alloc_* APIs:
1) They generally happen during early setup of device drivers.
2) They should be relatively small compared to the runtime memory
allocations done by the dma_map_* APIs.

This change increases the SWIOTLB memory area by about a third (from 6%
to 8% of guest memory) based on the above observations. The strategy
here is to pick a safe enough heuristic and let dynamic SWIOTLB
allocations handle any spillover.
  

Patch

diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index f323d83e40a7..3dcc3104b2a8 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -61,7 +61,7 @@  static void __init pci_swiotlb_detect(void)
 	 */
 	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
 		x86_swiotlb_enable = true;
-		x86_swiotlb_flags |= SWIOTLB_FORCE;
+		x86_swiotlb_flags |= (SWIOTLB_FORCE | SWIOTLB_ALLOC);
 	}
 }
 #else
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index c290c55b632b..0cf3365b051f 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -112,10 +112,14 @@  void __init mem_encrypt_setup_arch(void)
 	 * The percentage of guest memory used here for SWIOTLB buffers
 	 * is more of an approximation of the static adjustment which
 	 * 64MB for <1G, and ~128M to 256M for 1G-to-4G, i.e., the 6%
+	 *
+	 * Extra 2% is added to accommodate the requirement of DMA allocations
+	 * done using dma_alloc_* APIs.
 	 */
-	size = total_mem * 6 / 100;
-	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, SZ_1G);
+	size = total_mem * 8 / 100;
+	size = clamp_val(size, IO_TLB_DEFAULT_SIZE, (SZ_1G + SZ_256M));
 	swiotlb_adjust_size(size);
+	swiotlb_adjust_alignment(SZ_2M);
 
 	/* Set restricted memory access for virtio. */
 	virtio_set_mem_acc_cb(virtio_require_restricted_mem_acc);