[03/12] KVM: arm64: Block unsafe FF-A calls from the host
Commit Message
From: Will Deacon <will@kernel.org>
When KVM is initialised in protected mode, we must take care to filter
certain FF-A calls from the host kernel so that the integrity of guest
and hypervisor memory is maintained and that memory is not made
available to the secure world.
As a first step, intercept and block all memory-related FF-A SMC calls
from the host to EL3. This puts the framework in place for handling them
properly.
Co-developed-by: Andrew Walbran <qwandor@google.com>
Signed-off-by: Andrew Walbran <qwandor@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
---
arch/arm64/kvm/hyp/include/nvhe/ffa.h | 16 ++++
arch/arm64/kvm/hyp/nvhe/Makefile | 2 +-
arch/arm64/kvm/hyp/nvhe/ffa.c | 113 ++++++++++++++++++++++++++
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 3 +
4 files changed, 133 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kvm/hyp/include/nvhe/ffa.h
create mode 100644 arch/arm64/kvm/hyp/nvhe/ffa.c
Comments
On Wed, Nov 16, 2022 at 05:03:26PM +0000, Quentin Perret wrote:
> From: Will Deacon <will@kernel.org>
>
> When KVM is initialised in protected mode, we must take care to filter
> certain FFA calls from the host kernel so that the integrity of guest
> and hypervisor memory is maintained and is not made available to the
> secure world.
>
> As a first step, intercept and block all memory-related FF-A SMC calls
> from the host to EL3. This puts the framework in place for handling them
> properly.
Shouldn't FFA_FEATURES interception actually precede this patch? At this
point in the series we're outright lying about the supported features to
the host.
--
Thanks,
Oliver
Sorry, hit send a bit too early. Reviewing the patch itself:
On Wed, Nov 16, 2022 at 05:03:26PM +0000, Quentin Perret wrote:
[...]
> +static bool ffa_call_unsupported(u64 func_id)
> +{
> + switch (func_id) {
> + /* Unsupported memory management calls */
> + case FFA_FN64_MEM_RETRIEVE_REQ:
> + case FFA_MEM_RETRIEVE_RESP:
> + case FFA_MEM_RELINQUISH:
> + case FFA_MEM_OP_PAUSE:
> + case FFA_MEM_OP_RESUME:
> + case FFA_MEM_FRAG_RX:
> + case FFA_FN64_MEM_DONATE:
> + /* Indirect message passing via RX/TX buffers */
> + case FFA_MSG_SEND:
> + case FFA_MSG_POLL:
> + case FFA_MSG_WAIT:
> + /* 32-bit variants of 64-bit calls */
> + case FFA_MSG_SEND_DIRECT_REQ:
> + case FFA_MSG_SEND_DIRECT_RESP:
> + case FFA_RXTX_MAP:
> + case FFA_MEM_DONATE:
> + case FFA_MEM_RETRIEVE_REQ:
> + return true;
> + }
> +
> + return false;
> +}
Wouldn't an allowlist behave better in this case? While unlikely, you
wouldn't want EL3 implementing some FFA_BACKDOOR_PVM SMC that falls
outside of the denylist and is passed through.
> +bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt)
> +{
> + DECLARE_REG(u64, func_id, host_ctxt, 0);
> + struct arm_smccc_res res;
> +
> + if (!is_ffa_call(func_id))
> + return false;
> +
> + switch (func_id) {
> + /* Memory management */
> + case FFA_FN64_RXTX_MAP:
> + case FFA_RXTX_UNMAP:
> + case FFA_MEM_SHARE:
> + case FFA_FN64_MEM_SHARE:
> + case FFA_MEM_LEND:
> + case FFA_FN64_MEM_LEND:
> + case FFA_MEM_RECLAIM:
> + case FFA_MEM_FRAG_TX:
> + break;
> + }
What is the purpose of this switch?
> +
> + if (!ffa_call_unsupported(func_id))
> + return false; /* Pass through */
Another (tiny) benefit of implementing an allowlist is that it avoids
the use of double-negative logic like this.
--
Thanks,
Oliver
On Wed, Nov 16, 2022 at 05:40:48PM +0000, Oliver Upton wrote:
> On Wed, Nov 16, 2022 at 05:03:26PM +0000, Quentin Perret wrote:
> > From: Will Deacon <will@kernel.org>
> >
> > When KVM is initialised in protected mode, we must take care to filter
> > certain FFA calls from the host kernel so that the integrity of guest
> > and hypervisor memory is maintained and is not made available to the
> > secure world.
> >
> > As a first step, intercept and block all memory-related FF-A SMC calls
> > from the host to EL3. This puts the framework in place for handling them
> > properly.
>
> Shouldn't FFA_FEATURES interception actually precede this patch? At this
> point in the series we're outright lying about the supported features to
> the host.
FF-A is in a pretty sorry state after this patch as we block all the memory
transactions, but I take your point that we should be consistent and not
advertise the features that we're blocking.
I'll return FFA_RET_NOT_SUPPORTED for all FFA_FEATURES calls until the
interception patch comes in later and does something smarter.
Will
On Wed, Nov 16, 2022 at 05:48:42PM +0000, Oliver Upton wrote:
> Sorry, hit send a bit too early. Reviewing the patch itself:
>
> On Wed, Nov 16, 2022 at 05:03:26PM +0000, Quentin Perret wrote:
>
> [...]
>
> > +static bool ffa_call_unsupported(u64 func_id)
> > +{
> > + switch (func_id) {
> > + /* Unsupported memory management calls */
> > + case FFA_FN64_MEM_RETRIEVE_REQ:
> > + case FFA_MEM_RETRIEVE_RESP:
> > + case FFA_MEM_RELINQUISH:
> > + case FFA_MEM_OP_PAUSE:
> > + case FFA_MEM_OP_RESUME:
> > + case FFA_MEM_FRAG_RX:
> > + case FFA_FN64_MEM_DONATE:
> > + /* Indirect message passing via RX/TX buffers */
> > + case FFA_MSG_SEND:
> > + case FFA_MSG_POLL:
> > + case FFA_MSG_WAIT:
> > + /* 32-bit variants of 64-bit calls */
> > + case FFA_MSG_SEND_DIRECT_REQ:
> > + case FFA_MSG_SEND_DIRECT_RESP:
> > + case FFA_RXTX_MAP:
> > + case FFA_MEM_DONATE:
> > + case FFA_MEM_RETRIEVE_REQ:
> > + return true;
> > + }
> > +
> > + return false;
> > +}
>
> Wouldn't an allowlist behave better in this case? While unlikely, you
> wouldn't want EL3 implementing some FFA_BACKDOOR_PVM SMC that falls
> outside of the denylist and is passed through.
Given that we're not intercepting all SMCs (rather, only those in the
FF-A service range), I think the denylist works much better because the
default action is "allow". We _have_ to trust EL3 regardless, as it
could just use any allowed SMC for a backdoor if it wanted. Ultimately,
EL3 runs the show in the hierarchical security model of the architecture
and we sadly can't do much about that.
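To illustrate with a concrete value (quoting the numbering from memory, so
worth double-checking): FFA_MEM_SHARE is SMC 0x84000073, i.e. a fast call
with owner ARM_SMCCC_OWNER_STANDARD and function number 0x73, which lands
inside the [FFA_MIN_FUNC_NUM, FFA_MAX_FUNC_NUM] window of [0x60, 0x7f], so
is_ffa_call() matches it and the denylist applies. Anything outside that
standard-service window never reaches kvm_host_ffa_handler() at all and is
forwarded to EL3 exactly as before.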
> > +bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt)
> > +{
> > + DECLARE_REG(u64, func_id, host_ctxt, 0);
> > + struct arm_smccc_res res;
> > +
> > + if (!is_ffa_call(func_id))
> > + return false;
> > +
> > + switch (func_id) {
> > + /* Memory management */
> > + case FFA_FN64_RXTX_MAP:
> > + case FFA_RXTX_UNMAP:
> > + case FFA_MEM_SHARE:
> > + case FFA_FN64_MEM_SHARE:
> > + case FFA_MEM_LEND:
> > + case FFA_FN64_MEM_LEND:
> > + case FFA_MEM_RECLAIM:
> > + case FFA_MEM_FRAG_TX:
> > + break;
> > + }
>
> What is the purpose of this switch?
As of this patch, it's not serving any functional purpose. The idea is
that later patches hook in here to provide handling at EL2. I'll remove
it and introduce it bit by bit.
>
> > +
> > + if (!ffa_call_unsupported(func_id))
> > + return false; /* Pass through */
>
> Another (tiny) benefit of implementing an allowlist is that it avoids
> the use of double-negative logic like this.
I should just rework this to be ffa_call_supported().
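i.e. the same list with the polarity flipped, something like (untested):

	static bool ffa_call_supported(u64 func_id)
	{
		switch (func_id) {
		/* Unsupported memory management calls */
		case FFA_FN64_MEM_RETRIEVE_REQ:
		case FFA_MEM_RETRIEVE_RESP:
		case FFA_MEM_RELINQUISH:
		case FFA_MEM_OP_PAUSE:
		case FFA_MEM_OP_RESUME:
		case FFA_MEM_FRAG_RX:
		case FFA_FN64_MEM_DONATE:
		/* Indirect message passing via RX/TX buffers */
		case FFA_MSG_SEND:
		case FFA_MSG_POLL:
		case FFA_MSG_WAIT:
		/* 32-bit variants of 64-bit calls */
		case FFA_MSG_SEND_DIRECT_REQ:
		case FFA_MSG_SEND_DIRECT_RESP:
		case FFA_RXTX_MAP:
		case FFA_MEM_DONATE:
		case FFA_MEM_RETRIEVE_REQ:
			return false;
		}

		return true;
	}

with the call site then losing the double negative:

	if (ffa_call_supported(func_id))
		return false; /* Pass through */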
Will
diff --git a/arch/arm64/kvm/hyp/include/nvhe/ffa.h b/arch/arm64/kvm/hyp/include/nvhe/ffa.h
new file mode 100644
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 - Google LLC
+ * Author: Andrew Walbran <qwandor@google.com>
+ */
+#ifndef __KVM_HYP_FFA_H
+#define __KVM_HYP_FFA_H
+
+#include <asm/kvm_host.h>
+
+#define FFA_MIN_FUNC_NUM 0x60
+#define FFA_MAX_FUNC_NUM 0x7F
+
+bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt);
+
+#endif /* __KVM_HYP_FFA_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -22,7 +22,7 @@ lib-objs := $(addprefix ../../../lib/, $(lib-objs))
hyp-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \
- cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o
+ cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o ffa.o
hyp-obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
../fpsimd.o ../hyp-entry.o ../exception.o ../pgtable.o
hyp-obj-$(CONFIG_DEBUG_LIST) += list_debug.o
diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
new file mode 100644
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * FF-A v1.0 proxy to filter out invalid memory-sharing SMC calls issued by
+ * the host. FF-A is a slightly more palatable abbreviation of "Arm Firmware
+ * Framework for Arm A-profile", which is specified by Arm in document
+ * number DEN0077.
+ *
+ * Copyright (C) 2022 - Google LLC
+ * Author: Andrew Walbran <qwandor@google.com>
+ *
+ * This driver hooks into the SMC trapping logic for the host and intercepts
+ * all calls falling within the FF-A range. Each call is either:
+ *
+ * - Forwarded on unmodified to the SPMD at EL3
+ * - Rejected as "unsupported"
+ * - Accompanied by a host stage-2 page-table check/update and reissued
+ *
+ * Consequently, any attempts by the host to make guest memory pages
+ * accessible to the secure world using FF-A will be detected either here
+ * (in the case that the memory is already owned by the guest) or during
+ * donation to the guest (in the case that the memory was previously shared
+ * with the secure world).
+ *
+ * To allow the rolling-back of page-table updates and FF-A calls in the
+ * event of failure, operations involving the RXTX buffers are locked for
+ * the duration and are therefore serialised.
+ */
+
+#include <linux/arm-smccc.h>
+#include <linux/arm_ffa.h>
+#include <nvhe/ffa.h>
+#include <nvhe/trap_handler.h>
+
+static void ffa_to_smccc_error(struct arm_smccc_res *res, u64 ffa_errno)
+{
+	*res = (struct arm_smccc_res) {
+		.a0 = FFA_ERROR,
+		.a2 = ffa_errno,
+	};
+}
+
+static void ffa_set_retval(struct kvm_cpu_context *ctxt,
+			   struct arm_smccc_res *res)
+{
+	cpu_reg(ctxt, 0) = res->a0;
+	cpu_reg(ctxt, 1) = res->a1;
+	cpu_reg(ctxt, 2) = res->a2;
+	cpu_reg(ctxt, 3) = res->a3;
+}
+
+static bool is_ffa_call(u64 func_id)
+{
+	return ARM_SMCCC_IS_FAST_CALL(func_id) &&
+	       ARM_SMCCC_OWNER_NUM(func_id) == ARM_SMCCC_OWNER_STANDARD &&
+	       ARM_SMCCC_FUNC_NUM(func_id) >= FFA_MIN_FUNC_NUM &&
+	       ARM_SMCCC_FUNC_NUM(func_id) <= FFA_MAX_FUNC_NUM;
+}
+
+static bool ffa_call_unsupported(u64 func_id)
+{
+	switch (func_id) {
+	/* Unsupported memory management calls */
+	case FFA_FN64_MEM_RETRIEVE_REQ:
+	case FFA_MEM_RETRIEVE_RESP:
+	case FFA_MEM_RELINQUISH:
+	case FFA_MEM_OP_PAUSE:
+	case FFA_MEM_OP_RESUME:
+	case FFA_MEM_FRAG_RX:
+	case FFA_FN64_MEM_DONATE:
+	/* Indirect message passing via RX/TX buffers */
+	case FFA_MSG_SEND:
+	case FFA_MSG_POLL:
+	case FFA_MSG_WAIT:
+	/* 32-bit variants of 64-bit calls */
+	case FFA_MSG_SEND_DIRECT_REQ:
+	case FFA_MSG_SEND_DIRECT_RESP:
+	case FFA_RXTX_MAP:
+	case FFA_MEM_DONATE:
+	case FFA_MEM_RETRIEVE_REQ:
+		return true;
+	}
+
+	return false;
+}
+
+bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, func_id, host_ctxt, 0);
+	struct arm_smccc_res res;
+
+	if (!is_ffa_call(func_id))
+		return false;
+
+	switch (func_id) {
+	/* Memory management */
+	case FFA_FN64_RXTX_MAP:
+	case FFA_RXTX_UNMAP:
+	case FFA_MEM_SHARE:
+	case FFA_FN64_MEM_SHARE:
+	case FFA_MEM_LEND:
+	case FFA_FN64_MEM_LEND:
+	case FFA_MEM_RECLAIM:
+	case FFA_MEM_FRAG_TX:
+		break;
+	}
+
+	if (!ffa_call_unsupported(func_id))
+		return false; /* Pass through */
+
+	ffa_to_smccc_error(&res, FFA_RET_NOT_SUPPORTED);
+	ffa_set_retval(host_ctxt, &res);
+	return true;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -13,6 +13,7 @@
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
+#include <nvhe/ffa.h>
#include <nvhe/mem_protect.h>
#include <nvhe/mm.h>
#include <nvhe/pkvm.h>
@@ -373,6 +374,8 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
 	bool handled;
 	handled = kvm_host_psci_handler(host_ctxt);
+	if (!handled)
+		handled = kvm_host_ffa_handler(host_ctxt);
 	if (!handled)
 		default_host_smc_handler(host_ctxt);