[2/3] x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
Commit Message
Since the following commit:
commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
may be in either the optimizing or the unoptimizing state while op.kp->flags
has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.
The check logic in __recover_optprobed_insn() is incorrect: a kprobe that is
queued for unoptimizing may be mistaken for one that is still being
optimized. As a result, incorrect instructions are copied.
The optprobe_queued_unopt() function needs to be exported so that it can be
invoked from the arch directory.
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
---
arch/x86/kernel/kprobes/opt.c | 4 ++--
include/linux/kprobes.h | 1 +
kernel/kprobes.c | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)
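For context, optprobe_queued_unopt() simply reports whether the probe is
queued on unoptimizing_list, i.e. its displaced jump is still in place and
the original instruction must be recovered from op->optinsn.copied_insn.
A sketch of the helper as it stands in kernel/kprobes.c (this patch only
drops its 'static' storage class):

	static bool optprobe_queued_unopt(struct optimized_kprobe *op)
	{
		struct optimized_kprobe *_op;

		/* Is op waiting on the optimizer's unoptimizing queue? */
		list_for_each_entry(_op, &unoptimizing_list, list) {
			if (op == _op)
				return true;
		}

		return false;
	}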
Comments
On Wed, 15 Feb 2023 19:54:29 +0800
Yang Jihong <yangjihong1@huawei.com> wrote:
> Since the following commit:
>
> commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
>
> modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
> may be in either the optimizing or the unoptimizing state while op.kp->flags
> has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.
>
> The check logic in __recover_optprobed_insn() is incorrect: a kprobe that is
> queued for unoptimizing may be mistaken for one that is still being
> optimized. As a result, incorrect instructions are copied.
Ah, good catch!
>
> The optprobe_queued_unopt() function needs to be exported so that it can be
> invoked from the arch directory.
>
> Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Cc: stable@vger.kernel.org
> Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
> ---
> arch/x86/kernel/kprobes/opt.c | 4 ++--
> include/linux/kprobes.h | 1 +
> kernel/kprobes.c | 2 +-
> 3 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
> index e57e07b0edb6..3718d6863555 100644
> --- a/arch/x86/kernel/kprobes/opt.c
> +++ b/arch/x86/kernel/kprobes/opt.c
> @@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
> 		/* This function only handles jump-optimized kprobe */
> 		if (kp && kprobe_optimized(kp)) {
> 			op = container_of(kp, struct optimized_kprobe, kp);
> -			/* If op->list is not empty, op is under optimizing */
> -			if (list_empty(&op->list))
> +			/* If op is [un]optimized or under unoptimizing */
Hmm, this comment is a bit confusing. If the kprobe is unoptimized, kprobe_optimized() returns false.
Thus the comment should be /* If op is optimized or under unoptimizing */.
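(For reference, kprobe_optimized() reduces to a flag test, roughly:

	static inline bool kprobe_optimized(struct kprobe *p)
	{
		return p->flags & KPROBE_FLAG_OPTIMIZED;
	}

so a probe that has been fully unoptimized never enters this branch at all.)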
Thank you,
> +			if (list_empty(&op->list) || optprobe_queued_unopt(op))
> 				goto found;
> 		}
> 	}
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index a0b92be98984..ab39285f71a6 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -378,6 +378,7 @@ extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
> DEFINE_INSN_CACHE_OPS(optinsn);
>
> extern void wait_for_kprobe_optimizer(void);
> +bool optprobe_queued_unopt(struct optimized_kprobe *op);
> #else /* !CONFIG_OPTPROBES */
> static inline void wait_for_kprobe_optimizer(void) { }
> #endif /* CONFIG_OPTPROBES */
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 0730e595f4c1..bf60eb26c873 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -661,7 +661,7 @@ void wait_for_kprobe_optimizer(void)
> 	mutex_unlock(&kprobe_mutex);
> }
>
> -static bool optprobe_queued_unopt(struct optimized_kprobe *op)
> +bool optprobe_queued_unopt(struct optimized_kprobe *op)
> {
> 	struct optimized_kprobe *_op;
>
> --
> 2.30.GIT
>
Hello Masami,
On 2023/2/15 23:08, Masami Hiramatsu (Google) wrote:
> On Wed, 15 Feb 2023 19:54:29 +0800
> Yang Jihong <yangjihong1@huawei.com> wrote:
>
>> Since the following commit:
>>
>> commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
>>
>> modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
>> may be in either the optimizing or the unoptimizing state while op.kp->flags
>> has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.
>>
>> The check logic in __recover_optprobed_insn() is incorrect: a kprobe that is
>> queued for unoptimizing may be mistaken for one that is still being
>> optimized. As a result, incorrect instructions are copied.
>
> Ah, good catch!
>
>>
>> The optprobe_queued_unopt() function needs to be exported so that it can be
>> invoked from the arch directory.
>>
>> Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
>
> Cc: stable@vger.kernel.org
OK, will add in next version.
>
>> Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
>> ---
>> arch/x86/kernel/kprobes/opt.c | 4 ++--
>> include/linux/kprobes.h | 1 +
>> kernel/kprobes.c | 2 +-
>> 3 files changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
>> index e57e07b0edb6..3718d6863555 100644
>> --- a/arch/x86/kernel/kprobes/opt.c
>> +++ b/arch/x86/kernel/kprobes/opt.c
>> @@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
>> 		/* This function only handles jump-optimized kprobe */
>> 		if (kp && kprobe_optimized(kp)) {
>> 			op = container_of(kp, struct optimized_kprobe, kp);
>> -			/* If op->list is not empty, op is under optimizing */
>> -			if (list_empty(&op->list))
>> +			/* If op is [un]optimized or under unoptimizing */
>
> Hmm, this comment is a bit confusing. If the kprobe is unoptimized, kprobe_optimized() returns false.
> Thus the comment should be /* If op is optimized or under unoptimizing */.
>
OK, will fix in next version.
Thanks,
Yang.
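For reference, assuming the next version adopts the suggested wording, the
opt.c hunk would read:

	-			/* If op->list is not empty, op is under optimizing */
	-			if (list_empty(&op->list))
	+			/* If op is optimized or under unoptimizing */
	+			if (list_empty(&op->list) || optprobe_queued_unopt(op))
	 				goto found;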