[1/2] nvme-apple: Do not try to shut down the controller twice

Message ID 20230111043614.27087-2-marcan@marcan.st
State New
Series nvme-apple: Fix suspend-resume regression

Commit Message

Hector Martin Jan. 11, 2023, 4:36 a.m. UTC
  The blamed commit stopped explicitly disabling the controller when we do
a controlled shutdown, but apple_nvme_reset_work was only checking for
the enable bit before deciding to issue another disable. Check for the
shutdown state too, to avoid breakage.

This issue does not affect nvme-pci, since it only issues controller
shutdowns when the system is actually shutting down anyway.

Fixes: c76b8308e4c9 ("nvme-apple: fix controller shutdown in apple_nvme_disable")
Signed-off-by: Hector Martin <marcan@marcan.st>
---
 drivers/nvme/host/apple.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
  

Comments

Christoph Hellwig Jan. 11, 2023, 4:54 a.m. UTC | #1
On Wed, Jan 11, 2023 at 01:36:13PM +0900, Hector Martin wrote:
> The blamed commit stopped explicitly disabling the controller when we do
> a controlled shutdown, but apple_nvme_reset_work was only checking for
> the enable bit before deciding to issue another disable. Check for the
> shutdown state too, to avoid breakage.
> 
> This issue does not affect nvme-pci, since it only issues controller
> shutdowns when the system is actually shutting down anyway.

There's a few other places where nvme-pci does a shutdown like
probe/reset failure and, most notably, various
power management scenarios.

What path is causing a problem here for nvme-apple?  I fear we're
missing some higher level check here and getting further out of
sync.
  
Hector Martin Jan. 11, 2023, 5:10 a.m. UTC | #2
On 11/01/2023 13.54, Christoph Hellwig wrote:
> On Wed, Jan 11, 2023 at 01:36:13PM +0900, Hector Martin wrote:
>> The blamed commit stopped explicitly disabling the controller when we do
>> a controlled shutdown, but apple_nvme_reset_work was only checking for
>> the enable bit before deciding to issue another disable. Check for the
>> shutdown state too, to avoid breakage.
>>
>> This issue does not affect nvme-pci, since it only issues controller
>> shutdowns when the system is actually shutting down anyway.
> 
> There's a few other places where nvme-pci does a shutdown like
> probe/reset failure and, most notably, various
> power management scenarios.
> 
> What path is causing a problem here for nvme-apple?  I fear we're
> missing some higher level check here and getting further out of
> sync.
> 

OK, so the first question is who is responsible for resetting the
controller after a shutdown? The spec requires a reset in order to bring
it back up from that state. Indeed the PCIe code does an explicit
disable right now (though, judging by the comment, it probably wasn't
done with the intent of resetting after a shutdown, it just happens to
work for that too :))

Right now, apple_nvme_reset_work() tries to check for the condition of
an enabled controller (under the assumption that it's coming from a live
controller, not considering shutdown/sleep) and issue an
apple_nvme_disable(). However, this doesn't work when resuming because
at that point the firmware coprocessor is shut down, so the device isn't
usable (can't even get a disable command to complete properly). Perhaps
a better conditional here would be to check for
apple_rtkit_is_running(), since apple_nvme_disable() can't work otherwise.

But then, even if we get past that, once apple_nvme_reset_work actually
resets the firmware CPU and kicks things back online, the controller is
still logically in the shutdown state. So something has to disable it in
order for nvme_enable_ctrl() to work.

An alternate patch would be to change the gate for apple_nvme_disable()
in apple_nvme_reset_work() to check for apple_rtkit_is_running() on top
of the controller enable state, and then add a further direct call to
nvme_disable_ctrl(..., false) later in apple_nvme_reset_work, once the
firmware is back up, to ensure we can enable it after. Thoughts?
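
Roughly, something like this (a sketch only; the exact placement inside
apple_nvme_reset_work() and the anv->rtk handle are assumptions on my part):

        /* Only attempt a controlled disable while the firmware can still
         * service it. */
        if (apple_rtkit_is_running(anv->rtk) &&
            (anv->ctrl.ctrl_config & NVME_CC_ENABLE))
                apple_nvme_disable(anv, false);

        /* ... firmware coprocessor is reset and brought back up here ... */

        /* Clear EN so the later nvme_enable_ctrl() sees a real 0->1
         * transition even when we come back from a shutdown state. */
        nvme_disable_ctrl(&anv->ctrl, false);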

Alternatively, we could just revert to the prior behavior of always
issuing a disable after a shutdown. We need to disable at some point to
come back anyway, so it might as well be done early (before we shut down
firmware, so it works).

- Hector
  
Christoph Hellwig Jan. 11, 2023, 5:18 a.m. UTC | #3
On Wed, Jan 11, 2023 at 02:10:42PM +0900, Hector Martin wrote:
> OK, so the first question is who is responsible for resetting the
> controller after a shutdown? The spec requires a reset in order to bring
> it back up from that state. Indeed the PCIe code does an explicit
> disable right now (though, judging by the comment, it probably wasn't
> done with the intent of resetting after a shutdown, it just happens to
> work for that too :))

We need to do the reset before banging the registers to make sure
the controller is in a sane state before starting to set it up.

> Right now, apple_nvme_reset_work() tries to check for the condition of
> an enabled controller (under the assumption that it's coming from a live
> controller, not considering shutdown/sleep) and issue an
> apple_nvme_disable(). However, this doesn't work when resuming because
> at that point the firmware coprocessor is shut down, so the device isn't
> usable (can't even get a disable command to complete properly). Perhaps
> a better conditional here would be to check for
> apple_rtkit_is_running(), since apple_nvme_disable() can't work otherwise.

So on a resume the controller should have previously been shut down
properly, and this shouldn't be an issue.  Does the apple implementation
leave some weird state after a shutdown?  In that case the resume
callback might want to do an explicit controller disable before doing
the reset.

> Alternatively, we could just revert to the prior behavior of always
> issuing a disable after a shutdown. We need to disable at some point to
> come back anyway, so it might as well be done early (before we shut down
> firmware, so it works).

So the disable before shutdown doesn't really make sense from the
NVMe POV - the shutdown is to cleanly bring the device into a state
where it can quickly recover.  While a disable is an abrupt shutdown,
which can have effects on recovery time, and might also use way more
P/E cycles than necessary.  So if you always want to do a disable,
it should be after shutdown, and if in doubt in the resume / setup path
instead of the remove / suspend one.
  
Hector Martin Jan. 11, 2023, 5:44 a.m. UTC | #4
On 11/01/2023 14.18, Christoph Hellwig wrote:
> On Wed, Jan 11, 2023 at 02:10:42PM +0900, Hector Martin wrote:
>> OK, so the first question is who is responsible for resetting the
>> controller after a shutdown? The spec requires a reset in order to bring
>> it back up from that state. Indeed the PCIe code does an explicit
>> disable right now (though, judging by the comment, it probably wasn't
>> done with the intent of resetting after a shutdown, it just happens to
>> work for that too :))
> 
> We need to do the reset before banging the registers to make sure
> the controller is in a sane state before starting to set it up.
> 
>> Right now, apple_nvme_reset_work() tries to check for the condition of
>> an enabled controller (under the assumption that it's coming from a live
>> controller, not considering shutdown/sleep) and issue an
>> apple_nvme_disable(). However, this doesn't work when resuming because
>> at that point the firmware coprocessor is shut down, so the device isn't
>> usable (can't even get a disable command to complete properly). Perhaps
>> a better conditional here would be to check for
>> apple_rtkit_is_running(), since apple_nvme_disable() can't work otherwise.
> 
> So on a resume the controller should have previously been shut down
> properly, and this shouldn't be an issue.  Does the apple implementation
> leave some weird state after a shutdown?  In that case the resume
> callback might want to do an explicit controller disable before doing
> the reset.

The controller is *shut down* but it's not *disabled*, and the existing
code was only checking whether the controller is enabled to decide to
issue another disable.

The higher-level resume path can't do a disable since the firmware isn't
up at that point, and the subsequent reset (which is shared with other
conditions that cause a reset) is what brings the firmware back up. So
the disable has to either happen in the suspend path, or in the shared
reset path after we know the firmware is running.

A shutdown but enabled controller is in "limbo"; the only way to know
it's nonfunctional is explicitly checking the shutdown status bits.
Other than that, it looks like a live controller that plays dead. This
is documented in the spec as such.
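
As a concrete illustration, the check boils down to something like this
(it mirrors the condition in the patch below; the CSTS remark is from the
spec, not something this driver reads here):

        /* "Limbo": CC.EN is still set, but a shutdown was requested. */
        if ((anv->ctrl.ctrl_config & NVME_CC_ENABLE) &&
            (anv->ctrl.ctrl_config & NVME_CC_SHN_MASK)) {
                /* Looks enabled, yet it won't process commands until EN
                 * is toggled 1 -> 0 -> 1; CSTS.SHST reads "complete". */
        }
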
>> Alternatively, we could just revert to the prior behavior of always
>> issuing a disable after a shutdown. We need to disable at some point to
>> come back anyway, so it might as well be done early (before we shut down
>> firmware, so it works).
> 
> So the disable before shutdown doesn't really make sense from the
> NVMe POV - the shutdown is to cleanly bring the device into a state
> where it can quickly recover.  While a disable is an abrupt shutdown,
> which can have effects on recovery time, and might also use way more
> P/E cycles than necessary.

That's only if you issue a disable *in lieu* of a shutdown (and in fact
if you do that on Apple controllers under some conditions, they crash).
Issuing a disable *after* a shutdown is required by the NVMe spec if you
want to use the controller again (and should basically do nothing at
that point, since the controller is already cleanly shut down, but it is
required to set EN to 0 such that the subsequent 0->1 transition
actually kickstarts the controller again). If you don't do that, the
controller never leaves the shutdown state (how would it know?).
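
Spelled out in CC/CSTS terms, the sequence the spec expects is roughly:

- CC.SHN = 01b (normal shutdown), wait for CSTS.SHST = 10b (complete)
- CC.EN 1 -> 0, wait for CSTS.RDY = 0 (this is the disable that got dropped)
- later, CC.EN 0 -> 1, wait for CSTS.RDY = 1 before issuing commands again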

To be clear, the sequence I was attempting to describe (which is what we
were doing before the patch that regressed this) was:

(on sleep)
- NVMe shutdown
- NVMe disable
- Firmware shutdown

After the firmware shutdown, we can't do anything with NVMe again until
we start firmware back up, which requires going through the reset flow.

Right now we're doing:

(on sleep)
- NVMe shutdown
- Firmware shutdown
(wakeup)
- Oops, NVMe is enabled, let's disable it! (times out due to FW being
down but failure isn't propagated)
- Firmware startup
- NVMe enable (thinks it succeeds but actually the controller is still
in the shutdown state since it was never disabled and this persists
across the firmware cycle!)
- I/O (never completes)
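
In driver terms, the old sleep path boiled down to (function names from
apple.c; the exact call sites are from memory, so treat this as a sketch):

        apple_nvme_disable(anv, true);   /* NVMe shutdown + NVMe disable */
        apple_rtkit_shutdown(anv->rtk);  /* firmware shutdown */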

- Hector
  
Christoph Hellwig Jan. 11, 2023, 6:41 a.m. UTC | #5
On Wed, Jan 11, 2023 at 02:44:42PM +0900, Hector Martin wrote:
> The higher-level resume path can't do a disable since the firmware isn't
> up at that point, and the subsequent reset (which is shared with other
> conditions that cause a reset) is what brings the firmware back up. So
> the disable has to either happen in the suspend path, or in the shared
> reset path after we know the firmware is running.

Ok, that's the weird part where nvme-apple really isn't NVMe at all.
Because for actual NVMe devices the register access must work all the
time.

> That's only if you issue a disable *in lieu* of a shutdown (and in fact
> if you do that on Apple controllers under some conditions, they crash).
> Issuing a disable *after* a shutdown is required by the NVMe spec if you
> want to use the controller again (and should basically do nothing at
> that point, since the controller is already cleanly shut down, but it is
> required to set EN to 0 such that the subsequent 0->1 transition
> actually kickstarts the controller again). If you don't do that, the
> controller never leaves the shutdown state (how would it know?).

Yes.  Although I would not call this a disable after shutdown, but a
disable (or rather reset) before using it again.

> To be clear, the sequence I was attempting to describe (which is what we
> were doing before the patch that regressed this) was:
> 
> (on sleep)
> - NVMe shutdown
> - NVMe disable
> - Firmware shutdown
> 
> After the firmware shutdown, we can't do anything with NVMe again until
> we start firmware back up, which requires going through the reset flow.
> 
> Right now we're doing:
> 
> (on sleep)
> - NVMe shutdown
> - Firmware shutdown
> (wakeup)
> - Oops, NVMe is enabled, let's disable it! (times out due to FW being
> down but failure isn't propagated)
> - Firmware startup
> - NVMe enable (thinks it succeeds but actually the controller is still
> in the shutdown state since it was never disabled and this persists
> across the firmware cycle!)
> - I/O (never completes)

Yes, so I guess due to the weird firmware issues doing the disable
after shutdown instead of before setting up might be the right
thing for nvme-apple, unlike real NVMe.  So I guess we need to do
that in the driver, and add a big fat comment explaining why.
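
Something like this in the reset path, presumably (a sketch, not a patch;
variable and label names assumed from the surrounding function):

        /*
         * The Apple controller comes back from a firmware restart still in
         * the NVMe shutdown state, so EN has to be cleared here, once RTKit
         * is running again; otherwise nvme_enable_ctrl() appears to succeed
         * but the controller never processes commands.
         */
        ret = nvme_disable_ctrl(&anv->ctrl, false);
        if (ret)
                goto out;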
  

Patch

diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
index e13a992b6096..1961376447dc 100644
--- a/drivers/nvme/host/apple.c
+++ b/drivers/nvme/host/apple.c
@@ -1023,7 +1023,8 @@  static void apple_nvme_reset_work(struct work_struct *work)
 		goto out;
 	}
 
-	if (anv->ctrl.ctrl_config & NVME_CC_ENABLE)
+	if (anv->ctrl.ctrl_config & NVME_CC_ENABLE &&
+	    !(anv->ctrl.ctrl_config & NVME_CC_SHN_MASK))
 		apple_nvme_disable(anv, false);
 
 	/* RTKit must be shut down cleanly for the (soft)-reset to work */