Elevated API Errors
Incident Report for Minds
Postmortem

Date

January 8, 2023

Incident attendees:

  • Zack

Summary of Issue:

This week, we rolled out some additional components intended to increase the resilience of Redis to failure (namely Sentinel and HAProxy). As part of this effort, we also began taking backups of our cache that can be restored on restart. As we've now learned, larger datasets (such as our production cache) require much more CPU to be allotted for these backup processes than we had planned. During one of these backups, the CPU for our container began to throttle. Eventually, this caused a Kubernetes liveness probe to fail, and the container restarted. Replication was seemingly unable to recover after the restart (likely due to resource constraints), eventually leading to subsequent probe failures and restarts.
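For reference, the following is a minimal diagnostic sketch using redis-py (the hostname is a placeholder, and this was not part of the incident tooling). It surfaces the state described above: whether a background RDB save is in progress, how expensive the last fork was, and whether replication looks healthy.

```python
import redis

# Placeholder connection details; adjust for the real cache endpoint.
r = redis.Redis(host="redis.internal", port=6379, socket_timeout=2)

persistence = r.info("persistence")
replication = r.info("replication")
stats = r.info("stats")

# Background RDB saves are forked; the fork itself is what drives the CPU spike.
print("BGSAVE in progress:", persistence["rdb_bgsave_in_progress"])
print("Last BGSAVE status:", persistence["rdb_last_bgsave_status"])
print("Cost of last fork (usec):", stats["latest_fork_usec"])

# Replication health from the node we are connected to.
print("Role:", replication["role"])
print("Connected replicas:", replication.get("connected_slaves", 0))
```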

Another component of interest is the Minds backend, which could be refactored to fall back to requesting from the origin server if the cache is unavailable. This would mean that in the event of Redis failing, users would experience slowness while the application remains usable. In its current state, Redis is a critical dependency that breaks the application when it is down.
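A minimal sketch of that fallback pattern, assuming redis-py and a hypothetical fetch_from_origin helper (neither reflects the actual Minds backend code):

```python
import redis

cache = redis.Redis(
    host="redis.internal",        # placeholder hostname
    port=6379,
    socket_connect_timeout=0.25,  # fail fast instead of hanging the request
    socket_timeout=0.25,
)

def fetch_from_origin(key: str) -> bytes | None:
    """Hypothetical stand-in for a (slower) request to the origin server."""
    ...

def get_with_fallback(key: str) -> bytes | None:
    """Read-through lookup that degrades to the origin when Redis is unavailable."""
    try:
        value = cache.get(key)
        if value is not None:
            return value
    except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
        # Cache is down or slow: the request gets slower, but does not fail.
        pass
    return fetch_from_origin(key)
```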

In closing, CPU constraints appear to be the root cause here. That said, there is much that can be done to harden both our caching layer and the application itself to be more resilient in the face of such failures. Please see the "Follow up actions" section for more details.

How did this affect end users?

  • Failing logins.
  • Generalized latency.
  • Error messages (Redis NOREPLICAS errors).

How was the incident discovered?

Alarms were triggered on API latency and Redis master status.

What steps were taken to resolve the incident?

  1. Attempted a rolling restart of the Redis StatefulSet (sketched below).
  2. Flushed the Redis cache by recreating the pods without the RDB file present.
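For step 1, a hypothetical sketch using the official Kubernetes Python client is shown below; the StatefulSet name and namespace are assumptions, and the same effect can be achieved with `kubectl rollout restart statefulset/redis`.

```python
from datetime import datetime, timezone

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

# A rolling restart works by bumping an annotation on the pod template, which
# makes the StatefulSet controller recreate the pods one at a time.
patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/restartedAt": datetime.now(timezone.utc).isoformat()
                }
            }
        }
    }
}

# Assumed name/namespace for illustration only.
apps.patch_namespaced_stateful_set(name="redis", namespace="cache", body=patch)
```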

Issue Timeline (UTC)

  1. [02:22] - Alarm triggered for high API latency.
  2. [02:29] - Issue identified as Redis being down. Kubernetes liveness probes were failing, and containers were repeatedly restarting.
  3. [02:30] - Attempted a rolling restart of the Redis StatefulSet; however, replication was unable to recover.
  4. [03:00] - Recreated the Redis cache, flushing existing keys.
  5. [03:03] - After flushing the cache, latency spiked and then returned to a normal state.
  6. [03:21] - Closed incident.

Root Cause (5 Whys)

  1. Enabling Redis backups likely caused additional processes to run, as Redis forks RDB save operations to a background process.
  2. This would increase overall CPU utilization, which is consistent with our metrics.
  3. With the CPU being throttled, this could potentially starve the Redis exporter of CPU, causing slowness when responding to the liveness probe (illustrated below).
  4. The liveness probe fails, and Redis restarts. Replication is then unable to recover (more testing is required to reproduce this).
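To make the probe path in items 3 and 4 concrete, below is an illustration of the kind of check a PING-based liveness probe performs. The actual probe command and timeout from our chart are not reproduced here; the values shown are assumptions. The point is that when Redis (or the exporter answering the probe) is starved of CPU, the PING does not return within the deadline, the probe fails, and the kubelet restarts the container.

```python
import sys

import redis

PROBE_TIMEOUT_SECONDS = 1.0  # assumed value; the real threshold lives in the chart

def liveness_check() -> int:
    try:
        r = redis.Redis(
            host="localhost",
            socket_connect_timeout=PROBE_TIMEOUT_SECONDS,
            socket_timeout=PROBE_TIMEOUT_SECONDS,
        )
        r.ping()
        return 0  # healthy: kubelet leaves the container alone
    except redis.exceptions.RedisError:
        return 1  # unhealthy: repeated failures trigger a restart

if __name__ == "__main__":
    sys.exit(liveness_check())
```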

Follow up actions

  1. Increase the CPU request for Redis.
  2. Enable latency monitoring on the Redis side (see the sketch after this list). This will complement our Grafana metrics and provide more useful debugging information in the future.
  3. Attempt to reproduce the replication failure with Litmus tests. We should confirm our above suspicion that CPU constraints were the culprit here.
  4. Introduce a timeout when retrieving items from the cache; rather than failing the request, we can instead request the origin if Redis is down.
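For follow-up action 2, a hedged sketch of enabling Redis's built-in latency monitor and reading back recorded events (the hostname and the 100 ms threshold are assumptions):

```python
import redis

r = redis.Redis(host="redis.internal", socket_timeout=2, decode_responses=True)

# Record any event (fork, slow command, fsync, ...) that takes longer than 100 ms.
r.config_set("latency-monitor-threshold", 100)

# LATENCY LATEST returns one row per event:
# [event-name, unix-time-of-last-spike, last-latency-ms, max-latency-ms]
for event, last_seen, last_ms, max_ms in r.execute_command("LATENCY", "LATEST"):
    print(f"{event}: last={last_ms}ms, max={max_ms}ms, at={last_seen}")
```
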
Posted Jan 10, 2023 - 01:48 UTC

Resolved
This incident has been resolved.
Posted Jan 08, 2023 - 03:21 UTC
Monitoring
A fix has been implemented and we are monitoring the results.
Posted Jan 08, 2023 - 03:03 UTC
Identified
The issue has been identified and a fix is being implemented.
Posted Jan 08, 2023 - 02:29 UTC
Investigating
We're experiencing an elevated level of API errors and are currently looking into the issue.
Posted Jan 08, 2023 - 02:22 UTC
This incident affected: Web, API, and Search / Feeds.