This code tells a lie:

const fs = require('fs/promises');

async function saveFavoriteColor(favoriteColor) {
  await fs.writeFile('favoriteColor.txt', favoriteColor, 'utf8');
  console.log('Saved!');
}

Take a moment to try and spot the lie. Is it obvious?

As a hint, consider this story:

Some time ago at Red Planet Labs, I was ramping up our chaos testing of Rama, a distributed database and stream processing framework, in advance of its launch. We’d long been injecting random process kills:

kill -9 <worker-pid>

And when I added a forceful instance kill disturbance, I thought I was walking well-trod territory:

aws ec2 stop-instances --instance-ids <instance-id> --force

But immediately, I started seeing data verification failures in our distributed quality testing environments.

In hindsight, it seems obvious: there's a buffer between the application and the physical disk, and the operating system, by default, flushes it asynchronously for reasons of efficiency.

When we want to ensure that our writes are truly durable before proceeding, we must explicitly instruct the OS to flush; for UNIX systems, this happens via the fsync system call.
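
In Node.js, for example, the promise-based FileHandle API exposes this as file.sync(). Here's a sketch of what a more honest version of saveFavoriteColor might look like (illustrative only, not how Rama does it):

const fs = require('fs/promises');

async function saveFavoriteColor(favoriteColor) {
  const file = await fs.open('favoriteColor.txt', 'w');
  try {
    await file.writeFile(favoriteColor, 'utf8');
    // Block until the OS has flushed its buffers for this file to the
    // physical disk (fsync under the hood).
    await file.sync();
  } finally {
    await file.close();
  }
  console.log('Saved!');
}

(Strictly speaking, making a newly created file durable can also require fsyncing its parent directory, but the file's own contents are what matter here.)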

In Rama's implementation, we often wrote to disk without an explicit fsync, but assumed that what we'd written was durable after the write call returned.

The only way to actually exercise this bug was to terminate machines without giving them the chance to flush their buffers to disk.

Here's a little visualization; try "unplugging" the machine (i.e. forcefully restarting it) after a change is acknowledged as "Saved!", but before the cache is flushed to disk:

Often, as developers, we interact with the disk through databases that address this issue for us by default (see Postgres, MySQL), though other databases, by default, don't (see MongoDB).
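
If you're curious where yours stands, Postgres, for instance, exposes its durability settings directly; both of these default to on:

psql -c "SHOW fsync;"
psql -c "SHOW synchronous_commit;"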

But, regardless, it's always good to keep this one in mind.