Understanding the Node.js Event Loop

Node.js has a reputation for being “single-threaded, but fast”, which sounds like a paradox.

If you come from a multi-threaded background like Java or C#, this sounds like a recipe for disaster. How can a single thread handle thousands of concurrent requests without freezing? How does it perform database queries, file reads, and network calls without blocking the entire application?

The secret behind this seemingly impossible feat is the event loop — a scheduling mechanism that orchestrates all asynchronous activity in a Node.js process.

Understanding how the event loop works isn’t just a nice-to-know; it can be the difference between writing code that scales gracefully under load and code that grinds to a halt the moment traffic spikes. This article walks through how the event loop works, what happens at each phase, and how to avoid the most common pitfalls that trip up developers.

How the Event Loop Works

To understand the event loop, you first need to understand the three main components that make Node.js tick: V8, libuv, and the event loop itself.

  • V8 is Google’s JavaScript engine. It compiles and executes your JavaScript code, manages the call stack, and handles memory allocation. On its own, V8 knows nothing about file systems, networks, or timers; it just runs JavaScript.
  • libuv is a C library that provides Node.js with asynchronous I/O capabilities. It abstracts away the differences between operating systems (Linux’s epoll, macOS’s kqueue, Windows’ IOCP) and exposes a unified API for file system operations, networking, timers, child processes, and more. It also maintains a thread pool (four threads by default, configurable via UV_THREADPOOL_SIZE) for operations that don’t have native async OS support, like file system reads and DNS lookups.
  • The event loop is the coordination layer between V8 and libuv. When your code calls an asynchronous function like fs.readFile(), Node hands the operation to libuv, which either delegates it to the OS or to its thread pool. When the operation completes, libuv places the callback in the appropriate queue. The event loop then picks up that callback and pushes it onto V8’s call stack for execution.

The event loop is a continuous cycle that keeps running as long as there are pending callbacks, open handles (like servers or timers), or active requests. Each iteration of this cycle is called a “tick”. During each tick, the loop walks through a fixed sequence of phases, running callbacks that are ready at each one.

Phases of the Event Loop

Below is a diagram that visualizes the different phases of the event loop.

Each phase has a FIFO (first-in, first-out) queue of callbacks to execute. When the event loop enters a given phase, it runs callbacks in that phase’s queue until the queue is exhausted or a maximum number of callbacks have been executed, then moves on to the next phase.

   ┌───────────────────────────┐
┌─>│           Timers          │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │     Pending Callbacks     │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │       Idle / Prepare      │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │           Poll            │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │           Check           │
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
└──┤      Close Callbacks      │
   └───────────────────────────┘

After each individual callback completes, Node drains the microtask queues before moving on — first the process.nextTick queue, then the Promise microtask queue (e.g. resolved Promise handlers). This happens before the event loop continues, whether that means running another callback in the same phase or advancing to the next phase.
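A small sketch makes this concrete: two timers sit in the same timers-phase queue, yet a process.nextTick scheduled inside the first callback runs before the second timer, because the microtask queues are drained after every individual callback.

```javascript
setTimeout(() => {
  console.log('timer 1');
  // Queued while timer 1 runs; drained before timer 2 executes,
  // even though both timers are in the same phase's queue
  process.nextTick(() => console.log('nextTick between timers'));
}, 0);

setTimeout(() => console.log('timer 2'), 0);
```

This prints `timer 1`, then `nextTick between timers`, then `timer 2`.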

Timers

The timers phase executes callbacks scheduled by setTimeout() and setInterval(). A timer specifies a threshold after which the callback may be executed, not the exact time it will be executed. The event loop will run timer callbacks as soon as it can once the threshold has passed, but other callbacks or system activity may delay them.

setTimeout(() => {
  console.log('This runs after at least 100ms');
}, 100);

The key word is “at least”. If the poll phase is busy processing I/O callbacks when the timer expires, the timer callback won’t fire until the loop comes back around to the timers phase on the next tick. This is why setTimeout(fn, 100) doesn’t guarantee execution at exactly 100ms — it guarantees a minimum delay of 100ms.
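You can observe this directly. In the sketch below, a 100ms timer is scheduled and then the loop is blocked with roughly 200ms of synchronous busy-waiting; the callback cannot fire until the synchronous work finishes, so the measured delay lands well past the requested threshold.

```javascript
const scheduled = Date.now();

setTimeout(() => {
  console.log(`fired after ${Date.now() - scheduled}ms (asked for 100ms)`);
}, 100);

// Synchronous work blocks the loop past the timer's threshold
const end = Date.now() + 200;
while (Date.now() < end) {} // busy-wait ~200ms
```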

Pending Callbacks

This phase executes callbacks for certain system operations, such as some types of TCP errors. For example, if a TCP socket receives ECONNREFUSED when attempting to connect, some Unix systems queue the error report for this phase. Most developers never interact with this phase directly.

Idle/Prepare

This phase is used internally by Node.js and libuv for housekeeping. It isn’t directly exposed to user code and can safely be ignored for the purposes of understanding how your application behaves.

Poll

The poll phase is the heart of the event loop. It has two primary responsibilities: calculating how long it should block and wait for I/O, and then processing events in the poll queue.

When the event loop enters the poll phase, it checks whether there are callbacks in the poll queue. If there are, it executes them synchronously, one by one, until the queue is drained or the system-dependent hard limit is reached. If the poll queue is empty, the loop checks whether any setImmediate() callbacks are scheduled. If they are, the loop moves on to the check phase. If no setImmediate() callbacks are scheduled, the loop will wait at the poll phase for new I/O events to arrive, up to a calculated timeout based on the nearest pending timer.

This is what makes Node.js efficient: rather than burning CPU cycles by spinning in a tight loop, the event loop parks itself in the poll phase, letting the operating system wake it up when there’s something to do.

Check

The check phase runs callbacks registered with setImmediate(). This function is designed to execute a callback after the poll phase completes. If the poll phase becomes idle, the loop will move to the check phase rather than waiting for new poll events.

setImmediate(() => {
  console.log('Runs after the poll phase');
});

setImmediate() is particularly useful when you want to execute code after I/O events in the current tick have been processed but before any timers fire on the next tick.

Close Callbacks

If a socket or handle is closed abruptly (e.g., socket.destroy()), the 'close' event will be emitted in this phase. Otherwise, it will be emitted via process.nextTick().

Microtasks vs. Macrotasks

This is where things get subtle — and where most event loop confusion originates.

The callbacks in the six phases described above are often called macrotasks (or simply “tasks”). But there are two additional queues that sit outside the phase structure and get drained between every phase transition: the process.nextTick queue and the Promise microtask queue.

process.nextTick

process.nextTick() schedules a callback to run before the event loop continues to the next phase. It has the highest priority of any asynchronous mechanism in Node.js.

process.nextTick(() => {
  console.log('I run before anything else in the queue');
});

After every individual callback completes (not just after every phase), Node checks the process.nextTick queue and drains it completely. This means if you recursively call process.nextTick, you can starve the event loop — it will never move on to the next phase because it keeps finding more nextTick callbacks to run.

// Don't do this — it starves I/O
function recurse() {
  process.nextTick(recurse);
}
recurse();

Promise Microtasks

When a Promise resolves, its .then() or .catch() handler is placed in the microtask queue. This queue is drained immediately after the process.nextTick queue, and before moving to the next event loop phase.

Promise.resolve().then(() => {
  console.log('Microtask: runs after nextTick, before next phase');
});

Since async/await is syntactic sugar over Promises, await expressions also schedule microtasks.
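A quick illustration: everything after an await is scheduled as a microtask, so it runs only after the surrounding synchronous code has finished.

```javascript
async function demo() {
  console.log('1: before await');
  await null; // the rest of this function resumes as a microtask
  console.log('3: after await');
}

demo();
console.log('2: synchronous code after the call');
```

This prints the lines in the order `1`, `2`, `3`: the function body runs synchronously up to the await, then yields to the rest of the script.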

Execution Order

The priority is always: synchronous code → process.nextTick → Promise microtasks → next event loop phase callback.

Here’s a complete example that demonstrates the ordering:

const fs = require('fs');

setTimeout(() => console.log('1: setTimeout'), 0);
setImmediate(() => console.log('2: setImmediate'));

Promise.resolve().then(() => console.log('3: Promise'));
process.nextTick(() => console.log('4: nextTick'));

fs.readFile(__filename, () => {
  console.log('5: I/O callback');
  setTimeout(() => console.log('6: setTimeout inside I/O'), 0);
  setImmediate(() => console.log('7: setImmediate inside I/O'));
  process.nextTick(() => console.log('8: nextTick inside I/O'));
});

console.log('9: synchronous');

Output (one possible ordering):

9: synchronous
4: nextTick
3: Promise
1: setTimeout
2: setImmediate
5: I/O callback
8: nextTick inside I/O
7: setImmediate inside I/O
6: setTimeout inside I/O

A few things worth noting: synchronous code runs first. Then nextTick fires before the Promise. The ordering of setTimeout and setImmediate at the top level is actually non-deterministic (it depends on process performance at startup), but inside an I/O callback, setImmediate always fires before setTimeout because the check phase comes before the timers phase in the next iteration.


Common Pitfalls

Blocking the Event Loop

The single most important rule in Node.js: don’t block the event loop. Because all JavaScript runs on one thread, a single slow operation halts everything — incoming requests queue up, timers miss their deadlines, and your application becomes unresponsive.

Common blockers include synchronous file operations (fs.readFileSync), CPU-intensive computations (parsing large JSON, image processing, cryptographic operations), and tight loops that process large datasets without yielding.

// This blocks the entire event loop for seconds
const data = JSON.parse(fs.readFileSync('huge-file.json', 'utf8'));

Starving I/O with process.nextTick

As mentioned earlier, recursive process.nextTick calls prevent the event loop from advancing. Since the nextTick queue is drained completely between phases, an ever-growing queue means I/O callbacks, timers, and everything else will never get a chance to run.

If you need to break up recursive work and yield to the event loop, use setImmediate instead — it schedules work for the check phase, giving I/O and timers a chance to run first.
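Here is the same recursive pattern rewritten with setImmediate; each step is scheduled for the check phase, so the loop gets to run timers and I/O between iterations instead of being starved.

```javascript
let i = 0;

function step() {
  i++;
  if (i < 3) {
    setImmediate(step); // yield: timers and I/O can run between steps
  } else {
    console.log('finished after', i, 'steps');
  }
}

step();
```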

Misunderstanding Timer Precision

setTimeout(fn, 0) does not mean “run immediately.” It means “run on the next iteration of the event loop, in the timers phase, after at least 0ms.” In practice, Node.js clamps the delay to a minimum of 1ms (a delay of 0, or one outside the valid range, is treated as 1). Additionally, the callback only fires when the event loop reaches the timers phase, which might be delayed by other work.

Don’t use timers for precise timing. If you need high-resolution timing, look into process.hrtime.bigint() for measurement or setImmediate for scheduling.
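For example, you can measure how late a 10ms timer actually fires using nanosecond-resolution timestamps:

```javascript
const start = process.hrtime.bigint();

setTimeout(() => {
  // hrtime.bigint() returns a nanosecond-resolution timestamp
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`timer fired after ${elapsedMs.toFixed(1)}ms (requested 10ms)`);
}, 10);
```

On an idle process the drift is small; under load it can grow to tens or hundreds of milliseconds.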

setImmediate vs. setTimeout(fn, 0)

The ordering of these two at the top level of a script is non-deterministic. It depends on the performance of the process — specifically, whether the 1ms timer threshold has elapsed by the time the event loop first reaches the timers phase.

// Order is non-deterministic at the top level
setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));

However, inside an I/O callback, setImmediate is always guaranteed to run first, because the loop moves from the poll phase to the check phase before looping back to timers.

const fs = require('fs');

fs.readFile(__filename, () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
});

// Always prints:
// immediate
// timeout

Practical Strategies

Offload CPU Work with Worker Threads

For CPU-intensive operations, use the worker_threads module to move computation off the main thread. Worker threads run in parallel and communicate with the main thread via message passing.

const { Worker, isMainThread, parentPort } = require('worker_threads');

// Naive recursive Fibonacci — deliberately CPU-heavy
function fibonacci(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

if (isMainThread) {
  const worker = new Worker(__filename);
  worker.on('message', (result) => {
    console.log('Result from worker:', result);
    worker.terminate(); // release the handle so the process can exit
  });
  worker.postMessage({ number: 42 });
} else {
  parentPort.on('message', (data) => {
    // Heavy computation happens here, off the main thread
    const result = fibonacci(data.number);
    parentPort.postMessage(result);
  });
}

For simpler cases, child_process.fork() works too, though it’s heavier since it spawns an entirely new Node.js process.

Break Up Long Tasks with setImmediate

If you need to process a large array or dataset on the main thread, break the work into chunks and yield between them using setImmediate. This lets the event loop process I/O and other callbacks between chunks.

function processChunked(items, processFn, callback) {
  const chunkSize = 100;
  let index = 0;

  function nextChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processFn(items[index]);
    }
    if (index < items.length) {
      setImmediate(nextChunk); // Yield to the event loop
    } else {
      callback();
    }
  }

  nextChunk();
}

Use async/await Without Accidentally Blocking

async/await makes asynchronous code read like synchronous code, but it’s important to remember that await only yields to the microtask queue — it doesn’t yield to the event loop’s phases. If you await a CPU-intensive synchronous operation wrapped in a Promise, you’re still blocking.

// This still blocks — the heavy work is synchronous
async function bad() {
  const result = await new Promise((resolve) => {
    resolve(heavyComputation()); // Runs synchronously before resolving
  });
}

// This yields properly — the heavy work is in a worker
async function good() {
  const result = await runInWorker(heavyComputation);
}

Also, be mindful of sequential await when operations are independent. Use Promise.all to run them concurrently.

// Sequential: takes ~2 seconds (inside an async function;
// fetchA and fetchB are placeholder async calls)
const a = await fetchA(); // ~1 second
const b = await fetchB(); // ~1 second

// Concurrent: takes ~1 second
const [a, b] = await Promise.all([fetchA(), fetchB()]);
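Here is a runnable version of the concurrent case, with simulated 100ms “fetches” (the names and delays are illustrative):

```javascript
// A Promise that resolves with `value` after `ms` milliseconds
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));
const fetchA = () => delay(100, 'A');
const fetchB = () => delay(100, 'B');

(async () => {
  const start = Date.now();
  // Both timers run concurrently, so total wall time is ~100ms, not ~200ms
  const [a, b] = await Promise.all([fetchA(), fetchB()]);
  console.log(`${a} and ${b} in ~${Date.now() - start}ms`);
})();
```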

Monitor Event Loop Lag in Production

Event loop lag — the delay between when a callback is scheduled and when it actually runs — is a critical health metric for Node.js applications. You can measure it with a simple timer:

let lastCheck = process.hrtime.bigint();

setInterval(() => {
  const now = process.hrtime.bigint();
  const drift = Number(now - lastCheck) / 1_000_000 - 1000; // ms of lag
  console.log(`Event loop lag: ${drift.toFixed(2)}ms`);
  lastCheck = now;
}, 1000);

For production systems, the built-in monitorEventLoopDelay() function (part of perf_hooks since Node 11) provides histogram-based measurements. Most APM tools (Datadog, New Relic, Dynatrace) also track this automatically.

When Does the Event Loop Exit?

The event loop exits when there is no more work to do. Specifically, it exits when there are no more active handles (like servers, timers, or open file descriptors) and no more active requests (pending I/O operations).

This is why a simple script that reads a file will exit after the callback runs — the file handle is closed, there are no timers, and there’s nothing left to do. Conversely, calling http.createServer() keeps the loop alive because the server handle remains active, listening for new connections.

You can force the loop to exit early with process.exit(), though this is generally discouraged as it skips cleanup. If you want to let the loop exit naturally but have handles keeping it alive, you can call handle.unref() to tell the loop not to count that handle when deciding whether to stay alive. This is commonly used for optional background timers:

const timer = setInterval(collectMetrics, 60000);
timer.unref(); // Don't keep the process alive just for this

Conclusion

The event loop is the engine that drives Node.js: a cycle of phases (timers, pending callbacks, poll, check, and close callbacks) with the microtask queues (process.nextTick and Promises) drained after every callback. This architecture lets Node handle massive concurrency on a single thread by never waiting for I/O; instead, it registers callbacks and moves on.

The practical takeaways are simple: keep individual callbacks fast, offload CPU-heavy work to worker threads, use setImmediate to yield when processing large datasets, and monitor event loop lag in production. With a solid mental model of how the loop works, you’ll write Node.js code that’s not just functional but genuinely performant under pressure.