[Faster Loops] for .. of vs for .. i++

Comparing the performance of different ways to iterate over an array

TLDR;

  • for .. of is slow

  • for (let i = 0; i < array.length; i++) is faster

  • should you use for .. i++ everywhere because it’s fast? God, please no! Sometimes it can be very handy though

  • use for .. of in most cases

  • use for .. i++ when you work with very large arrays, with a very large number of smaller arrays, or when you need to squeeze the last drop of that perf juice from your code

  • some real-world examples are listed at the end.

Welcome to the first issue of the Faster JavaScript newsletter!

For most of my career I have focused on JavaScript tooling and developing data-intensive applications. These projects often involve working with massive amounts of data, and performance is critical: CLIs should run fast, and UIs need to stay responsive, even when processing gigabytes of data.

In this newsletter I plan to share the patterns and practices I discovered along the way and try to explain what happens under the hood using visualizations.

My focus is primarily on low-level performance optimizations rather than high-level architecture.

For the first issue of the newsletter, I want to start with something simple: comparing different ways to iterate over an array of elements and looking at their performance.

For our example use case we have one million random numbers between 0 and 100, and we want to determine how many of these numbers are greater than 50.

You can find the code here

// create an array with random numbers in a range from 0 to 100
// make it big enough to measure in milliseconds,
// but not so big that we have to wait
// a whole minute for it to run

const array = new Array(1000000)
	.fill(null)
	.map((_) => Math.floor(Math.random() * 100));

for (let i = 0; i < array.length; i++)

Let’s start with the plain and simple for loop with a counter.

const THRESHOLD = 50;

const forLoop = () => {
  let result = 0;

  // start the timer named 'for loop'
  console.time('for loop');

  for (let i = 0; i < array.length; i++) {
    if (array[i] > THRESHOLD) {
      result += 1;
    }
  }

  // finish the timer and print elapsed time
  console.timeEnd('for loop');

  return result;
};

forLoop();

If I run it on an M1 MacBook Air, it usually finishes in about 5ms:

for loop: 4.861ms

for .. of

Now let’s compare it with for .. of loops.

const THRESHOLD = 50;

const forOf = () => {
  let result = 0;

  console.time('forOf');

  for (const value of array) {
    if (value > THRESHOLD) {
      result += 1;
    }
  }

  console.timeEnd('forOf');

  return result;
};

forOf();

On an M1 MacBook Air it usually takes about 16ms:

forOf: 15.837ms

This is about 3x slower!

The results are usually pretty consistent, and I generally see a similar performance difference when working on real projects.

Why is one so much slower than the other? Let’s try to dig into it.

The first concept we need to understand is process memory. It’s the chunk of RAM allocated to the process we’re running (whether it’s a Node script or a browser tab).

You can think of it as a massive sparse array containing all the things our code works with, like the code itself, functions, variable values, stack traces, constants, etc.

Generally, elements in memory are indexed with hexadecimal addresses prefixed with 0x.

rough visualisation of process memory

When we create the initial const array, the actual variable is a tiny element near the beginning of our memory space (the stack) called a reference (or pointer). It points to some other element in memory, usually far to the right side of our memory (the heap). That element is the first element of const array (the number 51 in the pic), followed by the rest of the array’s contents (25, 67, 93, …).

To get the nth element of the array, all we need to do is follow that pointer to the first element of the array and then shift right n times. E.g. if we want the element at index 2, we follow the pointer to 0x500 and then shift right 2 times, which takes us to 0x502, which gives us 67.

js array representation in memory
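
JavaScript never exposes these addresses to us, but here is a tiny sketch of the arithmetic the engine does on our behalf, using the made-up 0x500 base address from the picture (BASE_ADDRESS and addressOf are purely illustrative):

// hypothetical sketch: the engine does this math for us,
// JS code never sees raw addresses
const BASE_ADDRESS = 0x500;                        // made-up location of the first element
const addressOf = (index) => BASE_ADDRESS + index; // "shift right" index times

console.log(addressOf(0).toString(16)); // '500' -> holds 51, the first element
console.log(addressOf(2).toString(16)); // '502' -> holds 67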

side note: this is what’s called “passing by reference”. When you pass an array to another function, what you’re actually passing is the small pointer, while the actual data lives somewhere else

The key point here is that working with memory (inserting, accessing, and removing values) takes time, and the more you have to do it, the slower your code is.

Let’s now visualize what memory looks like when we run the for loop with a counter.

There is not much really going on.

  • First, we allocate an i variable with an initial value of 0.

  • We also reserve some memory for our value variable that will hold array elements as we iterate.

  • With every iteration, we access the array by reading its i-th element and copying it into the memory space we allocated for value.

for .. i++ loop visualization
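
To tie the picture back to code, here is the same counter loop again with those memory operations called out as comments (the explicit value variable is only there to mirror the visualization):

const forLoopAnnotated = () => {
  let result = 0;

  // allocate `i` with an initial value of 0
  for (let i = 0; i < array.length; i++) {
    // read the i-th element and copy it into the slot reserved for `value`
    const value = array[i];

    if (value > THRESHOLD) {
      result += 1;
    }
  }

  return result;
};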

Let’s now do the same for for .. of.

Symbol.iterator

for .. of looks much simpler, but there's hella stuff going on under the hood. These loops use iterators, which come with a runtime overhead.

In JavaScript, any object can become iterable by defining an iterator function under the Symbol.iterator key. Calling that function produces an iterator with a next() function, and every time next() is called, it returns an object containing the next value (if present) and a flag that tells us whether there are any elements left.

An iterator for an array looks something like this:

const numberArray = [1, 2, 3, 4, 5];

// replace the default array iterator with a hand-written one
numberArray[Symbol.iterator] = function() {
  let index = 0;

  return {
    next: () => {
      if (index < this.length) {
        // there are elements left: return the current one and advance
        return { value: this[index++], done: false };
      } else {
        // we're past the end: signal the loop to stop
        return { done: true };
      }
    }
  };
};

for (const number of numberArray) {
  console.log(number);  // Logs 1, 2, 3, 4, 5
}

There are two main factors that can slow down the execution:

  1. Calling functions for every element.

Functions in JavaScript are not free and come with some overhead. Each time a function is called, a new execution context must be created for that function and stored on the execution context stack. An execution context includes things like local variables, function arguments, and what this refers to. In our case, we will be creating a new execution context for every single element in our array.

  2. It constructs and returns objects.

next() still needs to construct an object with the value and the flag. That object needs to be created, stored somewhere in memory, and accessed later by the caller, resulting in overhead.
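
You can see both points by driving the array's built-in iterator by hand (just a quick check in the console, not a benchmark):

// grab the built-in iterator of our array and call next() manually
const iterator = array[Symbol.iterator]();

const first = iterator.next();  // one function call per element...
const second = iterator.next(); // ...and another one

console.log(first);             // { value: <first element>, done: false }
console.log(second);            // { value: <second element>, done: false }
console.log(first === second);  // false: each call allocated a fresh object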

Let’s try to visualize the process of using an iterator.

  • We start off by creating a new iterator (the object holding the next() function) by calling ARRAY[Symbol.iterator] on our array.

  • Then we reserve some memory for our value.

  • With every iteration:

    • We call the next() function. Calling it creates a new execution context that captures the value of this, as well as the value of the index variable from our ARRAY[Symbol.iterator] function.

    • Then we access the array by getting the element with the index captured in our index variable.

    • Now we have the value, but that's still not it. We still need to return it back to the loop by constructing a {value, done: false} object that we also store in memory.

    • Once the function finishes execution and returns the value, we delete its execution context from the memory and assign that returned value to our initial value variable.

for .. of loop visualization

As you can see, there is a lot more happening, which is likely what makes for .. of so much slower.
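
In code, the for .. of loop from earlier roughly expands into something like this (a simplified sketch of the iterator protocol; real engines add more bookkeeping, and forOfDesugared is just a name for this illustration):

const forOfDesugared = () => {
  let result = 0;

  // what `for (const value of array)` does for us behind the scenes
  const iterator = array[Symbol.iterator]();
  let step = iterator.next();    // first { value, done } result object

  while (!step.done) {
    const value = step.value;    // copy the value out of the result object

    if (value > THRESHOLD) {
      result += 1;
    }

    step = iterator.next();      // allocate the next { value, done } object
  }

  return result;
};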

Disclaimer:

  • This is a super-simplified visualization of what's actually happening.

  • Modern JS engines, such as V8, are capable of optimizing code to the point of zero overhead. Without these optimizations, a for .. of loop would likely run 20 times slower or worse.

  • Comparing code performance by simply running it in a browser or as a Node script is not the most scientific method. There are multiple factors that can affect runtime execution, such as the JIT, Node warming up, and other processes competing for resources. That said, this specific case reflects my experience in real projects.

  • I did not mention .forEach, but its performance seems to be close to that of for .. of.

Should you not use for .. of?

You should absolutely use for .. of. Most of the time it’s a much cleaner API with less noise. For most use cases the difference we’re talking about is 1 microsecond vs 3 microseconds, and it just doesn’t matter, especially if the work done inside the loop is orders of magnitude larger than the iteration itself.

Replacing all for .. of with for .. i++ would be an example of premature optimization.

There are some cases, however, where using an i++ loop can come in handy, e.g.:

  • when you work with very large arrays

  • when you work with smaller arrays but iterate over them very often (think O(n^2) algorithms; see the sketch below)
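
As a sketch of that second case: a naive O(n^2) pairwise comparison runs its inner check millions of times even for a few thousand elements, so any per-iteration overhead gets multiplied by that factor (closePairs here is a made-up example, not from a real project):

// hypothetical example: count pairs of values that are within maxDistance of each other
// for 5,000 values the inner check runs ~12.5 million times
const closePairs = (values, maxDistance) => {
  let count = 0;

  for (let i = 0; i < values.length; i++) {
    for (let j = i + 1; j < values.length; j++) {
      if (Math.abs(values[i] - values[j]) <= maxDistance) {
        count += 1;
      }
    }
  }

  return count;
};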

Some real world examples that come to mind:

  • Implementations of common algorithms in JS: Think of compression algorithms, audio/video processing, graph/tree algorithms, and DOM traversals. These will likely have a single for loop that is called millions of times per second.

  • Anything compiler-related: parsing and transforming ASTs, codemods, and bundlers (such as Babel, Prettier, Jest, TypeScript, Webpack, Parcel, etc.). Think about running some of these tools on a considerably large project and then think about the total number of characters in all your project files combined that you need to iterate over.

  • Large datasets in the UI: For example, if you have 15 years' worth of stock price data and you want to render a price chart and be able to instantly zoom in/out at random timespans, you cannot render 10 million datapoints in the UI. You would need to aggregate them. And to aggregate them, you would likely need to iterate a lot (see the sketch after this list).

  • Data visualization: If you need to display a lot of information frequently, such as charts, diagrams, graph visualizations, or tables with a large amount of data.
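
For the stock chart case above, a minimal sketch of that kind of aggregation could look like this (downsampleToBuckets is a made-up helper, not a real library function):

// hypothetical downsampling: squash millions of points into a fixed number of buckets
// so the chart only has to render `bucketCount` values
const downsampleToBuckets = (points, bucketCount) => {
  const bucketSize = Math.ceil(points.length / bucketCount);
  const buckets = [];

  for (let start = 0; start < points.length; start += bucketSize) {
    const end = Math.min(start + bucketSize, points.length);
    let max = -Infinity;

    // hot inner loop: runs once per datapoint, millions of times in total
    for (let i = start; i < end; i++) {
      if (points[i] > max) {
        max = points[i];
      }
    }

    buckets.push(max);
  }

  return buckets;
};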
