Big O Notation

A way to assess the speed (performance) of an algorithm.

Measures how quickly the run-time grows relative to the input, as the input increases.

We are concerned with the number of operations a function performs, not the number of values it returns.

Based on the maximum possible iterations, even though the function could finish early.

A measure of the “worst case scenario”.

Time Complexity vs Space Complexity

Time complexity: the amount of time taken by an algorithm to run relative to the length of the input.

We can also measure the efficiency of algorithms in terms of space complexity: the amount of space or memory used by an algorithm relative to the length of the input.

Both can be described using Big O Notation, and space complexity is bounded by time complexity. In other words, an algorithm's space complexity can be at most equal to its time complexity, never greater.
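To make the distinction concrete, here is a minimal sketch contrasting the two. The function names are illustrative, not taken from the text above:

```javascript
// O(1) extra space: uses a fixed number of variables,
// no matter how long the input array is.
function sum(array) {
  let total = 0;
  for (let i = 0; i < array.length; i++) {
    total += array[i];
  }
  return total;
}

// O(N) extra space: builds a new array whose length
// grows in step with the input.
function doubleAll(array) {
  const doubled = [];
  for (let i = 0; i < array.length; i++) {
    doubled.push(array[i] * 2);
  }
  return doubled;
}
```

Both functions run in O(N) time, but only the second one needs O(N) additional memory.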

O(1)

Represents an algorithm whose run-time is fixed.

No matter the amount of data you feed in, the algorithm will always take the same amount of time.

Example:

function returnFirstElement(array) {
  return array[0];
}

O(N) — Linear Time

As the length of the array increases, the amount of processing required increases at the same rate.

Example:

function logArray(array) {
  for (let i = 0; i < array.length; i++) {
    console.log(array[i]);
  }
}

Big O Notation is based on the maximum possible iterations. Because of that, the following function still has an algorithmic efficiency of O(N), even though the function could finish early; that’s why some people see it as a measure of the “worst case scenario”.

function findTheMeaningOfLife(array) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === 42) {
      return i;
    }
  }
}

O(N²) — Quadratic Time

Represents an algorithm whose run-time grows in proportion to the square of the input length: double the input, and the run-time quadruples.

Let’s say we have an array of three numbers: array = [1, 2, 3].

If we pass that to the function logArray() above, then it will simply print out the numbers 1, 2 and 3.

But what about this function?

function logArrayRepeatedly(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      console.log(array[j]);
    }
  }
}

This one prints the full sequence 1, 2, 3 once for every element: three elements, nine operations in total. That is N × N, or N².

O(N³), O(N⁴) — Cubic Time, Quartic Time, and so on

If two nested loops give us O(N²), it follows that three nested loops give us O(N³), four nested loops give us O(N⁴), and so on.

Let’s return to our simple array [1, 2, 3], and pass it to the following function with a performance of O(N³):

function logArrayCubed(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      for (let k = 0; k < array.length; k++) {
        console.log(array[k]);
      }
    }
  }
}

And so the pattern continues.

As the powers increase, the run-time of each algorithm increases drastically at scale.

Why We Often Ignore Multiples

Let’s say for every item we put into our function, it processes the data twice over.

Example:

function countUpThenDown(array) {
  for (let i = 0; i < array.length; i++) {
    console.log(i);
  }
  for (let i = array.length; i > 0; i--) {
    console.log(i);
  }
}

We could call that O(2N).

At small scales, the difference is very important. But Big O Notation is concerned with how an algorithm functions at a very high scale. For that reason, we tend to ignore multiples.
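To see where the 2 in O(2N) comes from, here is a hypothetical variant of countUpThenDown that counts its own operations instead of logging them. The name and counter are illustrative only:

```javascript
// Counts the total operations performed by two back-to-back
// passes over the array: N for the first loop plus N for the
// second, giving 2N in total.
function countOperations(array) {
  let operations = 0;
  for (let i = 0; i < array.length; i++) {
    operations++; // first pass: N operations
  }
  for (let i = array.length; i > 0; i--) {
    operations++; // second pass: another N operations
  }
  return operations;
}
```

For an array of 1,000 items this returns 2,000 — but at scale, 2N and N grow at the same *rate*, which is what Big O cares about.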

Why We Often Ignore All But the Biggest Power

For the exact same reason, you’ll rarely see a function’s efficiency described as O(N² + N).

In most situations it is enough to reduce expressions to the element that has the biggest impact on the algorithm as it scales.

Example:

O(2N³ + 5N² + 4N + 7) will be reduced to O(N³)
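To see why the smaller terms can safely be dropped, it helps to evaluate each term of that expression at a large input size. This sketch is purely illustrative; the function and property names are not standard:

```javascript
// Evaluate each term of 2N³ + 5N² + 4N + 7 separately,
// so we can compare their sizes as N grows.
function terms(n) {
  return {
    cubic: 2 * n ** 3,
    quadratic: 5 * n ** 2,
    linear: 4 * n,
    constant: 7,
  };
}
```

At n = 1,000, the cubic term is 2,000,000,000, while the other three terms combined come to just 5,004,007 — roughly a quarter of one percent of the total. The bigger N gets, the more completely the largest power dominates.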

O(log N) — Logarithmic Time

One of the most common and fastest search algorithms, the so-called “Binary Search”, has an efficiency of O(log N).

As a Binary Search function processes more data, its run-time grows in proportion to log N.

Every time you double the data-set, an algorithm with performance O(log N) only has to perform one extra step.
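A standard iterative Binary Search might look like the following sketch, assuming the input array is sorted in ascending order:

```javascript
// Returns the index of target in a sorted array, or -1 if absent.
// Each iteration halves the remaining search space, which is why
// the run-time is O(log N).
function binarySearch(array, target) {
  let low = 0;
  let high = array.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (array[mid] === target) {
      return mid;
    } else if (array[mid] < target) {
      low = mid + 1; // target must be in the upper half
    } else {
      high = mid - 1; // target must be in the lower half
    }
  }
  return -1;
}
```

For an array of 1,000,000 sorted items, this needs at most about 20 comparisons, whereas a linear O(N) scan could need up to 1,000,000.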
