Time Complexity and The Big-O

Alex Rios
4 min read · Nov 24, 2020


For a newcomer, the concepts of Big-O Notation and Time Complexity can seem a little intimidating. When I first encountered these terms, I felt discouraged because I didn’t understand their impact or how to apply them. It wasn’t until I started coding daily that I realized how fundamental they are and how much my solutions could improve if I followed their rules. The goal of this post is to educate and illustrate how Big-O affects our daily programming lives.

What is Big-O Notation?

In short, Big-O Notation describes how fast functions grow. The letter O is used because in mathematics the rate of growth of a function is called its order. It is a mathematical expression that reflects the time (or the number of steps) it takes to complete a problem of size n.

Analogy

Big-O Notation is important to consider when working with big data sets.
As an example, say you have a phone book and you want to search for a particular name. You could go page by page, reading line by line, trying to find a match. In this analogy, the phone book is our working data set, and the number of names inside it is of size n. As you can see, with this approach the time to solve the problem is proportional to the number of names in the phone book. What if the phone book had a hundred entries, or a thousand, or a million?! Is it possible to design an approach or method that would take less time to solve? This is the question that Big-O tries to answer.

Big-O Illustrated

Below is a Big-O Notation chart that visualizes how the number of operations grows with respect to the input size.

Below is a chart that pairs each complexity with its common name: O(1) constant, O(log n) logarithmic, O(n) linear, O(n log n) linearithmic, O(n²) quadratic, O(2ⁿ) exponential, and O(n!) factorial.

Examples with Ruby

The following section demonstrates how these operations are represented in Ruby code. If you are a practicing programmer, chances are that you have encountered, and even implemented, these algorithms in your coding journey.

O(1)

O(1) is an expression that operates in constant time. This means that the operation does not depend on the size of the input data: it takes the same amount of time no matter how large the data set is.
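The original code snippet here is missing, so as a minimal sketch (the method name `first_element` is my own), a constant-time operation in Ruby might look like:

```ruby
# Fetching an element by index is O(1): array lookup takes the
# same time whether the array has three elements or a million.
def first_element(array)
  array[0]
end

first_element([1, 2, 3])             # => 1
first_element((1..1_000_000).to_a)   # => 1, same amount of work
```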

O(n)

O(n) is an expression that represents linear time. This means that the time to solve the problem scales up one-to-one with the number of elements n. You can easily identify this whenever you apply a single loop over the input in your solution. This is how it would be represented in Ruby code:
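The snippet itself did not survive, so here is a sketch of a typical linear scan (the method name `contains?` is my own choice):

```ruby
# A linear search is O(n): in the worst case every element is
# visited once, so doubling the array doubles the iterations.
def contains?(array, target)
  array.each do |element|
    return true if element == target
  end
  false
end

contains?([3, 5, 7], 5)  # => true
contains?([3, 5, 7], 4)  # => false
```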

O(n²)

O(n²) is an expression that describes quadratic time, read as “O of n squared.” The running time of this operation is directly proportional to the square of the size of the input data set. An example of this is implementing nested for-loops in a function:
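A sketch of the nested-loop pattern described above (the method name `all_pairs` is illustrative):

```ruby
# Nested loops over the same input are O(n²): for n elements,
# the inner block runs n * n times.
def all_pairs(array)
  pairs = []
  array.each do |a|
    array.each do |b|
      pairs << [a, b]
    end
  end
  pairs
end

all_pairs([1, 2]).length  # => 4, i.e. 2 * 2 iterations
```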

O(log n)

O(log n) is an expression that represents logarithmic time. An operation is said to have logarithmic time complexity when it reduces the size of the input data in each step. As a result, it’s not necessary to look at all values of the input data. A classic example is binary search on a sorted array, which halves the remaining search space with every comparison.
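As with the other sections, the original snippet is missing; a minimal binary search sketch in Ruby might look like:

```ruby
# Binary search is O(log n): each comparison halves the remaining
# search space, so even a million elements need about 20 steps.
# Requires the array to be sorted. Returns the index, or nil.
def binary_search(sorted, target)
  low = 0
  high = sorted.length - 1
  while low <= high
    mid = (low + high) / 2
    return mid if sorted[mid] == target
    if sorted[mid] < target
      low = mid + 1   # discard the lower half
    else
      high = mid - 1  # discard the upper half
    end
  end
  nil
end

binary_search([1, 3, 5, 7, 9], 7)  # => 3
binary_search([1, 3, 5, 7, 9], 4)  # => nil
```

Ruby also ships `Array#bsearch` for this, but spelling the loop out makes the halving visible.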

Resources

https://web.mit.edu/16.070/www/lecture/big_o.pdf
https://www.bigocheatsheet.com/
https://www.honeybadger.io/blog/a-rubyist-s-guide-to-big-o-notation/#authorDetails
