Hello everyone, welcome back to my blog😍.
Today we're going to briefly discuss what Big O notation means. Not too long ago I wrote about The Art of Writing Good Code: Scalability, Readability, and Efficiency, and following on from that I recently started learning about Big O notation. I believe it is one of the most important topics for any software developer.
When writing a computer program, we want to make sure it runs efficiently, particularly when dealing with large volumes of data. Big O notation is a way of expressing how an algorithm's running time grows as the size of the input increases. It is a mathematical notation that describes the upper bound of an algorithm's growth rate. In layman's terms, it tells us how quickly the work an algorithm does grows as its input grows.
What exactly is Big O Notation?
Big O notation is used to express an algorithm's time complexity. It gives us an idea of how long an algorithm takes to run relative to the size of its input. It is written as the letter "O" followed by a function representing the algorithm's growth rate.
For example, if an algorithm takes 1 second to process a 10-byte input and 10 seconds to process a 100-byte input, its time complexity is O(n), where n is the size of the input. This means the algorithm's running time grows linearly with the size of the input.
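To make that concrete, here is a minimal Python sketch of a linear-time function (my own illustrative example, not from any particular library). Because it has to touch every element exactly once, doubling the input roughly doubles the work:

```python
def sum_list(numbers):
    """O(n): visits every element exactly once, so the running time
    grows in direct proportion to the length of the input."""
    total = 0
    for value in numbers:  # one step per element
        total += value
    return total


print(sum_list([1, 2, 3, 4, 5]))  # 15
```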
Types of Big O Notation
There are several types of Big O notation that we can use to describe the time complexity of an algorithm. The most common ones are:
O(1): This is an algorithm that takes a fixed amount of time to complete regardless of the size of the input. Simple arithmetic operations, accessing an array element by index, or accessing an object's attribute are all examples of O(1) algorithms. A big O notation of O(1) is considered to be good.
O(n): This describes an algorithm that takes linear time to finish, with the running time increasing in proportion to the size of the input. Linear search, counting the occurrences of an element in an array, and iterating through a list are all examples of O(n) algorithms. A big O notation of O(n) is considered to be fair.
O(n^2): This depicts a quadratic time algorithm, with the time taken growing in proportion to the square of the input size. Nested loops over the same input, such as comparing every pair of elements, are typical examples. A big O notation of O(n^2) is considered to be horrible (not good!)
O(log n): This depicts a logarithmic time algorithm, where the time required grows only slowly with the amount of input because each step cuts the remaining work down (usually in half). Binary search is the classic example. A big O notation of O(log n) is considered to be good.
O(n log n): This is an algorithm that takes n log n time to execute, with the time increasing faster than O(n) but slower than O(n^2). Efficient sorting algorithms such as merge sort fall into this category (see the sketch after this list).
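To tie the list together, here is a small Python sketch of the other common classes (O(n) was shown earlier). The function names are illustrative examples I chose for this post, not standard APIs:

```python
def get_first(items):
    """O(1): a single index access takes constant time, no matter how big the list is."""
    return items[0]


def has_duplicates(items):
    """O(n^2): the nested loops compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def binary_search(sorted_items, target):
    """O(log n): every step halves the search range, so a million elements
    need only about 20 steps."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1


def merge_sort(items):
    """O(n log n): the list is halved log n times, and each level does O(n) merging work."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 3, 8, 1]))        # [1, 3, 5, 8]
print(binary_search([1, 3, 5, 8], 5))  # 2
```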
Big O notation is a simple and powerful tool for analyzing the efficiency of algorithms. By understanding the time complexity of an algorithm, we can make informed decisions about which algorithm to use for a given problem.
When assessing an algorithm, we should concentrate on the worst-case situation, meaning the input that takes the longest to process, and choose the algorithm with the best time complexity for the problem at hand. An algorithm that requires O(n^2) time in the worst case may be inefficient for large inputs even if it performs well for small ones.
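As a quick illustration of worst-case thinking, here is a tiny linear search sketch (again, just an example of my own). Its best case is O(1) when the target happens to be first, but Big O analysis focuses on the worst case, where the whole list has to be scanned:

```python
def linear_search(items, target):
    """Worst case O(n): if the target is last or missing, every element is checked."""
    for index, value in enumerate(items):
        if value == target:
            return index  # best case: the target is the very first element
    return -1  # worst case: all n elements were checked


print(linear_search([3, 7, 9, 12], 100))  # -1, after checking all four elements
```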
It is okay if you haven't gotten the big O gist yet; it took me a little while to catch it too. You can dive deeper into it by watching this totally free resource (it's a nice one!).
See you in my next blog. Byeeee😘.