Today's lesson is about the Big O rules. I know, right? Big O definitely rules! I hope this lesson is useful for you.

### Recap: Big O

Big O notation is used to represent how the time taken or space consumed by an algorithm

**increases as the length of the input increases.**

### 1: Always Worst Case

Big O always considers the worst case of an algorithm. Let me give an example rather than a complicated explanation.

E.g.: Suppose you get a list of names and have to check whether the list contains your name. You would loop through it, and if you find the name you would stop the loop and return **True or False**.

**What would be the Big O here?**

The Big O here would be O(n). But wait! What if your name was at the very beginning? Then it would be O(1), right?

Wrong. Big O always considers the worst case (here, the name being at the very end of the list, or not in it at all). By measuring the worst case, Big O gives a guarantee: the algorithm will never be slower than that.
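A minimal sketch of this search (the function name and the sample names are just illustrative):

```python
def contains_name(names, target):
    """Linear search: return True if target is in names."""
    for name in names:
        # Best case: target is the first element -> one step, O(1).
        # Worst case: target is last or absent -> n steps, O(n).
        if name == target:
            return True
    return False

# Big O reports the worst case, so this search is O(n) either way.
print(contains_name(["Ann", "Bob", "Cara"], "Cara"))  # True
print(contains_name(["Ann", "Bob", "Cara"], "Dave"))  # False
```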

### 2: Remove Constants

Take the name-searching algorithm from before. But this time, right before you start searching, you store the first 5 names (if they exist) in a separate list (for whatever reason). What will the Big O be? **O(n + 5), right?**

Wrong. With Big O we remove all the constants because, as the 1st rule states, it's "always worst case". In the worst case you might have a million names, and a constant step like 5 just doesn't add any significance to the total time taken. **It's just 0.0005% of the time taken by 1 million inputs.**
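A sketch of that version, assuming the same linear search as before (the function name is mine):

```python
def search_with_prefix(names, target):
    """Copy the first 5 names, then linear-search for target."""
    first_five = names[:5]        # constant work: at most 5 copies -> O(5)
    for name in names:            # linear scan -> O(n)
        if name == target:
            return True, first_five
    return False, first_five

# Total work is O(n + 5), which simplifies to O(n):
# as n grows toward a million, the fixed 5 steps vanish into noise.
```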

### 3: Different Inputs Should Have Different Variables

We need to consider the Big O for each variable input. E.g.: if we now have to loop through two lists of lengths a and b, the Big O would be O(a + b), not O(n), because the two lists can grow independently.
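A small sketch to make this concrete (counting steps instead of doing real work, so the cost is visible):

```python
def touch_all(list_a, list_b):
    """Visit every item of both lists; return the number of steps taken."""
    steps = 0
    for _ in list_a:   # a iterations -> O(a)
        steps += 1
    for _ in list_b:   # b iterations -> O(b)
        steps += 1
    return steps       # a + b steps in total -> O(a + b)

print(touch_all([1, 2, 3], [4, 5]))  # 5
```

Since `list_a` and `list_b` are independent inputs, neither term can be dropped: if one list is tiny and the other huge, the huge one dominates, but we don't know which in advance.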

### 4. Drop Non-Dominant Terms

Suppose you get an input n (an integer). First you have to print all the numbers from 1 to n, n times over. Then you have to add up each of the numbers from 1 to n.

The former would be an O(n²) task while the latter is O(n). So the time complexity would be O(n² + n), right?

Wrong. In Big O we keep only the dominant term among terms with the same variable. At a larger input scale, n (take 100) **is much smaller than n² (100² = 10,000). Here n is just 1% of n²**, so O(n² + n) simplifies to O(n²).
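The two tasks above can be sketched by counting steps instead of printing (the function name is mine, chosen for the example):

```python
def count_steps(n):
    """Return (steps for the nested task, steps for the single loop)."""
    square_steps = 0
    for _ in range(n):        # repeat n times...
        for _ in range(n):    # ...a pass over 1..n each time -> O(n^2)
            square_steps += 1

    linear_steps = 0
    for _ in range(n):        # one pass to add 1..n -> O(n)
        linear_steps += 1

    return square_steps, linear_steps

# For n = 100: 10000 steps vs 100 steps.
# The n term is only 1% of n^2, so O(n^2 + n) collapses to O(n^2).
print(count_steps(100))  # (10000, 100)
```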