
Can someone give me an example of an algorithm that has O(sqrt(n)) time complexity? What does square root time complexity even mean?

vaibhav9225

6 Answers

  • Square root time complexity means that the algorithm requires O(N^(1/2)) evaluations, where N is the size of the input.
  • As an example of an algorithm that takes O(sqrt(n)) time: Grover's algorithm is a quantum algorithm for searching an unsorted database of n entries in O(sqrt(n)) time.
  • Let us take an example to understand how we can arrive at O(sqrt(N)) runtime complexity for a given problem. This is going to be elaborate, but it is interesting to understand. (The following example, in the context of answering this question, is taken from Coding Contest Byte: The Square Root Trick, a very interesting problem and an interesting trick for arriving at O(sqrt(n)) complexity.)

    • Given an array A of n elements, implement a data structure for point updates and range sum queries.

      • update(i, x)-> A[i] := x (Point Updates Query)
      • query(lo, hi)-> returns A[lo] + A[lo+1] + .. + A[hi]. (Range Sum Query)
    • The naive solution uses an array. It takes O(1) time for an update (array-index access) and O(hi - lo) = O(n) for the range sum (iterating from start index to end index and adding up).

    • A more efficient solution splits the array into length k slices and stores the slice sums in an array S.
    • The update takes constant time, because we have to update the value in A and the sum of the corresponding slice in S. In update(6, 5) we have to change A[6] to 5, which also requires changing the value of S[1] to keep S up to date.
    • The range-sum query is interesting. The elements of the first and last slice (partially contained in the queried range) have to be traversed one by one, but for slices completely contained in the range we can use the values in S directly, which gives the performance boost.
    • In query(2, 14), with slice length k = 4, we get:

 query(2, 14) = A[2] + A[3] + (A[4] + A[5] + A[6] + A[7]) + (A[8] + A[9] + A[10] + A[11]) + A[12] + A[13] + A[14] ;
 query(2, 14) = A[2] + A[3] + S[1] + S[2] + A[12] + A[13] + A[14] ;
 query(2, 14) = 0 + 7 + 11 + 9 + 5 + 2 + 0;
 query(2, 14) = 34;
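Building S is a simple one-time O(n) preprocessing pass. A minimal sketch in Python (the helper name `build_slices` is my own, not from the original article):

```python
def build_slices(A, k):
    # Sum each length-k slice of A; the last slice may be shorter.
    return [sum(A[i:i + k]) for i in range(0, len(A), k)]

print(build_slices([1, 2, 3, 4, 5], 2))  # [3, 7, 5]
```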
  • The code for update and query is:

  def update(S, A, i, k, x):
      S[i // k] = S[i // k] - A[i] + x
      A[i] = x

  def query(S, A, lo, hi, k):
      s = 0
      i = lo
      # Section 1 (sum directly from A, leading partial slice)
      while i % k != 0 and i <= hi:
          s += A[i]
          i += 1
      # Section 2 (sum from the slice array S directly, middle part)
      while i + k - 1 <= hi:
          s += S[i // k]
          i += k
      # Section 3 (sum directly from A, trailing partial slice)
      while i <= hi:
          s += A[i]
          i += 1
      return s
  • Let us now determine the complexity.
  • On average, each query takes:
    • Section 1: k/2 time on average (you iterate over at most one partial slice).
    • Section 2: n/k time, essentially the number of slices.
    • Section 3: k/2 time on average (again, at most one partial slice).
    • So, in total, we get k/2 + n/k + k/2 = k + n/k time.
  • This is minimized for k = sqrt(n): sqrt(n) + n/sqrt(n) = 2*sqrt(n).
  • So we get a O(sqrt(n)) time complexity query.
a3.14_Infinity

Prime numbers

As mentioned in some other answers, some basic number-theoretic tasks take O(sqrt(n)) time:

  1. Finding the number of divisors
  2. Finding the sum of divisors
  3. Finding Euler's totient
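As an illustration of the first task: counting divisors only needs trial division up to sqrt(n), because every divisor d <= sqrt(n) pairs with the divisor n/d >= sqrt(n). A minimal Python sketch (the function name is mine):

```python
def count_divisors(n):
    # Each divisor d with d*d < n pairs with n // d; a perfect-square
    # root is counted once. The loop runs O(sqrt(n)) times.
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

print(count_divisors(12))  # 6 divisors: 1, 2, 3, 4, 6, 12
print(count_divisors(36))  # 9 divisors (36 is a perfect square)
```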

Below I mention two advanced algorithms whose complexities also carry a sqrt(n) term.

Mo's Algorithm

Try this problem: Powerful array

My solution:

#include <bits/stdc++.h>
using namespace std;
const int N = 1E6 + 10, k = 500;

struct node {
    int l, r, id;
    bool operator<(const node &a) const {
        if(l / k == a.l / k) return r < a.r;
        else return l < a.l;
    }
} q[N];

long long a[N], cnt[N], ans[N], cur_count;
void add(int pos) {
    cur_count += a[pos] * cnt[a[pos]];
    ++cnt[a[pos]];
    cur_count += a[pos] * cnt[a[pos]];
}
void rm(int pos) {
    cur_count -= a[pos] * cnt[a[pos]];
    --cnt[a[pos]];
    cur_count -= a[pos] * cnt[a[pos]];
}

int main() {
    int n, t;
    cin >> n >> t;
    for(int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    for(int i = 0; i < t; i++) {
        cin >> q[i].l >> q[i].r;
        q[i].id = i;
    }
    sort(q, q + t);
    memset(cnt, 0, sizeof(cnt));
    memset(ans, 0, sizeof(ans));

    int curl(0), curr(0), l, r;
    for(int i = 0; i < t; i++) {
        l = q[i].l;
        r = q[i].r;

        /* This part takes O(n * sqrt(n)) time */
        while(curl < l)
            rm(curl++);
        while(curl > l)
            add(--curl);
        while(curr > r)
            rm(curr--);
        while(curr < r)
            add(++curr);

        ans[q[i].id] = cur_count;
    }
    for(int i = 0; i < t; i++) {
        cout << ans[i] << '\n';
    }
    return 0;
}

Query Buffering

Try this problem: Queries on a Tree

My solution:

#include <bits/stdc++.h>
using namespace std;
const int N = 2e5 + 10, k = 333;

vector<int> t[N], ht;
int tm_, h[N], st[N], nd[N];
inline void hei(int v, int p) {
    for(int ch: t[v]) {
        if(ch != p) {
            h[ch] = h[v] + 1;
            hei(ch, v);
        }
    }
}
inline void tour(int v, int p) {
    st[v] = tm_++;
    ht.push_back(h[v]);
    for(int ch: t[v]) {
        if(ch != p) {
            tour(ch, v);
        }
    }
    ht.push_back(h[v]);
    nd[v] = tm_++;
}

int n, tc[N];
vector<int> loc[N];
long long balance[N];
vector<pair<long long,long long>> buf;
inline long long cbal(int v, int p) {
    long long ans = balance[h[v]];
    for(int ch: t[v]) {
        if(ch != p) {
            ans += cbal(ch, v);
        }
    }
    tc[v] += ans;
    return ans;
}
inline void bal() {
    memset(balance, 0, sizeof(balance));
    for(auto arg: buf) {
        balance[arg.first] += arg.second;
    }
    buf.clear();
    cbal(1,1);
}

int main() {
    int q;
    cin >> n >> q;
    for(int i = 1; i < n; i++) {
        int x, y; cin >> x >> y;
        t[x].push_back(y); t[y].push_back(x);
    }
    hei(1,1);
    tour(1,1);
    for(int i = 0; i < ht.size(); i++) {
        loc[ht[i]].push_back(i);
    }
    vector<int>::iterator lo, hi;
    int x, y, type;
    for(int i = 0; i < q; i++) {
        cin >> type;
        if(type == 1) {
            cin >> x >> y;
            buf.push_back(make_pair(x,y));
        }
        else if(type == 2) {
            cin >> x;
            long long ans(0);
            for(auto arg: buf) {
                hi = upper_bound(loc[arg.first].begin(), loc[arg.first].end(), nd[x]);
                lo = lower_bound(loc[arg.first].begin(), loc[arg.first].end(), st[x]);
                ans += arg.second * (hi - lo);
            }
            cout << tc[x] + ans/2 << '\n';
        }
        else assert(0);

        if(i % k == 0) bal();
    }
}
kaushal agrawal

There are many cases. These are a few problems which can be solved in O(sqrt(n)) complexity (better bounds may also be possible):

  • Finding whether a number is prime.
  • Grover's Algorithm: allows searching (in the quantum context) an unsorted input in time proportional to the square root of its size. link
  • Factorization of a number.

There are many problems you will face which will demand a sqrt(n)-complexity algorithm.

As an answer to the second part:

sqrt(n) complexity means that if the input size to your algorithm is n, then it performs approximately sqrt(n) basic operations (like **comparison** in the case of sorting). Then we can say that the algorithm has sqrt(n) time complexity.

Let's analyze the third problem; it will make this clear.

Let n be a positive integer, and suppose x and y are positive integers such that x*y = n. Whatever the values of x and y, at least one of them is at most sqrt(n): if both were greater, i.e. x > sqrt(n) and y > sqrt(n), then x*y > sqrt(n)*sqrt(n) = n, giving n > n, a contradiction.

So if we check the candidates from 2 to sqrt(n), we will have considered all the factors (1 and n are trivial factors, and each divisor d found below sqrt(n) pairs with n/d above it).

Code snippet:

   int n;
   cin >> n;
   cout << 1 << " " << n << " ";    // trivial factors
   for(int i = 2; i * i <= n; i++)  // equivalently: i <= sqrt(n)
       if(n % i == 0)
           cout << i << " ";

Note: You might think that we could also achieve the above behaviour by looping from 1 to n. Yes, that's possible, but who wants to run in O(n) a program that can run in O(sqrt(n))? We always look for the best one.

Go through Cormen's Introduction to Algorithms.

I will also request you to read the following Stack Overflow questions and answers; they will clear up all the doubts for sure :)

  1. Are there any O(1/n) algorithms?
  2. Plain English explanation of Big-O
  3. Which one is better?
  4. How do you calculate big-O complexity?
user2736738
  • This answer is actually very wrong. Time complexity is measured relative to the _size_ of the input, not its _value_. If your function takes a number, the _size_ of its input is the log of that number. Therefore, the time complexity of factorization is `O(sqrt(2^n))` = `O(2^(n/2))`; it is exponential. – kirelagin Jun 28 '17 at 13:54
  • @kirelagin: I didn't say that it depends on the value, but regarding the input size I would like some elaboration. I am open to learning. So you are saying this is exponential complexity, right? – user2736738 Jun 29 '17 at 06:33
  • 1
    Yes, your function takes a number `n`. Numbers in computer are represented using binary, therefore you need `log2(n)` bits to represent `n`, so the _size_ of the input of your function is `s = log2(n)`, which means that `n ≈ 2^s`. As a result its time complexity is `O(sqrt(n)) = O(sqrt(2^s)) = O(2^(s/2))`, where `s` is the _size_ of the input, which is exponential. – kirelagin Jun 29 '17 at 11:46
  • @kirelagin: Most of the time we see this kind of discussion with n as the value of the number; that's why I said that. If you feel like it, could you suggest an edit regarding the wording? Here I have assumed it. – user2736738 Jun 29 '17 at 12:03

This link provides a very basic, beginner-level understanding of O(sqrt(n)) time complexity. It is the last example in the video, but I would suggest that you watch the whole video.

https://www.youtube.com/watch?v=9TlHvipP5yA&list=PLDN4rrl48XKpZkf03iYFl-O29szjTrs_O&index=6

The simplest example of an O(sqrt(n)) time complexity algorithm in the video is:

p = 0;

for(i = 1; p <= n; i++) {
    p = p + i;
}
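The loop runs until 1 + 2 + ... + i exceeds n, i.e. until i(i+1)/2 > n, which happens after roughly sqrt(2n) iterations. A small Python check mirroring the loop above (a sketch, with my own function name):

```python
import math

def count_iterations(n):
    # Same loop as above: p accumulates 1 + 2 + ... + i,
    # so the loop stops once i*(i+1)/2 exceeds n.
    p, i, steps = 0, 1, 0
    while p <= n:
        p += i
        i += 1
        steps += 1
    return steps

for n in (100, 10_000, 1_000_000):
    # iteration count tracks sqrt(2n) closely
    print(n, count_iterations(n), round(math.sqrt(2 * n)))
```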

Mr. Abdul Bari is renowned for his simple explanations of data structures and algorithms.

myverdict

Primality test

Solution in JavaScript

const isPrime = n => {
    if (n <= 1) return false;          // 0, 1, and negatives are not prime
    for(let i = 2; i * i <= n; i++) {  // i * i <= n avoids recomputing Math.sqrt(n)
        if(n % i === 0) return false;
    }
    return true;
};

Complexity

O(N^(1/2)), because for a given value of n you only need to check whether it is divisible by the numbers from 2 up to its square root.

Siri

JS Primality Test

O(sqrt(n))

A slightly more performant version; thanks to Samme Bae for enlightening me with this.

function isPrime(n) {
  if (n <= 1)
      return false;
  if (n <= 3)
      return true;

  // Eliminates multiples of 2 and 3 (e.g. 4, 6, 8, 9, and 10)
  if (n % 2 === 0 || n % 3 === 0)
      return false;

  for (let i = 5; i * i <= n; i += 6) {
      if (n % i === 0 || n % (i + 2) === 0)
          return false;
  }
  return true;
}

isPrime(677);
Andy Young