Hash tables have O(1) average and amortized case complexity, but suffer from O(n) worst-case time complexity. [And I think this is where your confusion is.]
Hash tables suffer from O(n) worst-case time complexity for two reasons:
- If too many elements are hashed into the same key, looking inside that key can take O(n) time (see the sketch after this list).
- Once a hash table has passed its load factor, it has to rehash [create a new, bigger table and re-insert each element into the new table].
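To make the collision case concrete, here is a minimal Java sketch (my own illustration, using a hypothetical BadKey class): its hashCode() always returns the same value, so every entry lands in the same bucket and a lookup has to walk that bucket. (Modern JDKs soften this by turning long collision chains into balanced trees, so the degradation may be O(log n) rather than O(n), but the principle is the same.)

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key used only for this illustration: its hash is constant,
    // so every instance collides with every other one.
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }

        @Override public int hashCode() { return 42; }   // all keys hash to the same bucket
        @Override public boolean equals(Object o) {
            return (o instanceof BadKey) && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new BadKey(i), i);                    // every entry lands in one bucket
        }
        long start = System.nanoTime();
        map.get(new BadKey(9_999));                       // has to search the whole collision chain
        System.out.println("lookup took " + (System.nanoTime() - start) + " ns");
    }
}
```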
However, it is said to be O(1) average and amortized case because:
- It is very rare that many items will be hashed to the same key [if you chose a good hash function and you don't have too big a load factor].
- The rehash operation, which is O(n), can happen at most once per n/2 ops, which are all assumed to be O(1). Thus if you sum the average time per op, you get (n*O(1) + O(n)) / n = O(1) (the simulation after this list shows this).
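Here is a rough sketch (assuming doubling growth and a load factor of 0.5, which match the n/2 figure above) that just counts the work done by n insertions when every rehash re-inserts all elements. The total stays within a small constant multiple of n, which is exactly why the amortized cost per insertion is O(1):

```java
public class AmortizedDemo {
    public static void main(String[] args) {
        int n = 1_000_000;   // number of insertions to simulate
        int capacity = 8;    // assumed initial table size
        long work = 0;       // total "element touches" performed

        for (int size = 1; size <= n; size++) {
            work++;                        // the insertion itself: O(1)
            if (size > capacity / 2) {     // load factor 0.5 exceeded
                work += size;              // rehash: re-insert every element...
                capacity *= 2;             // ...into a table twice as big
            }
        }
        // Prints a small constant (around 3), i.e. O(1) amortized per insertion.
        System.out.printf("total work = %d, per insertion = %.2f%n",
                          work, (double) work / n);
    }
}
```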
Note that because of the rehashing issue, real-time applications and applications that need low latency should not use a hash table as their data structure.
EDIT: Another issue with hash tables: cache
Another place where you might see a performance loss in large hash tables is cache performance. Hash tables suffer from bad cache performance, and thus for large collections the access time might take longer, since you need to reload the relevant part of the table from memory back into the cache.
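If you want to see this effect yourself, here is a very rough sketch (not a rigorous benchmark; numbers will vary by machine and JVM) that times random lookups in a small map that fits in the CPU caches versus a much larger one that does not. Both are "O(1)", yet the per-lookup time for the large map is usually noticeably higher because most accesses miss the caches:

```java
import java.util.HashMap;
import java.util.Random;

public class CacheDemo {
    static HashMap<Integer, Integer> build(int size) {
        HashMap<Integer, Integer> map = new HashMap<>(size * 2);
        for (int i = 0; i < size; i++) map.put(i, i);
        return map;
    }

    static long timeLookups(HashMap<Integer, Integer> map, int rounds, Random rnd) {
        int size = map.size();
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            sink += map.get(rnd.nextInt(size));   // random access defeats prefetching
        }
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println(sink); // keep the JIT from discarding the loop
        return elapsed / rounds;                  // average ns per lookup
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        // ~1K entries: fits comfortably in the CPU caches.
        System.out.println("small map: " + timeLookups(build(1_000), 2_000_000, rnd) + " ns/lookup");
        // ~4M entries: far larger than the caches, so most lookups hit main memory.
        System.out.println("large map: " + timeLookups(build(4_000_000), 2_000_000, rnd) + " ns/lookup");
    }
}
```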