
Please check whether I got my answers right:

1 Suppose that you need to maintain a collection of data whose contents are fixed— i.e., you need to search for and retrieve existing items, but never need to add or delete items. Although the collection of data may be quite large, you may assume that it can fit in the computer’s memory. Which of the following data structures is the most efficient one to use for this task?

A. a sorted array

B. a linked list

C. a binary search tree

D. a queue

E. all of the above perform the same in this case

My answer is: C, a binary search tree, because its access (get) and search time complexity is O(log n).
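As the comments below point out, a sorted array also gives O(log n) lookups via binary search, and with no per-node pointer overhead, which matters when the collection never changes. A minimal sketch (the plate values are made up for illustration):

```java
import java.util.Arrays;

public class SortedArraySearch {
    public static void main(String[] args) {
        // Fixed collection: sort once, then every lookup is O(log n),
        // with no pointer overhead per element (unlike a BST or linked list).
        int[] plates = {42, 7, 99, 15, 23};
        Arrays.sort(plates);                        // one-time cost: O(n log n)
        int idx = Arrays.binarySearch(plates, 23);  // O(log n) per lookup
        System.out.println(idx >= 0);               // true: 23 is present
    }
}
```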

2 Which of the following data structures is most appropriate for situations in which you need to efficiently manage (key, value) pairs that are stored on disk?

A. an array

B. a linked list

C. a binary search tree

D. a Map

E. a hash table

My answer is: E, a hash table, because its access (get) is O(n/2) and its search time complexity is O(1).
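For comparison, a sketch of an in-memory hash-table lookup (the keys and values here are invented). Note, as one comment below suggests, that real on-disk key/value stores usually favor B-trees, because a B-tree keeps nearby keys in the same disk block and supports range scans:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyValueLookup {
    public static void main(String[] args) {
        // In-memory hash table: average O(1) get/put per key.
        // On disk, databases typically use B-trees instead, since each
        // node maps to one disk block and reads are the dominant cost.
        Map<String, String> owners = new HashMap<>();
        owners.put("ABC-123", "Alice");
        owners.put("XYZ-789", "Bob");
        System.out.println(owners.get("ABC-123"));   // Alice
    }
}
```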

3 A police department wants to maintain a database of up to 1800 license-plate numbers of people who receive frequent tickets so that it can be determined very quickly whether or not a given license plate is in the database. Speed of response is very important; efficient use of memory is also important, but not as important as speed of response. Which of the following data structures would be most appropriate for this task?

A. a sorted linked list

B. a sorted array with 1800 entries

C. a hash table using open addressing with 1800 entries

D. a hash table using open addressing with 3600 entries

E. a hash table using open addressing with 10000 entries

My answer is: I think a hash table, but I do not know the meaning of "entries".
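To illustrate what "entries" means here, below is a minimal linear-probing (open-addressing) hash set; the class name and the toy keys are my own, not from the question. "Entries" is the number of slots in the table. With 1800 plates in a 3600-slot table the load factor is 1800/3600 = 0.5 and probe sequences stay short; at 1800/1800 the table is completely full and probing can degenerate into a linear scan:

```java
public class OpenAddressing {
    // One slot per "entry" in the table.
    private final long[] slots;
    private final boolean[] used;

    public OpenAddressing(int capacity) {
        slots = new long[capacity];
        used = new boolean[capacity];
    }

    // Walk forward from the hashed slot until we find the key or an
    // empty slot. Assumes the table is never completely full, otherwise
    // this loop would not terminate for a missing key.
    private int indexOf(long key) {
        int i = (int) (key % slots.length);
        while (used[i] && slots[i] != key) {
            i = (i + 1) % slots.length;   // linear probing
        }
        return i;
    }

    public void add(long key) {
        int i = indexOf(key);
        slots[i] = key;
        used[i] = true;
    }

    public boolean contains(long key) {
        int i = indexOf(key);
        return used[i] && slots[i] == key;
    }

    public static void main(String[] args) {
        // 3600 entries for up to 1800 plates: load factor 0.5,
        // so lookups stay close to O(1).
        OpenAddressing db = new OpenAddressing(3600);
        db.add(1234567L);
        System.out.println(db.contains(1234567L));  // true
        System.out.println(db.contains(999L));      // false
    }
}
```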

4 With a poorly chosen hash function, it is possible to have a situation in which the search time in a hash table of N items goes to

A. O(N)

B. O(N!)

C. O(log N)

D. O(N²)

E. O(1)

My answer is: E, O(1).
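A sketch of the worst case the question is asking about, using a deliberately constant `hashCode` so every key lands in the same bucket (`BadKey` is a made-up class for illustration). With all N keys colliding, a lookup must compare against the whole collision chain, so search degrades toward O(N). (Modern Java `HashMap`s can convert long chains to balanced trees, but that mainly helps when keys are `Comparable`.)

```java
import java.util.HashMap;
import java.util.Map;

public class BadHash {
    // A deliberately poor hash: every key hashes to the same value,
    // so all N entries collide in one bucket.
    static final class BadKey {
        final String s;
        BadKey(String s) { this.s = s; }
        @Override public int hashCode() { return 42; }   // constant hash
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).s.equals(s);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> m = new HashMap<>();
        for (int i = 0; i < 1000; i++) m.put(new BadKey("k" + i), i);
        // Every get must walk one long collision chain: O(N), not O(1).
        System.out.println(m.get(new BadKey("k999")));   // 999
    }
}
```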

  • No, that's not the most memory-efficient. – Andy Turner Nov 03 '16 at 13:51
  • 4 is probably wrong, O(1) is constant or the most efficient order, so I don't think a poorly chosen hash function can give this efficiency (worst case) – dahui Nov 03 '16 at 13:54
  • The last one is also wrong. It's talking about a *poorly* chosen hash function. – Kayaman Nov 03 '16 at 13:54
  • Certainly not O(1) for a bad hashing algorithm (Q4) – d.j.brown Nov 03 '16 at 13:54
  • Q3, you'll want to reduce collisions to help ensure O(1) time complexity. – d.j.brown Nov 03 '16 at 13:56
  • Q1: BST is not NECESSARILY O(log n); a badly-balanced BST can be O(n). – Scott Hunter Nov 03 '16 at 13:56
  • What do you suggest for the correct answers? Thanks – Joe Nov 03 '16 at 13:56
  • For 4 think about what happens if every item you put into the hashtable has the same hash. How do you search the hashtable then? That will give you a clue to the answer to 3. – JimNicholson Nov 03 '16 at 13:57
  • @JimNicholson I do not know how to do Q3, please help – Joe Nov 03 '16 at 14:06
  • @Andy Turner what do you suggest for Q1? – Joe Nov 03 '16 at 14:07
  • I would go for 1. A (memory considerations, and binary search for searching) 2. C (should be B-tree, but since this is missing, let's go for a binary tree.) 3. D ("efficient use of memory is also important, but not as important as speed of response" :) ) 4. A, if I am not mistaken. – John Donn Nov 03 '16 at 14:12
  • @John Donn why do you say D for Q3? What is the meaning of "entries"? Thank you – Joe Nov 03 '16 at 14:17
  • http://stackoverflow.com/questions/10901752/what-is-the-significance-of-load-factor-in-hashmap ; and it should be theta and not big O in 4. – John Donn Nov 03 '16 at 14:19
  • @John Donn Isn't C a better answer for Q3? Why do you double the size of the hash table? Does that make it faster to search for an entry? Thanks – Joe Nov 03 '16 at 14:26

0 Answers