3

Update: Search for the first solution.

For a normal depth-first search it is simple: just use a hash set.

    bool DFS(currentState)
    {
          if (myHashSet.Contains(currentState))
          {
               return false;
          }
          myHashSet.Add(currentState);
          if (IsSolution(currentState)) return true;
          for (var nextState in GetNextStates(currentState))
               if (DFS(nextState)) return true;
          return false;
    }

However, when the search becomes depth-limited, I cannot simply do this:

    bool DFS(currentState, maxDepth)
    {
          if (maxDepth == 0) return false;
          if (myHashSet.Contains(currentState))
          {
               return false;
          }
          myHashSet.Add(currentState);
          if (IsSolution(currentState)) return true;
          for (var nextState in GetNextStates(currentState))
               if (DFS(nextState, maxDepth - 1)) return true;
          return false;
    }

Because then the search is no longer complete, in the sense of always finding a solution within maxDepth if one exists.

How should I fix it? Would the fix add more space complexity to the algorithm?

Or does the depth-limited search not require memoizing the states at all?

Update:

For example, consider the following graph:

    A - B - C - D - E - A
                    |
                    F - G (Goal)

Starting from state A, with G as the goal state, there is clearly a solution within depth 3.

However, using my implementation with depth limit 4, suppose the search happens to proceed

A(0) -> B(1) -> C(2) -> D(3) -> E(4) -> F(5), which exceeds the depth limit, so it backtracks to A. But E is already marked as visited, so the search ignores the direction A - E - F - G.
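The failure can be reproduced with a short sketch (a Python translation of the example above; the function names and the depth-counting convention are mine):

```python
# Toy graph from the example above: the cycle A-B-C-D-E-A plus F-G off E.
GRAPH = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D", "A", "F"], "F": ["E", "G"], "G": ["F"],
}

def dls_persistent(state, depth, limit, visited, goal="G"):
    """Depth-limited DFS with a persistent visited set (the problematic version)."""
    if depth > limit:
        return False
    if state in visited:
        return False
    visited.add(state)
    if state == goal:
        return True
    return any(dls_persistent(n, depth + 1, limit, visited, goal)
               for n in GRAPH[state])

def dls_backtrack(state, depth, limit, path, goal="G"):
    """Same search, but the state is removed from the set on backtrack."""
    if depth > limit or state in path:
        return False
    if state == goal:
        return True
    path.add(state)
    found = any(dls_backtrack(n, depth + 1, limit, path, goal)
                for n in GRAPH[state])
    path.remove(state)
    return found

print(dls_persistent("A", 0, 4, set()))  # False: E was marked on the dead branch
print(dls_backtrack("A", 0, 4, set()))   # True: A - E - F - G is found
```

The first version marks E while exploring the dead branch through D, then prunes the direction through E when it backtracks to A, exactly as described.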

colinfang
    Why wouldn't it do a complete search? – Shahbaz Sep 26 '12 at 11:30
  • You do realize that your solution has the same problem with DFS, don't you? In both solutions, it's not `DFS(child)` in the end, but should be `for (each child) DFS(child)`. Otherwise, you are never backtracking. – Shahbaz Sep 26 '12 at 14:58
  • @Shahbaz Yes i forgot to add the whole branches. Updated – colinfang Sep 26 '12 at 15:31
  • Now if you pay close attention, for the same reason that DFS continues with `A` after backtracking from `F` to `E`, the max_depth will also do the same. The point is that, backtracking and continuing to the next child is done in the `for` loop on the bottom, where `myHashSet.Contains(E)` is not checked after each backtrack. – Shahbaz Sep 26 '12 at 16:01
  • @Shahbaz `myHashSet.Contains(E)` will be checked, as it is invoked from `DFS(E,maxDepth-1)` in the for loop under the currentState = A. So, when it backtracks to A, and check its child E, since it is visited before, it will auto get ignored. – colinfang Sep 26 '12 at 16:31
  • Oh, sorry I didn't notice there is a loop in your graph – Shahbaz Sep 26 '12 at 22:09

3 Answers

2

I had the same problem as yours; here's my thread: Iterative deepening in common lisp.

Basically, to solve this problem with a hash table, you can't just check whether a node was visited before; you also have to consider the depth at which it was previously visited. If the node you're about to examine contains a state that was not previously seen, or that was seen before but at a greater depth, then you should still consider it, since it may lead to a shallower solution. That is what iterative deepening is supposed to do: it returns the same solution BFS would return, namely the shallowest one.

So in the hash table you can have the state as the key and the depth as the value. You will need to keep updating the depth value in the hash table whenever you find the node at a shallower depth, though.
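A minimal sketch of that state-to-depth table, tracking the remaining depth budget instead of the depth from the root (equivalent bookkeeping; the toy graph and names are my own):

```python
# Illustrative graph (my own): the cycle A-B-C-D-E-A plus F-G off E.
GRAPH = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D", "A", "F"], "F": ["E", "G"], "G": ["F"],
}

def dls_depth_aware(state, remaining, seen, goal="G"):
    """Depth-limited DFS keyed on (state -> best remaining budget seen).
    A state is re-expanded when reached with more remaining depth than
    before, i.e. via a shallower route, so shallow solutions survive."""
    if remaining < 0:
        return False
    if seen.get(state, -1) >= remaining:
        return False               # already seen with at least this much budget
    seen[state] = remaining        # update the table with the better budget
    if state == goal:
        return True
    return any(dls_depth_aware(n, remaining - 1, seen, goal)
               for n in GRAPH[state])

print(dls_depth_aware("A", 3, {}))  # True: the shallow route via E survives
```

On the question's graph, E is first recorded with little budget left on the dead branch, then re-expanded when reached directly from A with more budget, so A - E - F - G is found.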

An alternative solution for cycle checking is to backtrack along the path from the current node up to the root: if the node you're about to examine already appears on that path, expanding it would create a cycle. This approach is more generic and can be used with any search strategy. It is slower than the hash-table approach, though, with O(d) time per check where d is the depth, but the memory consumption is greatly reduced.
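A sketch of that path-based check, using a list for the current root-to-node path so that membership testing is the O(d) scan the answer describes (the toy graph and names are my own):

```python
# Illustrative graph (my own): the cycle A-B-C-D-E-A plus F-G off E.
GRAPH = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D", "A", "F"], "F": ["E", "G"], "G": ["F"],
}

def dls_path_check(state, remaining, path, goal="G"):
    """Depth-limited DFS that detects cycles by scanning the current
    root-to-node path (an O(d) check) instead of a global visited table."""
    if remaining < 0 or state in path:
        return False
    if state == goal:
        return True
    path.append(state)          # push onto the current path
    found = any(dls_path_check(n, remaining - 1, path, goal)
                for n in GRAPH[state])
    path.pop()                  # pop on backtrack; memory stays O(d)
    return found

print(dls_path_check("A", 3, []))  # True
```

Because only the current path is stored, the memory footprint is bounded by the depth limit rather than by the number of distinct states.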

turingcomplete
1

In each step of IDDFS you are actually searching for a shortest path, so you can't simply use a HashSet. A HashSet helps only when you are searching for the existence of a path whose length is unlimited.

In this case, you should probably use a HashMap to store the minimum number of steps needed to reach each state, and prune a branch only when its map value cannot be improved. The time complexity may change correspondingly.

But in fact, IDDFS is used in place of BFS when space is limited. Since hashing the states may take almost as much space as BFS, you usually can't store all the states in IDDFS anyway.

The IDDFS on Wikipedia does not use a hash either: http://en.wikipedia.org/wiki/Iterative_deepening_depth-first_search

So let's drop the hash and trade time for space!
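A sketch of that hash-free version, paired with the iterative-deepening driver (the toy graph and names are my own):

```python
# Illustrative graph (my own): the cycle A-B-C-D-E-A plus F-G off E.
GRAPH = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D", "A", "F"], "F": ["E", "G"], "G": ["F"],
}

def depth_limited(state, goal, limit):
    """Depth-limited DFS with no visited set at all: cycles are cut purely
    by the depth bound, so memory is O(d) at the cost of repeated work."""
    if state == goal:
        return True
    if limit == 0:
        return False
    return any(depth_limited(n, goal, limit - 1) for n in GRAPH[state])

def iddfs(start, goal, max_limit):
    """Iterative deepening: rerun the bounded search with growing limits,
    so the first limit that succeeds yields a shallowest solution."""
    return any(depth_limited(start, goal, limit)
               for limit in range(max_limit + 1))

print(iddfs("A", "G", 5))  # True (first succeeds at limit 3, via A - E - F - G)
```

Nothing is memoized, so states on cyclic routes are re-expanded many times, but every node within the current limit is still reachable, which is exactly the completeness the question is after.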

Update

It is worthwhile to store the states on the current DFS stack, so that the search path cannot run into a trivial cycle. Pseudocode implementing this feature:

    bool DFS(currentState, maxDepth)
    {
          if (maxDepth == 0) return false;
          if (myHashSet.Contains(currentState))
          {
               return false;
          }
          myHashSet.Add(currentState);
          if (IsSolution(currentState)) return true;
          for (var nextState in GetNextStates(currentState))
               if (DFS(nextState, maxDepth - 1)) return true;
          myHashSet.Remove(currentState); // the state is popped off the stack on backtrack
          return false;
    }
chyx
  • Is it worth adding a copy of the hash set at each state, so that a child state's hash set includes all of its predecessors and any further search cannot run into a cycle? The space complexity would then be O(square of depth). – colinfang Sep 26 '12 at 18:49
0

The solution you show is perfectly fine and works for DFSID (depth-first search with iterative deepening). Just do not forget to clear myHashSet before increasing the depth.
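The driver loop this implies might look like the following sketch, reusing the question's depth-limited search unchanged (the toy graph and the `dfsid` name are my own):

```python
# Illustrative graph (my own): the cycle A-B-C-D-E-A plus F-G off E.
GRAPH = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D", "A", "F"], "F": ["E", "G"], "G": ["F"],
}

def dls(state, goal, max_depth, visited):
    """The question's depth-limited DFS, translated as-is."""
    if max_depth == 0:
        return False
    if state in visited:
        return False
    visited.add(state)
    if state == goal:
        return True
    return any(dls(n, goal, max_depth - 1, visited) for n in GRAPH[state])

def dfsid(start, goal, max_limit):
    """Iterative deepening: clear the visited set before each deeper pass."""
    for limit in range(1, max_limit + 1):
        visited = set()            # cleared before increasing the depth
        if dls(start, goal, limit, visited):
            return True
    return False

print(dfsid("A", "G", 6))  # True
```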

Ivaylo Strandjev