Some properties of problem-solving procedures:

**Soundness**:

A deterministic procedure is sound if, whenever it is invoked on some problem P and returns a value v not equal to failure, v is guaranteed to be a solution for P. A nondeterministic procedure is sound if every successful execution trace returns a value that is guaranteed to be a solution to P.

**Completeness**:

A deterministic procedure is complete if, whenever it is invoked on a solvable problem P, it is guaranteed to return a value v not equal to failure. A nondeterministic procedure is complete if, whenever it is invoked on a solvable problem P, at least one of its execution traces will return a value v not equal to failure.

**Admissibility**:

If there is some measure of optimality for solutions to a problem P, then a deterministic procedure is admissible if it is guaranteed to return an optimal solution whenever P is solvable. Note that if a procedure is admissible, then it is also sound and complete.

A well-known class of *search problems* is *state-space search* problems. The state space is a set of nodes called states, and the objective is to find a state s that satisfies some goal condition g.

The space complexity of breadth-first search is O(b^{d}), where b is the branching factor (the number of children per node) and d is the depth of the search, whereas the space complexity of depth-first search is only O(d) (or O(bd) if the untried siblings along the current path are stored).

**Search Procedures**:

**Breadth-First Search.**
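A minimal sketch of breadth-first state-space search, assuming the problem supplies a `successors(s)` function returning the children of s and an `is_goal(s)` test (both names are illustrative, not from any particular library):

```python
from collections import deque

def breadth_first_search(s0, successors, is_goal):
    """Breadth-first state-space search: visit states in order of depth,
    using the frontier as a FIFO queue."""
    frontier = deque([s0])
    visited = {s0}
    while frontier:
        s = frontier.popleft()
        if is_goal(s):
            return s
        for child in successors(s):
            if child not in visited:
                visited.add(child)
                frontier.append(child)
    return None  # failure: no goal state is reachable

# Toy example: states are integers, the children of n are 2n and 2n+1.
print(breadth_first_search(1, lambda n: [2 * n, 2 * n + 1],
                           lambda n: n == 6))  # prints 6
```

The frontier holds an entire level of the tree at once, which is where the b^{d} space requirement comes from.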

**Depth-First Search.**
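A corresponding depth-first sketch, under the same assumed `successors`/`is_goal` interface; the recursion depth equals the search depth, which is what gives the procedure its modest space requirement:

```python
def depth_first_search(s, successors, is_goal, visited=None):
    """Depth-first state-space search: follow one branch as deep as
    possible, backtracking on failure."""
    if visited is None:
        visited = set()
    if is_goal(s):
        return s
    visited.add(s)
    for child in successors(s):
        if child not in visited:
            result = depth_first_search(child, successors, is_goal, visited)
            if result is not None:
                return result
    return None  # failure along every branch

# Finite toy space: the children of n are n+1 and n+2, capped at 10.
succ = lambda n: [c for c in (n + 1, n + 2) if c <= 10]
print(depth_first_search(0, succ, lambda n: n == 7))  # prints 7
```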

**Best First Search**:

In some cases, the objective is to find a goal state s that minimizes some objective function f(s). In this case, a nondeterministic search will not work properly: instead of returning the solution found in a single execution trace, we must look at all of the execution traces to see which one leads to the best solution.

One way to do this deterministically is to use a best-first search. This is like a breadth-first search in the sense that it maintains an *active list* of nodes that have been generated but not yet visited. However, instead of using this list as a queue the way a breadth-first search does, a best-first search uses the list as a priority queue: the next node chosen from the active list will be the one whose f value is smallest. Best-first search is sound. If *f* is monotonically increasing, i.e., if f(s) <= f(s') whenever s' is a child of s, then a best-first search will never return a non-optimal solution and is admissible in finite search spaces. If there is a number Gamma > 0 such that f(s) + Gamma <= f(s') whenever s' is a child of s, then a best-first search is admissible even in infinite search spaces.
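The active-list-as-priority-queue idea can be sketched as follows, again with `successors`, `is_goal`, and the objective function `f` assumed to be supplied by the problem:

```python
import heapq

def best_first_search(s0, successors, is_goal, f):
    """Best-first search: the active list is a priority queue ordered by f,
    so the next node visited is always the one with the smallest f value."""
    active = [(f(s0), s0)]
    seen = {s0}
    while active:
        _, s = heapq.heappop(active)       # smallest f value first
        if is_goal(s):
            return s
        for child in successors(s):
            if child not in seen:
                seen.add(child)
                heapq.heappush(active, (f(child), child))
    return None

# Toy example in which f(s) = s is monotonically increasing on every path.
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(best_first_search(1, succ, lambda n: n == 5, f=lambda n: n))  # prints 5
```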

The well-known A* search procedure is a special case of best-first state-space search, with some modifications to handle situations where there are multiple paths to the same state.
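One common way to handle multiple paths to the same state is to track the cheapest cost g(s) found so far and re-queue a state whenever a cheaper path to it is discovered. The sketch below follows that scheme (it is an illustration, not the book's exact formulation); `successors(s)` is assumed to yield (child, step_cost) pairs and `h` is the heuristic term of f = g + h:

```python
import heapq

def astar(s0, successors, is_goal, h):
    """A*-style best-first search on f(s) = g(s) + h(s), where g(s) is the
    cheapest cost found so far to reach s. A cheaper path to an
    already-seen state causes it to be re-queued with the lower cost."""
    g = {s0: 0}
    open_list = [(h(s0), s0)]
    while open_list:
        _, s = heapq.heappop(open_list)
        if is_goal(s):
            return g[s]
        for child, cost in successors(s):
            new_g = g[s] + cost
            if new_g < g.get(child, float("inf")):
                g[child] = new_g
                heapq.heappush(open_list, (new_g + h(child), child))
    return None

# Toy graph with two paths to state "d"; h = 0 reduces A* to uniform cost.
graph = {"a": [("b", 1), ("c", 4)], "b": [("d", 5)], "c": [("d", 1)], "d": []}
print(astar("a", lambda s: graph[s], lambda s: s == "d", h=lambda s: 0))  # prints 5
```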

**Depth-First Branch-and-Bound Search**:

Another search procedure for minimizing an objective function is branch-and-bound. The most general form of branch-and-bound is general enough to include nearly all top-down search procedures as special cases. The best-known version of branch-and-bound is a simple depth-first version similar to the procedure shown in Figure 1. In this procedure, s* is a global variable that holds the best solution seen so far, with s* equal to some dummy value and f(s*) = infinity when the procedure is initially invoked. If the state space is finite and acyclic and if *f* is monotonically increasing, then Depth-first-BB is admissible. If f is non-monotonic, then Depth-first-BB may not be admissible, and if the state space is infinite, then Depth-first-BB may fail to terminate.

Fig. 1: A branch-and-bound version of state space search.
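A sketch in the spirit of the procedure just described (not the exact pseudocode of Figure 1): `best` plays the role of the global s*, initialized to a dummy value with f(s*) = infinity, and branches whose f value already meets or exceeds f(s*) are pruned:

```python
import math

def depth_first_bb(s, successors, is_goal, f, best=None):
    """Depth-first branch-and-bound. `best` is [s*, f(s*)], the best
    solution seen so far; pruning on f(s) >= f(s*) is safe when f is
    monotonically increasing on a finite, acyclic space."""
    if best is None:
        best = [None, math.inf]            # dummy s*, f(s*) = infinity
    if f(s) >= best[1]:
        return best                        # prune: cannot improve on s*
    if is_goal(s):
        best[0], best[1] = s, f(s)
        return best
    for child in successors(s):
        depth_first_bb(child, successors, is_goal, f, best)
    return best

# Toy space: a binary tree over the integers, whose leaves are the goals.
succ = lambda n: [2 * n, 2 * n + 1] if n < 4 else []
s_star, f_star = depth_first_bb(1, succ, lambda n: n >= 4, f=lambda n: n)
print(s_star, f_star)  # prints 4 4
```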

**Greedy Search**:

A greedy search is a depth-first search procedure with no backtracking. It works as follows. If s is a solution, then return it; otherwise, repeat the search at the child o(s) whose f value is smallest. There are no guarantees of whether this procedure will find an optimal solution, but it can sometimes save a huge amount of time over what would be needed to find a guaranteed optimal solution.
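Because there is no backtracking, greedy search reduces to a simple loop; the sketch below assumes the same `successors`/`is_goal`/`f` interface as above:

```python
def greedy_search(s, successors, is_goal, f):
    """Greedy search: depth-first with no backtracking. At each step,
    descend to the child with the smallest f value; fail if a non-goal
    state has no children. There is no optimality guarantee."""
    while not is_goal(s):
        children = successors(s)
        if not children:
            return None                    # dead end; no backtracking
        s = min(children, key=f)
    return s

# Toy example: walk down a binary tree, always taking the smaller child.
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(greedy_search(1, succ, lambda n: n >= 8, f=lambda n: n))  # prints 8
```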

**Hill-Climbing Search**:

This is similar to greedy search, except that in a hill-climbing problem, every node is a solution, and a hill-climbing procedure will only go from s to o(s) if f(o(s)) < f(s).
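Since every node is a solution, hill-climbing simply stops when no child strictly improves f (here we minimize f, matching the f(o(s)) < f(s) condition in the text):

```python
def hill_climbing(s, successors, f):
    """Hill-climbing: move to the best child only if it strictly improves
    f; otherwise stop at the current state (a local optimum)."""
    while True:
        children = successors(s)
        if not children:
            return s
        best_child = min(children, key=f)
        if f(best_child) >= f(s):
            return s                       # no strict improvement; stop
        s = best_child

# Toy example: minimize f(x) = (x - 3)^2 by stepping to x - 1 or x + 1.
print(hill_climbing(10, lambda x: [x - 1, x + 1], f=lambda x: (x - 3) ** 2))  # prints 3
```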

Most of the above search procedures can be modified using a variety of heuristic techniques. These can be divided roughly into two classes:

**Pruning techniques**:

These are ways to determine that some nodes will not lead to a solution (or to a desirable solution), so that the procedure can prune these nodes (i.e., remove them from the search space).

**Node-selection techniques**:

These are ways to guess which nodes will lead to desirable solutions (e.g., solutions that are optimal or near-optimal, or solutions that can be found quickly), so that the procedure can visit these nodes first.

Such techniques can compromise soundness, completeness, or admissibility, but there are a number of special cases in which these properties can be preserved.

*Iterative deepening* is a technique that can be used in conjunction with a depth-first search procedure to make it complete. It can be done in two ways: breadth-first or best-first.

*Breadth-first iterative deepening* does a depth-first search that backtracks whenever it reaches depth i and repeats this search for i = 1, 2 … until a solution is found. Like ordinary breadth-first search, breadth-first iterative deepening is complete but not admissible.
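A sketch of breadth-first iterative deepening: a depth-limited DFS repeated for i = 1, 2, ... (the `max_depth` cap is an assumption added here so the sketch cannot loop forever on an unsolvable problem; the procedure as described has no such cap):

```python
def depth_limited(s, successors, is_goal, limit):
    """Depth-first search that backtracks on reaching the depth limit."""
    if is_goal(s):
        return s
    if limit == 0:
        return None
    for child in successors(s):
        result = depth_limited(child, successors, is_goal, limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening(s0, successors, is_goal, max_depth=50):
    """Repeat depth-limited DFS for i = 1, 2, ... until a solution appears."""
    for i in range(1, max_depth + 1):
        result = depth_limited(s0, successors, is_goal, i)
        if result is not None:
            return result
    return None

# The space here is the infinite binary tree over the integers.
print(iterative_deepening(1, lambda n: [2 * n, 2 * n + 1], lambda n: n == 6))  # prints 6
```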

*Best-first iterative deepening* does a depth-first search that backtracks during its i^{th} iteration whenever it reaches a node s such that f(s) >= f_{i}, where f_{0} = f(s_{0}) and, for i > 0,

f_{i} = min{ f(s) | the search backtracked at s during iteration i − 1 }.

The well-known IDA* procedure uses best-first iterative deepening.
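The scheme can be sketched as follows. Each iteration is a depth-first search bounded by the current f threshold, and the next threshold is the smallest f value at which the current iteration backtracked. (To guarantee progress, this sketch backtracks when f(s) *exceeds* the bound, the convention standard IDA* implementations use.)

```python
import math

def ida_star(s0, successors, is_goal, f):
    """Best-first iterative deepening: bounded DFS, where each failed
    iteration returns the smallest f value seen beyond the bound, which
    becomes the next iteration's bound."""
    def search(s, bound):
        if f(s) > bound:
            return None, f(s)              # backtrack; report f(s)
        if is_goal(s):
            return s, f(s)
        next_bound = math.inf
        for child in successors(s):
            found, t = search(child, bound)
            if found is not None:
                return found, t
            next_bound = min(next_bound, t)
        return None, next_bound
    bound = f(s0)
    while bound < math.inf:
        found, bound = search(s0, bound)
        if found is not None:
            return found
    return None

# Toy example: f(n) = n on the binary tree, with goal state 5.
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(ida_star(1, succ, lambda n: n == 5, f=lambda n: n))  # prints 5
```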

Problem-Reduction Search

Another kind of search space is a *problem-reduction space*, in which each state s represents a problem to be solved, and each operator *o(s)* produces not just a single child state *s'* but an entire set of children {s_{1}, ..., s_{k}} (the number of children may vary from one state to another). The children are called subproblems of *s*, and a solution for one of them represents a portion of the solution to *s*. Thus, to solve *s* it is not sufficient just to find a solution below some descendant of s. Instead, the search space is an AND/OR graph, and a solution for s consists of a set of solutions {v_{1}, ..., v_{k}} that are the leaf nodes of a solution graph rooted at *s*.

A function f(n) is said to be *logarithmically bounded* if f(n) = O(log n); *polynomially bounded* if there is a constant c such that f(n) = O(n^{c}); and *exponentially bounded* if there is a constant c such that f(n) = O(c^{n}).

Computational Complexity of Problems

The classes below are defined in terms of *language-recognition* problems: given a character string s, determine whether s belongs to a language L.

Here are some complexity classes dealing with time:

- P is the set of all languages *L* such that *L* has a deterministic recognition procedure whose worst-case running time is polynomially bounded.
- NP is the set of all languages *L* such that *L* has a nondeterministic recognition procedure whose worst-case running time is polynomially bounded.
- EXPTIME is the set of all languages *L* such that *L* has a deterministic recognition procedure whose worst-case running time is exponentially bounded.
- NEXPTIME is the set of all languages *L* such that *L* has a nondeterministic recognition procedure whose worst-case running time is exponentially bounded.

Here are some complexity classes dealing with space:

- NLOGSPACE is the set of all languages *L* such that *L* has a nondeterministic recognition procedure whose worst-case space requirement is logarithmically bounded.
- PSPACE is the set of all languages *L* such that *L* has a recognition procedure whose worst-case space requirement is polynomially bounded. It makes no difference whether the procedure is deterministic or nondeterministic; in either case we will get the same set of languages.
- EXPSPACE is the set of all languages *L* such that *L* has a recognition procedure whose worst-case space requirement is exponentially bounded. Again, it makes no difference whether the procedure is deterministic or nondeterministic.

If C is one of these complexity classes and L is a language, then L is C-hard if every language in C is reducible to L in polynomial time. L is C-complete if L is C-hard and L belongs to C. Intuitively, if L is C-complete, then L is one of the hardest languages in C: if we can recognize L, then we can also recognize any other language in C with at most a polynomial amount of additional overhead.

Planning Domains as Language-Recognition Problems

Given an alphabet L in which to write statements of planning problems, we can define the following languages:

- PLAN-EXISTENCE is the set of all strings s belongs to L* such that s is the statement of a solvable planning problem.
- PLAN-LENGTH is the set of all strings of the form (s, k) such that s is the statement of a solvable planning problem, k is a nonnegative integer, and s has a solution plan that contains no more than k actions.

The definition of PLAN-LENGTH follows the standard procedure for converting optimization problems into language-recognition problems.

What really interests us, of course, is not the problem of determining whether there is a plan of length k or less but the problem of finding the shortest plan. If the length of the shortest plan is polynomially bounded, then it can be shown that the two problems are polynomially reducible to each other. However, if the length of the shortest plan is not polynomially bounded, then finding the shortest plan can be much harder than determining whether there is a plan of length k or less. For example, in the well-known Towers of Hanoi problem and certain generalizations of it, the length of the shortest plan can be found in low-order polynomial time, but actually producing a plan of that length requires exponential time and space because the plan has exponential length.
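The Towers of Hanoi contrast can be made concrete: the length of the shortest plan for n disks is 2^{n} − 1, computable immediately, while producing the plan itself takes time and space exponential in n because the plan has that many moves.

```python
def hanoi_plan_length(n):
    """Shortest plan length for n-disk Towers of Hanoi: 2^n - 1,
    computable in low-order polynomial time."""
    return 2 ** n - 1

def hanoi_plan(n, src="A", dst="C", aux="B"):
    """Actually producing the shortest plan: move n-1 disks aside, move
    the largest disk, move the n-1 disks back on top. The result has
    exponential length, so this takes exponential time and space."""
    if n == 0:
        return []
    return (hanoi_plan(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi_plan(n - 1, aux, dst, src))

print(hanoi_plan_length(10))   # prints 1023, without building the plan
print(len(hanoi_plan(10)))     # prints 1023, after building all 1023 moves
```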