The advantage of a sorted array for the "target sum" problem is even better: you don't search the array at all. Instead, you start with pointers at the two ends. If the sum is equal to the target, emit that pair and move both pointers inward. If the sum is less than the target, increment the lower pointer; otherwise, decrement the upper pointer. This finds all solutions in O(n) time -- after paying O(n log n) for the sort.
For the case given in the comments, [40, 60, 1, 200, 9, 83, 17] with a target sum of 100,
the process looks like this:
Sort array:
[1, 9, 17, 40, 60, 83, 200]
Start your pointers at the ends, 1 + 200
The sum is 201, too large, so decrement the right pointer.
Now looking at 1 + 83. This is too small; increment the left pointer.
Now looking at 9 + 83. This is too small; increment the left pointer.
Now looking at 17 + 83. This is the target; print (17, 83) as a solution
and move *both* pointers.
Now looking at 40 + 60. This is the target; print (40, 60) as a solution
and move *both* pointers.
The pointers have now met (and passed), so you're done.
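In code, the whole procedure is only a few lines. Here is a minimal sketch in Python (the function name pair_sums and the choice to collect results in a list are mine, not part of the original problem):

```python
def pair_sums(values, target):
    """Return all pairs from `values` that sum to `target`, using two pointers."""
    data = sorted(values)              # one-time O(n log n) cost
    lo, hi = 0, len(data) - 1
    pairs = []
    while lo < hi:                     # O(n) scan: every step moves a pointer inward
        s = data[lo] + data[hi]
        if s == target:
            pairs.append((data[lo], data[hi]))
            lo += 1
            hi -= 1
        elif s < target:
            lo += 1                    # sum too small: need a larger low value
        else:
            hi -= 1                    # sum too large: need a smaller high value
    return pairs

print(pair_sums([40, 60, 1, 200, 9, 83, 17], 100))   # [(17, 83), (40, 60)]
```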
This is a lovely example. In general, sorting the array gives you a variety of options for finding things in it much faster than checking each element in turn. A simple binary search is O(log n), and there are a variety of ways to tune it for a particular application. At worst, a plain binary (log base 2) search will work nicely.
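For reference, a hand-rolled binary search over a sorted list looks something like this (a sketch only; in practice Python's standard bisect module covers this):

```python
def binary_search(data, key):
    """Return the index of `key` in the sorted list `data`, or -1 if absent."""
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # halve the search range each step: O(log n)
        if data[mid] == key:
            return mid
        elif data[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 9, 17, 40, 60, 83, 200], 60))   # 4
```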
However, sorting an arbitrary list costs O(n log n) as overhead; you need to weigh this one-time payment against your application's needs. For instance, if your array is some sort of database keyed by a value (such as a name or ID number), and you have to perform millions of searches driven by user requests, then it's almost certainly better to sort the database once before you do any searches.
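As a rough illustration of that trade-off, here is how the sort-once, search-many pattern might look with Python's standard bisect module (the record layout here is invented purely for the example):

```python
import bisect

# One-time O(n log n) cost: sort the records by their key.
records = [(40, "d"), (60, "e"), (1, "a"), (200, "g"), (9, "b"), (83, "f"), (17, "c")]
records.sort(key=lambda r: r[0])
keys = [r[0] for r in records]

def lookup(key):
    """Each lookup is O(log n) once the keys are sorted."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return records[i]
    return None

print(lookup(83))    # (83, 'f')
print(lookup(100))   # None
```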
If you want a thorough introduction, research "sorting and searching". One excellent reference is a book by that title by Donald Knuth; "Sorting and Searching" is Vol 3 of "The Art of Computer Programming".