I don't think there is currently a simple and robust method to detect rectangles in an image. You have to deal with many problems, such as rectangles that are only approximately rectangular, partial occlusions, lighting changes, etc.
One possible direction is to segment the image and then check how close each segment is to being a rectangle. Since you can't fully trust any single segmentation, you can run it multiple times with different parameters and collect the rectangle-like segments from each run.
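A minimal sketch of that idea, assuming OpenCV with a simple threshold-based segmentation (the threshold values and the 0.85 "rectangularity" cutoff are arbitrary choices for illustration): one way to measure how rectangle-like a segment is, is the ratio between its area and the area of its minimum-area bounding rectangle.

```python
import cv2
import numpy as np

def find_rectangle_like_segments(gray, thresholds=(80, 120, 160), min_rectangularity=0.85):
    candidates = []
    # Run the segmentation several times with different parameters,
    # since no single setting can be trusted.
    for t in thresholds:
        _, mask = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
        for c in contours:
            area = cv2.contourArea(c)
            if area < 100:  # skip tiny segments
                continue
            # Compare the segment's area to the area of its minimum-area
            # bounding rectangle: the closer the ratio is to 1, the more
            # rectangle-like the segment is.
            rect = cv2.minAreaRect(c)
            w, h = rect[1]
            if w * h == 0:
                continue
            if area / (w * h) >= min_rectangularity:
                candidates.append(rect)
    return candidates
```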
Another direction is to parametrically fit a rectangle to the image such that the image gradient magnitude along its contour is maximized.
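A rough sketch of what that could look like, assuming SciPy's Nelder-Mead optimizer and a rectangle parameterized by center, size, and rotation angle (the sampling density and the optimizer are arbitrary choices here; a local optimizer like this needs a reasonable initial guess, e.g. from a coarse detection stage):

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def contour_gradient_score(params, grad_mag, samples_per_side=50):
    cx, cy, w, h, theta = params
    # Corners of an axis-aligned rectangle, then rotated and translated.
    corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                        [w / 2,  h / 2], [-w / 2,  h / 2]])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    corners = corners @ rot.T + [cx, cy]
    # Sample points along each of the four sides of the rectangle.
    pts = []
    for i in range(4):
        a, b = corners[i], corners[(i + 1) % 4]
        t = np.linspace(0, 1, samples_per_side, endpoint=False)[:, None]
        pts.append(a + t * (b - a))
    pts = np.vstack(pts)
    xs = np.clip(pts[:, 0].astype(int), 0, grad_mag.shape[1] - 1)
    ys = np.clip(pts[:, 1].astype(int), 0, grad_mag.shape[0] - 1)
    # Average gradient magnitude along the contour.
    return grad_mag[ys, xs].mean()

def fit_rectangle(gray, initial_guess):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    grad_mag = np.hypot(gx, gy)
    # Maximize the contour score by minimizing its negative.
    result = minimize(lambda p: -contour_gradient_score(p, grad_mag),
                      initial_guess, method="Nelder-Mead")
    return result.x  # (cx, cy, w, h, theta)
```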
If you choose to work on a parametric approach, note that while the trivial way to parameterize a rectangle is by the locations of its four corners, which is 8 parameters, there are other representations that require fewer parameters, for example the center position, width, height, and rotation angle, which is only 5.