The queries you describe are perfectly suited for a relational database. Whilst you will have a large amount of data, the queries lend themselves well to a fairly simple index scheme.
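As a concrete illustration, here is a minimal sketch of such a schema in Python using SQLite as a stand-in RDBMS. The table and column names (`positions`, `object_id`, `observed_at`) are my own invented examples, not anything from your system; the point is that one composite index covers both "history of one object" and "object over a date range" queries.

```python
import sqlite3

# Hypothetical schema: one row per (object, timestamp) observation.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE positions (
        object_id   INTEGER NOT NULL,
        observed_at TEXT    NOT NULL,   -- ISO-8601 UTC timestamp
        latitude    REAL    NOT NULL,
        longitude   REAL    NOT NULL
    )
""")
# A composite index on (object_id, observed_at) serves both
# "history of one object" and "object + date range" queries.
conn.execute("CREATE INDEX idx_obj_time ON positions (object_id, observed_at)")

conn.executemany(
    "INSERT INTO positions VALUES (?, ?, ?, ?)",
    [
        (1, "2023-01-05T10:00:00Z", 51.5, -0.12),
        (1, "2023-02-10T10:00:00Z", 48.8, 2.35),
        (2, "2023-01-07T10:00:00Z", 40.7, -74.0),
    ],
)

# "Where was object 1 during January 2023?"
rows = conn.execute(
    "SELECT observed_at, latitude, longitude FROM positions "
    "WHERE object_id = ? AND observed_at BETWEEN ? AND ? "
    "ORDER BY observed_at",
    (1, "2023-01-01T00:00:00Z", "2023-01-31T23:59:59Z"),
).fetchall()
print(rows)
```

The same shape of query and index carries over directly to Oracle, Postgres or any other RDBMS; only the timestamp type and DDL syntax change.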
Some commercial databases have geo-spatial extensions which would allow you to extend the queries to "given a date range, tell me which objects have been within 20 kilometres of location x".
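Even without a geo-spatial extension, you can approximate that query portably: an index-friendly bounding-box prefilter, then an exact great-circle check on the few survivors. This is a sketch only (SQLite standing in for the real database, invented table names, illustrative coordinates), not what PostGIS or Oracle Spatial actually do internally.

```python
import math
import sqlite3

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions "
             "(object_id INTEGER, observed_at TEXT, latitude REAL, longitude REAL)")
conn.execute("CREATE INDEX idx_lat ON positions (latitude)")
conn.executemany("INSERT INTO positions VALUES (?, ?, ?, ?)", [
    (1, "2023-01-05T10:00:00Z", 51.50, -0.12),  # central London
    (2, "2023-01-05T11:00:00Z", 51.48, -0.01),  # roughly 8 km away
    (3, "2023-01-05T12:00:00Z", 48.85, 2.35),   # Paris, far outside the radius
])

centre_lat, centre_lon, radius_km = 51.5, -0.12, 20.0
# Cheap, index-friendly prefilter: a bounding box at least as large as the
# circle (one degree of latitude is roughly 111 km).
dlat = radius_km / 111.0
dlon = radius_km / (111.0 * math.cos(math.radians(centre_lat)))
candidates = conn.execute(
    "SELECT object_id, latitude, longitude FROM positions "
    "WHERE latitude BETWEEN ? AND ? AND longitude BETWEEN ? AND ?",
    (centre_lat - dlat, centre_lat + dlat, centre_lon - dlon, centre_lon + dlon),
).fetchall()

# Exact great-circle check on the handful of survivors.
within = [oid for oid, lat, lon in candidates
          if haversine_km(centre_lat, centre_lon, lat, lon) <= radius_km]
print(within)
```

A date-range predicate slots straight into the `WHERE` clause alongside the bounding box, which is exactly the combined query described above.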
It also seems that whilst you have a large number of rows, the actual data size is fairly limited; it's not unreasonable to expect it to fit into memory on a high-end machine.
Most database systems can handle very large tables - there's no logical limit to the number of records an RDBMS can hold, though there are obviously practical limits. Oracle has a solid reputation for performance with large data sets, though it's definitely worth getting an experienced Oracle DBA to help. A common strategy when handling huge amounts of data is "sharding" - splitting the records across different tables and/or servers. If all your queries are date-based, you might put each month's data on a different physical server, for instance.
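To make the date-based sharding idea concrete, here is a toy sketch that routes each record to a per-month table. In production the shards would typically be separate physical servers rather than tables in one SQLite file, and all names here (`positions_2023_01`, the timestamp format) are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def shard_name(observed_at):
    """Map an ISO timestamp like '2023-01-05T10:00:00Z' to its monthly shard."""
    return "positions_" + observed_at[:7].replace("-", "_")  # e.g. positions_2023_01

records = [
    (1, "2023-01-05T10:00:00Z", 51.5, -0.12),
    (1, "2023-02-10T10:00:00Z", 48.8, 2.35),
    (2, "2023-01-07T10:00:00Z", 40.7, -74.0),
]

# Route each record to its shard, creating shards on demand.
for rec in records:
    table = shard_name(rec[1])
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} "
                 "(object_id INTEGER, observed_at TEXT, latitude REAL, longitude REAL)")
    conn.execute(f"INSERT INTO {table} VALUES (?, ?, ?, ?)", rec)

# A January query only ever touches the January shard, so each server
# holds (and scans) a fraction of the total data.
jan = conn.execute(
    "SELECT object_id FROM positions_2023_01 ORDER BY object_id"
).fetchall()
print(jan)
```

The trade-off is that queries spanning a date boundary must fan out to several shards and merge the results, which is why sharding only pays off once a single server genuinely can't cope.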
I'd start with an RDBMS and build a representative test data set, then run and tune sample queries to work out whether it meets your scalability needs. Tune the hardware, and add more if you can afford to.
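Part of that tuning loop is checking that your sample queries actually use the indexes rather than scanning the table. A quick sketch with SQLite's `EXPLAIN QUERY PLAN` (Oracle and Postgres have their own `EXPLAIN` facilities; the table and index names are again my invented examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions "
             "(object_id INTEGER, observed_at TEXT, latitude REAL, longitude REAL)")
conn.execute("CREATE INDEX idx_obj_time ON positions (object_id, observed_at)")

# Ask the planner how it would execute a typical sample query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM positions "
    "WHERE object_id = 1 AND observed_at BETWEEN '2023-01-01' AND '2023-02-01'"
).fetchall()
detail = plan[0][-1]
print(detail)  # a SEARCH step mentioning idx_obj_time, not a full SCAN
```

If the plan shows a full scan instead of an index search, fix the index (or the query) before reaching for more hardware.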
I don't think you will get much benefit from Hadoop - you're not doing much processing, you're just searching a large dataset.
MongoDB is designed to work with document-style data; your data seems relational in nature rather than document-oriented. You could build this in MongoDB, but I'm not sure you'd get much benefit.