I have a table that looks something like the following - essentially containing a timestamp as well as some other columns:
WeatherTable
+---------------------+---------+----------------+------+
| TS                  | MonthET | InsideHumidity | .... |
+---------------------+---------+----------------+------+
| 2014-10-27 14:24:22 |       0 |             54 |      |
| 2014-10-27 14:24:24 |       0 |             54 |      |
| 2014-10-27 14:24:26 |       0 |             52 |      |
| 2014-10-27 14:24:28 |       0 |             54 |      |
| 2014-10-27 14:24:30 |       0 |             53 |      |
| 2014-10-27 14:24:32 |       0 |             55 |      |
| 2014-10-27 14:24:34 |       9 |             54 |      |
.......
I'm trying to formulate a SQL query that returns all rows within a certain timeframe (no problem there) at a certain arbitrary granularity, for instance one row every 15 seconds. The granularity is always specified in seconds but is not limited to values less than 60. To complicate things further, the timestamps don't necessarily fall on the required boundaries, so it's not simply a case of selecting the rows stamped 14:24:00, 14:24:15, 14:24:30, etc. - the row with the closest timestamp to each ideal value needs to be included in the result.
For example, if the starting time were given as 14:24:30, the end time as 14:32:00, and the granularity as 130 seconds, the ideal times would be:
14:24:30
14:26:40
14:28:50
14:31:00
However, rows may not exist at exactly those times, in which case the row whose timestamp is closest to each ideal timestamp should be selected. If two rows are equally far from an ideal timestamp, the earlier one should be selected.
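In other words, the ideal timestamps are just the start time plus successive multiples of the granularity, up to the end time. A minimal sketch of that arithmetic (the names here are only illustrative):

    import java.time.LocalDateTime;
    import java.util.ArrayList;
    import java.util.List;

    class IdealTimes {
        // start, start + granularity, start + 2*granularity, ... while not after end
        static List<LocalDateTime> idealTimestamps(LocalDateTime start, LocalDateTime end,
                                                   long granularitySeconds) {
            List<LocalDateTime> ideal = new ArrayList<>();
            for (LocalDateTime t = start; !t.isAfter(end); t = t.plusSeconds(granularitySeconds)) {
                ideal.add(t);
            }
            return ideal; // for the example above: 14:24:30, 14:26:40, 14:28:50, 14:31:00
        }
    }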
The database is part of a web service, so presently I'm just ignoring the granularity in the SQL query and filtering the unwanted results out in (Java) code later. However, this seems far from ideal in terms of memory consumption and performance.
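To make that concrete, the current post-filtering is roughly equivalent to the sketch below (heavily simplified; Row and the surrounding names are just placeholders): for each ideal timestamp it scans the fetched rows for the smallest absolute distance, keeping the earlier row on a tie.

    import java.time.Duration;
    import java.time.LocalDateTime;
    import java.util.List;

    class NearestRowFilter {
        static final class Row {
            final LocalDateTime ts;        // plus the other columns in reality
            Row(LocalDateTime ts) { this.ts = ts; }
        }

        // Rows are assumed to be ordered by TS ascending, so the strict '<' below
        // means an equally distant later row never replaces the earlier one.
        static Row closestTo(List<Row> rows, LocalDateTime ideal) {
            Row best = null;
            long bestDistance = Long.MAX_VALUE;
            for (Row r : rows) {
                long d = Math.abs(Duration.between(ideal, r.ts).getSeconds());
                if (d < bestDistance) {
                    best = r;
                    bestDistance = d;
                }
            }
            return best;
        }
    }

Running that over the full, unfiltered result set for every ideal timestamp is exactly the memory and performance cost I'd like to push down into the query itself.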
Any ideas?