In response to Richard Broersma :
> On Thu, Dec 3, 2009 at 9:26 PM, Tom <tom@cstcomposites.com> wrote:
>
> > I
> > want to run a query that skips every nth record and returns a manageable
> > dataset that still gives a correct overview of the data without
> > slowing the programme down. Is there an easy way to do this that I
> > have overlooked? I looked at:
>
> I've played with datalogging. It was very easy to find nth records
> when using date_trunc() on a timestamp. The only minor problem with
> date_trunc() was that I couldn't create arbitrary granularity. For
> example it is easy to date_trunc() on a year, month, week, day, hour
> or a minute, but I wanted 5-, 10- and 15-minute increments. I bet there
> could be a solution to this, but I never looked into it.
How about:
test=# select * from data limit 10;
ts
---------------------
2009-12-01 00:00:00
2009-12-01 00:01:00
2009-12-01 00:02:00
2009-12-01 00:03:00
2009-12-01 00:04:00
2009-12-01 00:05:00
2009-12-01 00:06:00
2009-12-01 00:07:00
2009-12-01 00:08:00
2009-12-01 00:09:00
(10 rows)
-- now with 5-minute increments, using date_trunc and extract:
test=# select * from data where extract(epoch from date_trunc('minute', ts))::int % (5*60) = 0 limit 10;
ts
---------------------
2009-12-01 00:00:00
2009-12-01 00:05:00
2009-12-01 00:10:00
2009-12-01 00:15:00
2009-12-01 00:20:00
2009-12-01 00:25:00
2009-12-01 00:30:00
2009-12-01 00:35:00
2009-12-01 00:40:00
2009-12-01 00:45:00
(10 rows)
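Two possible extensions (sketches, untested here, using the same data table
and ts column as above):

If the raw timestamps don't land exactly on the 5-minute boundary, the same
epoch arithmetic can group rows into buckets instead of filtering them
(300 = 5*60 seconds; adjust for 10- or 15-minute buckets):

test=# select to_timestamp(floor(extract(epoch from ts) / 300) * 300) as bucket,
              count(*)
       from data
       group by 1
       order by 1;

And for the literal "every nth record" question, independent of timestamps,
a window function works on 8.4 and later:

test=# select ts from
         (select ts, row_number() over (order by ts) as rn from data) sub
       where rn % 5 = 1;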
Regards, Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99