So, it seems what I am trying to do isn't out of the norm. I will give the column store driver a try. Are there any plans to have this natively supported in PostgreSQL? That would be a great "killer" feature.
This describes a time series database we manage with PostGIS, fed by data from a research vessel. We use partitions and clustered indexes, as well as a "minute identifier" (of sorts) that lets various intervals be rapidly identified and extracted. It works well for us, with a MapServer application and other tools providing interactive access, as well as SQL queries. We are up to over 600,000,000 records now, and it is still quite responsive. A rough sketch of the minute-identifier idea is below.
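A minimal sketch of how such a scheme could look; all names here (obs, obs_ts, minute_id) are my own invention, not the actual schema:

    -- A derived "minute identifier": minutes since the Unix epoch.
    CREATE TABLE obs (
        obs_ts    timestamptz NOT NULL,
        minute_id integer     NOT NULL,  -- floor(extract(epoch from obs_ts) / 60)
        geom      geometry(Point, 4326),
        value     double precision
    );

    INSERT INTO obs (obs_ts, minute_id, geom, value)
    VALUES (now(),
            floor(extract(epoch FROM now()) / 60)::int,
            ST_SetSRID(ST_MakePoint(174.8, -41.3), 4326),
            12.5);

    -- Index on the minute identifier, then physically reorder the heap
    -- so any interval becomes a tight range scan over adjacent pages.
    CREATE INDEX obs_minute_idx ON obs (minute_id);
    CLUSTER obs USING obs_minute_idx;

    -- Extracting a ten-minute window is then a simple range predicate:
    SELECT * FROM obs WHERE minute_id BETWEEN 24000000 AND 24000009;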
The new BRIN (block range) indexes in PG9.5 might have some application for such databases as well, though I suspect that while they may be smaller and faster, they are also more limited than our approach.
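For reference, a BRIN index is a one-liner (the table and column here are the hypothetical ones from the sketch above). BRIN stores only per-block-range min/max summaries, so it stays tiny on naturally time-ordered data, at the cost of coarser lookups:

    CREATE INDEX obs_ts_brin ON obs USING brin (obs_ts);

    -- The planner uses the block-range summaries to skip most of the
    -- table for a time-window query:
    SELECT * FROM obs
    WHERE obs_ts >= '2015-06-01' AND obs_ts < '2015-06-02';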
I have a large time series (2 TB worth of uncompressed data), and I will be doing some queries which change at times. Should I stick with my current approach, which is a series of CSV files, or would it be better to load it into PostgreSQL and use its TOAST features (which would give me some sort of compression)?
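If loading it, the bulk path is COPY; a minimal sketch, assuming a simple two-column layout and a made-up file path (neither is from the original post). One caveat: TOAST only compresses individual values above a size threshold, so narrow numeric rows may see little benefit from it:

    -- Hypothetical table; adjust columns to match the actual CSV layout.
    CREATE TABLE readings (
        ts    timestamptz,
        value double precision
    );

    -- From psql, stream a file in; COPY is far faster than row-by-row INSERTs.
    \copy readings FROM '/data/series_2015.csv' WITH (FORMAT csv, HEADER true)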