Hi Jeff and Wales,
On 2012/2/26, Jeff Janes <jeff.janes@gmail.com> wrote:
>> The problem is that the initial queries are too slow - and there is no
>> second chance. I have to trash the buffers every night. There is
>> enough main memory to hold all table contents.
>
> Just that table, or the entire database?
The entire database. It consists of only about five tables, which are
similar but have different geometry types, plus a relations table (as
OpenStreetMap calls it).
>> 1. How can I warm up or re-populate shared buffers of Postgres?
>
> Instead, warm the OS cache. Then data will get transferred into the
> postgres shared_buffers pool from the OS cache very quickly.
>
> tar -c $PGDATA/base/ |wc -c
Ok. So by "OS cache" you mean the cached copies of the files which, to
me, are THE database itself? To me, a cache is a secondary store with
"controlled redundancy", kept for performance reasons.
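If that is what is meant, then warming up would amount to simply
reading those files once so the OS page cache holds them. A minimal
sketch of what I would run (assuming $PGDATA points at the cluster's
data directory and everything fits into free RAM):

  # Read every file under the data directory once; the output is
  # thrown away, but the OS page cache keeps the pages around.
  find "$PGDATA/base" -type f -exec cat {} + > /dev/null

This should be equivalent in effect to the tar pipeline above.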
>> 2. Are there any hints on how to tell Postgres to read in all table
>> contents into memory?
>
> I don't think so, at least not in core. I've wondered if it would
> make sense to suppress the ring-buffer strategy when there are buffers on
> the free-list. That way a sequential scan would populate
> shared_buffers after a restart. But it wouldn't help you get the
> indexes into cache.
So, are there any developments going on in PostgreSQL along the lines
Stephen suggested in the earlier thread?
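In the meantime I could script a crude warm-up myself, since a full
sequential scan pulls each table's heap through the caches. A sketch
(the database name "osm" and the table names are placeholders for my
five geometry tables plus the relations table):

  # Crude warm-up after the nightly restart: count(*) forces a
  # sequential scan over each table.
  for t in points lines polygons roads relations; do
    psql -d osm -c "SELECT count(*) FROM $t;"
  done

As you note, though, that still would not bring the indexes into cache.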
On 2012/2/26, Wales Wang <wormwang@yahoo.com> wrote:
> You can try PostgreSQL 9.x master/slave replication, then run the slave
> on a RAM filesystem (tmpfs).
> That way you access all your data from the slave PostgreSQL running on tmpfs.
Nice idea.
I do have a single scaled-up server, and up to now I have hesitated to
allocate, say, 48 GB (out of 72 GB) to such a RAM filesystem (tmpfs).
Still, wouldn't it be more flexible if I could dynamically instruct
PostgreSQL to behave like an in-memory database?
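For reference, my understanding of the tmpfs setup would be roughly
this (hostnames, paths and sizes are assumptions on my part, and since
tmpfs is volatile the slave would have to be rebuilt from the master
after every reboot):

  # Mount a 48 GB RAM filesystem and seed a streaming-replication
  # slave into it with a base backup taken from the master.
  mount -t tmpfs -o size=48G tmpfs /mnt/pgslave
  pg_basebackup -h master.example.com -U replication -D /mnt/pgslave -P
  # ...then configure the slave for recovery/hot standby and point
  # the read-only queries at it.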
Yours, Stefan