Re: Tuning Postgres for single user manipulating large amounts of data

From: Andy Colson
Subject: Re: Tuning Postgres for single user manipulating large amounts of data
Date:
Msg-id: 4D00ECCA.604@squeakycode.net
In reply to: Tuning Postgres for single user manipulating large amounts of data  (Paul Taylor <ijabz@fastmail.fm>)
Responses: Re: Tuning Postgres for single user manipulating large amounts of data  (Andy Colson <andy@squeakycode.net>)
List: pgsql-general
On 12/9/2010 6:25 AM, Paul Taylor wrote:
> Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
> I'm using the database with just one db connection to build a Lucene
> search index from some of the data, and I'm trying to improve
> performance. The key thing is that I'm only a single user but
> manipulating large amounts of data, i.e. processing tables with up to 10
> million rows in them, so I think I want to configure Postgres so that it
> can create large temporary tables in memory.
>
> I've tried changing various parameters such as shared_buffers, work_mem
> and checkpoint_segments, but I don't really understand what those values
> are, and the documentation seems to be aimed towards configuring for
> multiple users, and my changes make things worse. For example, my machine
> has 2GB of memory and I read that if using it as a dedicated server you
> should set shared memory to 40% of total memory, but when I increase it
> to more than 30MB, Postgres will not start, complaining about my SHMMAX
> limit.
>
> Paul
>

You need to bump up SHMMAX in your OS.  I'm sure Google knows how
to do it.  (On Linux you use sysctl, so it may be similar on Mac OS X.)
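
Something like this, for example (a sketch, untested on a Mac; the
values leave headroom for a ~1GB shared_buffers):

  # Linux: kernel.shmmax is in bytes, kernel.shmall in 4kB pages
  sysctl -w kernel.shmmax=1610612736
  sysctl -w kernel.shmall=393216

  # Mac OS X: same idea, but the knobs live under kern.sysv
  sudo sysctl -w kern.sysv.shmmax=1610612736
  sudo sysctl -w kern.sysv.shmall=393216

Put the same values in /etc/sysctl.conf so they survive a reboot.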

checkpoint_segments: I've bumped it up to 10, but only when inserting a
huge amount of data, so checkpoints happen less often during the load.
Not sure how much it'll help otherwise.

shared_buffers: this is the big one.  Set it big, 1GB maybe.  (This is
the setting that needs the SHMMAX bump above.)

work_mem: this is for temp work a query might need to do, like sorting,
merging, etc.  A big value (100MB or so) would be OK.  It's per
operation, per connection, but since there is only one of you, splurge.
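
And since it's a per-session thing, you don't even need to touch the
config file for this one.  A sketch, assuming your index build runs
over that single connection:

  -- run on the connection doing the heavy lifting;
  -- it only lasts for the session
  SET work_mem = '100MB';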

There is also effective_cache_size: it's the amount of memory PG can
assume is being used for disk cache.  It's not something that'll be
allocated, just a hint to the planner.  So you have 2GB: 1GB for PG,
300MB for the OS and other stuff, so 700MB for effective_cache_size?
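
Putting all of that in one place, a postgresql.conf sketch for your 2GB
machine (the numbers are just my guesses above, not gospel, and
shared_buffers needs a restart to take effect):

  # single user doing bulk work, 2GB RAM
  shared_buffers = 1GB            # needs the SHMMAX bump first
  work_mem = 100MB                # per sort/merge operation
  checkpoint_segments = 10        # fewer checkpoints during big loads
  effective_cache_size = 700MB    # planner hint only, never allocated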

In Linux I use "free" to see how much is being used for disk cache, and
set it to that.
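
For example:

  $ free -m
  # the "cached" column on the "Mem:" row is the disk cache, in
  # megabytes -- feed roughly that into effective_cache_size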


-Andy
