Re: Impact of checkpoint_segments under continual load conditions

From: Christopher Petrilli
Subject: Re: Impact of checkpoint_segments under continual load conditions
Date:
Msg-id: 59d991c40507190930244ba9bb@mail.gmail.com
In reply to: Re: Impact of checkpoint_segments under continual load conditions  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Impact of checkpoint_segments under continual load conditions
List: pgsql-performance
On 7/19/05, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Christopher Petrilli <petrilli@gmail.com> writes:
> > Not sure... my benchmark is designed to represent what the database
> > will do under "typical" circumstances, and unfortunately these are
> > typical for the application.  However, I can see about adding some
> > delays, though multiple minutes would be absurd in the application.
> > Perhaps a 5-10 second delay? Would that still be interesting?
>
> I think PFC's question was not directed towards modeling your
> application, but about helping us understand what is going wrong
> (so we can fix it).  It seemed like a good idea to me.

OK, I can modify the code to do that, and I will post it on the web.
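
Roughly, it would look something like the sketch below.  This is only an
illustration -- the actual benchmark script isn't in this message, so the
driver (psycopg2), table name, and batch layout are stand-ins:

import time
import psycopg2  # assumption: any DB-API driver would do here

DELAY_SECONDS = 5  # pause between batches, per the suggestion upthread

def run_benchmark(batches):
    """Insert each batch of rows, pausing between batches."""
    conn = psycopg2.connect("dbname=bench")  # placeholder connection string
    cur = conn.cursor()
    for batch in batches:  # batch: a list of (a, b, c) row tuples
        cur.executemany("INSERT INTO events VALUES (%s, %s, %s)", batch)
        conn.commit()
        time.sleep(DELAY_SECONDS)
    conn.close()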

> The startup transient probably corresponds to the extra I/O needed to
> repopulate shared buffers with a useful subset of your indexes.  But
> just to be perfectly clear: you tried this, and after the startup
> transient it returned to the *original* trend line?  In particular,
> the performance goes into the tank after about 5000 total iterations,
> and not 5000 iterations after the postmaster restart?

This is correct; the TOTAL is what matters, not the specific instance
count.  I did an earlier run with larger batch sizes, and the dropoff
hit at a similar row count, so it's definitely row-count/size related.

> I'm suddenly wondering if the performance dropoff corresponds to the
> point where the indexes have grown large enough to not fit in shared
> buffers anymore.  If I understand correctly, the 5000-iterations mark
> corresponds to 2.5 million total rows in the table; with 5 indexes
> you'd have 12.5 million index entries or probably a couple hundred MB
> total.  If the insertion pattern is sufficiently random that the entire
> index ranges are "hot" then you might not have enough RAM.

This is entirely possible; the current settings are:

shared_buffers = 1000
work_mem = 65535
maintenance_work_mem = 16384
max_stack_depth = 2048
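
For scale, a quick back-of-envelope (assuming the default 8kB block size,
and treating the per-entry size as a rough guess):

# shared_buffers is a count of 8kB pages, so 1000 buffers is tiny:
shared_buffers = 1000
print(shared_buffers * 8192 / 1024.0 / 1024)   # ~7.8 MB of buffer cache

# versus Tom's estimate of the index working set:
index_entries = 2500000 * 5                    # 2.5M rows * 5 indexes
bytes_per_entry = 20                           # rough guess: key + pointer + overhead
print(index_entries * bytes_per_entry / 1024.0 / 1024)   # ~238 MB

So the cache is on the order of 8MB against a couple hundred MB of hot
index pages, which would fit the pattern Tom describes.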

> Again, experimenting with different values of shared_buffers seems like
> a very worthwhile thing to do.

I misunderstood shared_buffers then, as I thought work_mem was where
indexes were kept.  If shared_buffers is where index manipulations
happen, then I can raise it quite a bit.  The machine this is running
on has 2GB of RAM.
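
If that's the case, the conversion is simple, since shared_buffers is
specified as a number of 8kB buffers (the 256MB target below is just an
example figure, not a recommendation):

target_mb = 256                        # example target for the buffer cache
shared_buffers = target_mb * 1024 // 8
print(shared_buffers)                  # 32768 -> "shared_buffers = 32768"
# (changing shared_buffers requires a postmaster restart)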

My concern isn't absolute performance, since this is not representative
hardware, but rather the evenness of the behavior.

Chris
--
| Christopher Petrilli
| petrilli@gmail.com
