Re: Improving the Performance of Full Table Updates

From: Gokulakannan Somasundaram
Subject: Re: Improving the Performance of Full Table Updates
Date:
Msg-id: 9362e74e0709260634q352a6aaejbc503cedc554c7d3@mail.gmail.com
In reply to: Re: Improving the Performance of Full Table Updates  ("Heikki Linnakangas" <heikki@enterprisedb.com>)
Responses: Re: Improving the Performance of Full Table Updates  (Heikki Linnakangas <heikki@enterprisedb.com>)
List: pgsql-hackers
Hi Tom / Heikki,
           Thanks for the suggestion. After profiling I got similar results, so I am thinking of a design like this to get the performance improvement:

a) We can get one page for insert (during the update) and hold the write lock on it until the page gets filled. That way, RelationGetBufferForTuple gets called only once per page of inserts.

b) Do you think we can optimize XLogInsert so that it writes one full page instead of writing an individual record for each tuple on the page? I think we would need to write a recovery routine for that as well. Currently the page gets flushed to the WAL if it is modified for the first time after a checkpoint, so I still need to understand those pieces of code. But do you think it is wise to continue working along this line?

Thanks,
Gokul.

On 9/21/07, Heikki Linnakangas <heikki@enterprisedb.com> wrote:
Gokulakannan Somasundaram wrote:
> Again, if these full-table updates are considered with OLTP applications in
> mind, then this is not a suitable option at all. It will only benefit
> people with data warehouses.
>
> Expecting some more replies....

Start with profiling.

I just ran a quick oprofile run of a full-table UPDATE on a simple table
with one index, and it looks like RelationGetBufferForTuple uses 4.53%
of the CPU time. Out of that, 2.86 percentage points are spent in
ReadBuffer_common. That means that write-locking the heap pages takes at
most 4.53 - 2.86 = 1.67% of the total CPU time.

That's the upper limit of the benefit from the scheme you're proposing.
Surely the effort would be better spent on something else. For example,
if you kept the insertion target page just pinned over the calls, which
wouldn't have the problems with triggers etc., you could save that 2.86%.
Which still isn't much. Or take a look at WAL logging. XLogInsert took
16.06% of the CPU time. Earlier tests have suggested that a big chunk of
that time is spent in CRC calculation. Alternative CRC methods have been
suggested in the past, or perhaps that work could be offloaded to
the WAL writer process, speeding up the UPDATE on a multi-CPU server.

Also, if we're talking about data warehousing, we're talking about big
tables that don't fit in memory. That means that you're likely
bottlenecked by I/O speed, not CPU. If that's the case, saving some CPU
time makes no difference whatsoever. What would help with I/O bottleneck
is to try to make the disk footprint smaller, or make better use of the
I/O bandwidth available.

Three steps to improve throughput:

1. Identify the hardware component that's the bottleneck.
2. Profile the workload to see what's using the bottlenecked resource
the most.
3. Figure out how to make that piece of code cheaper.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
