Re: WAL logging problem in 9.4.3?

From     Todd A. Cook
Subject  Re: WAL logging problem in 9.4.3?
Date
Msg-id   55AE8E6F.7030504@blackducksoftware.com
In reply to   Re: WAL logging problem in 9.4.3?  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses     Re: WAL logging problem in 9.4.3?  (Martijn van Oosterhout <kleptog@svana.org>)
List          pgsql-hackers
Hi,

This thread seemed to trail off without a resolution.  Was anything done?
(See more below.)
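
For anyone picking the thread back up: as I understand it, the optimization
at issue lets COPY skip WAL under wal_level = minimal when the target table
was created or truncated earlier in the same transaction, along these lines
(table and file names invented for illustration):

    BEGIN;
    CREATE TABLE t (id int, payload text);   -- or: TRUNCATE t;
    COPY t FROM '/tmp/data.csv';             -- may skip WAL; the relation is
                                             -- fsync'ed at commit instead
    COMMIT;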

On 07/09/15 19:06, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
>> On 2015-07-06 11:49:54 -0400, Tom Lane wrote:
>>> Rather than reverting cab9a0656c36739f, which would re-introduce a
>>> different performance problem, perhaps we could have COPY create a new
>>> relfilenode when it does this.  That should be safe if the table was
>>> previously empty.
>
>> I'm not convinced that cab9a0656c36739f needs to survive in that
>> form. To me, allowing only one COPY to benefit from the wal_level =
>> minimal optimization has a significantly higher cost than
>> cab9a0656c36739f.
>
> What evidence have you got to base that value judgement on?
>
> cab9a0656c36739f was based on an actual user complaint, so we have good
> evidence that there are people out there who care about the cost of
> truncating a table many times in one transaction.

I'm the complainer mentioned in the cab9a0656c36739f commit message. :)

FWIW, we use a temp table to split a join across 4 largish tables
(10^8 rows or more each) and 2 small tables (10^6 rows each).  We
write the results of joining the 2 largest tables into the temp
table, and then join that to the other 4.  This gave significant
performance benefits because the planner would know the exact row
count of the 2-way join heading into the 4-way join.  After commit
cab9a0656c36739f, we got another noticeable performance improvement
(I did timings before and after, but I can't seem to put my hands
on the numbers right now).
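
Roughly, the query shape is (all names invented; pair_tmp is the temp table):

    -- phase 1: materialize the 2-way join of the two largest tables
    INSERT INTO pair_tmp
    SELECT b1.key, b1.val1, b2.val2
    FROM   big1 b1
    JOIN   big2 b2 USING (key);

    -- phase 2: join the temp table to the remaining 4 tables
    SELECT p.key, b3.val3, b4.val4, s1.tag, s2.tag
    FROM   pair_tmp p
    JOIN   big3   b3 USING (key)
    JOIN   big4   b4 USING (key)
    JOIN   small1 s1 USING (key)
    JOIN   small2 s2 USING (key);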

We do millions of these queries every day in batches.  Each batch
reuses a single temp table (truncating it before each pair of joins)
so as to reduce the churn in the system catalogs.  In case it matters,
the temp table is created with ON COMMIT DROP.
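
So each batch looks something like this (again, names invented):

    BEGIN;
    CREATE TEMP TABLE pair_tmp (key bigint, val1 text, val2 text)
        ON COMMIT DROP;

    -- repeated for each pair of joins in the batch:
    TRUNCATE pair_tmp;
    INSERT INTO pair_tmp
    SELECT b1.key, b1.val1, b2.val2
    FROM   big1 b1 JOIN big2 b2 USING (key);
    -- ...then the 4-way join against pair_tmp, as above...

    COMMIT;   -- pair_tmp is dropped automatically (ON COMMIT DROP)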

This was (and still is) done on 9.2.x.

HTH.

-- todd cook
-- tcook@blackducksoftware.com

> On the other hand,
> I know of no evidence that anyone's depending on multiple sequential
> COPYs, nor intermixed COPY and INSERT, to be fast.  The original argument
> for having this COPY optimization at all was to make restoring pg_dump
> scripts in a single transaction fast; and that use-case doesn't care
> about anything but a single COPY into a virgin table.
>
> I think you're worrying about exactly the wrong case.
>
>> My tentative guess is that the best course is to
>> a) Make heap_truncate_one_rel() create a new relfilenode. That fixes the
>>     truncation replay issue.
>> b) Force new pages to be used when using the heap_sync mode in
>>     COPY. That avoids the INIT danger you found. It seems rather
>>     reasonable to avoid using pages that have already been the target of
>>     WAL logging here in general.
>
> And what reason is there to think that this would fix all the problems?
> We know of those two, but we've not exactly looked hard for other cases.
> Again, the only known field usage for the COPY optimization is the pg_dump
> scenario; were that not so, we'd have noticed the problem long since.
> So I don't have any faith that this is a well-tested area.
>
>             regards, tom lane