Re: poor performance when recreating constraints on large tables

From: Claudio Freire
Subject: Re: poor performance when recreating constraints on large tables
Date:
Msg-id: BANLkTi=3w8+sDbYZeDz7Mpj_QMEErEURVQ@mail.gmail.com
In reply to: poor performance when recreating constraints on large tables (Mike Broers <mbroers@gmail.com>)
List: pgsql-performance
---------- Forwarded message ----------
From: Claudio Freire <klaussfreire@gmail.com>
Date: Wed, Jun 8, 2011 at 11:57 PM
Subject: Re: [PERFORM] poor performance when recreating constraints on
large tables
To: Samuel Gendler <sgendler@ideasculptor.com>


On Wed, Jun 8, 2011 at 9:57 PM, Samuel Gendler
<sgendler@ideasculptor.com> wrote:
> Sure, but if it is a query that is slow enough for a time estimate to be
> useful, odds are good that stats that are that far out of whack would
> actually be interesting to whoever is looking at the time estimate, so
> showing some kind of 'N/A' response once things have gotten out of whack
> wouldn't be unwarranted.  Not that I'm suggesting that any of this is a
> particularly useful exercise.  I'm just playing with the original thought
> experiment suggestion.

There's a trick to get exactly that:

Do an EXPLAIN and fetch the expected row count for the result set,
then create a dummy sequence and add a dummy field to the query:
"nextval(...) AS progress".

Now, you probably won't get to read the progress column itself, but
that doesn't matter. Open up another session and query the sequence
there: sequences are non-transactional, so the counter's current
value is visible immediately.

All the smarts about figuring out the expected result set's size
remain in the application, which is fine by me.
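
For concreteness, here's a minimal sketch of the trick (the sequence
name and "big_table" are made up, standing in for whatever slow query
you're watching):

    -- Session 1: set up a throwaway counter and get the estimate.
    CREATE SEQUENCE progress_seq;

    EXPLAIN SELECT * FROM big_table;
    --  Seq Scan on big_table  (cost=0.00..123456.00 rows=1000000 ...)

    -- Run the real query with the dummy progress column tacked on;
    -- nextval() fires once per row the plan produces.
    SELECT t.*, nextval('progress_seq') AS progress
    FROM big_table t;

    -- Session 2: sequences ignore transaction boundaries, so this
    -- sees the counter advancing while session 1 is still running.
    SELECT last_value FROM progress_seq;
    -- done% ~= 100 * last_value / 1000000 (the planner's estimate)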
