Re: Performance of pg_dump on PGSQL 8.0

From: John Vincent
Subject: Re: Performance of pg_dump on PGSQL 8.0
Date:
Msg-id: c841561b0606141004w62d584ake46d598ccb1a4b22@mail.gmail.com
In reply to: Re: Performance of pg_dump on PGSQL 8.0  (Scott Marlowe <smarlowe@g2switchworks.com>)
Responses: Re: Performance of pg_dump on PGSQL 8.0  ("A.M." <agentm@themactionfaction.com>)
           Re: Performance of pg_dump on PGSQL 8.0  (Scott Marlowe <smarlowe@g2switchworks.com>)
List: pgsql-performance


On 6/14/06, Scott Marlowe <smarlowe@g2switchworks.com> wrote:
> On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> > -- this is the third time I've tried sending this and I never saw it get
> > through to the list. Sorry if multiple copies show up.
> >
> > Hi all,
>
> BUNCHES SNIPPED
>
> > work_mem = 1048576 (I know this is high but you should see some of our
> > sorts and aggregates)
>
> Ummm.  That's REALLY high.  You might want to consider lowering the
> global value here, and then crank it up on a case-by-case basis, like
> during nighttime report generation.  Just one or two queries could
> theoretically run your machine out of memory right now.  Just put a "set
> work_mem=1000000" in your script before the big query runs.
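For illustration, a minimal sketch of the per-job override Scott describes, assuming a psql-driven nightly job (the database name, table, and query below are made up, and on 8.0 work_mem is given in KB):

    psql -d warehouse <<'SQL'
    -- Raise work_mem for this session only; the global default stays low.
    -- On 8.0 the value is in KB, so 1000000 is roughly 1 GB.
    SET work_mem = 1000000;
    -- Stand-in for the big report query (nightly_sales is hypothetical):
    SELECT region, sum(amount)
      FROM nightly_sales
     GROUP BY region;
    SQL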


I know it is, but that's what we need for some of our queries. Our ETL tool (Informatica) and BI tool (Actuate) won't let us set session parameters as part of our jobs, so we need the global value that high for those purposes. We have some really nasty queries that will be fixed on our new server.

E.g., we have a table called loan_account_agg_fact that has 200+ million rows and contains every possible combination of late statuses for a customer account (i.e. 1 day late, 2 days late, 3 days late), so it gets inserts for new customers but updates for existing records as part of our warehouse load. Part of the new layout is combining late ranges, so instead of a row per number of days late we have a range of days (i.e. 1-15, 16-30, ...). Even with work_mem that large, the load of that loan_account_agg_fact table creates over 3 GB of temp tables!
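Purely as a sketch of that range consolidation (the days_late column and the bucket boundaries beyond 1-15 and 16-30 are assumptions, not the real schema):

    psql -d warehouse <<'SQL'
    -- Hypothetical example: collapse per-day late statuses into ranges.
    SELECT CASE
             WHEN days_late BETWEEN  1 AND 15 THEN '1-15'
             WHEN days_late BETWEEN 16 AND 30 THEN '16-30'
             ELSE '31+'
           END AS late_range,
           count(*) AS rows_in_range
      FROM loan_account_agg_fact
     GROUP BY 1;
    SQL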


> That's exactly what we do.  We just do a normal backup, and have a
> script that gzips anything in the backup directory that doesn't end in
> .gz...  If you've got space to burn, as you say, then use it at least a
> few days to see how it affects backup speeds.
>
> Seeing as how you're CPU bound, most likely the problem is just the
> compressed backup.
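A rough sketch of the dump-then-compress scheme Scott describes (the directory, database name, and file naming are invented for illustration):

    #!/bin/sh
    # Take a plain, uncompressed dump so pg_dump itself stays cheap on CPU,
    # then compress afterwards with whatever spare cycles are available.
    BACKUP_DIR=/var/backups/pgsql
    DB=warehouse
    pg_dump "$DB" > "$BACKUP_DIR/$DB.$(date +%Y%m%d).sql"
    # gzip anything in the backup directory that doesn't already end in .gz
    for f in "$BACKUP_DIR"/*; do
        case "$f" in
            *.gz) ;;          # already compressed, skip it
            *)    gzip "$f" ;;
        esac
    done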

I'm starting to think the same thing. I'll see how the COPY of the single largest table that I'm running right now does, and make a judgment based on that.
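One way to time a plain COPY of a single table like that (database name and output path here are arbitrary):

    time psql -d warehouse -c "\copy loan_account_agg_fact to '/tmp/laaf.out'"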

--
John E. Vincent
