Re: 121+ million record table perf problems

From: Vivek Khera
Subject: Re: 121+ million record table perf problems
Date:
Msg-id: FC775E8C-28AE-40FE-99D1-E909A013C1A7@khera.org
In response to: Re: 121+ million record table perf problems  (Andrew Sullivan <ajs@crankycanuck.ca>)
List: pgsql-performance
On May 18, 2007, at 2:30 PM, Andrew Sullivan wrote:

> Note also that your approach of updating all 121 million records in
> one statement is approximately the worst way to do this in Postgres,
> because it creates 121 million dead tuples on your table.  (You've
> created some number of those by killing the query as well.)
>
> All of that said, 17 hours seems kinda long.

I don't think that is too long.  Under MVCC every updated row gets a
new copy, so an update that touches every row roughly doubles the
table, growing it one page at a time, and adding that many pages to a
table that big takes a while.  Add in the single disk and you're
flying the disk head all over the place, so it will just be slow.  No
way around it.
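
If you do end up having to touch every row, the usual trick is to do
it in batches with a VACUUM between them, so the dead tuples get
reused instead of bloating the table.  A rough sketch only -- the
table, column, and id ranges here are made up, just to show the shape:

UPDATE bigtable SET some_col = some_col + 1
 WHERE id >= 0 AND id < 1000000;
VACUUM bigtable;
-- ...then the next id range, and so on, usually driven by a script
UPDATE bigtable SET some_col = some_col + 1
 WHERE id >= 1000000 AND id < 2000000;
VACUUM bigtable;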

And just for good measure, I ran a count on one of my big tables
which consists of two integers and a varchar(7):

db=> select count(*) from mytable;
    count
-----------
311994721
(1 row)

Time: 157689.057 ms

So I'm going to bet $1 that you're I/O starved.
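
Easy enough to check while the query runs, too.  On a Linux box with
sysstat installed, something like

iostat -x 5

will show it: if %util on the data disk sits near 100 and await keeps
climbing, the disk is the bottleneck.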

Also, for memory usage, postgres won't use more than you tell it to...
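The knobs for that live in postgresql.conf.  These numbers are purely
illustrative -- you'd size them to your actual RAM:

shared_buffers = 256MB           # shared disk buffer cache
work_mem = 32MB                  # per sort/hash operation
maintenance_work_mem = 256MB     # VACUUM, CREATE INDEX, etc.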

