Re: two memory-consuming postgres processes

From: Craig James
Subject: Re: two memory-consuming postgres processes
Date:
Msg-id: 481B8160.2090106@emolecules.com
In reply to: Re: two memory-consuming postgres processes  ("Scott Marlowe" <scott.marlowe@gmail.com>)
Responses: Re: two memory-consuming postgres processes  (Alexy Khrabrov <deliverable@gmail.com>)
List: pgsql-performance
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov <deliverable@gmail.com> wrote:
>  I naively thought that if I have a 100,000,000 row table, of the form
> (integer,integer,smallint,date), and add a real column to it, it will scroll
> through the memory reasonably fast.

In Postgres, an update is the same as a delete/insert.  That means that changing the data in one column rewrites ALL of the columns for that row, and you end up with a table that's 50% dead space, which you then have to vacuum.

Sometimes if you have a "volatile" column that goes with several "static" columns, you're far better off to create a second table for the volatile data, duplicating the primary key in both tables.  In your case, it would mean the difference between 10^8 inserts of (int, float), very fast, compared to what you're doing now, which is 10^8 inserts and 10^8 deletes of (int, int, smallint, date, float), followed by a big vacuum/analyze (also slow).

The down side of this design is that later on, it requires a join to fetch all the data for each key.
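The two-table layout above can be sketched roughly as follows (a minimal illustration using Python's sqlite3 as a stand-in for Postgres; the table and column names are made up, and SQLite doesn't have Postgres's MVCC dead-tuple behavior, so this only shows the schema split and the join, not the vacuum savings):

```python
import sqlite3  # stand-in for Postgres; names here are assumptions, not the poster's schema

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Static columns stay in the original wide table...
cur.execute("""CREATE TABLE facts (
    id INTEGER PRIMARY KEY,
    a  INTEGER,
    b  INTEGER,   -- smallint in Postgres
    d  TEXT       -- date
)""")

# ...and the volatile real column goes in a narrow side table
# that duplicates the primary key.
cur.execute("""CREATE TABLE scores (
    id    INTEGER PRIMARY KEY,
    score REAL
)""")

cur.execute("INSERT INTO facts VALUES (1, 10, 2, '2008-05-02')")

# Populating the new column is now a narrow insert per key,
# instead of a full-row delete/insert on the wide table:
cur.execute("INSERT INTO scores VALUES (1, 3.14)")

# The price: a join to reassemble the full row for each key.
row = cur.execute("""SELECT f.id, f.a, f.b, f.d, s.score
                     FROM facts f JOIN scores s USING (id)""").fetchone()
print(row)
```

In Postgres you'd express the same thing with CREATE TABLE / INSERT / JOIN directly; the point is just that the 10^8 writes touch only (int, float) rows.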

You do have a primary key on your data, right?  Or some sort of index?

Craig
