Re: moving data between tables causes the db to overwhelm the system

From: Robert Schnabel
Subject: Re: moving data between tables causes the db to overwhelm the system
Date:
Msg-id: 4A9D1486.9010603@missouri.edu
In response to: Re: moving data between tables causes the db to overwhelm the system (Pierre Frédéric Caillaud <lists@peufeu.com>)
List: pgsql-performance
> If you want to partition your huge data set by "time", and the data
> isn't already ordered by "time" on disk, you could do this:
>
> SET work_mem TO something very large like 10GB since you got 32GB RAM,
> check your shared buffers etc first;
> CREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY "time"; <- huge
> sort, will take some time
>
> SET maintenance_work_mem TO something very large;
> CREATE INDEX tmp_time ON tmp( "time" );
>
> CREATE TABLE partition1 AS SELECT * FROM tmp WHERE "time" BETWEEN
> beginning AND end;
> (repeat...)
>
> Since tmp is clustered on "time" you'll get a nice fast bitmap-scan,
> and you won't need to seq-scan N times (or randomly index-scan) bigTable.
>
I went through the same exercise a couple of months ago with a table that
had ~1.7 billion rows.  I used an approach similar to the one described
above, except that I skipped the tmp table and instead put the ORDER BY on
each SELECT from the bigTable as I inserted into a partition (I didn't
have many partitions).  My data was structured such that this was easier
than doing one huge sort.  In any event, it worked great and my smaller
partitions are much, much faster.
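
Roughly, each step looked like the sketch below; the table names, column
name, and date boundaries here are only illustrative, so substitute your
own schema and ranges:

  -- Clone the structure of the big table for one partition
  -- (hypothetical names; repeat once per date range).
  CREATE TABLE partition_2009q1 (LIKE bigTable INCLUDING DEFAULTS);

  -- One sorted insert per partition; each sort touches only that
  -- partition's rows, so it stays far smaller than one huge sort.
  INSERT INTO partition_2009q1
  SELECT * FROM bigTable
  WHERE "time" >= '2009-01-01' AND "time" < '2009-04-01'
  ORDER BY "time";

  -- Index and analyze the new partition before querying it.
  CREATE INDEX partition_2009q1_time_idx ON partition_2009q1 ("time");
  ANALYZE partition_2009q1;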

Bob
