Re: Inserts in 'big' table slowing down the database

From: Stefan Keller
Subject: Re: Inserts in 'big' table slowing down the database
Date:
Msg-id: CAFcOn2_zpBQ1Tky_vHOkBGMLwjHJFdNSUgyZhkg-i3b8-WWMLA@mail.gmail.com
In reply to: Re: Inserts in 'big' table slowing down the database  (Ivan Voras <ivoras@freebsd.org>)
Responses: Re: Inserts in 'big' table slowing down the database  (Greg Williamson <gwilliamson39@yahoo.com>)
Re: Inserts in 'big' table slowing down the database  (Shaun Thomas <sthomas@optionshouse.com>)
Re: Inserts in 'big' table slowing down the database  (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-performance
Sorry for the delay. I had to sort out the problem (among other things).

It's mainly about swapping.

The table nodes contains about 2^31 entries and occupies about 80 GB
of disk space, plus indexes.
If one stored the geom values in a big array (with id as the array
index), it would only take about 16 GB, which means the ids are dense
(few deletes).
Updates then come in every hour as bulk INSERT statements whose
entries have ids in sorted order.
Now PG becomes slower and slower!
CLUSTER could help - but obviously that operation needs a table lock,
and if it takes longer than an hour, it delays the next update.
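
(For reference, the sizes above can be checked with something along
these lines; the query is only illustrative:)

  SELECT pg_size_pretty(pg_relation_size('nodes'))       AS table_size,
         pg_size_pretty(pg_indexes_size('nodes'))        AS index_size,
         pg_size_pretty(pg_total_relation_size('nodes')) AS total_size;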

Any ideas? Partitioning?
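
If partitioning, I imagine inheritance-based range partitions on id
(the 9.1 way). A rough sketch - the partition bounds, names and the
trigger below are illustrative only, and each child would also need
its own indexes (the primary key and gist index are not inherited):

  CREATE TABLE nodes_p0 (CHECK (id >=         0 AND id <  500000000))
      INHERITS (nodes);
  CREATE TABLE nodes_p1 (CHECK (id >= 500000000 AND id < 1000000000))
      INHERITS (nodes);

  -- Redirect inserts on the parent into the matching child.
  CREATE OR REPLACE FUNCTION nodes_insert_trigger() RETURNS trigger AS $$
  BEGIN
      IF NEW.id < 500000000 THEN
          INSERT INTO nodes_p0 VALUES (NEW.*);
      ELSE
          INSERT INTO nodes_p1 VALUES (NEW.*);
      END IF;
      RETURN NULL;  -- suppress the insert into the parent table
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER nodes_insert_redirect
      BEFORE INSERT ON nodes
      FOR EACH ROW EXECUTE PROCEDURE nodes_insert_trigger();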

Yours, S.

2012/9/3 Ivan Voras <ivoras@freebsd.org>:
> On 03/09/2012 13:03, Stefan Keller wrote:
>> Hi,
>>
>> I'm having performance issues with a simple table containing 'Nodes'
>> (points) from OpenStreetMap:
>>
>>   CREATE TABLE nodes (
>>       id bigint PRIMARY KEY,
>>       user_name text NOT NULL,
>>       tstamp timestamp without time zone NOT NULL,
>>       geom GEOMETRY(POINT, 4326)
>>   );
>>   CREATE INDEX idx_nodes_geom ON nodes USING gist (geom);
>>
>> The number of rows grows steadily and will soon reach one billion
>> (1'000'000'000), hence the bigint id.
>> Now, hourly inserts (updates and deletes) are constantly slowing down
>> the database (PostgreSQL 9.1).
>> Before looking at non-durable settings [1], I'd like to know what
>> choices I have to tune it while keeping the database productive:
>> cluster index? partition table? use tablespaces? reduce physical block size?
>
> You need to describe in detail what "slowing down" means in your
> case. Do the disk drives somehow do more operations per transaction?
> Does the database use more CPU cycles? Is there swapping? What is the
> expected (previous) performance?
>
> At a guess, it is very unlikely that using non-durable settings will
> help you here.
>

