Re: insert/update tps slow with indices on table > 1M rows

From: andrew klassen
Subject: Re: insert/update tps slow with indices on table > 1M rows
Date:
Msg-id: 761633.30824.qm@web37307.mail.mud.yahoo.com
In response to: insert/update tps slow with indices on table > 1M rows  (andrew klassen <aptklassen@yahoo.com>)
List: pgsql-performance
I am not currently using COPY, but I am using prepared statements
for table inserts/updates, so the overhead for the actual data transfer
should be pretty low. I am sending at most 300 inserts/updates
per transaction, but that is just an arbitrary value. When the queue
grows, I could easily send more per transaction. I did experiment
a little, but it did not seem to help significantly at the time.
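
For reference, the batching looks roughly like this (table and column names
are placeholders, not the real schema):

    -- prepared once per connection
    PREPARE ins_item (bigint, integer, integer, text, text) AS
        INSERT INTO work_items (id, col_a, col_b, note1, note2)
        VALUES ($1, $2, $3, $4, $5);

    -- then, per batch:
    BEGIN;
    EXECUTE ins_item (1, 10, 20, 'abc', 'def');
    EXECUTE ins_item (2, 11, 21, 'ghi', 'jkl');
    -- ... up to ~300 EXECUTEs ...
    COMMIT;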
 
The system has 4G of total memory. Shared memory is locked by the OS,
i.e. not paged, so I am only using shared_buffers=28MB.
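
For reference, the memory settings in play (the maintenance_work_mem value
comes from my earlier message quoted below) can be double-checked from any
session:

    -- postgresql.conf has shared_buffers = 28MB, maintenance_work_mem = 128MB
    SHOW shared_buffers;
    SHOW maintenance_work_mem;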
 
The maximum data per row is 324 bytes, assuming the maximum expected length
of the two text fields. There are 5 indices in total: one on an 8-byte field,
two on 4-byte fields, and two on text fields. As mentioned, all indices are btree.
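
The index definitions look roughly like this (names are made up; the
fillfactor=50 setting is the one mentioned in my earlier message quoted below):

    -- 1 bigint, 2 integer and 2 text btree indexes, all with fillfactor 50
    CREATE INDEX work_items_id_idx    ON work_items (id)    WITH (fillfactor = 50);
    CREATE INDEX work_items_col_a_idx ON work_items (col_a) WITH (fillfactor = 50);
    CREATE INDEX work_items_col_b_idx ON work_items (col_b) WITH (fillfactor = 50);
    CREATE INDEX work_items_note1_idx ON work_items (note1) WITH (fillfactor = 50);
    CREATE INDEX work_items_note2_idx ON work_items (note2) WITH (fillfactor = 50);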
 

 
----- Original Message ----
From: PFC <lists@peufeu.com>
To: andrew klassen <aptklassen@yahoo.com>; pgsql-performance@postgresql.org
Sent: Tuesday, June 3, 2008 7:15:10 PM
Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows

On Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <aptklassen@yahoo.com>
wrote:

> Running postgres 8.2.5
>  
> I have a table that has 5 indices, no foreign keys or any
> dependency on any other table. If I delete the database and
> start entering entries, everything works very well until I get
> to some point (let's say 1M rows). Basically, I have a somewhat
> constant rate of inserts/updates that go into a work queue and then
> get passed to postgres. The work queue starts filling up as the
> responsiveness slows down. For example, at 1.5M rows it takes
> >2 seconds for 300 inserts issued in one transaction.
>  
> Prior to this point I had added regular VACUUM ANALYZE on
> the table and it did help. I increased maintenance work memory to
> 128M. I also set the fillfactor on the table indices to 50% (not sure
> if that made any difference; I have to study the results more closely).
>  
> In an effort to figure out the bottleneck, I DROPed 4 of the indices
> on the table and the tps increased to over 1000. I don't really know
> which index removal gave the best performance improvement. I
> dropped 2 32-bit indices and 2 text indices, all of which use btree.
>  
> The cpu load is not that high, i.e. there is plenty of idle cpu. I am
> running an older version of freebsd and the iostat output is not very
> detailed. During this time the reported rate is low, < 10Mbs. The system
> has an LSI Logic MegaRAID controller with 2 disks.
>  
> Any ideas on how to find the bottleneck/decrease the overhead of index usage?
>  
> Thanks.
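
One way to narrow down which of those indexes costs the most is to drop
them one at a time inside a transaction, replay a representative batch,
time it, and roll back (the index name below is a placeholder):

    BEGIN;
    DROP INDEX work_items_note1_idx;   -- placeholder name
    -- ... replay ~300 inserts/updates here and time them ...
    ROLLBACK;   -- DROP INDEX is transactional, so the index is restored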

    If you are filling an empty table, it is generally faster to create the
indexes after the data import.
    Of course, if this is a live table or you need the indexes during the
import, this is not an option.
    I find it generally faster to lightly preprocess the data, generate text
files, import them with COPY, and then do the rest of the processing in SQL.
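
    A rough sketch of that workflow, with placeholder file and table names:

    -- bulk load a preprocessed, tab-delimited text file
    -- (server-side COPY needs superuser; clients can use COPY ... FROM STDIN)
    COPY work_items (id, col_a, col_b, note1, note2)
        FROM '/tmp/work_items_batch.txt';
    -- if the table started out empty, build the indexes only afterwards
    CREATE INDEX work_items_id_idx ON work_items (id);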

    How much RAM is in the box? What is the size of the data & indexes?
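
    A quick way to get those numbers (the relation-name pattern below is a
placeholder):

    SELECT relname,
           pg_size_pretty(pg_relation_size(oid)) AS size
      FROM pg_class
     WHERE relname LIKE 'work_items%'   -- placeholder pattern
     ORDER BY pg_relation_size(oid) DESC;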

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
