Thread: Postgres performance with large tables.

Postgres performance with large tables.

From: Wim
Date:
Hello All,

 I have a database with the inventory of my backbone routers. The 30
routers are stored in a table.
Every 5 minutes, I insert the interface counters for all the routers
into a table.
When my table (counters_table) has fewer than, let's say, 100,000
records, the inserts are done within 50 seconds for all the routers.
I noticed that, when my table contains more than 500,000 records, the
inserts take about 230 seconds...

My DB runs on a Pentium III with 512 MB RAM and a single 1.13 GHz CPU.

I used EXPLAIN to test my select queries, but everything seems fine
(correct use of indexes...)


How can I speed up the inserts (and improve the performance)?



Thanx!

Wim


Re: Postgres performance with large tables.

From: Andrew McMillan
Date:
On Thu, 2003-02-06 at 04:28, Wim wrote:
> Hello All,
>
>  I have a database with the inventory of my backbone routers. The 30
> routers are stored in a table.
> Every 5 minutes, I insert the interface counters for all the routers
> into a table.
> When my table (counters_table) has fewer than, let's say, 100,000
> records, the inserts are done within 50 seconds for all the routers.
> I noticed that, when my table contains more than 500,000 records, the
> inserts take about 230 seconds...
>
> My DB runs on a Pentium III with 512 MB RAM and a single 1.13 GHz CPU.
>
> I used EXPLAIN to test my select queries, but everything seems fine
> (correct use of indexes...)
>
>
> How can I speed up the inserts (and improve the performance)?

Are you telling us the full story?  Is there any parallel process
running against these tables which does updates or deletes?  What
constraints do you have on the table?  Triggers?  Indexes?

If you have (e.g.) a constraint which causes a lookup against a field in
a similarly growing table that is not indexed, you are likely to see
this sort of behaviour.
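
As a rough sketch of what to look for (counters_table is the name from
your message; router_id is only a hypothetical column name):

    -- In psql, list the indexes, constraints and triggers on the growing table:
    \d counters_table
    -- If a constraint or trigger does its lookup on a column that has no index,
    -- adding one is usually the fix, e.g.:
    CREATE INDEX counters_table_router_id_idx ON counters_table (router_id);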

If you have processes that are updating/deleting within the table in
parallel then you probably want to vacuum the table (much) more often.
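
Something like this after each batch of changes, run against whichever
table is being updated or deleted from (assuming here that it is the
counters_table from your first message):

    -- Reclaim dead row versions and refresh the planner statistics:
    VACUUM ANALYZE counters_table;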

50 seconds seems ridiculously long for an insert in such a table - I
have tables with millions of rows and see nothing like that sort of
slowdown.

Cheers,
                    Andrew.
--
---------------------------------------------------------------------
Andrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/         PHYS: Level 2, 150-154 Willis St
DDI: +64(4)916-7201     MOB: +64(21)635-694    OFFICE: +64(4)499-2267
           Survey for nothing with http://survey.net.nz/
---------------------------------------------------------------------


Re: Postgres performance with large tables.

From: Wim
Date:
Andrew McMillan wrote:

>On Thu, 2003-02-06 at 04:28, Wim wrote:
>
>
>>Hello All,
>>
>> I have a database with the inventory of my backbone routers. The 30
>>routers are stored in a table.
>>Every 5 minutes, I insert the interface counters for all the routers
>>into a table.
>>When my table (counters_table) has fewer than, let's say, 100,000
>>records, the inserts are done within 50 seconds for all the routers.
>>I noticed that, when my table contains more than 500,000 records, the
>>inserts take about 230 seconds...
>>
>>My DB runs on a Pentium III with 512 MB RAM and a single 1.13 GHz CPU.
>>
>>I used EXPLAIN to test my select queries, but everything seems fine
>>(correct use of indexes...)
>>
>>
>>How can I speed up the inserts (and improve the performance)?
>>
>>
>
>Are you telling us the full story?  Is there any parallel process
>running against these tables which does updates or deletes?  What
>constraints do you have on the table?  Triggers?  Indexes?
>
>If you have (e.g.) a constraint which causes a lookup against a field in
>a similarly growing table that is not indexed, you are likely to see
>this sort of behaviour.
>
>If you have processes that are updating/deleting within the table in
>parallel then you probably want to vacuum the table (much) more often.
>
I think I'll try that first.
I do:
BEGIN;
    SELECT routers FROM routers_table WHERE blabla;
    UPDATE routers_table SET timestamp = blabla;
    INSERT INTO routers_counters VALUES (blablabla);
COMMIT;

How often should I vacuum the table? After every run of the script, or 2
or 3 times a day?

>
>50 seconds seems ridiculously long for an insert in such a table - I
>have tables with millions of rows and see nothing like that sort of
>slowdown.
>
>Cheers,
>                    Andrew.
>
>




Re: Postgres performance with large tables.

From: Andrew McMillan
Date:
On Thu, 2003-02-06 at 20:39, Wim wrote:
> Andrew McMillan wrote:
> >
> >If you have processes that are updating/deleting within the table in
> >parallel then you probably want to vacuum the table (much) more often.
> >
> I think I'll try that first.
> I do:
> BEGIN;
>     SELECT routers FROM routers_table WHERE blabla;
>     UPDATE routers_table SET timestamp = blabla;
>     INSERT INTO routers_counters VALUES (blablabla);
> COMMIT;
>
> How often should I vacuum the table? After every run of the script, or 2
> or 3 times a day?

I would tend to "VACUUM routers_table" after each run of the script;
given that it is a small table (presumably 30 rows), it should be next to
no overhead and will ensure it remains within 1-2 physical pages.
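
Something like this, reusing the placeholders from your script.  Note
that VACUUM cannot run inside a transaction block, so it has to go
after the COMMIT:

    BEGIN;
        SELECT routers FROM routers_table WHERE blabla;
        UPDATE routers_table SET timestamp = blabla;
        INSERT INTO routers_counters VALUES (blablabla);
    COMMIT;
    -- Outside the transaction: reclaims the dead row version left by the UPDATE.
    VACUUM routers_table;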

If you continue to experience problems, you are best advised to provide
full schema and EXPLAIN output along with your questions.
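
For example (a hypothetical query - substitute one of your real selects;
EXPLAIN ANALYZE actually runs the query and reports real timings):

    EXPLAIN ANALYZE SELECT * FROM routers_counters WHERE blabla;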

Cheers,
                    Andrew.
--
---------------------------------------------------------------------
Andrew @ Catalyst .Net.NZ Ltd, PO Box 11-053, Manners St,  Wellington
WEB: http://catalyst.net.nz/         PHYS: Level 2, 150-154 Willis St
DDI: +64(4)916-7201     MOB: +64(21)635-694    OFFICE: +64(4)499-2267
           Survey for nothing with http://survey.net.nz/
---------------------------------------------------------------------


Re: Postgres performance with large tables.

From: Wim
Date:
Andrew McMillan wrote:

>On Thu, 2003-02-06 at 20:39, Wim wrote:
>
>
>>Andrew McMillan wrote:
>>
>>
>>>If you have processes that are updating/deleting within the table in
>>>parallel then you probably want to vacuum the table (much) more often.
>>>
>>>
>>>
>>I think I'll try that first.
>>I do:
>>BEGIN;
>>    SELECT routers FROM routers_table WHERE blabla;
>>    UPDATE routers_table SET timestamp = blabla;
>>    INSERT INTO routers_counters VALUES (blablabla);
>>COMMIT;
>>
>>How often should I vacuum the table? After every run of the script, or 2
>>or 3 times a day?
>>
>>
>
>I would tend to "VACUUM routers_table" after each run of the script;
>given that it is a small table (presumably 30 rows), it should be next to
>no overhead and will ensure it remains within 1-2 physical pages.
>
>If you continue to experience problems, you are best advised to provide
>full schema and EXPLAIN output along with your questions.
>
>Cheers,
>                    Andrew.
>
>

Thanx,

Let's work on it!


Wim