Thread: Fragmenting tables in postgres


Fragmenting tables in postgres

From:
karthikg@yahoo.com (Karthik Guruswamy)
Date:
Hi,
Anyone tried fragmenting tables into multiple sub tables 
transparently through Postgres rewrite rules ? I'm having 
a table with 200,000 rows with varchar columns and noticed 
that updates,inserts take a lot longer time compared to a 
few rows in the same table. I have a lot of memory in my 
machine like 2Gig and 600,000 buffers. 
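(For context, the rewrite-rule approach being asked about is usually sketched as a view over the fragments plus conditional INSTEAD rules that route inserts. The following is only an illustration of that idea; the table, column, and rule names are all hypothetical, and the key ranges are arbitrary.)

```sql
-- Hypothetical sketch: two fragments presented as one table via a view.
CREATE TABLE messages_a (id integer, body varchar);
CREATE TABLE messages_b (id integer, body varchar);

CREATE VIEW messages AS
    SELECT * FROM messages_a
    UNION ALL
    SELECT * FROM messages_b;

-- The view itself is not insertable, so an unconditional rule
-- suppresses the default action ...
CREATE RULE messages_ins_default AS ON INSERT TO messages
    DO INSTEAD NOTHING;

-- ... and conditional rules route each insert to a fragment by key range.
CREATE RULE messages_ins_a AS ON INSERT TO messages
    WHERE NEW.id < 100000
    DO INSTEAD INSERT INTO messages_a VALUES (NEW.id, NEW.body);

CREATE RULE messages_ins_b AS ON INSERT TO messages
    WHERE NEW.id >= 100000
    DO INSTEAD INSERT INTO messages_b VALUES (NEW.id, NEW.body);
```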

I really appreciate any pointers.

Karthik Guruswamy


Re: Fragmenting tables in postgres

From:
Tom Lane
Date:
karthikg@yahoo.com (Karthik Guruswamy) writes:
> Anyone tried fragmenting tables into multiple sub tables 
> transparently through Postgres rewrite rules ? I'm having 
> a table with 200,000 rows with varchar columns and noticed 
> that updates,inserts take a lot longer time compared to a 
> few rows in the same table.

That's not a very big table ... there's no reason for inserts to
take a long time, and not much reason for updates to take long either
if you have appropriate indexes to help find the rows to be updated.
Have you VACUUM ANALYZEd this table recently (or ever?)  Have you
tried EXPLAINing the queries to see if they use indexes?
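(Concretely, the two checks Tom suggests look something like this; the table name, column, and key value are made up for illustration.)

```sql
-- Reclaim dead-tuple space and refresh the planner's statistics.
VACUUM ANALYZE messages;

-- See whether the UPDATE finds its rows via an index.
EXPLAIN UPDATE messages SET body = 'x' WHERE id = 42;
-- An "Index Scan" node is what you want to see here; a "Seq Scan"
-- on a 200,000-row table suggests a missing index on id.
```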

> I have a lot of memory in my 
> machine like 2Gig and 600,000 buffers. 

You mean you set -B to 600000?  That's not a bright idea.  A few
thousand will be plenty, and will probably perform lots better.
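(A quick sanity check on why 600,000 buffers is extreme: assuming the default 8 kB PostgreSQL block size, that many shared buffers would pin more memory than the 2 GB machine even has.)

```python
BLOCK_SIZE = 8192          # default PostgreSQL block size, in bytes
buffers = 600_000          # the -B setting from the original post

gib = buffers * BLOCK_SIZE / 2**30
print(f"{gib:.2f} GiB of shared buffers")   # roughly 4.58 GiB
```

That is well over twice the machine's 2 GB of RAM, which is why a few thousand buffers (a few tens of megabytes) performs better: the rest of memory is left to the kernel's filesystem cache.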
        regards, tom lane


Re: Fragmenting tables in postgres

From:
Bruce Momjian
Date:
> karthikg@yahoo.com (Karthik Guruswamy) writes:
> > Anyone tried fragmenting tables into multiple sub tables 
> > transparently through Postgres rewrite rules ? I'm having 
> > a table with 200,000 rows with varchar columns and noticed 
> > that updates,inserts take a lot longer time compared to a 
> > few rows in the same table.
> 
> That's not a very big table ... there's no reason for inserts to
> take a long time, and not much reason for updates to take long either
> if you have appropriate indexes to help find the rows to be updated.
> Have you VACUUM ANALYZEd this table recently (or ever?)  Have you
> tried EXPLAINing the queries to see if they use indexes?
> 
> > I have a lot of memory in my 
> > machine like 2Gig and 600,000 buffers. 
> 
> You mean you set -B to 600000?  That's not a bright idea.  A few
> thousand will be plenty, and will probably perform lots better.

This is a good question.  When does too many buffers become a
performance problem?

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026