pgsql and large tables

From: Gurupartap Davis
Subject: pgsql and large tables
Date:
Msg-id: 096901c16c84$bd1dbc50$0f00a8c0@marlows
Responses: Practical PostgreSQL over 5,000 copies already!  ("Command Prompt, Inc." <pgsql-general@commandprompt.com>)
Re: pgsql and large tables  (Francisco Reyes <lists@natserv.com>)
Re: pgsql and large tables  ("Andrew G. Hammond" <drew@xyzzy.dhs.org>)
List: pgsql-general
I've got a large database, currently about 40 million rows in the biggest table.  I'm trying to set it up with PostgreSQL 7.1.3, but I've got some questions...
 
I initially created the table with a primary key (five fields: char(4), varchar(32), date, int, int) and a couple of additional indexes: one that just reorders the primary-key columns, and one on the date.  Then I imported 40 million rows and tried some queries, but discovered that none of my queries were using indexes, causing them to take forever.
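(The column names here are made up, but the shape of the schema is this:)

```sql
-- Illustrative only; real table and column names differ.
CREATE TABLE big_table (
    code     char(4),
    name     varchar(32),
    entry_dt date,
    val1     int,
    val2     int,
    PRIMARY KEY (code, name, entry_dt, val1, val2)
);

-- Same columns as the primary key, different order:
CREATE INDEX big_table_alt_idx ON big_table (name, code, entry_dt, val1, val2);
-- And one on the date alone:
CREATE INDEX big_table_date_idx ON big_table (entry_dt);
```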
 
So I read somewhere in the archives that you need to VACUUM a table regularly for indexes to work properly.  I tried that, but aborted after about 5 hours.  I can't use pgsql if I have to take the db down for more than 10-15 minutes a day.
 
Then I read somewhere else that you should drop your indexes before VACUUMing and re-create them afterwards. I tried that, and VACUUM finished in about 10 minutes.  Kewl... but now I've been trying to recreate my primary key for the last 18 hours...not so good.
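(As far as I can tell, in 7.1 the primary key is just an implicit unique index named <table>_pkey, so the drop-then-vacuum routine amounts to something like this, again with made-up names:)

```sql
-- Drop the secondary indexes and the implicit primary-key index:
DROP INDEX big_table_alt_idx;
DROP INDEX big_table_date_idx;
DROP INDEX big_table_pkey;

-- Now VACUUM is fast (~10 minutes instead of 5+ hours):
VACUUM ANALYZE big_table;

-- Rebuilding the primary-key index is the step that has been
-- running for 18 hours:
CREATE UNIQUE INDEX big_table_pkey
    ON big_table (code, name, entry_dt, val1, val2);
```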
 
Should I have dropped all indexes *except* for the primary?  or would VACUUM still take forever that way?  Should I make an artificial primary key with a serial type to simplify things?  Anyone have any hints at all for me?
 
thanks,
Partap Davis
 
 
