Re: Disk Performance Problem on Large DB
| From | Kevin Grittner |
|---|---|
| Subject | Re: Disk Performance Problem on Large DB |
| Date | |
| Msg-id | 4CD2D95702000025000372EC@gw.wicourts.gov |
| In reply to | Disk Performance Problem on Large DB ("Jonathan Hoover" <jhoover@yahoo-inc.com>) |
| List | pgsql-admin |
"Jonathan Hoover" <jhoover@yahoo-inc.com> wrote: > I have a simple database, with one table for now. It has 4 > columns: > > anid serial primary key unique, > time timestamp, > source varchar(5), > unitid varchar(15), > guid varchar(32) > > There is a btree index on each. > > I am loading data 1,000,000 (1M) rows at a time using psql and a > COPY command. Once I hit 2M rows, my performance just drops out Drop the indexes and the primary key before you copy in. Personally, I strongly recommend a VACUUM FREEZE ANALYZE after the bulk load. Then use ALTER TABLE to restore the primary key, and create all the other indexes. Also, if you don't mind starting over from initdb if it crashes partway through you can turn fsync off. You want a big maintenance_work_mem setting during the index builds -- at least 200 MB. -Kevin