Re: poor performance of loading data
From: Mitch Vincent
Subject: Re: poor performance of loading data
Date:
Msg-id: 021701c188bb$5e2ca2d0$0200000a@Mitch
In reply to: poor performance of loading data ("Zhang, Anna" <azhang@verisign.com>)
List: pgsql-admin
Are the rows huge? What kind of machine, hardware-wise, are we talking about? Did you start the postmaster with fsync disabled? I generally turn fsync off for importing; the improvement is amazing :-) (A rough sketch of the sort of thing I mean follows the quoted message below.)

Good luck!

-Mitch

----- Original Message -----
From: "Zhang, Anna" <azhang@verisign.com>
To: <pgsql-admin@postgresql.org>
Sent: Wednesday, December 19, 2001 10:57 AM
Subject: [ADMIN] poor performance of loading data

> I just installed Postgres 7.1.3 on my Red Hat 7.2 Linux box. We are doing
> research to see how Postgres performs. I used the COPY utility to import
> data from a text file containing 32 million rows; 26 hours have passed and
> it is still running. My question is: how does Postgres handle such a data
> load? Does it commit every row, or is the commit point adjustable? How?
> Does Postgres provide a direct load to disk files like Oracle? Are there
> other ways to speed it up? If the loading performance can't be improved
> significantly, we will have to go back to Oracle. Can anybody help? Thanks!
>
> Anna Zhang
>
> ---------------------------(end of broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
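As a concrete illustration of the advice above, here is a minimal sketch of a bulk load with fsync turned off, assuming PostgreSQL 7.1-era commands; the database name (mydb), table name (ips), and file path (/tmp/ips.txt) are placeholders, not taken from the original thread:

    # Stop the running postmaster, then restart it with fsync disabled by
    # passing -F through to the backends (alternatively, set fsync = false
    # in $PGDATA/postgresql.conf and restart).
    pg_ctl stop
    postmaster -o -F -D /usr/local/pgsql/data &

    # COPY is a single statement, so the whole file is loaded in one
    # transaction rather than with a commit per row. The file must be
    # readable by the server and uses tab as the default delimiter.
    psql -d mydb -c "COPY ips FROM '/tmp/ips.txt'"

    # Once the import finishes, restart with fsync enabled again
    # (drop the -F, or set fsync = true) so normal crash safety returns.

Dropping indexes before the COPY and recreating them afterwards, then running VACUUM ANALYZE on the table, usually speeds things up further.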