Thread: Insertion puzzles
Hello to all,

I am new to this group and to PostgreSQL. I am working on a time-critical project that uses PostgreSQL. We did all the optimization we could in our own code, but PostgreSQL still seems to be the bottleneck. To work around this we run the database operations in a separate thread, but with a large volume of data in the database the insert operations become very slow (i.e. inserting 100 records into 5 tables takes nearly 3 minutes). VACUUM ANALYZE helps a bit, but the improvement is not much. We are using the default postgres settings (i.e. we didn't change postgresql.conf).

One more point: when we tried to load a pg_dump of nearly 60K records across 7 tables, it took more than 10 hours.

System config:
Red Hat Linux 7.2
RAM: 256MB
postgres: 7.1.3
connection: ODBC

Thanks to all; please consider this even if it is a silly question.

Vivek
On Nov 13, 2004, at 12:26, vivek singh wrote:

> But still, with large volume of data in database the
> insert operation becomes very slow (ie. to insert 100
> records in 5 tables, it takes nearly 3minutes).

What is the performance like when you use COPY FROM instead of INSERT? And have you tested the performance with fsync on and off?

--
Andreas Åkre Solberg, UNINETT AS Testnett
Contact info and public PGP key available at: http://andreas.solweb.no/?account=Work
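COPY FROM moves all rows in one statement rather than paying per-INSERT overhead. The thread's Postgres 7.1/ODBC setup isn't reproducible in a snippet, so here is a minimal sketch of the underlying principle — batching rows into one transaction instead of committing each row — using Python's sqlite3 module as a stand-in (the table `t` and the row counts are invented for illustration):

```python
import sqlite3
import time

def insert_per_statement(conn, rows):
    # One commit (and one journal sync) per row: the pattern a naive
    # driver in autocommit mode tends to produce.
    for r in rows:
        conn.execute("INSERT INTO t (a, b) VALUES (?, ?)", r)
        conn.commit()

def insert_batched(conn, rows):
    # All rows in a single transaction, analogous in spirit to COPY FROM.
    conn.executemany("INSERT INTO t (a, b) VALUES (?, ?)", rows)
    conn.commit()

rows = [(i, i * 2) for i in range(1000)]

for fn in (insert_per_statement, insert_batched):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
    t0 = time.perf_counter()
    fn(conn, rows)
    elapsed = time.perf_counter() - t0
    count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
    print(f"{fn.__name__}: {count} rows in {elapsed:.4f}s")
    conn.close()
```

On a disk-backed database the gap is far larger than in memory, because each commit must reach stable storage — the same reason fsync settings matter so much for row-at-a-time loads.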
Well, the default configuration for PostgreSQL 7.1.3 is *very* conservative (i.e. very slow). You should seriously consider upgrading to 7.4.6, as server performance has increased, in some cases significantly. If that is not an option, tuning shared_buffers and the effective cache settings would certainly be advisable.

dave

Andreas Åkre Solberg wrote:

> What are the performance when you use COPY FROM instead of INSERT ?
> And have you tested the performance with fsync on and off.

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
On Sat, 13 Nov 2004, vivek singh wrote:

> I am new to this group and postgresql. I am working on
> a project which uses postgresql and project is time
> critical. We did all optimization in our project but
> postgresql seems to be a bottle-neck. To solve this we
> run the database operations in a different thread. But
> still, with large volume of data in database the
> insert operation becomes very slow (ie. to insert 100
> records in 5 tables, it takes nearly 3minutes).

That's pretty bad. What does the schema look like? Are there any foreign keys, triggers or rules being hit?

> vacuum analyze helps a bit but performance improvement
> is not much.
> We are using the default postgres setting (ie. didn't
> change postgresql.conf).

Hmm, there are a few settings worth changing, although to be honest I'm not sure which ones beyond shared_buffers (maybe try a couple thousand) apply to 7.1.3. You really should upgrade: a lot of serious bug fixes and performance enhancements have been made between 7.1.x and 7.4.x.
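For reference, "a couple thousand" shared_buffers would look roughly like the fragment below. These values are illustrative assumptions for a 256MB machine, not recommendations from the thread; parameter names and units should be checked against the docs for the exact server version:

```
# postgresql.conf -- illustrative values only, for ~256MB RAM on 7.1.x.
shared_buffers = 2000      # 8KB pages each, so ~16MB; the shipped default was far lower
sort_mem = 4096            # KB available per sort operation
fsync = true               # disabling is only for measurement, never for production data
```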
Vivek,

> Redhat Linux7.2
> RAM: 256MB
> postgres: 7.1.3

Um, you do realise that both RH 7.2 and PostgreSQL 7.1 are "no longer supported" by their respective communities?

--
Josh Berkus
Aglio Database Solutions
San Francisco
Actually, the most damning thing in this configuration is something I had missed earlier: 256MB of RAM!

Dave

Josh Berkus wrote:

> Um, you do realise that both RH 7.2 and PostgreSQL 7.1 are "no longer
> supported" by their respective communities?

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
Vivek,

I ran into the exact same problem you did. I tried many, many changes to the conf file, I tried O.S. tuning, but performance stunk. I had a fairly simple job with a lot of updates and inserts that was taking 4 1/2 hours. I re-wrote it to be more "Postgres friendly" - meaning fewer database updates - and got it down under 2 1/2 hours (still horrible). Understand, the legacy non-Postgres ISAM db took about 15 minutes to perform the same task. I assumed it was a system problem that would go away when we upgraded servers, but it did not. I converted to MySQL and the exact same Java process takes 5 minutes! Postgres is a great DB for some; for our application it was not. You may want to consider other products that are a bit faster and do not require vacuuming of stale data.

Original message:
From: vivek singh sing_vivek@yahoo.com
Date: Sat, 13 Nov 2004 03:26:09 -0800 (PST)
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Insertion puzzles
On Sat, 2004-11-13 at 18:00, dentfirst13@earthlink.net wrote:

> I ran into the exact same problem you did. I tried many, many changes to
> the conf file, I tried O.S. tuning but performance stunk. I had a fairly
> simple job that had a lot of updates and inserts that was taking 4 1/2
> hours. [...] I converted to MySQL and the exact same java process takes
> 5 minutes!

I have to wonder if the difference is in how your job is being chopped up by the different connection mechanisms. The only time I've had performance problems like this, it was the result of pathological and unwelcome behaviors in the way things were being handled by the connector or in the database design. We have a 15GB OLTP/OLAP database on five spindles with a large insert/update load and >100M rows, and I don't think it takes 2.5 hours to do *anything*. That includes inserts/updates of hundreds of thousands of rows at a shot, which take very little time.

I've gotten really bad performance from Postgres before, but once I isolated the reason I've always gotten performance comparable to any other commercial RDBMS on the same hardware.

J. Andrew Rogers
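The "chopped up by the connection mechanism" theory usually means a driver silently committing after every statement. That behavior can be illustrated with Python's sqlite3 module, whose `isolation_level` knob toggles between statement-level autocommit and an explicit transaction (sqlite3 is a stand-in for the ODBC driver here; the knob name is Python-specific, and the table `t` is invented for illustration):

```python
import sqlite3

def load_rows(autocommit):
    # isolation_level=None puts the connection in autocommit mode, so every
    # INSERT is its own committed transaction -- the behavior a misconfigured
    # driver can impose silently. "DEFERRED" lets all inserts share one
    # transaction, finalized by a single commit().
    conn = sqlite3.connect(":memory:",
                           isolation_level=None if autocommit else "DEFERRED")
    conn.execute("CREATE TABLE t (x INTEGER)")
    for i in range(500):
        conn.execute("INSERT INTO t VALUES (?)", (i,))
    if not autocommit:
        conn.commit()
    n = conn.execute("SELECT count(*) FROM t").fetchone()[0]
    conn.close()
    return n

print(load_rows(True), load_rows(False))
```

Both paths load the same 500 rows; the difference is how many durable commits the storage layer must absorb, which is exactly where a row-at-a-time ODBC load loses to a batched one.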